When: January 18, 2024
Where: Downtown Munich, Bayerischer Rundfunk (BR)
Hosted by: Bayerischer Rundfunk & TUM
Intended Audience: Researchers (professors, postdocs, PhD candidates) with an interest in AI and/or the intersection of journalism and AI.
Join us in exploring the intersection of technology and journalism. This workshop focuses on getting to know the projects and methods used in this exciting interdisciplinary field and on discovering possible collaborations on use cases and methods. We look forward to connecting researchers in foundations, applications, and data! We explicitly encourage junior researchers to apply.
We want to hear from you if your work falls within one of the following topics:
- Language models for text services in journalism
- Dialect language model training
- Data to Text, Data to Graphics projects
- Automated audio fragmentation and tagging
- Face recognition in video content for metadata extraction
- Image or pattern recognition for investigations, e.g., in satellite imagery
The deadline for registration is December 5, 2023: https://collab.dvb.bayern/x/V5IuDQ
Participants will be selected within two weeks after the submission deadline.
If you have any questions, please contact Emmelie Korell at email@example.com
We look forward to seeing you in January!
Deepfakes are already flooding the web, and some fear that AI could even help develop new viruses. "I still believe that the potential to help humanity through AI is extremely great," says AI researcher Daniel Rückert, a member of the Generative AI Taskforce, on the German TV program "Titel Thesen Temperamente." His research focuses primarily on AI-assisted image analysis in medicine. For example, he envisions that in a few years, every primary care physician will be able to use AI to say, "You may develop a certain cancer in two years. Here is what you can do now to change that."
Watch the whole segment (in German) here.
“We are currently preparing students for jobs that don’t yet exist, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.” Richard Riley (U.S. Secretary of Education under President Clinton)
Generative Artificial Intelligence (AI), with its ability to generate synthetic data, is considered a revolution in machine learning. The example of ChatGPT shows that such technology can not only automate text creation but also enhance human creativity. Despite the current limitations and challenges associated with the use of such technology, profound public curiosity led to a record-breaking one million users within only five days of its launch. Since the release of ChatGPT to the public in November 2022, we have been observing consistently shorter technological cycles in Generative AI.
These technologies hold vast potential to foster and promote adaptive, cooperative, and immersive learning environments tailored to individual learners. Characterized by their ubiquity, adaptability to the learner, and cost-effectiveness, they can serve as tools for user empowerment at large scale. Such advancements promise to bring us a big step closer to realizing the UNESCO Education 2030 Agenda, which advocates a human-centered approach to AI that fosters inclusivity and equity in education. In line with the mission statement "AI for all", the aim is to ensure that this technological revolution benefits everyone, especially in the areas of innovation and knowledge dissemination, and is used in a responsible way.
Fostering creativity and critical thinking
Hence, to adequately equip learners for their future professional and personal goals, it is crucial to provide them with competencies in addition to basic knowledge. These competencies should enable learners to compete in an environment where numerous tasks are automated, complex cognitive processes are required, personal responsibility and interpersonal skills are increasing, and interdisciplinary collaboration is the basis for solving complicated societal problems. The mandate for education is therefore to evolve from tasks rooted in routine and impersonality to tasks that are personalized, multifaceted, and creative. We need to develop strategies that promote multiple competencies beyond traditional curricula, with an emphasis on fostering creativity, critical thinking, collaboration, and communication.
We are currently facing an exciting and disruptive change in education. The most important question remains: How can we democratize access to innovation and knowledge, create a more equitable and inclusive academic landscape, and meet the demands of a world in transition? What pressing challenges do we need to address towards this goal? This is where I am also interested in a perspective from the neurosciences - from my colleague Aldo Faisal: How can interdisciplinary approaches, and in particular insights from neuroscience, help to distinguish between the outputs and utterances produced by an AI and those produced by humans?
Global discourse series "One Topic, One Loop"
Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question to which the first person responds and asks the next person another question on the same topic. The series ends with the first person answering the last question and reflecting on all previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.
Find the whole series with Aldo Faisal, Professor of AI & Neuroscience at Imperial College London; Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology; and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark, here.
Prof. Dr. Enkelejda Kasneci is the co-lead of the Generative AI Taskforce, here at the TUM Think Tank. She is also heading the Chair of Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology and is director of the TUM Center for Educational Technologies and a member of the Munich Data Science Institute at TUM. Before being appointed to a professorship at TUM, Enkelejda Kasneci studied informatics and conducted research into human-machine interactions at the University of Tübingen.
Artificial intelligence (AI) has the potential to fundamentally change the scientific landscape. In a state parliament hearing in the Committee for Science and the Arts, Prof. Dr. Enkelejda Kasneci, co-leader of the Generative AI Taskforce at the TUM Think Tank, was asked together with other experts about the opportunities and risks of AI in higher education. The discussion revolved around preparing students and faculty to use AI, the role of AI tools such as ChatGPT in writing, and the need for open and accessible use of AI tools in libraries. Despite some concerns, the experts emphasized the positive impact of AI and advocated for an optimistic view of the future of academia.
The co-leader of the Generative AI Taskforce emphasized that generative AI is steadily advancing with ever-shorter technological cycles. This, she said, opens up opportunities for active, collaborative, and immersive learning environments that are individually tailored to learners' needs, thus contributing to the UNESCO Education 2030 Agenda, which calls for a human-centered approach to AI in education to advance inclusion and equity. Under the motto "AI for All," everyone should benefit from the technological revolution and reap its rewards, especially in the form of innovation and knowledge.
Basic competency goals in academic writing will still be maintained and will not be replaced in the long term. However, the introduction of AI writing tools requires adaptation, where integration should be done responsibly. Legal issues were also highlighted during the hearing, such as copyright, privacy, and liability. Universities should carefully consider these issues and take appropriate measures to protect the rights of all stakeholders, Kasneci said.
Although some students and faculty appreciated the efficiency and support of AI writing tools and certainly saw advantages in time savings, generation of ideas, and error detection, there was also some skepticism about automated text generation. Concerns about plagiarism and data protection could also lead to acceptance problems. According to Kasneci, students and faculty have reservations predominantly about the accuracy, reliability, and ethics of AI-generated texts. There could be a sense of loss of control if AI writing tools are perceived as a substitute for traditional writing skills. Therefore, she said, it is important to acknowledge these concerns and provide comprehensive education, training, and guidance to promote student and faculty confidence and acceptance.
In general, the experts agreed that a "calibrated trust" in AI is necessary in the scientific community. This means that students and teachers should be prepared for the use of AI in order to make the most of the opportunities offered by this technology. It was emphasized that AI tools such as ChatGPT can automate writing and increase creativity, allowing students and faculty to focus on more challenging tasks.
Kasneci appealed, "Education needs to move from routine and impersonal tasks to more personal, complex and creative tasks. We need to find ways to enable the promotion of multifaceted competencies beyond curricula and syllabi in higher education, with a strong focus on creativity, critical thinking, collaboration and communication."
She adds, "Overall, we are facing an exciting time of change in education. The question will be how do we make innovation and knowledge accessible to all, enable a more equitable and inclusive education landscape that meets the demands of a disruptively changing world."
It is not only in higher education that there is an urgent need for action. In the Handelsblatt, Kasneci recently called for a "revamping of the curricula." Teaching is "far too fragmented"; with the support of AI, it will be easier to teach "holistically" in the future. To achieve this, however, the education ministers must ensure that all teachers acquire a basic knowledge of AI.
Enkelejda Kasneci is a co-founder of the newly established TUM Center for Educational Technologies, where interdisciplinary research teams investigate the effectiveness of digital tools for learning and teaching and develop new applications. The center will bring these into practice through advanced training and by supporting start-ups.
Each week, we will introduce you to one of the members of the Generative AI Taskforce at the TUM Think Tank. Get to know their take on the latest developments within the field of Generative AI, representing various perspectives and fields.
On our first episode: Meet Georg Groh, from the TUM School of Computation, Information and Technology.
Now out on our YouTube Channel:
In a significant development, the European Parliament has adopted its negotiating position on the AI Act. This paves the way for discussions with EU countries in the Council. Once finalized, the AI Act will be the world's first comprehensive legislation on artificial intelligence. As discussions continue, members of our Generative AI Taskforce are providing valuable insights into the implications and potential of the AI Act.
A common point of public debate is how generative AI will change our workforce. Isabell Welpe, Chair of the Strategy and Organisation at TUM and a member of the TUM Think Tank's Generative AI Taskforce, notes:
As the field advances at an unprecedented pace, regulatory frameworks are trying to keep up. Another member of the Generative AI Taskforce, Christoph Lütge, who holds the Peter Löscher Chair of Business Ethics at TUM and is also the director of the TUM Institute for Ethics in AI, outlines the challenges of regulation:
“The recent adoption of the AI Act by the European Parliament underscores the pressing need to regulate the rapidly advancing field of Artificial Intelligence. The AI Act represents the European Union's ambitious endeavor to establish a regulatory framework for AI. However, the challenge lies in striking a delicate balance between safeguarding fundamental rights, fostering ethical AI development, and avoiding any unintended stifling of innovation."
Christian Djeffal, Assistant Professor of Law, Science and Technology at TUM, shares his perspective on the details of the AI Act in a blog post, after participating in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand.
He outlines his aspirations for possible improvements to the AI Act:
Dirk Heckmann holds the Chair for Law and Security in Digital Transformation at TUM and also serves as co-director of the Bavarian Institute for Digital Transformation (bidt). As a member of the taskforce, he appreciates the European legislature's recognition of the urgent need to regulate AI with the world's first comprehensive legal framework for "trustworthy AI," further explaining:
Urs Gasser, Chair for Public Policy, Governance and Innovative Technology at TUM and Dean of the TUM School of Social Sciences and Technology, was invited to write an editorial for “Science” in which he wrote:
An undertaking to which the Generative AI Taskforce has devoted its work.
On Friday, June 23rd 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe’s approach to AI governance, and in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and member of our Gen AI Taskforce gave a short input on this issue. In this blog post, he shares his key takeaways.
The AI Act of the European Union could serve as an effective balancing act between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:
- Preserving the AI Act's lean, clear approach and avoiding over-regulation of too many topics through this Act. It is worth noting that clear rules can be assets for innovation, minimizing liability risks and enabling even smaller entities to venture into risky areas.
- It is undeniable that regulation imposes costs on developers. However, there are numerous strategies to alleviate these costs. Establishing infrastructures for legal advice and knowledge organization could prove invaluable, especially for start-ups.
- Implementing frameworks like responsible research, human-centered engineering, and integrated research can position legal regulation as an integral part of the innovation journey. This mindset lets developers incorporate legal, ethical, and societal considerations early on, enhancing their products and turning potential challenges into opportunities.
Therefore, the AI Act has potential with its blend of a broad framework and specific sector regulations and enforcement. However, it is the finer details that need refining for it to truly succeed. A key area of focus should be honing the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.
Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. Current proposals around the AI Act are mostly about establishing general structures as well as coordination and cooperation among member states. The success of these sandboxes relies heavily on their effective rollout by these states. Interestingly, other European legal developments, like the Data Governance Act, which allows for protected data sharing, might be game-changers, possibly boosting sandboxes to an entirely new level, as they would also allow the sharing of data protected by data protection or intellectual property law.
If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.
Generative AI and its fast developments are baffling experts, policymakers, and the public alike. We have spent the last week at the Berkman Klein Center for Internet & Society at Harvard University, attending the "Co-Designing Generative Futures – A Global Conversation About AI" conference. After having talked with experts, decision-makers, and stakeholders from diverse fields and disciplines around the world, we came back with new questions and “mixed feelings” about the impact of generative AI: it is wicked, disruptive, complex, and ambiguous.
As we conclude this conference week, we are sure that capacity building is the answer to the duality between the incredible possibilities generative AI holds and the threats it poses. As we think about governance and implementations, let's dive into six things you can already do - and which we are fostering at TUM Think Tank - to shape the future of generative AI:
1. Implement ethical guidelines: Promote the development and adoption of ethical frameworks that prioritize transparency, fairness, and accountability in the use of generative AI.
2. Collaborate across disciplines: Foster collaboration between technologists, policymakers, ethicists, and diverse stakeholders to collectively address the challenges and risks associated with generative AI.
3. Research and development: Support research initiatives that focus on responsible AI, including bias mitigation, privacy preservation, and effective detection of generated content.
4. Educate and raise awareness: Share knowledge and raise awareness about the opportunities and challenges of generative AI, empowering individuals and organizations to make informed decisions.
5. Champion diversity and inclusion: Encourage diverse representation and inclusivity in the development and deployment of generative AI systems to mitigate biases and ensure equitable outcomes.
6. Boost the positive impact on the economy and education: Support businesses in adopting the highly innovative possibilities of generative AI, and strengthen our education system not only to make use of the technology but also to support skill development.
The connections made, insights shared, and ideas generated during this week are great examples of collective capacity building. The conference was hosted in collaboration with the Global Network of Internet & Society Research Centers, BI Norwegian Business School, Instituto de Tecnologia e Sociedade (ITS Rio), and the TUM Think Tank.