
“We are currently preparing students for jobs that don’t yet exist, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.” Richard Riley (U.S. Secretary of Education under President Clinton)

Generative Artificial Intelligence (AI), with its ability to generate synthetic data, is considered a revolution in machine learning. The example of ChatGPT shows that such technology can not only automate text creation but also enhance human creativity. Despite the current limitations and challenges associated with the use of such technology, profound public curiosity led to a record-breaking one million users within only five days of its launch. Since the release of ChatGPT to the public in November 2022, we have been observing steadily shortening technology cycles in Generative AI.

These technologies hold enormous potential to foster and promote adaptive, cooperative, and immersive learning environments tailored to individual learners. Characterized by their ubiquity, adaptability to the learner, and cost-effectiveness, they bear the potential to serve as tools for user empowerment at large scale. Such advancements promise to bring us a big step closer to realizing the UNESCO Education 2030 Agenda, which advocates a human-centered approach to AI that fosters inclusivity and equity in education. In line with the mission statement "AI for all", the aim is to ensure that this technological revolution benefits everyone, is used responsibly, and drives innovation and the dissemination of knowledge.

Fostering creativity and critical thinking

Hence, to adequately equip learners for their future professional and personal goals, it is crucial to provide them with competencies in addition to basic knowledge. These competencies should enable learners to compete in an environment where numerous tasks are automated, complex cognitive processes are required, personal responsibility and interpersonal skills are growing in importance, and interdisciplinary collaboration is the basis for solving complex societal problems. The mandate for education is therefore to evolve from tasks rooted in routine and impersonality to tasks that are personalized, multifaceted, and creative. We need to develop strategies that promote multiple competencies beyond traditional curricula, with an emphasis on fostering creativity, critical thinking, collaboration, and communication.

We are currently facing an exciting and disruptive change in education. The most important question remains: How can we democratize access to innovation and knowledge, create a more equitable and inclusive academic landscape, and meet the demands of a world in transition? What pressing challenges do we need to address towards this goal? This is where I am also interested in a perspective from the neurosciences, from my colleague Aldo Faisal: How can interdisciplinary approaches, and in particular insights from neuroscience, help to distinguish between the outputs and utterances produced by an AI and those produced by humans?

Global discourse series "One Topic, One Loop"

Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question to which the first person responds and asks the next person another question on the same topic. The series ends with the first person answering the last question and reflecting on all previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.

Find the whole series with Aldo Faisal, Professor of AI & Neuroscience at Imperial College London; Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology; and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark, here.

The author

Prof. Dr. Enkelejda Kasneci is the co-lead of the Generative AI Taskforce here at the TUM Think Tank. She also heads the Chair of Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology, is director of the TUM Center for Educational Technologies, and is a member of the Munich Data Science Institute at TUM. Before being appointed to a professorship at TUM, Enkelejda Kasneci studied informatics and conducted research into human-machine interaction at the University of Tübingen.

Artificial intelligence (AI) has the potential to fundamentally change the scientific landscape. In a state parliament hearing in the Committee for Science and the Arts, Prof. Dr. Enkelejda Kasneci, co-leader of the Generative AI Taskforce at the TUM Think Tank, was asked together with other experts about the opportunities and risks of AI in higher education. The discussion revolved around preparing students and faculty to use AI, the role of AI tools such as ChatGPT in writing, and the need for open and accessible use of AI tools in libraries. Despite some concerns, the experts emphasized the positive impact of AI and advocated for an optimistic view of the future of academia.

The co-leader of the Generative AI Taskforce emphasized that generative AI is steadily advancing and has ever-shorter technological cycles. This, she said, opens up opportunities for active, collaborative and immersive learning environments that are individually tailored to learners' needs, thus contributing to the UNESCO Education 2030 Agenda, which calls for a human-centered approach to AI in education to advance inclusion and equity. Under the motto "AI for All," everyone should benefit from the technological revolution and reap its rewards, especially in the form of innovation and knowledge.

Basic competency goals in academic writing will still be maintained and will not be replaced in the long term. However, the introduction of AI writing tools requires adaptation, where integration should be done responsibly. Legal issues were also highlighted during the hearing, such as copyright, privacy, and liability. Universities should carefully consider these issues and take appropriate measures to protect the rights of all stakeholders, Kasneci said.

Although some students and faculty appreciated the efficiency and support of AI writing tools and certainly saw advantages in time savings, generation of ideas, and error detection, there was also some skepticism about automated text generation. Concerns about plagiarism and data protection could also lead to acceptance problems. According to Kasneci, students and faculty have reservations predominantly about the accuracy, reliability, and ethics of AI-generated texts. There could be a sense of loss of control if AI writing tools are perceived as a substitute for traditional writing skills. Therefore, she said, it is important to acknowledge these concerns and provide comprehensive education, training, and guidance to promote student and faculty confidence and acceptance.

In general, the experts agreed that a "calibrated trust" in AI is necessary in the scientific community. This means that students and teachers should be prepared for the use of AI in order to make the most of the opportunities offered by this technology. It was emphasized that AI tools such as ChatGPT can automate writing and increase creativity, allowing students and faculty to focus on more challenging tasks.

Kasneci appealed, "Education needs to move from routine and impersonal tasks to more personal, complex and creative tasks. We need to find ways to enable the promotion of multifaceted competencies beyond curricula and syllabi in higher education, with a strong focus on creativity, critical thinking, collaboration and communication."

She adds, "Overall, we are facing an exciting time of change in education. The question will be how do we make innovation and knowledge accessible to all, enable a more equitable and inclusive education landscape that meets the demands of a disruptively changing world."

It is not only in higher education that there is an urgent need for action. In the Handelsblatt, Kasneci recently called for a "revamping of the curricula". Teaching is "far too fragmented"; with the support of AI, it will be easier to teach "holistically" in the future. To achieve this, however, the education ministers must ensure that all teachers acquire a basic knowledge of AI.

Enkelejda Kasneci is a co-founder of the newly established TUM Center for Educational Technologies, where interdisciplinary research teams investigate the effectiveness of digital tools for learning and teaching and develop new applications. The center will bring these into practice through advanced training and by supporting start-ups.

In a significant development, the European Parliament has adopted its negotiating position on the AI Act. This paves the way for discussions with EU countries in the Council. Once finalized, the AI Act will be the world's first comprehensive legislation on artificial intelligence. As discussions continue, members of our Generative AI Taskforce are providing valuable insights into the implications and potential of the AI Act.

A common point of public debate is how generative AI will change our workforce. Isabell Welpe, Chair for Strategy and Organization at TUM and a member of the TUM Think Tank's Generative AI Taskforce, notes:

As the field advances at an unprecedented pace, regulatory frameworks are trying to keep up. Another member of the Generative AI Taskforce, Christoph Lütge, who holds the Peter Löscher Chair of Business Ethics at TUM and is also the director of the TUM Institute for Ethics in AI, outlines the challenges of regulation:

“The recent adoption of the AI Act by the European Parliament underscores the pressing need to regulate the rapidly advancing field of Artificial Intelligence. The AI Act represents the European Union's ambitious endeavor to establish a regulatory framework for AI. However, the challenge lies in striking a delicate balance between safeguarding fundamental rights, fostering ethical AI development, and avoiding any unintended stifling of innovation."


Christian Djeffal, Assistant Professor of Law, Science and Technology at TUM, shares his perspective on the details of the AI Act in a blog post, after participating in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand.

He outlines his aspirations for possible improvements to the AI Act:


Dirk Heckmann holds the Chair for Law and Security in Digital Transformation at TUM and also serves as co-director of the Bavarian Institute for Digital Transformation (bidt). As a member of the taskforce, he appreciates the European legislature's recognition of the urgent need to regulate AI with the world's first comprehensive legal framework for "trustworthy AI", further explaining:


Urs Gasser, Chair for Public Policy, Governance and Innovative Technology at TUM and Dean of the TUM School of Social Sciences and Technology, was invited to write an editorial for “Science” in which he wrote:

An undertaking to which the Generative AI Taskforce has devoted its work.


Generative AI and its rapid developments are baffling experts, policymakers, and the public alike. We spent the last week at the Berkman Klein Center for Internet & Society at Harvard University, attending the "Co-Designing Generative Futures – A Global Conversation About AI" conference. Having spoken with experts, decision-makers, and stakeholders from diverse fields and disciplines around the world, we came back with new questions and “mixed feelings” about the impact of generative AI: it is wicked, disruptive, complex, and ambiguous.

As we conclude this conference week, we are sure that capacity building is the answer to the duality between the incredible possibilities generative AI holds and the threats it poses. As we think about governance and implementation, let's dive into six things you can already do - and which we are fostering at TUM Think Tank - to shape the future of generative AI:

1. Implement ethical guidelines: Promote the development and adoption of ethical frameworks that prioritize transparency, fairness, and accountability in the use of generative AI.

2. Collaborate across disciplines: Foster collaboration between technologists, policymakers, ethicists, and diverse stakeholders to collectively address the challenges and risks associated with generative AI.

3. Research and development: Support research initiatives that focus on responsible AI, including bias mitigation, privacy preservation, and effective detection of generated content.

4. Educate and raise awareness: Share knowledge and raise awareness about the opportunities and challenges of generative AI, empowering individuals and organizations to make informed decisions.

5. Champion diversity and inclusion: Encourage diverse representation and inclusivity in the development and deployment of generative AI systems to mitigate biases and ensure equitable outcomes.

6. Boost positive impact on economy and education: Support businesses in taking up the highly innovative possibilities of generative AI. Encourage our education system not only to make use of the technology, but also to support skill development.

The connections made, insights shared, and ideas generated during this week are great examples of collective capacity building. The conference was hosted in collaboration with the Global Network of Internet & Society Research Centers, BI Norwegian Business School, Instituto de Tecnologia e Sociedade (ITS Rio), and the TUM Think Tank.
