“We are currently preparing students for jobs that don’t yet exist, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.” Richard Riley (U.S. Secretary of Education under President Clinton)

Generative Artificial Intelligence (AI), with its ability to generate synthetic data, is considered a revolution in machine learning. The example of ChatGPT shows that such technology can not only automate text creation but also enhance human creativity. Despite the current limitations and challenges associated with the use of such technology, profound public curiosity led to a record-breaking one million users within only five days of its launch. Since the release of ChatGPT to the public in November 2022, we have been observing consistently shorter technology cycles in Generative AI.

These technologies hold enormous potential to foster and promote adaptive, cooperative, and immersive learning environments tailored to individual learners. Characterized by their ubiquity, adaptability to the learner, and cost-effectiveness, they have the potential to serve as tools for user empowerment at large scale. Such advancements promise to bring us a big step closer to realizing the UNESCO Education 2030 Agenda, which advocates a human-centered approach to AI that fosters inclusivity and equity in education. In line with the mission statement "AI for all", the aim is to ensure that this technological revolution benefits everyone, especially in the areas of innovation and knowledge dissemination, and that it is used responsibly.

Fostering creativity and critical thinking

To adequately equip learners for their future professional and personal goals, it is crucial to provide them with competencies in addition to basic knowledge. These competencies should enable learners to compete in an environment where numerous tasks are automated, complex cognitive processes are required, personal responsibility and interpersonal skills matter ever more, and interdisciplinary collaboration is the basis for solving complex societal problems. The mandate for education is therefore to evolve from tasks rooted in routine and impersonality to tasks that are personalized, multifaceted, and creative. We need to develop strategies that promote multiple competencies beyond traditional curricula, with an emphasis on fostering creativity, critical thinking, collaboration, and communication.

We are currently facing an exciting and disruptive change in education. The most important questions remain: How can we democratize access to innovation and knowledge, create a more equitable and inclusive academic landscape, and meet the demands of a world in transition? What pressing challenges do we need to address to reach this goal? Here I am also interested in a perspective from the neurosciences, from my colleague Aldo Faisal: How can interdisciplinary approaches, and in particular insights from neuroscience, help to distinguish between the outputs and utterances produced by an AI and those produced by humans?

Global discourse series "One Topic, One Loop"

Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question; the first person responds and then poses a new question on the same topic to the next person. The series ends with the first person answering the last question and reflecting on all the previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.

Find the whole series with Aldo Faisal, Professor of AI & Neuroscience at Imperial College London; Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology; and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark, here.

The author

Prof. Dr. Enkelejda Kasneci is the co-lead of the Generative AI Taskforce here at the TUM Think Tank. She also heads the Chair of Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology, is director of the TUM Center for Educational Technologies, and is a member of the Munich Data Science Institute at TUM. Before her appointment to a professorship at TUM, Enkelejda Kasneci studied informatics and conducted research into human-machine interaction at the University of Tübingen.

Artificial intelligence (AI) has the potential to fundamentally change the scientific landscape. In a hearing of the state parliament's Committee for Science and the Arts, Prof. Dr. Enkelejda Kasneci, co-leader of the Generative AI Taskforce at the TUM Think Tank, was asked, together with other experts, about the opportunities and risks of AI in higher education. The discussion revolved around preparing students and faculty to use AI, the role of AI tools such as ChatGPT in writing, and the need for open and accessible use of AI tools in libraries. Despite some concerns, the experts emphasized the positive impact of AI and advocated an optimistic view of the future of academia.

The co-leader of the Generative AI Taskforce emphasized that generative AI is steadily advancing and that its technological cycles are getting ever shorter. This, she said, opens up opportunities for active, collaborative, and immersive learning environments that are individually tailored to learners' needs, thus contributing to the UNESCO Education 2030 Agenda, which calls for a human-centered approach to AI in education to advance inclusion and equity. Under the motto "AI for All," everyone should benefit from the technological revolution and reap its rewards, especially in the form of innovation and knowledge.

Basic competency goals in academic writing will still be maintained and will not be replaced in the long term. However, the introduction of AI writing tools requires adaptation, and their integration should be handled responsibly. Legal issues such as copyright, privacy, and liability were also highlighted during the hearing. Universities should carefully consider these issues and take appropriate measures to protect the rights of all stakeholders, Kasneci said.

Although some students and faculty appreciated the efficiency and support of AI writing tools and certainly saw advantages in time savings, generation of ideas, and error detection, there was also some skepticism about automated text generation. Concerns about plagiarism and data protection could also lead to acceptance problems. According to Kasneci, students and faculty have reservations predominantly about the accuracy, reliability, and ethics of AI-generated texts. There could be a sense of loss of control if AI writing tools are perceived as a substitute for traditional writing skills. Therefore, she said, it is important to acknowledge these concerns and provide comprehensive education, training, and guidance to promote student and faculty confidence and acceptance.

In general, the experts agreed that a "calibrated trust" in AI is necessary in the scientific community. This means that students and teachers should be prepared for the use of AI in order to make the most of the opportunities offered by this technology. It was emphasized that AI tools such as ChatGPT can automate writing and increase creativity, allowing students and faculty to focus on more challenging tasks.

Kasneci appealed, "Education needs to move from routine and impersonal tasks to more personal, complex and creative tasks. We need to find ways to enable the promotion of multifaceted competencies beyond curricula and syllabi in higher education, with a strong focus on creativity, critical thinking, collaboration and communication."

She adds, "Overall, we are facing an exciting time of change in education. The question will be how do we make innovation and knowledge accessible to all, enable a more equitable and inclusive education landscape that meets the demands of a disruptively changing world."

It is not only in higher education that there is an urgent need for action. In the Handelsblatt, Kasneci recently called for a "revamping of the curricula". Teaching is "far too fragmented"; with the support of AI, it will be easier to teach "holistically" in the future. To achieve this, however, the education ministers must ensure that all teachers acquire a basic knowledge of AI.

Enkelejda Kasneci is a co-founder of the newly established TUM Center for Educational Technologies, where interdisciplinary research teams investigate the effectiveness of digital tools for learning and teaching and develop new applications. The center will bring these into practice through advanced training and by supporting start-ups.

On Friday, June 23, 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe’s approach to AI governance, in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and a member of our Generative AI Taskforce, gave a brief input on this issue. In this blog post, he shares his key takeaways.

The AI Act of the European Union could serve as an effective balancing act between risk regulation and innovation promotion. In my view, its blend of a broad framework with sector-specific regulation and enforcement is the recipe for such an equilibrium.

Still, it is the finer details that need refining for the Act to truly thrive. A key area of focus should be honing the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.

Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. The current proposals around the AI Act mostly establish general structures as well as coordination and cooperation among member states, so the success of these sandboxes relies heavily on their effective rollout by those states. Interestingly, other European legal developments, like the Data Governance Act, which allows for protected data sharing, might be game-changers: they could boost sandboxes to an entirely new level by also allowing the sharing of data that falls under data protection or intellectual property law.

If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.
