In a significant development, the European Parliament has adopted its negotiating position on the AI Act. This paves the way for discussions with EU countries in the Council. Once finalized, the AI Act will be the world's first comprehensive legislation on artificial intelligence. As discussions continue, members of our Generative AI Taskforce are providing valuable insights into the implications and potential of the AI Act.

A common point of public debate is how generative AI will change our workforce. Isabell Welpe, Chair of Strategy and Organisation at TUM and a member of the TUM Think Tank's Generative AI Taskforce, notes:

As the field advances at an unprecedented pace, regulatory frameworks are trying to keep up. Another member of the Generative AI Taskforce, Christoph Lütge, who holds the Peter Löscher Chair of Business Ethics at TUM and is also the director of the TUM Institute for Ethics in AI, outlines the challenges of regulation:

“The recent adoption of the AI Act by the European Parliament underscores the pressing need to regulate the rapidly advancing field of Artificial Intelligence. The AI Act represents the European Union's ambitious endeavor to establish a regulatory framework for AI. However, the challenge lies in striking a delicate balance between safeguarding fundamental rights, fostering ethical AI development, and avoiding any unintended stifling of innovation.”

Christian Djeffal, Assistant Professor of Law, Science and Technology at TUM, shares his perspective on the details of the AI Act in a blog post, after participating in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand.

He outlines his aspirations for possible improvements to the AI Act:

Dirk Heckmann holds the Chair for Law and Security in Digital Transformation at TUM and also serves as co-director of the Bavarian Institute for Digital Transformation (bidt). As a member of the taskforce, he appreciates the European legislature's recognition of the urgent need to regulate AI with the world's first comprehensive legal framework for "trustworthy AI", further explaining:

Urs Gasser, Chair for Public Policy, Governance and Innovative Technology at TUM and Dean of the TUM School of Social Sciences and Technology, was invited to write an editorial for “Science” in which he wrote:

An undertaking to which the Generative AI Taskforce has devoted its work.

On Friday, June 23rd 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe's approach to AI governance, and in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and a member of our Gen AI Taskforce, gave a brief presentation on this issue. In this blog post, he shares his key takeaways.

The AI Act of the European Union could strike an effective balance between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:

The AI Act therefore has potential with its blend of a broad framework, specific sector regulations, and enforcement. However, it is the finer details that need refining for it to truly thrive. A key area of focus should be sharpening the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.

Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. Current proposals around the AI Act are mostly about establishing general structures as well as coordination and cooperation among member states. The success of these sandboxes relies heavily on their effective rollout by these states. Interestingly, other European legal developments, like the Data Governance Act—which allows for protected data sharing—might be game-changers, taking sandboxes to an entirely new level by also allowing data to be shared under the safeguards of data protection and intellectual property law.

If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.

Don't lose track of innovation.

Sign up to our newsletter and follow us on social media.