On Friday, June 23rd, 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe's approach to AI governance, in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and a member of our Gen AI Taskforce, gave a short presentation on this issue. In this blog post, he shares his key takeaways.
The AI Act of the European Union could strike an effective balance between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:
- Preserving the AI Act's lean, clear approach and resisting the temptation to regulate too many topics through a single act. It is worth noting that clear rules can be assets for innovation: they minimize liability risks and enable even smaller entities to venture into risky areas.
- Alleviating the costs that regulation undeniably imposes on developers. Numerous strategies exist for this; establishing infrastructures for legal advice and knowledge organization could prove invaluable, especially for start-ups.
- Embedding legal regulation in the innovation journey through frameworks like responsible research, human-centered engineering, and integrated research. This mindset lets developers incorporate legal, ethical, and societal considerations early on, enhancing their products and turning potential challenges into opportunities.
The AI Act therefore has potential, thanks to its blend of a broad framework with sector-specific rules and enforcement. However, it is the finer details that need refining for it to truly thrive. A key area of focus should be honing the definition of high-risk systems. For example, the current categories in Annex III are so expansive that they could capture applications that hardly meet the high-risk criteria.
Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. The current proposals around the AI Act mostly establish general structures as well as coordination and cooperation among member states; the success of these sandboxes therefore relies heavily on their effective rollout by those states. Interestingly, other European legal developments, such as the Data Governance Act, could be game-changers: by enabling the sharing of data that is otherwise protected by data protection or intellectual property law, they could lift sandboxes to an entirely new level.
If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.
TL;DR
The AI Act could strike an effective balance between risk regulation and innovation promotion. However, it is the finer details that need refining for it to truly thrive. A key area of focus should be honing the definition of high-risk systems. Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning.