
On February 16, the TUM Think Tank hosted a fireside chat featuring Sir Nick Clegg, President of Global Affairs at Meta. In conversation with Urs Gasser, Rector of the Munich School of Politics and Public Policy (HfP), Sir Nick Clegg shared his perspectives on a wide range of topics at the intersection of artificial intelligence (AI) and innovation, particularly from a European standpoint.

The wide-ranging discussion touched upon the transformative potential of AI technologies, dissecting their implications across various sectors both within Europe and globally. Sir Nick Clegg, a leading figure in the technological landscape, shed light on Europe's unique contributions and regulatory considerations concerning AI and the metaverse.

Here are a few of the key points made during the fireside chat:

1. The countries that will most benefit from AI technology are those that can deploy it quickly and effectively, not necessarily the ones that develop it. Geopolitical discussions about AI are moving from attempts to control access to recognizing the inevitability of its widespread adoption. This shift is exemplified by companies like Meta open-sourcing their large language models (LLMs), indicating a trend towards sharing technology to maximize its use, rather than gatekeeping it.

2. Realizing the full potential of AI technology requires international cooperation, ideally among techno-democracies like the EU, USA, and India. Despite political challenges and varying approaches to technology policy, collaboration on research and policy could significantly advance AI's positive impact, particularly in fields like health and climate change.

 

3. While AI's dual-use nature means it can be used for both beneficial and harmful purposes, particularly in generating realistic misinformation, ongoing efforts by tech companies to identify and label AI-generated content are crucial. Cooperation among major players to establish standards and responsibilities for AI-generated content can empower users to discern and mitigate misinformation.

4. The narrative that technology, including AI, is inherently detrimental to democracy is challenged by historical context and empirical research. Concerns about technology's impact are often exaggerated, and while it's essential to develop ethical guardrails alongside technological advancements, the relationship between technology and societal change is complex and not inherently negative.

5. Discussions about AI often sensationalize its dangers, treating Terminator-style scenarios as relevant and fearing that AI will replace humans. This tendency stems from anthropomorphizing AI, attributing human-like qualities to it and producing misplaced concerns. Instead, AI should be viewed as a tool that excels at certain tasks, much as a car excels at speed. Moreover, there is a pattern of new technologies being exaggerated by both proponents and opponents, as happened historically with radio. Currently, AI's capabilities are overestimated, sparking moral panic and defensive regulation and distracting from the core question: how to use it effectively.

 

6. Companies like Meta, which are used by 4 billion people per day, bear significant responsibility, and they must acknowledge it. We need guardrails that are not developed solely by tech companies but emerge from collaboration between government and society. It is not ideal that guardrails are developed 20 years after the technology, as we have seen with social media; ideally, regulation should evolve alongside the technology.

Here is what our attendees had to say about the event:

Sofie Schönborn, PhD Student at HfP:

“I am delighted to witness the diverse array of individuals who have found their way here today. Here, students converge with industry leaders from the technology sector, alongside scholars from TUM and forward-thinking figures from the public sphere. The TUM Think Tank emerges as a vibrant hub, a melting pot of ideas, and a diverse cohort of individuals committed to technology, society, and democracy. This is the place to have the conscious discussions and joint deliberation about societal and political implications of technologies, about responsibility and potential futures ahead of us... and to collaborate to enable human-centric technology co-creation and co-design!”

PhD Student at HfP:

“As a researcher, engaging with leading practitioners in the field has been hugely rewarding. It provides me with direct access to valuable first-hand information and proved helpful to complement empirics for my research during follow-ups with them. Personally, their careers inspire me and I am already looking forward to our next guests at the TUM Think Tank.”

Franziska Golibrzuch, Master's Student at HfP:

“It was extremely insightful to listen to such an expert – Sir Nick Clegg gave us the industry perspective whilst having an extensive background in government. Especially in the case of AI and within the current debate about AI regulation, security etc. this has been a great opportunity for us TUM students. All in all, it was a super interesting event which finds many applications to my studies because it puts the intersection of technology and politics in the center of the discussion and sheds, time and again, light on the critical and important overlaps in innovation, society and public policy realm. Even after the fireside chat I had the chance to pose questions, which I appreciate a lot.”

Thank you to the Meta team for making this fireside chat possible, and to everyone who took part and asked thought-provoking questions.

Sir Nick Clegg is President, Global Affairs at Meta. He joined the company, then called Facebook, in 2018 after almost two decades in British and European public life. Prior to being elected to the UK Parliament in 2005, he worked in the European Commission and served for five years as a member of the European Parliament. He became leader of the Liberal Democrat party in 2007 and served as Deputy Prime Minister in the UK's first coalition government since the war, from 2010 to 2015. He has written two best-selling books, Politics: Between the Extremes and How to Stop Brexit (And Make Britain Great Again).

The research project "Using AI to Increase Resilience against Toxicity in Online Entertainment (ToxicAInment)", led by Prof. Dr. Yannis Theocharis (Chair of Digital Governance) and funded by the Bavarian Research Institute for Digital Transformation (BIDT), explores the spread of extremist, conspiratorial, and misleading content on social media, investigating how such content is embedded in entertaining formats. By combining entertainment theories, visual communication, and research on toxic language with AI methods, it aims to deepen our understanding of how this content affects user behavior. The project makes an important contribution to analyzing and combating online toxicity. More information can be found on the project page or in the BIDT press release.

 

With the aim of tackling harmful discourse online, we joined forces with the Bavarian State Ministry of Justice to build a community of practice. In a series of Reboot Social Media Hub events, we brought together academics and practitioners working on hate speech and other forms of hateful content online to discuss current problems and potential solutions.

Opening the dialogue, our panel discussion with Teresa Ott (Hate Speech Officer at the Attorney General's Office), Anna Wegscheider (lawyer at HateAid), and Svea Windwehr (policy analyst at Google), moderated by Georg Eisenreich, MdL (Bavarian State Minister of Justice) and Urs Gasser (TU Munich), gave the roughly 100 guests in attendance deeper insights into the current state of the EU's Digital Services Act and its impact on prosecutors, platforms, and users.

Key insights from the discussion

While EU-wide harmonization through the DSA was seen as having great potential, it still falls short of the NetzDG in some respects, such as the lack of deletion periods or of concrete details on enforcement for hate speech violations. It was therefore considered critical to explore ways to guarantee that the stronger and more actionable aspects of the NetzDG remain available even once the DSA, with its more ambiguous provisions, is put in place.

In general, the panelists noted that the internal content moderation processes of major platforms remain a "black box" for both law enforcement and victims of online hate. There was broad consensus that this could and should be improved through expanded cooperation and communication between stakeholders from civil society, public agencies, and platform operators.

A final point was raised regarding public awareness of hate speech. Only 2 out of 10 online offenses are currently reported. To increase reporting and prosecution of online hate, awareness of digital violence must be raised further, not only among the population but also within the judiciary and law enforcement. As the number of reported cases grows, however, the responsible authorities will also need additional resources to prosecute these instances.

Session 1 focused on the latest insights on hate speech, incivility, and misogyny in online discourse. Based on inputs from Yannis Theocharis, Janina Steinert, and Jürgen Pfeffer (all TU Munich), participants discussed trade-offs between the need for different forms of content moderation and freedom of speech as a fundamental norm. While there was consensus that a better understanding of the "grey zones" of hate speech, and of how to deal with them, is needed, it was also clear that some types of online behavior should not be normalized. It was further stressed that online hate is spread by a comparatively small number of users who are extremely loud and therefore gain a lot of traction. This, in turn, has implications for whom policies regulating harmful online content should target: the few haters or the broad mass of users.

Session 2 dealt with questions related to the implementation of the Digital Services Act concerning online hate. Following inputs from Till Guttenberger (Bavarian State Ministry of Justice) and Teresa Ott (Hate Speech Officer at the Attorney General's Office), participants debated how to keep effective measures of the NetzDG alive once the Digital Services Act is enacted. A core topic was how future institutions and mechanisms should be designed. Participants also discussed how best to raise awareness among victims and the public of ways to report hate speech.

Session 3 looked beyond the law to address uncivil discourse online. Christian Djeffal (TU Munich) spoke about content moderation together with users, while Sandra Cortesi (University of Zurich & Harvard University) gave an overview of how to equip children with the skills to navigate online discourses on social media. The big questions focused on finding the sweet spot between education and regulation (which is probably not an either/or) and on who is best positioned to create educational content, highlighting that all relevant actors need to be on board.

Partners & organization

The events were jointly organized by the TUM Think Tank, the Professorship for Public Policy, Governance and Innovative Technology, and the Bavarian State Ministry of Justice. Our special thanks go to our panelists Teresa Ott, Svea Windwehr, and Anna Wegscheider for sharing their expertise. We thank all participants for their input and engaged discussions, and we look forward to continuing the conversation we have started.

Impressions of the workshop © Thomas Gunnar Kehrt-Reese
