TUM Think Tank
Where today's societal challenges meet tomorrow's technological excellence.
Generative AI is reshaping governance. Quantum computing is redefining security. These technologies are advancing at a pace that challenges even the most agile regulatory systems. While they offer unprecedented opportunities—streamlining public administration, enhancing decision-making, and revolutionizing data security—they also introduce complex dilemmas. Who sets the rules? How do we ensure accountability? And can governance frameworks keep up?
At the Frontier Technologies – Governance Frontiers workshop in Singapore, co-hosted by the Technical University of Munich (TUM), Konrad-Adenauer-Stiftung (KAS), TUM Asia, and the TUM Think Tank, these questions took center stage. Experts from academia, industry, and government came together to discuss the governance frontiers of AI and quantum technology—not as abstract future concerns, but as urgent challenges already shaping policy and society today.
The discussions made one thing clear: governance is not a static framework—it is an evolving process that must adapt to technological change. Different governance models are emerging across the globe, each with distinct implications. The "Brussels Effect" sees stringent EU regulations influencing global standards, setting clear legal boundaries for AI and digital governance. The "Silicon Valley Effect" takes a different approach, prioritizing self-regulation and market-driven innovation. Meanwhile, the "Beijing Effect" represents a state-led model, where centralized oversight plays a dominant role. These paradigms frame the challenge ahead: How do we govern frontier technologies in ways that are effective, ethical, and globally coherent?
Generative AI
When it comes to Generative AI, governments are beginning to integrate AI-driven solutions into public administration, but concerns around bias, accountability, and regulatory fragmentation remain. Singapore’s National AI Strategy, first introduced in 2019 and recently revised, reflects an approach that seeks to balance innovation with safeguards. The AI Verify Toolkit, an open-source platform, has emerged as a key initiative—allowing businesses and regulators to assess AI systems for compliance with ethical and regulatory standards. But is voluntary governance enough? The debate continues between those advocating for industry-led guidelines and those calling for statutory regulation. While Singapore leans toward a hybrid approach—encouraging voluntary adoption with a roadmap for eventual regulation—the EU’s AI Act takes a more prescriptive stance, enforcing strict compliance from the outset.
Beyond regulation, AI also raises questions about digital trust and cultural representation. Large AI models are predominantly trained on Western datasets, risking the marginalization of linguistic and cultural diversity. Singapore is responding by investing in AI projects that preserve dialects and ensure national identity is embedded in AI applications. This signals a broader issue: If AI is shaping the way societies function, who gets to define its values?
Quantum technology
Quantum technology introduces an entirely different governance challenge. While still in its early stages, quantum computing has the potential to disrupt encryption, security, and geopolitical power structures. Singapore’s National Quantum-Safe Network (NQSN) stands as a testbed for quantum encryption, leveraging the country’s centrally managed fiber infrastructure to experiment with quantum-safe communication standards. But governance cannot stop at the national level. The race for quantum supremacy—driven by the US, China, and the EU—raises urgent questions about international cooperation. Should institutions like the United Nations or World Trade Organization play a role in setting quantum governance frameworks? Or will fragmented national strategies dominate, creating regulatory inconsistencies that could impact global security?
As AI and quantum computing continue to evolve, governance must remain proactive, inclusive, and forward-thinking. The workshop underscored the need for cross-regional collaboration and interdisciplinary engagement—ensuring that governance innovation is not just a response to technological change, but an active force in shaping a sustainable, democratic, and equitable future.
We often assume that AI can be trained to be neutral, ethical, and aligned with human values. However, a growing body of research suggests this goal may be impossible, including new findings from the Civic Machines Lab.
At the Participatory AI Research & Practice Symposium (Sciences Po) at the Paris AI Action Summit, the Civic Machines Lab presented research on a fundamental challenge in AI alignment:
The Problem: AI Alignment Overlooks Value Pluralism
AI alignment refers to teaching AI to align with human values—but which values? Society is far from unified, and different demographic groups perceive AI responses differently.
The Civic Machines Lab introduces a new alignment dataset that assesses AI-generated responses across five key dimensions:
- Toxicity
- Emotional awareness
- Sensitivity & openness
- Helpfulness
- Stereotypical bias
Their study, based on 1,095 participants in Germany and the US, revealed stark differences in how people rate AI’s behavior (a short, illustrative analysis sketch follows the list below):
- Men rated AI as 18% less toxic than women did.
- Older participants (51-60) found AI responses 40.6% less helpful than younger ones.
- Rather Conservative and Black/African American participants rated AI as significantly more sensitive and emotionally aware (+27.9% and +58.2% respectively).
- Country of origin alone had no major effect—ideology, gender, ethnicity and age mattered more.
- Social and political backgrounds shape how people rate the same AI response, complicating universal standards.
- Individual participants often gave conflicting ratings, showing that AI alignment is complex and that its value dimensions frequently overlap.
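For readers who want to see what this kind of subgroup comparison looks like in practice, here is a minimal, purely illustrative Python sketch. The toy data, column names, and percentage calculation are our own assumptions; they are not the Civic Machines Lab's dataset or analysis code.

```python
# Illustrative only: the data, column names, and groupings are hypothetical
# assumptions, not the Civic Machines Lab's published data or analysis code.
import pandas as pd

# Each row: one participant's rating of one AI-generated response (toy data).
ratings = pd.DataFrame({
    "gender":      ["male", "female", "male", "female"],
    "age_group":   ["18-30", "51-60", "51-60", "18-30"],
    "toxicity":    [2.1, 2.8, 2.3, 2.6],
    "helpfulness": [4.0, 2.5, 2.8, 4.2],
})

def percent_difference(df, group_col, group_a, group_b, rating_col):
    """Relative difference (in %) between two groups' mean ratings."""
    mean_a = df.loc[df[group_col] == group_a, rating_col].mean()
    mean_b = df.loc[df[group_col] == group_b, rating_col].mean()
    return (mean_a - mean_b) / mean_b * 100

# How differently do men and women rate toxicity, on average?
toxicity_gap = percent_difference(ratings, "gender", "male", "female", "toxicity")

# How differently do older and younger participants rate helpfulness?
helpfulness_gap = percent_difference(ratings, "age_group", "51-60", "18-30", "helpfulness")

print(f"Toxicity gap (men vs. women): {toxicity_gap:+.1f}%")
print(f"Helpfulness gap (51-60 vs. 18-30): {helpfulness_gap:+.1f}%")
```

Even on toy data like this, the same response can look quite different depending on who is doing the rating, which is exactly the pluralism problem the study highlights.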
What This Means: There May Be No “Perfectly Aligned” AI
These findings challenge the idea that AI alignment is simply a technical issue. If different demographic groups disagree on what AI should be, how do we decide what alignment means in the first place?
At the Paris AI Action Summit, experts debated key governance challenges:
- How should participatory AI governance work?
- Who benefits most from AI that aligns with dominant perspectives?
- Can AI ever truly be unbiased, or will it always reflect social and ideological divides?
What’s Next?
The conversation on AI alignment is far from over. If AI is trained to align with dominant perspectives, does it risk marginalizing others? Should AI systems aim for majority agreement, inclusive balance, or something else entirely?
Takeaways from the AI Action Summit in Paris
The AI Action Summit in Paris provided a platform for critical discussions on the intersection of AI, governance, and societal impact. The Participatory AI Research & Practice Symposium at Sciences Po centered on AI development and governance, with an emphasis on including diverse and underrepresented voices. Key themes included participatory governance, power dynamics, and accountability in AI.
Takeaway: Participatory perspectives on AI can have transformative effects and support underrepresented voices. Nonetheless, there are serious obstacles to actually implementing these perspectives, because they are often overlooked by policymakers and industry.
The AI & Society House event, organized by Humane Intelligence alongside the Paris AI Action Summit, brought together global leaders from civil society, industry, and government. The discussions addressed pressing challenges in AI ethics, safety, and responsible technology. Panels covered topics such as combating online hate, generative AI and gender-based violence, and the role of journalists in evaluating AI's societal impact.
The Inaugural Conference of the International Association for Safe and Ethical AI focused on building consensus about the short- and long-term dangers of AI. It brought together experts in AI safety and AI ethics, as well as policymakers and civil society organizations, and discussed formats for collaboration and joint action.
Takeaway: Although there are many safety considerations around AI, safety received comparatively little attention from policymakers during the summit.
The AI Verify Foundation and MLCommons brought together leading practitioners to explore the future of AI testing, AI assurance, and AI safety. They presented the newly launched AILuminate benchmark, which sparked interesting discussions among compliance companies, NGOs, and safety scientists.
Takeaway: There is an ecosystem interested in developing standards and formalizing processes towards safer AI.
The Business Day at the AI summit was a venue where startups and big corporations gathered to exchange ideas on innovation, with companies from across Europe and the world. A highlight was the talks by Yann LeCun and Sam Altman, who shared differing views about the future of AI.
Takeaway: There is a tension between open and closed source, and over who will shape the future of AI and how. Corporate and geopolitical incentives are increasingly influencing decisions on innovation and growth.
Main day of the AI Action Summit: France and Europe announced increased funding and focus on AI, and a major public AI infrastructure initiative was launched. Academics, AI experts, corporations, and civil society exchanged views on a wide range of topics.
Takeaway: Europe is trying to catch up with other economies. Less focus was placed on safety and more on becoming relevant in market and geopolitical terms.
These events emphasized the importance of inclusive, ethical, and responsible AI practices, providing a platform for meaningful contributions and engagement with a diverse community.
Keynotes, panel discussions, mini-workshops, and networking breaks brought together diverse stakeholders from academia, civil society, industry, and government. Together, we explored the latest research on harmful online content and brainstormed actionable strategies to address these pressing challenges.
We focused on two central questions:
1. What do we know about the challenges of hate speech and mis- and disinformation online, and how can we best approach them?
2. What do we know about effective solutions, strategies, and tools to combat these issues?
Some key takeaways from the two-day workshop:
Consistency Matters: Clear and consistent definitions of harmful content, like hate speech, are essential for guiding action. However, finding the best approach to achieve this remains an open challenge.
Focused Interventions: It’s important to differentiate between “harmful but lawful” content and “illegal” content to enable targeted and effective interventions.
Effective Countermeasures: Counterspeech and content moderation are powerful tools but must be implemented thoughtfully. It’s crucial to base intervention strategies on solid evidence to ensure they lead to meaningful impact.
Access to Data: Limited access to platform data remains a barrier, hindering our ability to comprehensively study harmful content, such as misinformation, and its real-world effects.
Technological Trade-offs: While algorithmic changes and other tech-driven solutions can help reduce misinformation, they often come with trade-offs, like reduced access to political news or decreased diversity in discourse.
Adapting Moderation: Looking forward, we need to consider moderation tools that can keep pace with the growing volume and speed of online content production.
The workshop was co-organized in collaboration with the Bavarian Regulatory Authority for New Media (BLM), the Bavarian State Ministry of Justice (StMJ), the Institute for Strategic Dialogue Germany (ISD), das NETTZ, the Bavarian Research Institute for Digital Transformation (bidt), and the Content Moderation Lab at the TUM Think Tank.
How can we pave the way for a bright quantum future?
Last Thursday, PushQuantum started its semester program in collaboration with the Quantum Social Lab. We were pleased to welcome Ilyas Khan to the TUM Think Tank for this special day. The Founder and CPO of Quantinuum not only joined a panel discussion but also spent some quality time with the student club’s members.
One event highlight was the panel discussion, which centered around the future of quantum technologies. Ilyas Khan was joined by Robert Wille, professor at TU Munich and Chief Scientific Officer at the Software Competence Center Hagenberg, and Fabienne Marco, Head of the Quantum Social Lab and Project Lead of QuantWorld.
The panelists shared their perspectives on the future of quantum technologies, drawing from diverse backgrounds spanning the quantum computing industry, mathematics, political science, and software development.
The panel moderator, Alexander Orlov (PushQuantum), challenged the panelists with questions ranging from anticipated advancements and major challenges in quantum computing, including education and policymaking, to personal experiences. He also elicited valuable advice from the panelists on how students can accelerate their careers in quantum-technology-related fields in an impactful way.
Key takeaways included the importance of learning from past mistakes, finding suitable approaches for education within quantum technologies (and specifically quantum computing), and the need for interdisciplinary work in the field of quantum technology.
When asked about the future of quantum and which factor will be crucial for further rapid development, Ilyas Khan suggested shifting the focus from coherence times to entropy: ‘I would say that of all the things that matter in the context of possible advantage, I’d encourage you to think of cross-entropy because it can be validated or not. [...] And I think this year, we will incontrovertibly pass that threshold. There is zero question in my mind.’
On education and future talent, Fabienne Marco noted that the ‘current challenge lies in matching the needs for different parts of the society and a more holistic and interdisciplinary approach within education.’
The panelists agreed on the importance of shifting the media narrative from fear to the opportunities connected with quantum technology.
Robert Wille pointed out how important it is to distinguish who is best equipped to solve different challenges: ‘When it comes to quantum computing software, computer scientists have to lead the way.’ Drawing from his personal experience, he also emphasized the value of occasionally challenging yourself by entering new scientific bubbles to share solutions and avoid reinventing what other disciplines have already built.
Thanks to all the panelists for the insightful discussion!
PushQuantum is a Munich-based student club that offers real-world-focused education in quantum tech. Within the TUM Think Tank's Public Policy Impact Program, PushQuantum focuses on actively contributing to shaping a responsible quantum future - a vision shared with the Quantum Social Lab.
On February 16, the TUM Think Tank hosted a fireside chat featuring Sir Nick Clegg, President of Global Affairs at Meta. In conversation with Urs Gasser, rector of the Munich School of Politics and Public Policy (HfP), Sir Nick Clegg shared his perspectives and insights on a diverse set of topics at the intersection of artificial intelligence (AI) and innovation, particularly from a European standpoint.
The wide-ranging discussion touched upon the transformative potential of AI technologies, dissecting their implications across various sectors both within Europe and globally. Sir Nick Clegg, a leading figure in the technological landscape, shed light on Europe's unique contributions and regulatory considerations concerning AI and the metaverse.
Here are a few of the key points made during the fireside chat:
1. The countries that will most benefit from AI technology are those that can deploy it quickly and effectively, not necessarily the ones that develop it. Geopolitical discussions about AI are moving from attempts to control access to recognizing the inevitability of its widespread adoption. This shift is exemplified by companies like Meta open-sourcing their large language models (LLMs), indicating a trend towards sharing technology to maximize its use, rather than gatekeeping it.
2. Realizing the full potential of AI technology requires international cooperation, ideally among techno-democracies like the EU, USA, and India. Despite political challenges and varying approaches to technology policy, collaboration on research and policy could significantly advance AI's positive impact, particularly in fields like health and climate change.
3. While AI's dual-use nature means it can be used for both beneficial and harmful purposes, particularly in generating realistic misinformation, ongoing efforts by tech companies to identify and label AI-generated content are crucial. Cooperation among major players to establish standards and responsibilities for AI-generated content can empower users to discern and mitigate misinformation.
4. The narrative that technology, including AI, is inherently detrimental to democracy is challenged by historical context and empirical research. Concerns about technology's impact are often exaggerated, and while it's essential to develop ethical guardrails alongside technological advancements, the relationship between technology and societal change is complex and not inherently negative.
5. In discussions about AI, we often sensationalize its dangers, treating scenarios like the Terminator as relevant and fearing AI's replacement of humans. This tendency stems from anthropomorphizing AI, attributing human-like qualities to it, resulting in misplaced concerns. Instead, AI should be viewed as a tool that excels in certain tasks, much like a fast-driving car. Moreover, there's a pattern where new technologies are exaggerated by both proponents and opponents, as seen historically with radio. Currently, AI's capabilities are overestimated, sparking moral panic and defensive regulations, detracting from the core question, which is how to effectively utilize it.
6. Companies like Meta, whose services are used by 4 billion people per day, bear significant responsibility, which they must acknowledge. We need guardrails that are not developed solely by tech companies but emerge from collaboration between government and society. It is not ideal that guardrails are developed 20 years after the technology itself, as we have seen with social media. Ideally, regulation should happen concurrently.
This is what our attendees had to say about the event:
Sofie Schönborn, PhD Student at HfP:
“I am delighted to witness the diverse array of individuals who have found their way here today. Here, students converge with industry leaders from the technology sector, alongside scholars from TUM and forward-thinking figures from the public sphere. The TUM Think Tank emerges as a vibrant hub, a melting pot of ideas, and a diverse cohort of individuals committed to technology, society, and democracy. This is the place to have the conscious discussions and joint deliberation about societal and political implications of technologies, about responsibility and potential futures ahead of us... and to collaborate to enable human-centric technology co-creation and co-design!”
PhD Student at HfP:
“As a researcher, engaging with leading practitioners in the field has been hugely rewarding. It provides me with direct access to valuable first-hand information and proved helpful to complement empirics for my research during follow-ups with them. Personally, their careers inspire me and I am already looking forward to our next guests at the TUM Think Tank.”
Franziska Golibrzuch, Master Student at HfP:
“It was extremely insightful to listen to such an expert – Sir Nick Clegg gave us the industry perspective whilst having an extensive background in government. Especially in the case of AI and within the current debate about AI regulation, security etc. this has been a great opportunity for us TUM students. All in all, it was a super interesting event which finds many applications to my studies because it puts the intersection of technology and politics in the center of the discussion and sheds, time and again, light on the critical and important overlaps in innovation, society and public policy realm. Even after the fireside chat I had the chance to pose questions, which I appreciate a lot.”
Thank you to the Meta team for making this fireside chat possible, and to everyone who took part and asked thought-provoking questions.
Sir Nick Clegg is President, Global Affairs at Meta. He joined the company, then called Facebook, in 2018 after almost two decades in British and European public life. Prior to being elected to the UK Parliament in 2005, he worked in the European Commission and served for five years as a member of the European Parliament. He became leader of the Liberal Democrat party in 2007 and served as Deputy Prime Minister in the UK's first coalition government since the war, from 2010 to 2015. He has written two best-selling books, Politics: Between the Extremes and How to Stop Brexit (And Make Britain Great Again).
A two-day workshop bringing together experts in the field
Content moderation and free speech in the digital realm - and how to balance them - are key topics for researchers, philosophers, public officials, NGOs, and, of course, social media platforms and users. At the TUM Think Tank, we had the pleasure of hosting a number of international experts in this field. The group came together for two full days focused on analyzing this pressing issue, exchanging ideas, and presenting empirical research from the perspectives of governance, industry, and political behavior.
From ideological biases in content moderation and the politics of platform regulation to citizens’ preferences on how harmful online speech can be curbed and regulated, and the efficacy of labeling content as AI-generated, the workshop covered a wide range of topics, stressing the need for a transnational conversation about content moderation.
Panel discussion
In a thought-provoking panel together with Benjamin Brake (Federal Ministry of Digital Affairs and Transport), Friedrich Enders (TikTok Germany), Andreas Frank (Bavarian Ministry of Justice), and Ruth Appel (Stanford University), we discussed the complexities of defining harmful speech and taking action against it, how platforms are audited and how they balance transparency with user privacy and free expression when it comes to content moderation decisions.
The conversation centered on the division of responsibility for content moderation and the transparency of enforcement among the key stakeholders involved. It was noted that while German authorities oversee smaller platforms under the Digital Services Act (DSA), the European Commission is directly responsible for very large platforms such as X or TikTok.
- While the DSA’s definitions and guidelines leave some vagueness about precisely how tech companies should deal with harmful speech, a common theme in the discussion was the necessity for transparency in content moderation decisions and the need to always take context into consideration. From the conversation, this vagueness can give tech companies and governments flexibility in how they respond to harmful speech. Researchers, on the other hand, pointed out that it can also be problematic, especially for precise detection through automated methods.
- In addition, Friedrich Enders shed light on TikTok's content moderation process. The platform uses a combination of AI and human review to quickly remove harmful content. Conscious of the fact that some harmful content, such as graphic material, may still be in the public interest, the platform may keep such content available for documentary, educational, and counter-speech purposes, but it is ineligible for recommendation to users in TikTok’s For You feed (a simplified sketch of this decision logic follows the list).
- The panel also highlighted the challenge of balancing freedom of expression, user privacy, and user safety. TikTok stressed its commitment to these principles, while the government representatives advised that, in doubtful borderline cases, moderation should err on the side of freedom of expression.
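To make the tiered logic described on the panel more tangible, here is a deliberately simplified, hypothetical sketch. The categories, fields, and function are our own illustrative assumptions and do not represent TikTok's actual systems or policies.

```python
# Hypothetical illustration of a tiered moderation decision, loosely inspired
# by the policy described on the panel; not any platform's real implementation.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"                         # violates rules, taken down
    KEEP_NO_RECOMMEND = "keep_no_recommend"   # stays up, but excluded from feed recommendations
    ALLOW = "allow"                           # fully eligible for recommendation

@dataclass
class Post:
    is_harmful: bool       # flagged by AI classifiers and/or human review
    public_interest: bool  # documentary, educational, or counter-speech value

def moderate(post: Post) -> Action:
    """Simplified three-way decision: remove, keep but don't recommend, or allow."""
    if post.is_harmful and not post.public_interest:
        return Action.REMOVE
    if post.is_harmful and post.public_interest:
        return Action.KEEP_NO_RECOMMEND
    return Action.ALLOW

# Example: graphic but newsworthy content stays up yet is not recommended.
print(moderate(Post(is_harmful=True, public_interest=True)))  # Action.KEEP_NO_RECOMMEND
```

Real moderation pipelines are far more nuanced, but even this toy version shows why context matters: the same "harmful" flag can lead to very different outcomes.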
The workshop was jointly organized by the Chair of Digital Governance at the Munich School of Politics and Public Policy, the University of Oxford, the Technical University of Munich, and the Reboot Social Media Lab at the TUM Think Tank.
We had the privilege of participating in the Citizens.Lab at the IAA with our project "Mobilität.Leben." Over the past year, the team has looked into the mobility data of over 3000 participants to measure the impact of the 9-Euro-Ticket and its successor, the Deutschlandticket. Here, we share our key takeaways one year into the study:
Driving Change with Tickets
The 9-Euro-Ticket and the Deutschlandticket have played a significant role, especially during the summer months, in encouraging people to ditch their cars and opt for more sustainable public transportation options at least for some of their journeys.
Transition Challenges
However, in between the 9-Euro-Ticket and Deutschlandticket, without attractive public transportation tickets, participants returned almost to their pre-9-Euro-Ticket travel behavior. While every small step towards sustainability is valuable, we do not expect that the Deutschlandticket alone can substantially reduce the carbon emissions from the transportation sector.
Looking Ahead
A realistic cost-benefit analysis of the public transport fare innovations can be made at the end of the year. What is undeniable is that both tickets have already simplified the fare system, encouraged public transport use, and reduced travel costs. It is clear that cheaper public transportation is just one piece in the larger puzzle of the “Mobilitätswende”, and so the discussion continues on what additional measures are needed to encourage more sustainable modes of transport.
Sparking reflection
The study was well received by the participants, who willingly shared their mobility data from the outset and remained committed to us until the introduction of the Deutschlandticket. Their feedback has been invaluable, and we're thrilled to hear that our tracking app has encouraged reflection on personal mobility habits. The study can also be seen as a signal that citizens like to be included in research projects and are eager to be part of the collective search for possible solutions to the grand societal challenges of our time.
Our study is one of the largest of its kind worldwide. It has not only advanced our understanding of mobility but also captured the attention of the global transportation research community, strengthened Munich’s position on the global research map, and set new standards. We are proud that it has attracted public attention in regional and national media and has already yielded several academic papers as well as bachelor’s and master’s theses – and more research is coming, with several doctoral candidates using the data for their studies.
On Friday, June 23rd 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe’s approach to AI governance, and in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and member of our Gen AI Taskforce gave a short input on this issue. In this blog post, he shares his key takeaways.
The AI Act of the European Union could serve as an effective balancing act between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:
- Preserving the AI Act's lean, clear approach and avoiding over-regulation of too many topics through this Act. It is worth noting that clear rules can be assets for innovation, minimizing liability risks and enabling even smaller entities to venture into risky areas.
- It is undeniable that regulation imposes costs on developers. However, there are numerous strategies to alleviate these costs. Establishing infrastructures for legal advice and knowledge organization could prove invaluable, especially for start-ups.
- Implementing frameworks like responsible research, human-centered engineering, and integrated research can position legal regulation as an integral part of the innovation journey. This mindset lets developers incorporate legal, ethical, and societal considerations early on, enhancing their products and turning potential challenges into opportunities.
Therefore, the AI Act has potential with its blend of a broad framework, sector-specific regulation, and enforcement. However, the finer details need refining for it to truly thrive. A key area of focus should be honing the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.
Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. Current proposals around the AI Act are mostly about establishing general structures as well as coordination and cooperation among member states. The success of these sandboxes relies heavily on their effective rollout by those states. Interestingly, other European legal developments, like the Data Governance Act, which allows for protected data sharing, might be game-changers, possibly boosting sandboxes to an entirely new level, since they would also allow the sharing of data that is protected under data protection or intellectual property law.
If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.
At the TUM Science Hackathon 2023, a team of computer science students took on the challenge of developing a social media platform that provides users with enhanced transparency and control over the content shown in their feed. They discussed what kind of information users need to better understand how content is personalized on social media and drew up ways for users to modify the content shown to them. Based on these ideas, they designed their prototype ‘openLYKE’ – a social media platform which implements additional features for users to tweak the underlying recommendation algorithm.
From May 19 to 21, the TUM: Junge Akademie hosted the TUM Science Hackathon 2023 on trustworthy systems. In nine challenges submitted by partners from TUM and external organizations, students from various disciplines joined forces to develop technologies that are safe, reliable, transparent and deserve users’ trust. The challenges tackled a variety of problems ranging from spacecrafts and crash detection to material science and AI ethics. One of the challenges was submitted by the REMODE project of the Professorship of Law, Science and Technology in the context of the Reboot Social Media Lab of the TUM Think Tank. In their challenge titled “Trustworthy Recommender Systems”, students were asked to develop a prototype of a social media platform with enhanced options for users to control their social media experience by modifying the content shown to them. Building on the new requirements for online platforms laid down in the EU Digital Services Act (2022), the challenge aimed for recommender systems that enable users to better understand and manipulate the main parameters used to personalize content online.
In particular, opaque algorithms and misleading design patterns (dark patterns) were to be avoided. In this way, the challenge sought to promote trust-by-design and facilitate more responsible recommender systems on social media.
A key takeaway from the Science Hack was the importance of keeping the technical feasibility in mind when developing innovative solutions and features for social media services. While working on their prototype, the students continuously reflected on how their ideas could be implemented into their recommendation algorithm: What kind of data about each post would be needed? How could users’ preferences be translated into algorithmic language? By staying close to the technology, the students managed to successfully design not only the front end (user interface) of their prototype but also the underlying back end (software) for processing data and recommending content.
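As a purely illustrative sketch of the idea behind such a prototype, the snippet below shows how user-adjustable weights might be applied to simple post features when ranking content. The feature names, weights, and scoring function are our own assumptions and are not the openLYKE team's actual implementation.

```python
# Illustrative sketch only: feature names, weights, and scoring are assumptions,
# not the openLYKE prototype's actual back end.
from dataclasses import dataclass

@dataclass
class Post:
    recency: float      # 0..1, newer posts score higher
    popularity: float   # 0..1, normalized engagement
    topic_match: float  # 0..1, similarity to topics the user follows

def score(post: Post, weights: dict[str, float]) -> float:
    """Weighted sum of post features; the user controls the weights."""
    return (
        weights.get("recency", 1.0) * post.recency
        + weights.get("popularity", 1.0) * post.popularity
        + weights.get("topic_match", 1.0) * post.topic_match
    )

def recommend(posts: list[Post], weights: dict[str, float], k: int = 10) -> list[Post]:
    """Return the top-k posts under the user's current weight settings."""
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)[:k]

# Example: a user who down-weights popularity and prioritizes topical relevance.
user_weights = {"recency": 0.5, "popularity": 0.1, "topic_match": 1.0}
feed = recommend([Post(0.9, 0.2, 0.8), Post(0.4, 0.9, 0.1)], user_weights, k=2)
print(feed)
```

Exposing such weights directly in the interface is one simple way to turn the DSA's requirement to disclose and let users modify the "main parameters" of recommender systems into something users can actually see and change.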
The challenge “Trustworthy Recommender Systems” was posed by the REMODE project team consisting of Prof. Christian Djeffal (principal investigator), Daan Herpers (research associate) and Lisa Mette (student assistant), who also supervised the student team during the hackathon.
Thank you to the openLYKE team (Adrian Averwald, Finn Dröge, Thomas Florian, Tim Knothe) for participating and the Junge Akademie for organizing the TUM Science Hack 2023.
Generative AI and its fast developments are baffling experts, policymakers, and the public alike. We have spent the last week at the Berkman Klein Center for Internet & Society at Harvard University, attending the "Co-Designing Generative Futures – A Global Conversation About AI" conference. After having talked with experts, decision-makers, and stakeholders from diverse fields and disciplines around the world, we came back with new questions and “mixed feelings” about the impact of generative AI: it is wicked, disruptive, complex, and ambiguous.
As we conclude this conference week, we are sure that capacity building is the answer to the duality between the incredible possibilities generative AI holds and the threats it poses. As we think about governance and implementations, let's dive into six things you can already do - and which we are fostering at TUM Think Tank - to shape the future of generative AI:
1. Implement ethical guidelines: Promote the development and adoption of ethical frameworks that prioritize transparency, fairness, and accountability in the use of generative AI.
2. Collaborate across disciplines: Foster collaboration between technologists, policymakers, ethicists, and diverse stakeholders to collectively address the challenges and risks associated with generative AI.
3. Research and development: Support research initiatives that focus on responsible AI, including bias mitigation, privacy preservation, and effective detection of generated content.
4. Educate and raise awareness: Share knowledge and raise awareness about the opportunities and challenges of generative AI, empowering individuals and organizations to make informed decisions.
5. Champion diversity and inclusion: Encourage diverse representation and inclusivity in the development and deployment of generative AI systems to mitigate biases and ensure equitable outcomes.
6. Boost positive impact on the economy and education: Support businesses in picking up the highly innovative possibilities of generative AI. Strengthen our education system so that it not only makes use of the technology but also supports skill development.
The connections made, insights shared, and ideas generated during this week are great examples of collective capacity building. The conference was hosted in collaboration with the Global Network of Internet & Society Research Centers, BI Norwegian Business School, Instituto de Tecnologia e Sociedade (ITS Rio), and the TUM Think Tank.