TUM Think Tank
Where today's societal challenges meet tomorrow's technological excellence.
We often assume that AI can be trained to be neutral, ethical, and aligned with human values. However, a growing body of research suggests this goal may be impossible, including new findings from the Civic Machines Lab.
At the Participatory AI Research & Practice Symposium (Sciences Po) at the Paris AI Action Summit, the Civic Machines Lab presented research on a fundamental challenge in AI alignment:
The Problem: AI Alignment Overlooks Value Pluralism
AI alignment refers to teaching AI to align with human values—but which values? Society is far from unified, and different demographic groups perceive AI responses differently.
The Civic Machines Lab introduces a new alignment dataset that assesses AI-generated responses across five key dimensions:
- Toxicity
- Emotional awareness
- Sensitivity & openness
- Helpfulness
- Stereotypical bias
Their study, based on 1,095 participants in Germany and the US, revealed stark differences in how people rate AI’s behavior:
- Men rated AI as 18% less toxic than women did.
- Older participants (51-60) found AI responses 40.6% less helpful than younger participants did.
- Rather conservative and Black/African American participants rated AI as significantly more sensitive and emotionally aware (+27.9% and +58.2%, respectively).
- Country of origin alone had no major effect; ideology, gender, ethnicity, and age mattered more.
- Social and political backgrounds shape how people rate the same AI response, complicating any universal alignment standard.
- Participants' ratings frequently conflicted, showing that AI alignment is complex and that its value dimensions often overlap.
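The kind of group-level comparison behind these findings can be sketched in a few lines. The data below is illustrative only (not the lab's actual dataset), and the 1-7 rating scale and group labels are assumptions for the example:

```python
from statistics import mean

# Hypothetical ratings of the same AI response on an assumed 1-7 scale.
# These values are invented for illustration, not the study's data.
ratings = [
    {"group": "men",   "toxicity": 2.1},
    {"group": "men",   "toxicity": 2.4},
    {"group": "women", "toxicity": 3.0},
    {"group": "women", "toxicity": 2.8},
]

def group_means(rows, key, dim):
    """Average one rating dimension within each demographic group."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row[dim])
    return {g: mean(vals) for g, vals in groups.items()}

means = group_means(ratings, "group", "toxicity")
# Relative gap: how much lower one group's mean rating is than another's.
gap = (means["women"] - means["men"]) / means["women"]
print(means, f"{gap:.0%}")
```

Statistics like "18% less toxic" are relative differences of this form; the study would additionally test whether such gaps are statistically significant across its 1,095 participants.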
What This Means: There May Be No “Perfectly Aligned” AI
These findings challenge the idea that AI alignment is simply a technical issue. If different demographic groups disagree on what AI should be, how do we decide what alignment means in the first place?
At the Paris AI Action Summit, experts debated key governance challenges:
- How should participatory AI governance work?
- Who benefits most from AI that aligns with dominant perspectives?
- Can AI ever truly be unbiased, or will it always reflect social and ideological divides?
What’s Next?
The conversation on AI alignment is far from over. If AI is trained to align with dominant perspectives, does it risk marginalizing others? Should AI systems aim for majority agreement, inclusive balance, or something else entirely?
Takeaways from the AI Action Summit in Paris
The AI Action Summit in Paris provided a platform for critical discussions on the intersection of AI, governance, and societal impact. The Participatory AI Research & Practice Symposium at Sciences Po centered on AI development and governance, with an emphasis on including diverse and underrepresented voices. Key themes included participatory governance, power dynamics, and accountability in AI.
Takeaway: Participatory perspectives on AI can have transformative effects and support underrepresented voices. Nonetheless, there are serious obstacles to actually implementing these perspectives, because they are often overlooked by policymakers and industry.
The AI & Society House event, organized by Humane Intelligence alongside the Paris AI Action Summit, brought together global leaders from civil society, industry, and government. The discussions addressed pressing challenges in AI ethics, safety, and responsible technology. Panels covered topics such as combating online hate, generative AI and gender-based violence, and the role of journalists in evaluating AI's societal impact.
The Inaugural Conference of the International Association for Safe and Ethical AI focused on building consensus about the short- and long-term dangers of AI. It brought together experts in AI safety and AI ethics, as well as policymakers and civil society organizations, and discussed formats for collaboration and joint action.
Takeaway: Although there are many safety considerations around AI, safety received comparatively little attention from policymakers during the summit.
The AI Verify Foundation and MLCommons brought together leading practitioners to explore the future of AI testing, assurance, and safety, and presented the newly launched AILuminate benchmark. Discussions followed with compliance companies, NGOs, and safety scientists.
Takeaway: There is an ecosystem that is interested in developing standards and formalising processes towards safer AI.
The Business Day at the AI summit brought together startups and large corporations to exchange ideas on innovation, with companies from across Europe and the world. A highlight was the talks by LeCun and Altman, who shared differing views on the future of AI.
Takeaway: There is tension between open and closed source, and over who will shape the future of AI and how. Corporate and geopolitical incentives increasingly influence decisions on innovation and growth.
Main summit day: France and Europe announced increased funding for and focus on AI, and launched a major AI public-infrastructure initiative. Academics, AI experts, corporations, and civil society exchanged views on a wide range of topics.
Takeaway: Europe is trying to catch up with other economies. Less focus was given to safety and more to becoming relevant in market and geopolitical terms.
These events emphasized the importance of inclusive, ethical, and responsible AI practices, providing a platform for meaningful contributions and engagement with a diverse community.
TL;DR
We often assume that AI can be trained to be neutral, ethical, and aligned with human values. New findings from the Civic Machines Lab, presented at the Paris AI Action Summit, suggest this goal may be impossible: demographic groups rate the same AI responses very differently, so alignment is as much a question of governance as of engineering.