Advancing evidence-based research on digital speech and platform governance to shape safer, more democratic online spaces.

The Content Moderation Lab explores how digital platforms regulate online speech—and how these choices influence user behavior, public attitudes, and democratic engagement.

Grounded in real-world data and cross-national perspectives, the Lab bridges research and policy to support more transparent, inclusive, and accountable content moderation.


About

Online speech is central to democratic life, yet what counts as acceptable content is hotly debated. Too often, global discussions focus on laws and platform rules, overlooking the voices of users.

The Content Moderation Lab exists to re-center that conversation—bringing public opinion, civil society perspectives, and data-driven research into the heart of content governance.

Why we exist

The Content Moderation Lab takes a comparative, user-centered, and data-driven approach to studying how platforms govern online speech—and how these decisions shape public opinion, user behavior, and democratic participation.
We use a mix of methods, including large-scale surveys, experimental research, and digital trace data, to investigate both platform practices and citizen responses across diverse legal and cultural contexts.

The Lab is a joint initiative of TUM and the University of Oxford and works in close collaboration with civil society organizations and public institutions, including the Bavarian Ministry of Justice and Meldestelle REspect!

Through interdisciplinary research and policy engagement, we aim to generate insights that support more transparent, inclusive, and democratically grounded content governance.

What we do

Research: The Lab conducts interdisciplinary research to understand how citizens perceive and engage with content moderation, with a special focus on free speech attitudes. Based across TUM and Oxford, the team produces academic papers and policy reports that bridge theoretical insight with real-world application.

Data: The Lab’s work is grounded in unique data sources, including a global survey on public attitudes and real-world reports of hate speech. Collaborating with civil society partners, the Lab generates empirical insights into user experiences, reporting behaviors, and online toxicity.

Policy: The Content Moderation Lab translates research into actionable policy advice by working with regulators, justice ministries, and platform stakeholders. Through partnerships with institutions like the Bavarian Ministry of Justice, we explore how AI and evidence-based strategies can improve the detection and moderation of harmful content.

Public Awareness: To foster informed debate, the Lab shares its findings through public panels, policy briefs, and multi-stakeholder workshops, ensuring that key insights reach decision-makers, regulators, and civil society actors.

How to get engaged

Join our events and workshops, explore our latest research, or collaborate as a policymaker, civil society partner, or academic.

The Content Moderation Lab welcomes contributions from students and early-career researchers and actively partners with other Think Tank Labs to explore the societal implications of digital technologies. For more information, please reach out to fernanda.sauca@tum.de.

Team

Yannis Theocharis

PI

Spyros Kosmidis

PI

Fernanda Sauca

Lab manager

Friederike Quint

Lab member

Jan Zilinsky

Lab member

Related Outputs

  • Publication

    Global Survey Shows Rejection of Unrestricted Freedom of Expression

    Most people support restricting harmful content on social media, even in countries leaning toward unrestricted free speech. However, many believe online intolerance and hatred are unavoidable. A global survey by the Content Moderation Lab at the TUM Think Tank and the University of Oxford reveals varying attitudes across ten countries.

    10. Feb 2025
  • Publication

    Best Practices for Deletion of Harmful Content on Social Media

    The policy paper argues that social media platforms should adopt a soft-deletion approach to content moderation. Instead of fully erasing harmful posts, soft deletion replaces them with notices explaining why they were removed. This maintains conversation flow and provides transparency. Affected users should also have the option to choose hard deletion. This approach promotes accountability while preserving context.

    24. Jun 2024
  • Noteworthy

    REMODE Method

    REMODE helps you redesign social media by involving the people most affected by specific harms resulting from practices such as hate speech, beauty filters, or manipulation. It is based on design thinking and optimized for effective user participation. REMODE is designed as a progressive way to conduct the participatory risk management processes required by the EU's Digital Services Act.

    30. Oct 2023
  • Publication

    Social Media and Content Moderation

    The chapter "Social Media and Content Moderation: Regulatory Responses to a Key Democratic Question" by Christian Djeffal examines the socio-technical workings of content moderation in social media and discusses possible regulatory responses. The paper shows how public debate is structured by artificial intelligence and how dynamics in social media can threaten both human rights and democratic values.

    30. Sep 2023

Related News

Recap

Reimagining Online Discourse

Keynotes, panel discussions, mini-workshops, and networking breaks brought together diverse stakeholders from academia, civil society, industry, and government for the third edition of the Facilitating Constructive Dialogue Workshop. Together, we explored the latest research on harmful online content and brainstormed actionable strategies to address these pressing challenges.

16. Jan 2025
Noteworthy

Toxic Entertainment?

This project brings together theories of entertainment, visual communication, and toxic speech to understand how and why toxicity becomes more permissible when it is masked as entertainment. It deploys AI methods to identify, classify, and map toxic entertainment at scale, qualitative methods to study its characteristics, and experimental methods to understand its effects on individual behavior.

09. Jan 2024
Recap

Workshop "Content Moderation and Free Speech on Social Media"

From the ideological biases in content moderation, the politics of platform regulation, and citizens’ preferences for online hate speech regulation, to the efficacy of labeling content as AI-generated, the workshop covered a wide range of topics, stressing the need for transnational conversations about content moderation.

30. Oct 2023