
Empirical Research on Content Moderation, Platforms, and Free Speech
Advancing evidence-based research on digital speech and platform governance to shape safer, more democratic online spaces.
The Content Moderation Lab explores how digital platforms regulate online speech—and how these choices influence user behavior, public attitudes, and democratic engagement.
Grounded in real-world data and cross-national perspectives, the Lab bridges research and policy to support more transparent, inclusive, and accountable content moderation.
About
Online speech is central to democratic life, yet what counts as acceptable content is hotly debated. Too often, global discussions focus on laws and platform rules, overlooking the voices of users.
The Content Moderation Lab exists to re-center that conversation—bringing public opinion, civil society perspectives, and data-driven research into the heart of content governance.
Why we exist
The Content Moderation Lab takes a comparative, user-centered, and data-driven approach to studying how platforms govern online speech—and how these decisions shape public opinion, user behavior, and democratic participation.
We use a mix of methods, including large-scale surveys, experimental research, and digital trace data, to investigate both platform practices and citizen responses across diverse legal and cultural contexts.
The Lab is a joint initiative between the TUM and Oxford and works in close collaboration with civil society organizations and public institutions, including the Bavarian Ministry of Justice and Meldestelle REspect!
Through interdisciplinary research and policy engagement, we aim to generate insights that support more transparent, inclusive, and democratically grounded content governance.
What we do
Research: The lab conducts interdisciplinary research to understand how citizens perceive and engage with content moderation, with a special focus on free speech attitudes. Based across TUM and Oxford, the team produces academic papers and policy reports that bridge theoretical insight with real-world application.
Data: The lab’s work is grounded in unique data sources, including a global survey on public attitudes and real-world reports of hate speech. Collaborating with civil society partners, the lab generates empirical insights into user experiences, reporting behaviors, and online toxicity.
Policy: The Content Moderation Lab translates research into actionable policy advice by working with regulators, justice ministries, and platform stakeholders. Through partnerships with institutions like the Bavarian Ministry of Justice, we explore how AI and evidence-based strategies can improve the detection and moderation of harmful content.
Public Awareness: To foster informed debate, the lab shares its findings through public panels, policy briefs, and multi-stakeholder workshops, ensuring that key insights reach decision-makers, regulators, and civil society actors.
How to get engaged
Join our events and workshops, explore our latest research, or collaborate as a policymaker, civil society partner, or academic.
The Content Moderation Lab welcomes contributions from students and early-career researchers and actively partners with other Think Tank Labs to explore the societal implications of digital technologies. For more information, please reach out to fernanda.sauca@tum.de.