Background

The Content Moderation Lab aims to fill a large research gap in content moderation studies and thereby better inform stakeholders and policymakers. Because much of the existing research on content moderation has focused on abstract legal questions and the U.S. context, the Lab's research on global user attitudes and experiences helps ensure that policy debates rest on a broader evidence base.

A User-Centric Approach to Content Moderation

Academic research within the Content Moderation Lab serves a dual purpose: it advances scholarship while providing decision-makers in politics and industry with crucial information for crafting, implementing, and enforcing content moderation policies. This will help to ensure that policies are truly responsive to the problems faced by the public, and that solutions take public preferences into account.

Streams and Projects

Research

The research team, strategically located across Oxford and TUM, will analyze a novel dataset and conduct experiments to understand how citizens perceive and engage with content moderation, and how attitudes towards free speech moderate these perceptions. Academic papers and policy reports will combine scholarly rigor with practical implications, informing evidence-based content moderation strategies and policies and contributing to a more inclusive and responsive digital landscape.

Data

Throughout this project, research data will be sourced from two primary channels: a cross-national comparative survey and reporting data on hateful speech. Through a direct partnership with the German civil society organization Meldestelle REspect!, which collects citizen reports of hateful speech and forwards them to the authorities, the project will gain access to critical data that will allow the Lab to answer fundamental questions about both citizens' understandings of hateful speech and the reporting process. This collaboration represents a novel effort to fill existing research gaps by incorporating real-world instances of toxic speech into our research datasets.

Policy

The envisaged outcomes of the research extend beyond scholarly contributions to actively addressing the challenges faced by online platforms, regulatory authorities, and civil society organizations. By tackling existing gaps in knowledge surrounding citizen reports of toxic behavior, the project aims to provide a robust foundation for evidence-based decision-making in the realm of content moderation, contributing to a safer and more inclusive digital environment. Connecting academic research with policy will be achieved through collaboration, not only with civil society organizations like REspect!, but also through partnerships with institutions such as the Bavarian Ministry of Justice and the public prosecutor for hate speech in Bavaria. Together with these partners, the Lab will work on utilizing AI methods for the detection of hate speech, fostering innovation and efficiency in addressing this critical issue; a minimal sketch of what such detection can look like follows below.
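
To illustrate the idea, the sketch below scores example texts with a pretrained transformer classifier. It is a minimal illustration only: it assumes the open-source Hugging Face transformers library and a publicly available toxicity checkpoint (unitary/toxic-bert), and the sample reports are invented; none of this reflects the Lab's actual models, pipeline, or data.

# Minimal sketch: scoring texts with a pretrained toxicity classifier.
# Assumes `pip install transformers torch`; the checkpoint below is a
# public example, not the Lab's model, and the reports are invented.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

reports = [
    "You people should all disappear.",       # invented example report
    "I disagree with this policy decision.",  # invented benign example
]

for text in reports:
    result = classifier(text)[0]  # top label with its confidence score
    print(f"{result['label']} ({result['score']:.2f}): {text}")

In practice, scores like these would typically serve as a first-pass filter that supports, rather than replaces, human review of citizen reports.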

Public Awareness

The Lab will use policy reports, panels, and workshops to translate its findings into practical insights tailored for policymakers and regulators.


TL;DR

Through unique data science research, the Content Moderation Lab will provide actionable policy advice about content moderation to lawmakers and companies. By focusing on user attitudes and experiences, and including civil society organizations in research and dialogues, our work gives policymakers a more comprehensive understanding of which problems must be solved, and how to do so.
