
A two-day workshop bringing together experts in the field

Content moderation and free speech in the digital realm - and how to balance them - are key topics for researchers, philosophers, public officials, NGOs, and, of course, social media platforms and users. At the TUM Think Tank, we had the pleasure of hosting a number of international experts in this field. The group came together for two full days focused on analyzing this pressing issue, exchanging ideas, and presenting empirical research from the perspectives of governance, industry, and political behavior.

From ideological biases in content moderation and the politics of platform regulation to citizens’ preferences on how harmful online speech can be curbed and regulated, and the efficacy of labeling content as AI-generated, the workshop covered a wide range of topics, stressing the need for a transnational conversation about content moderation.

Panel discussion

In a thought-provoking panel with Benjamin Brake (Federal Ministry of Digital Affairs and Transport), Friedrich Enders (TikTok Germany), Andreas Frank (Bavarian Ministry of Justice), and Ruth Appel (Stanford University), we discussed the complexities of defining harmful speech and taking action against it, how platforms are audited, and how they balance transparency with user privacy and free expression in content moderation decisions.

The conversation centered on how responsibility for content moderation, and transparency about its enforcement, is divided among the key stakeholders involved. It was noted that while the German authorities oversee smaller platforms under the Digital Services Act (DSA), the European Commission is responsible for very large platforms such as X or TikTok.

  • While the DSA’s definitions and guidelines leave some vagueness about precisely how tech companies should deal with harmful speech, a common theme in the discussion was the necessity of transparency in content moderation decisions and of always taking context into consideration. For tech companies and governments, this vagueness in defining harmful speech offers flexibility in dealing with it; researchers, however, pointed out that it can also be problematic, especially when it comes to precise detection through automated methods.
  • In addition, Friedrich Enders shed light on TikTok's content moderation process. The platform uses a combination of AI and human review to quickly remove harmful content. Conscious that some harmful content, such as graphic material, may still be in the public interest, TikTok may keep it on the platform for documentary, educational, and counter-speech purposes, but it remains ineligible for recommendation in users’ For You feeds.
  • The panel also highlighted the challenge of balancing freedom of expression, user privacy, and user safety. TikTok stressed its commitment to these principles, while the government representatives advised that upholding freedom of expression is so important that, in doubtful borderline cases, moderation should err on the side of free speech.

The workshop was jointly organized by the Chair of Digital Governance at the Munich School of Politics and Public Policy (Technical University of Munich), the University of Oxford, and the Reboot Social Media Lab at the TUM Think Tank.

TL;DR

From the ideological biases in content moderation, the politics of platform regulation, and citizens’ preferences for online hate speech regulation, to the efficacy of labeling content as AI-generated, the workshop covered a wide range of topics, stressing the need for transnational conversations about content moderation.
