Platforms for the People

Part of the Reboot Social Media Lab
Digital platforms are becoming increasingly toxic, flooded with insults, harmful language, and hate speech. Yet users perceive negative and uncivil posts very differently and react to them in individual ways. The project Platforms for the People uses experimental studies to find out where users draw the line between hateful posts and content they consider acceptable.

Proposing solutions based on user preferences

The team led by Yannis Theocharis, Franziska Pradel, and Jan Zilinsky aims to find out which types of posts are perceived as harmful or uncivil and what triggers demand for different forms of content moderation. To propose solutions informed by users’ own preferences, they directly ask social media users, across different platforms and countries, which posts they consider admissible or inadmissible and what consequences they want to see.

Topics, activities & formats

The findings from the Platforms for the People project inform public and political debate by highlighting the delicate balance between moderating harmful content and upholding freedom of speech as a fundamental norm. The team members frequently publish op-eds, advise decision-makers, and organize workshops on the moderation of harmful online content.

TL;DR

The Platforms for the People project explores where people draw the line between hateful content and content protected by freedom of speech. Its findings indicate which content moderation practices should be put in place, based on users’ own preferences.
