At the TUM Science Hackathon 2023, a team of computer science students took on the challenge of developing a social media platform that provides users with enhanced transparency and control over the content shown in their feed. They discussed what kind of information users need to better understand how content is personalized on social media and drew up ways for users to modify the content shown to them. Based on these ideas, they designed their prototype ‘openLYKE’ – a social media platform which implements additional features for users to tweak the underlying recommendation algorithm.
From May 19 to 21, the TUM: Junge Akademie hosted the TUM Science Hackathon 2023 on trustworthy systems. In nine challenges submitted by partners from TUM and external organizations, students from various disciplines joined forces to develop technologies that are safe, reliable, transparent and deserve users’ trust. The challenges tackled a variety of problems ranging from spacecraft and crash detection to material science and AI ethics. One of the challenges was submitted by the REMODE project of the Professorship of Law, Science and Technology in the context of the Reboot Social Media Lab of the TUM Think Tank. In their challenge titled “Trustworthy Recommender Systems”, students were asked to develop a prototype of a social media platform with enhanced options for users to control their social media experience by modifying the content shown to them. Building on the new requirements for online platforms laid down in the EU Digital Services Act (2022), the challenge aimed for recommender systems that enable users to better understand and adjust the main parameters used to personalize content online.
In particular, opaque algorithms and misleading design patterns (dark patterns) were to be avoided. The challenge thereby sought to promote trust-by-design and facilitate more responsible recommender systems on social media.
A key takeaway from the Science Hack was the importance of keeping technical feasibility in mind when developing innovative solutions and features for social media services. While working on their prototype, the students continuously reflected on how their ideas could be implemented in their recommendation algorithm: What kind of data about each post would be needed? How could users’ preferences be translated into algorithmic language? By staying close to the technology, the students managed to successfully design not only the front end (user interface) of their prototype but also the underlying back end (software) for processing data and recommending content.
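To illustrate the kind of back-end logic the students grappled with, here is a minimal, hypothetical sketch of a feed-ranking function with user-adjustable weights. The post attributes (`topic`, `recency`, `popularity`) and the weighting scheme are illustrative assumptions, not the actual openLYKE implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    recency: float     # assumed normalized: 0.0 (old) .. 1.0 (brand new)
    popularity: float  # assumed normalized engagement: 0.0 .. 1.0

def score(post: Post, topic_weights: dict, w_recency: float = 0.5,
          w_popularity: float = 0.5) -> float:
    """Combine a user's stated topic preferences with tunable
    recency/popularity weights into a single ranking score."""
    topic_pref = topic_weights.get(post.topic, 0.0)
    return topic_pref + w_recency * post.recency + w_popularity * post.popularity

def rank_feed(posts: list, topic_weights: dict, **weights) -> list:
    """Return posts sorted by descending score; the user controls
    topic_weights and the keyword weights exposed in the UI."""
    return sorted(posts, key=lambda p: score(p, topic_weights, **weights),
                  reverse=True)

# Example: a user who boosts "politics" sees it ranked first despite
# a fresher, similarly popular sports post.
feed = rank_feed(
    [Post("sports", recency=0.9, popularity=0.1),
     Post("politics", recency=0.2, popularity=0.8)],
    topic_weights={"politics": 1.0},
)
```

Exposing parameters like `topic_weights` or `w_recency` directly in the interface is one straightforward way a platform could let users tweak personalization rather than treating the ranking as a black box.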
The challenge “Trustworthy Recommender Systems” was posed by the REMODE project team consisting of Prof. Christian Djeffal (principal investigator), Daan Herpers (research associate) and Lisa Mette (student assistant), who also supervised the student team during the hackathon.
Thank you to the openLYKE team (Adrian Averwald, Finn Dröge, Thomas Florian, Tim Knothe) for participating and the Junge Akademie for organizing the TUM Science Hack 2023.
With the aim of tackling the issue of harmful discourse online, we joined forces with the Bavarian State Ministry of Justice in order to build a community of practice. In a series of events of the Reboot Social Media Hub, we brought together academics and practitioners working on hate speech and other forms of hateful content online to discuss current problems and potential solutions.
Opening up the dialogue, our panel discussion with Teresa Ott (Hate Speech Officer at the Attorney General’s Office), Anna Wegscheider (lawyer at HateAid), and Svea Windwehr (policy analyst at Google), moderated by Georg Eisenreich, MdL (Bavarian State Minister of Justice) and Urs Gasser (TU Munich), gave the roughly 100 guests who attended deeper insights into the current state of the EU’s Digital Services Act and its impact on prosecutors, platforms, and users.
Key insights from the discussion
While EU-wide harmonization through the DSA was identified as having great potential, it still shows shortcomings compared to the NetzDG, such as the lack of deletion periods or of concrete details on enforcement for hate speech violations. It was hence seen as critical to explore ways to guarantee that the stronger and more actionable aspects of the NetzDG will remain available even once the DSA, with its more ambiguous provisions, is in place.
In general, the panelists noticed that the internal processes of major platforms regarding content moderation practices are still a "black box" for both law enforcement and victims of online hate. There was a wide consensus that this could and should be improved through expanded cooperation and communication between stakeholders from civil society, agencies, and platform operators.
A final point was raised regarding public awareness of hate speech. Only 2 out of 10 online offenses are currently reported. For online hate to be reported and prosecuted more often, awareness of digital violence must be raised further – not only among the population but also in the judiciary and law enforcement. As the number of reported cases grows, however, the responsible authorities will also need additional resources to prosecute these instances.
Session 1 focused on the latest insights on hate speech, incivility, and misogyny in online discourse. Based on inputs from Yannis Theocharis, Janina Steinert, and Jürgen Pfeffer (all TU Munich), participants discussed trade-offs between the need for different forms of content moderation and freedom of speech as a fundamental norm. While there was consensus that a better understanding of the “grey zones” of hate speech, and of how to deal with them, is needed, it was clear that some types of online behavior should not be normalized. It was also stressed that online hate is spread by a comparatively small number of users, who are extremely loud and hence gain a lot of traction. This, in turn, has implications for whom policies regulating harmful online content should target: the few haters or the large masses.
Session 2 dealt with questions related to the implementation of the Digital Services Act concerning online hate. Following inputs from Till Guttenberger (Bavarian State Ministry of Justice) and Teresa Ott (Hate Speech Officer at the Attorney General’s Office), it was debated how to keep effective measures of the NetzDG alive once the Digital Services Act takes effect. A core topic was how future institutions and mechanisms should be designed. Participants also asked how best to raise awareness among victims and the public about ways to report hate speech.
Session 3 looked beyond the law to address uncivil discourse online. Christian Djeffal (TU Munich) spoke about content moderation together with users, while Sandra Cortesi (University of Zurich & Harvard University) gave an overview of how to equip children with the skills to navigate online discourses on social media. The big questions centered on finding the sweet spot between education and regulation – which is probably not an “either/or” – as well as on who is best positioned to create educational content, with participants highlighting that all relevant actors need to be on board.
Partners & organization
The events were jointly organized by the TUM Think Tank, the Professorship for Public Policy, Governance and Innovative Technology, and the Bavarian State Ministry of Justice. Our special thanks go to our panelists Teresa Ott, Svea Windwehr, and Anna Wegscheider for sharing their expertise. We thank all participants for their input and engaged discussions, and we look forward to continuing the conversation we have started.