A two-day workshop bringing together experts in the field
Content moderation and free speech in the digital realm - and how to balance them - are key topics for researchers, philosophers, public officials, NGOs, and, of course, social media platforms and users. At the TUM Think Tank, we had the pleasure of hosting a number of international experts in this field. The group came together for two full days focused on analyzing this pressing issue, exchanging ideas, and presenting empirical research from the perspectives of governance, industry, and political behavior.
From ideological biases in content moderation and the politics of platform regulation to citizens’ preferences on how harmful online speech can be curbed and regulated, and the efficacy of labeling content as AI-generated, the workshop covered a wide range of topics, stressing the need for a transnational conversation about content moderation.
In a thought-provoking panel together with Benjamin Brake (Federal Ministry of Digital Affairs and Transport), Friedrich Enders (TikTok Germany), Andreas Frank (Bavarian Ministry of Justice), and Ruth Appel (Stanford University), we discussed the complexities of defining harmful speech and taking action against it, how platforms are audited and how they balance transparency with user privacy and free expression when it comes to content moderation decisions.
The conversation centered on how responsibility for content moderation, and transparency in its enforcement, is divided among the key stakeholders involved. It was noted that while the German government is responsible for smaller platforms not covered under the Digital Services Act (DSA), the European Commission is responsible for larger ones like X or TikTok.
- While the DSA’s definitions and guidelines leave some vagueness about precisely how tech companies should deal with harmful speech, a common theme in the discussion was the necessity of transparency in content moderation decisions and the need to always take context into consideration. For tech companies and governments, this vagueness can be seen as a flexible way of dealing with harmful speech. Researchers, on the other hand, pointed out that it can also be problematic, especially when it comes to its precise detection through automated methods.
- In addition, Friedrich Enders shed light on TikTok's content moderation process. The platform uses a combination of AI and human review to quickly remove harmful content. Conscious that some harmful content, e.g. graphic footage, may still be in the public interest, TikTok may leave such content on the platform for documentary, educational, and counter-speech purposes, but it is then ineligible for recommendation on TikTok’s For You feed.
- The panel also highlighted the challenge of balancing freedom of expression, user privacy, and user safety. TikTok stressed its commitment to these principles, while government representatives advised that upholding freedom of expression is so important that, in borderline cases where moderators are in doubt, one should always opt for freedom of speech.
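The moderation flow described in the panel (automated flagging, human review, and a public-interest exception that keeps content up but out of recommendations) can be sketched as a toy decision function. This is a purely illustrative sketch, not TikTok's actual system; the `Post` fields, function names, and keyword list are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    graphic: bool = False
    public_interest: bool = False  # e.g. documentary or educational value

def ai_flag(post: Post) -> bool:
    """Hypothetical automated first pass: flag potentially harmful content."""
    banned_terms = {"violence", "abuse"}  # placeholder keyword list
    return post.graphic or any(t in post.text.lower() for t in banned_terms)

def moderate(post: Post, human_confirms_harm: bool) -> str:
    """Combine AI flagging, human review, and a public-interest exception."""
    if not ai_flag(post):
        return "visible"                  # nothing flagged by the first pass
    if not human_confirms_harm:
        return "visible"                  # human reviewer overrides the AI flag
    if post.public_interest:
        return "visible_not_recommended"  # stays up, excluded from the feed
    return "removed"
```

The key design point is that "remove or keep" is not binary: a third outcome preserves public-interest content while excluding it from algorithmic amplification.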
The workshop was jointly organized by the Chair of Digital Governance at the Munich School of Politics and Public Policy (Technical University of Munich), the University of Oxford, and the Reboot Social Media Lab at the TUM Think Tank.
We had the privilege of participating in the Citizens.Lab at the IAA with our project "Mobilität.Leben." Over the past year, the team has looked into the mobility data of over 3000 participants to measure the impact of the 9-Euro-Ticket and its successor, the Deutschlandticket. Here, we share our key takeaways one year into the study:
Driving Change with Tickets
The 9-Euro-Ticket and the Deutschlandticket have played a significant role, especially during the summer months, in encouraging people to ditch their cars and opt for more sustainable public transportation options at least for some of their journeys.
However, in the interim between the 9-Euro-Ticket and the Deutschlandticket, when no comparably attractive public transportation ticket was available, participants returned almost to their pre-9-Euro-Ticket travel behavior. While every small step towards sustainability is valuable, we do not expect that the Deutschlandticket alone can substantially reduce carbon emissions from the transportation sector.
A realistic cost-benefit analysis of the public transport fare innovations can be made at the end of the year. What's undeniable is that both tickets have already simplified the fare system, encouraged public transport use, and reduced travel costs. It is clear that cheaper public transportation is just one piece in the larger puzzle of the “Mobilitätswende”, and so the discussion continues on what additional measures are needed to encourage more sustainable modes of transport.
The study was well received by the participants, who willingly shared their mobility data from the outset and remained committed to us until the introduction of the Deutschlandticket. Their feedback has been invaluable, and we're thrilled to hear that our tracking app has encouraged reflection on personal mobility habits. The study can also be seen as a signal that citizens like to be included in research projects and are eager to be part of the collective search for possible solutions to the grand societal challenges of our time.
Our study is one of the largest of its kind globally. It has not only advanced our understanding of mobility but also captured the attention of the worldwide transportation research community, strengthened Munich’s position on the global research map, and set new standards. We are proud that it has sparked public attention in regional and national media and produced several academic papers and bachelor’s and master’s theses – and more research is coming, with several doctoral candidates using the data for their studies.
On Friday, June 23rd 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe’s approach to AI governance, and in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and member of our Gen AI Taskforce gave a short input on this issue. In this blog post, he shares his key takeaways.
The AI Act of the European Union could serve as an effective balancing act between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:
- Preserving the AI Act's lean, clear approach and avoiding over-regulation of too many topics through this Act. It is worth noting that clear rules can be assets for innovation, minimizing liability risks and enabling even smaller entities to venture into risky areas.
- It is undeniable that regulation imposes costs on developers. However, there are numerous strategies to alleviate these costs. Establishing infrastructures for legal advice and knowledge organization could prove invaluable, especially for start-ups.
- Implementing frameworks like responsible research, human-centered engineering, and integrated research can position legal regulation as an integral part of the innovation journey. This mindset lets developers incorporate legal, ethical, and societal considerations early on, enhancing their products and turning potential challenges into opportunities.
Therefore, the AI Act has potential with its blend of a broad framework, specific sector regulations, and enforcement. However, it is the finer details that need refining for it to truly thrive. A key area of focus should be sharpening the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.
Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. Current proposals around the AI Act are mostly about establishing general structures as well as coordination and cooperation among member states. The success of these sandboxes relies heavily on their effective rollout by these states. Interestingly, other European legal developments, like the Data Governance Act—which allows for protected data sharing—might be game-changers, possibly boosting sandboxes to an entirely new level, as they would also allow sharing data that is protected under data protection or intellectual property law.
If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.
At the TUM Science Hackathon 2023, a team of computer science students took on the challenge of developing a social media platform that provides users with enhanced transparency and control over the content shown in their feed. They discussed what kind of information users need to better understand how content is personalized on social media and drew up ways for users to modify the content shown to them. Based on these ideas, they designed their prototype ‘openLYKE’ – a social media platform which implements additional features for users to tweak the underlying recommendation algorithm.
From May 19 to 21, the TUM: Junge Akademie hosted the TUM Science Hackathon 2023 on trustworthy systems. In nine challenges submitted by partners from TUM and external organizations, students from various disciplines joined forces to develop technologies that are safe, reliable, transparent, and deserving of users’ trust. The challenges tackled a variety of problems ranging from spacecraft and crash detection to material science and AI ethics. One of the challenges was submitted by the REMODE project of the Professorship of Law, Science and Technology in the context of the Reboot Social Media Lab of the TUM Think Tank. In their challenge titled “Trustworthy Recommender Systems”, students were asked to develop a prototype of a social media platform with enhanced options for users to control their social media experience by modifying the content shown to them. Building on the new requirements for online platforms laid down in the EU Digital Services Act (2022), the challenge aimed for recommender systems that enable users to better understand and manipulate the main parameters used to personalize content online.
In particular, opaque algorithms and misleading design patterns (“dark patterns”) were to be avoided. In this way, the challenge sought to promote trust-by-design and facilitate more responsible recommender systems on social media.
A key takeaway from the Science Hack was the importance of keeping the technical feasibility in mind when developing innovative solutions and features for social media services. While working on their prototype, the students continuously reflected on how their ideas could be implemented into their recommendation algorithm: What kind of data about each post would be needed? How could users’ preferences be translated into algorithmic language? By staying close to the technology, the students managed to successfully design not only the front end (user interface) of their prototype but also the underlying back end (software) for processing data and recommending content.
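One way the students' question - how user preferences might be translated into "algorithmic language" - can be answered is to score each post as a weighted sum of its features and expose the weights to the user as sliders. The following sketch is hypothetical: the feature names and functions are invented for illustration and this is not the openLYKE team's actual implementation:

```python
def score(post: dict, weights: dict) -> float:
    """Score a post as a weighted sum of its features.

    Each slider in the UI maps to one weight; features the user
    down-weights contribute less to the ranking.
    """
    return sum(weights.get(feature, 0.0) * value
               for feature, value in post["features"].items())

def recommend(posts: list, weights: dict, k: int = 3) -> list:
    """Return the k highest-scoring post ids under the user's weights."""
    ranked = sorted(posts, key=lambda p: score(p, weights), reverse=True)
    return [p["id"] for p in ranked[:k]]

# Hypothetical per-post feature values (e.g. produced by a classifier)
posts = [
    {"id": "a", "features": {"recency": 0.9, "popularity": 0.2, "similarity": 0.1}},
    {"id": "b", "features": {"recency": 0.3, "popularity": 0.9, "similarity": 0.4}},
    {"id": "c", "features": {"recency": 0.5, "popularity": 0.5, "similarity": 0.9}},
]

# A user who dials popularity down and interest similarity up:
user_weights = {"recency": 0.2, "popularity": 0.1, "similarity": 1.0}
```

Under such a design, changing a slider changes the ranking immediately and transparently, which is exactly the kind of user control over personalization parameters the challenge called for.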
The challenge “Trustworthy Recommender Systems” was posed by the REMODE project team consisting of Prof. Christian Djeffal (principal investigator), Daan Herpers (research associate) and Lisa Mette (student assistant), who also supervised the student team during the hackathon.
Thank you to the openLYKE team (Adrian Averwald, Finn Dröge, Thomas Florian, Tim Knothe) for participating and the Junge Akademie for organizing the TUM Science Hack 2023.
Generative AI and its fast developments are baffling experts, policymakers, and the public alike. We have spent the last week at the Berkman Klein Center for Internet & Society at Harvard University, attending the "Co-Designing Generative Futures – A Global Conversation About AI" conference. After having talked with experts, decision-makers, and stakeholders from diverse fields and disciplines around the world, we came back with new questions and “mixed feelings” about the impact of generative AI: it is wicked, disruptive, complex, and ambiguous.
As we conclude this conference week, we are convinced that capacity building is the answer to the duality between the incredible possibilities generative AI holds and the threats it poses. As we think about governance and implementation, let's dive into six things you can already do - and which we are fostering at the TUM Think Tank - to shape the future of generative AI:
1. Implement ethical guidelines: Promote the development and adoption of ethical frameworks that prioritize transparency, fairness, and accountability in the use of generative AI.
2. Collaborate across disciplines: Foster collaboration between technologists, policymakers, ethicists, and diverse stakeholders to collectively address the challenges and risks associated with generative AI.
3. Research and development: Support research initiatives that focus on responsible AI, including bias mitigation, privacy preservation, and effective detection of generated content.
4. Educate and raise awareness: Share knowledge and raise awareness about the opportunities and challenges of generative AI, empowering individuals and organizations to make informed decisions.
5. Champion diversity and inclusion: Encourage diverse representation and inclusivity in the development and deployment of generative AI systems to mitigate biases and ensure equitable outcomes.
6. Boost positive impact on the economy and education: Support businesses in taking up the highly innovative possibilities of generative AI, and equip our education system not only to make use of the technology but also to support skill development.
The connections made, insights shared, and ideas generated during this week are great examples of collective capacity building. The conference was hosted in collaboration with the Global Network of Internet & Society Research Centers, BI Norwegian Business School, Instituto de Tecnologia e Sociedade (ITS Rio), and the TUM Think Tank.
Stephanie Hare joined us on the evening of 27 February 2023 to present the main topics of her book “Technology is not neutral: A Short Guide to Technology Ethics”. In her book, Stephanie Hare addresses some key questions surrounding modern digital technologies: One focus is how developers of technology but also society at large can seek to maximize the benefits of technologies and applications while minimizing their harms. Read our key takeaways from our discussion here.
Some key take-aways from the discussion
Using a philosophical framework, she draws on several different fields and approaches within ethics and philosophy to call attention to these issues. For instance, metaphysics points out what problem needs to be solved, while epistemology helps us ask about the relevant sources of knowledge for addressing these questions and problems. Political philosophy, in turn, highlights the question of who holds the power to pursue these solutions, while aesthetics asks how technologies should be designed and displayed. Ethics, finally, addresses the question of which values are baked into a technology.
Throughout the discussion with Alexander v. Janowski and the audience, we addressed crucial observations on the design of technologies that we can make in our everyday world. Examples included how the size of many smartphones is fitted to larger, typically male hands, and how airbags in vehicles have only been tested on mannequins that resemble the average male body. These observations underscored the ethical question of who does, and who should, have control over the design and application of technologies.
Overall, Stephanie Hare hopes that her book “hacks humans and human culture” by inspiring people to see the biases and the intentional or unintentional inequalities that technologies take on from their developers if left unscrutinized.
To learn more about Stephanie Hare, the book, and her other works, visit her website at https://www.harebrain.co.
With the aim of tackling the issue of harmful discourse online, we joined forces with the Bavarian State Ministry of Justice in order to build a community of practice. In a series of events of the Reboot Social Media Hub, we brought together academics and practitioners working on hate speech and other forms of hateful content online to discuss current problems and potential solutions.
Opening up the dialogue, our panel discussion with Teresa Ott (Hate Speech Officer at the Attorney General’s Office), Anna Wegscheider (lawyer at HateAid), and Svea Windwehr (policy analyst at Google), moderated by Georg Eisenreich, MdL (Bavarian State Minister of Justice) and Urs Gasser (TU Munich), gave the roughly 100 guests deeper insights into the current state of the EU’s Digital Services Act and its impact on prosecutors, platforms, and users.
Key insights from the discussion
While EU-wide harmonization through the DSA was seen as having great potential, the DSA also falls short of the NetzDG in some respects, such as the absence of deletion periods and of concrete details on enforcement for hate speech violations. It was hence seen as critical to explore ways of preserving the stronger and more actionable aspects of the NetzDG even once the DSA, with its more ambiguous provisions, is in place.
In general, the panelists noticed that the internal processes of major platforms regarding content moderation practices are still a "black box" for both law enforcement and victims of online hate. There was a wide consensus that this could and should be improved through expanded cooperation and communication between stakeholders from civil society, agencies, and platform operators.
A final point was raised regarding public awareness of hate speech. Only 2 out of 10 online offenses are currently reported. To increase the reporting and prosecution of online hate, awareness of digital violence must be raised further - not only among the population but also in the judiciary and law enforcement. As the number of reported cases increases, however, the responsible authorities will also need additional resources to prosecute these instances.
Session 1 focused on the latest insights on hate speech, incivility, and misogyny in online discourse. Based on inputs from Yannis Theocharis, Janina Steinert, and Jürgen Pfeffer (all TU Munich), participants discussed trade-offs between the need for different forms of content moderation vis-à-vis freedom of speech as a fundamental norm. While there was a consensus that a better understanding of the “grey zones” of hate speech, and how to deal with them, is needed, it was clear that some types of online behavior should not be normalized. It was also stressed that online hate is spread by a comparatively small number of users who are extremely loud and hence gain a lot of traction. This, in turn, has implications for whom policies regulating harmful online content should be directed towards: the few haters or the large masses.
Session 2 dealt with questions related to the implementation of the Digital Services Act concerning online hate. Following inputs from Till Guttenberger (Bavarian State Ministry of Justice) and Teresa Ott (Hate Speech Officer at the Attorney General’s Office), it was debated how to keep effective measures of the NetzDG alive once the Digital Services Act takes effect. A core topic was how future institutions and mechanisms should be designed. Participants also discussed how best to raise awareness among victims and the public of ways to report hate speech.
Session 3 looked beyond the law to address uncivil discourse online. Christian Djeffal (TU Munich) spoke about content moderation together with users, while Sandra Cortesi (University of Zurich & Harvard University) gave an overview of how to equip children with the skills to navigate online discourses on social media. The big questions focused on finding the sweet spot between education and regulation – which is probably not an “either/or” – as well as on who is best positioned to create educational content, with participants highlighting that all relevant actors need to be on board.
Partners & organization
The events were jointly organized by the TUM Think Tank, the Professorship for Public Policy, Governance and Innovative Technology, and the Bavarian State Ministry of Justice. Our special thanks go to our panelists Teresa Ott, Svea Windwehr, and Anna Wegscheider for sharing their expertise. We thank all participants for their input and engaged discussions, and we look forward to continuing the conversation we have started.
Exploring the future trajectories of the Metaverse as tomorrow’s digital frontier at a moment in time where technology, business models, and regulatory systems are still malleable, our interactive multi-stakeholder workshop was centered around four cases that affect and involve different user groups.
Key insights from the discussion
XR Spaces and extended reality infrastructures by the XR Hub Bavaria.
Our partner from the XR Hub Bavaria presented a variety of projects aimed at creating digital infrastructure that can serve as common-good technology. As a state-funded initiative, XR Spaces is a Government-to-Citizen use case focused on value-driven activities, while commercial aspects take a back seat.
Metaverse pilot by the EU Global Gateway Initiative.
The EU communication campaign provides another example of a Government-to-Citizen project. When public actors use Metaverse applications to engage with citizens, it is a challenge to find the right balance between providing content as information and allowing visitors of the Metaverse to create, interact, and change the environment. This raises two questions: How can we educate across societal groups about the capability to interact in virtual spaces? And how far should the state be involved in providing Metaverse infrastructure as well as in content creation?
Digital twins in manufacturing for Small and Medium-Sized Enterprises by Umlaut @Accenture.
An example of a Business-to-Business case was provided by the startup Umlaut, which was recently acquired by Accenture. As the manufacturing use case demonstrated, there is still a lack of knowledge concerning the high potential of digital twins for the training and education sector. In addition, access remains highly unequal across the globe, as some regions lack the necessary data or the technology itself.
Virtual Reality speech trainer through Artificial Intelligence by Straightlabs.
The last use case was presented by Straightlabs as an example of a Business-to-Consumer application. The tool shows the immense potential of immersive technology for different areas of application, especially capacity building and training, but also the complexity that arises when personal data, and people themselves, are deeply involved in a complex and hard-to-explain technology.
The Metaverse workshop brought together stakeholders from academia, startups, business, government, administration, and media to jointly explore and experience the promises and pitfalls based on selected Metaverse applications from different areas.
How can we make better use of the economic and social potential of data without losing sight of possible negative aspects? On 2 February 2023, we organized a panel discussion on the current state of data policy in Germany.
Key insights from the discussion
The event was kicked off by Moritz Hennemann (University of Passau), who placed the efforts of German data policy in a larger European context. He stressed that data policy is one of the crucial cross-cutting issues of our time: weather data, for instance, is relevant for everything from planning weekend travel to monitoring climate change and flight traffic, and even serves military purposes. He further noted that data usage always comes with trade-offs between various norms and decisions, e.g., between the economic usage of data and privacy rights. One way forward for an effective data policy, according to Moritz Hennemann, is to think in sectoral fields of application and create sectoral data spaces that facilitate and foster the usage and sharing of data. Based on these experiments, shared criteria and measures for data spaces can then be developed.
Based on this input, the panel started with a discussion of the data institute envisioned by the German government. Andreas Peichl, a member of the founding commission of the Data Institute, gave an overview of its planned goals and structure. While the panelists agreed that the data institute is a step in the right direction, it was stressed that the institute will need an agile structure and sufficient financial backing. Moreover, the panelists highlighted that its success will largely depend on the selection of effective areas of application and use cases.
Another strand of the discussion focused on what effective data policies for the common good will need in the next 5 to 10 years. Here, Benjamin Adjei stressed the existing gaps in Bavaria, which lacks appropriate strategies, laws, and infrastructures for an effective data policy. According to Amélie Heldt, the state can play a crucial role here, e.g., by creating open data repositories that can be utilized by startups as well as actors from academia, civil society, or the public sector. Moreover, she advocated for the creation of sandboxes and spaces for experimentation to create positive use cases.
The last part of the discussion centered on the (perceived) trade-offs between data protection and data usage. Here, the panelists agreed that data protection is frequently misused to block access to and usage of data. Amélie Heldt also stressed that the GDPR is crucial as it creates trust among citizens, whereas Andreas Peichl presented examples of how GDPR requests are treated differently depending on the local context. Benjamin Adjei criticized the simplistic “black-or-white” thinking when it comes to data protection versus data usage.
The panel discussion centered on strategies and narratives to facilitate data usage and sharing. The panel debated the added economic and societal benefits, while also addressing its related challenges for citizens, business and regulators touching upon topics linked to the data institute, data for common good and the necessary prerequisites for effective data usage for public interest.
The panelists strongly agreed that we need a shift of narratives and direction focusing more on a positive vision of data usage and what good can come out of data-driven projects for society at large. To this end, however, we need to invest more financial resources and build up organizational and infrastructural capacities that put us in the position of using data for the public interest.
Partners & organization
The panel discussion was part of the series "Governance by & of Technology" which was hosted in 2022 / 2023 at the TUM Think Tank. The public event attracted a broad audience who joined the three panelists Amélie Heldt (Digital Policy Officer at the Federal Chancellery), Benjamin Adjei (Member of the Bavarian State Parliament and Digital Policy Spokesperson for Bündnis 90 / Die Grünen), and Andreas Peichl (Ludwig Maximilian University of Munich, Ifo Institute & Member of the Founding Commission of the Data Institute). The event was moderated by Sofie Schönborn (TU Munich).