
Meet the Fellows

Meet the new Friedrich Schiedel Fellows, who are building new bridges between the social sciences, technology, and other disciplines. Their interdisciplinary research projects, under the motto "Human-Centered Innovation for Technology in Society", focus on how technologies can be developed responsibly, in a human-centered way, and democratically, serving the public good.

Auxane Boch: Psychology Impact Assessment for Interactional Systems: Defining the Evaluation Scope (PSAIS)

This research project addresses the lack of frameworks for systematically assessing the diverse psychological impacts of AI. By adopting a participatory approach and considering cultural values, the project seeks to develop a multicultural mapping framework for evaluating the psychological impact of AI systems. The research will involve workshops and consultations with stakeholders from various sectors and regions to define evaluation criteria. The project will contribute to concrete recommendations for action by providing a culturally informed framework that can guide the responsible development and application of AI technologies. Its impact extends to academic disciplines, partner institutions, societal stakeholder groups, and policy actors: it will foster interdisciplinary knowledge exchange, stimulate discussions on standardization, and help reduce potential inequities arising from technology adoption. The project aims to act as a change agent by actively contributing to the implementation of its recommendations and by ensuring user well-being and trust in AI systems.

Efe Bozkir: Echoes of Privacy: Exploring User Privacy Decision-Making Processes towards Large Language Model-based Agents in Immersive Realities

User privacy concerns and preferences have been researched extensively in the context of various technologies, such as smart speakers, IoT devices, and augmented reality glasses, to facilitate better privacy decision-making and human-centered solutions. With the emergence of generative artificial intelligence (AI), large language models (LLMs) have begun to be integrated into our daily routines; these models are tuned with vast amounts of data, including sensitive information. The possibility of embedding them in immersive settings raises a plethora of questions from a privacy and usability point of view. In this project, through several user studies, including crowdsourced ones, we will explore privacy concerns and preferences towards LLM-powered, speech-based chat agents in immersive settings, as well as how likely it is that sensitive user attributes can be inferred. The findings will help us understand the privacy implications of such settings, design informed consent procedures that support users in immersive spaces that include LLMs, and facilitate privacy-aware technical solutions.

Baris C. Cantürk: Future Finance Law Hub (“F2L_Hub”)

The Future Finance Law Hub ("F2L_Hub") is a project that aims to become a policy-making hub at the intersection of IT law and commercial law, established at TUM, with the primary aim of influencing Germany and the European Union in the medium to long term. Central to its modus operandi is bringing together prominent stakeholders from academia and industry across multiple disciplines, including law, finance, and IT, to generate significant outputs.
F2L_Hub thus pursues an ambitious mission, with medium- and long-term objectives aimed at institutionalizing a tradition in this field. The fellowship is poised to act as the catalyst for establishing F2L_Hub, providing both financial resources and access to an excellent academic environment. It marks an important organizational milestone and will facilitate the procurement of the funding and partnerships needed for the subsequent stages of the project.

Daryna Dementieva: Harmful Speech Proactive Moderation

Offensive speech remains a pervasive issue despite ongoing efforts, as underscored by recent EU regulations aimed at mitigating digital violence. Existing approaches rely primarily on binary solutions, such as outright blocking or banning, and fail to address the complex nature of hate speech. In this project, we advocate for a more comprehensive approach that assesses and classifies offensive speech into several new categories: (i) hate speech whose publication can be prevented by recommending a detoxified version; (ii) hate speech that calls for counter-speech initiatives to persuade the speaker; (iii) hate speech that should indeed be blocked or banned; and (iv) instances requiring further human intervention.
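To make the four-way categorization more concrete, here is a minimal sketch (in Python) of how such a proactive moderation pipeline could route a post once a classifier has scored it. The category names, thresholds, and the classify_offensiveness stub are illustrative assumptions, not the project's actual models or taxonomy.

```python
# Minimal sketch of the four-way moderation routing described above.
# The labels, thresholds, and the classifier stub are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class ModerationAction(Enum):
    SUGGEST_DETOXIFIED_VERSION = auto()  # (i) prevent publication, offer a rewrite
    TRIGGER_COUNTER_SPEECH = auto()      # (ii) respond to persuade the speaker
    BLOCK_OR_BAN = auto()                # (iii) remove the content / sanction the account
    ESCALATE_TO_HUMAN = auto()           # (iv) ambiguous cases need human review


@dataclass
class ModerationDecision:
    action: ModerationAction
    rationale: str


def classify_offensiveness(text: str) -> dict:
    """Placeholder for a trained classifier; returns per-category scores."""
    # In a real system this would call a fine-tuned model instead of fixed values.
    return {"detoxifiable": 0.1, "counterable": 0.2, "severe": 0.05}


def route(text: str, threshold: float = 0.5) -> ModerationDecision:
    """Map classifier scores to one of the four moderation actions."""
    scores = classify_offensiveness(text)
    if scores["severe"] >= threshold:
        return ModerationDecision(ModerationAction.BLOCK_OR_BAN, "severe hate speech")
    if scores["detoxifiable"] >= threshold:
        return ModerationDecision(ModerationAction.SUGGEST_DETOXIFIED_VERSION,
                                  "toxic wording, salvageable message")
    if scores["counterable"] >= threshold:
        return ModerationDecision(ModerationAction.TRIGGER_COUNTER_SPEECH,
                                  "speaker may be persuadable")
    return ModerationDecision(ModerationAction.ESCALATE_TO_HUMAN,
                              "no category is confident enough")


if __name__ == "__main__":
    print(route("example post"))
```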


Mennatullah Hendawy: Setting up the Future with Sustainable Choices: GenAI Support in Resolving Multi-Stakeholder Conflicts in Sustainable Critical Metals & Minerals Development

This project outlines an innovative approach to resolve multi-stakeholder conflicts in the sustainable development of critical metals and minerals essential for decarbonization efforts. Recognizing the complexities and sustainability challenges within the supply chains of these materials, particularly those sourced from the Global South / emerging economies, the project proposes a digital platform leveraging reactive machine AI (RM-AI) and generative AI (Gen-AI) with human-in-the-loop functionalities. This platform is designed to facilitate transparent and inclusive discussions among public/community representatives, government, and industry stakeholders, ensuring a balanced consideration of environmental, economic, and social sustainability targets. Through co-developing a concept for an interactive, game-based decision-making tool powered by Gen-AI, the project aims to identify common interests, model sustainability trade-offs, and find consensus solutions that align with the societal goals of reducing inequality and promoting economic growth with decent work conditions. The project's integration of RM-AI and Gen-AI aims to bridge the gap between technical and non-technical decision-makers, enhancing stakeholder engagement and trust in AI-driven processes, thereby aligning closely with the fellowship's mission of human-centered innovation and interdisciplinary collaboration for the public good.

Franziska M. Poszler: Research-based theater: An innovative method for communicating and co-shaping AI ethics research & development

This project will implement a creative approach to conducting, teaching, and communicating AI ethics research through the lens of the arts (i.e., research-based theater). The core idea revolves around conducting qualitative interviews and user studies on the impact of AI systems on human ethical decision-making. It focuses specifically on exploring the potential opportunities and risks of employing these systems as aids for ethical decision-making, along with their broader societal impacts and recommended system requirements. The scientific findings will be translated into a theater script and an (immersive) performance. This performance seeks to educate civil society on up-to-date research in an engaging manner and to facilitate joint discussions (e.g., on necessary and preferred system requirements or restrictions). The insights from these discussions, in turn, are intended to inform the scientific community, thereby facilitating the human-centered development and use of AI systems as moral dialogue partners or advisors. Overall, this project should serve as a proof of concept for innovative teaching, science communication, and co-design in AI ethics research, laying the groundwork for similar projects in the future.

More information on the project can be found here: https://www.ieai.sot.tum.de/research/moralplai/

Malte Toetzke: Developing the Google Maps for the Climate Transition

I envision developing the Google Maps for the climate transition. Business leaders and policy-makers need more comprehensive and timely evidence to accelerate the industrial development of climate-tech effectively. With recent advances in Artificial Intelligence (AI), it is now possible to develop models that generate such evidence at large scale and in near real time. In this project, I will analyze the global network of organizations collaborating on climate-tech innovation. The network is built by processing organizations' social media posts with large language models (LLMs). It includes key public and private actors and spans various types of climate technologies (e.g., solar, hydrogen, electric vehicles) and types of collaborations (R&D collaborations, demonstration projects, equity investments). I will use the fellowship to conduct in-depth analyses that generate valuable insights for managers and policy-makers on facilitating innovation clusters. Furthermore, I plan to operationalize the information retrieval and processing to enable real-time analyses.
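As a rough illustration of the network idea, the sketch below (in Python, using the networkx library) builds a small collaboration graph from the kind of records an upstream LLM extraction step might produce from social media posts. The sample organizations, field names, and extraction output are hypothetical, not the project's actual data or pipeline.

```python
# Minimal sketch of the collaboration-network idea, assuming an upstream LLM step
# has already extracted (organization, organization, collaboration type, technology)
# records from social media posts. All records below are made-up examples.
import networkx as nx

# Hypothetical output of the LLM-based extraction step.
extracted_collaborations = [
    {"org_a": "SolarCo", "org_b": "GridTech", "type": "R&D collaboration", "tech": "solar"},
    {"org_a": "H2Start", "org_b": "SteelWorks", "type": "demonstration project", "tech": "hydrogen"},
    {"org_a": "AutoOEM", "org_b": "BatteryLab", "type": "equity investment", "tech": "electric vehicles"},
]

# Build a graph whose nodes are organizations and whose edges carry the
# collaboration type and technology as attributes.
G = nx.Graph()
for rec in extracted_collaborations:
    G.add_edge(rec["org_a"], rec["org_b"], type=rec["type"], tech=rec["tech"])

# Simple network statistics that analyses of innovation clusters might start from.
print("organizations:", G.number_of_nodes())
print("collaborations:", G.number_of_edges())
print("degree centrality:", nx.degree_centrality(G))
```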

Chiara Ullstein: Participatory Auditing (in cooperation with Audit.EU)

The EU AI Act stipulates that providers (management and developers) of high-risk AI systems must undergo a conformity assessment. The conformity assessment encompasses several measures intended to corroborate that a system is legally compliant, technically robust, and ethically sound, and can be considered 'trustworthy AI'. The project 'Participatory Auditing' aims to contribute to the project Audit.EU (1) by exploring how companies can leverage lessons from established compliance practices, such as those for the GDPR, and (2) by proposing participation as an approach to source AI Act compliance-relevant information from suitable stakeholders, thereby increasing inclusivity and mitigating risks of discrimination. Participation is expected to enhance the process of achieving compliance through comprehensive testing and feedback. Based on lessons from established compliance measures, a framework for performing audits in a participatory manner and in accordance with the EU AI Act will be developed and evaluated. The framework's primary goal is to serve as a guideline for developer teams.

Niklas Wais: Law & AI: Navigating the Intersection

Most areas of law that should, in principle, apply to AI currently leave many questions at the intersection unanswered. The reason is that jurisprudence cannot pursue its task of incorporating AI into existing legal doctrine because it lacks sufficient technological understanding. At the same time, developers lack knowledge of the law and therefore base their design decisions solely on performance rather than on compliance with, for example, data protection or anti-discrimination law. Although students from various professional backgrounds want to learn more about the underlying interface issues, truly interdisciplinary educational material is missing. My project will address this gap and transform the rare specialist expertise that currently exists only at TUM into a freely available online course. By fostering interdisciplinary collaboration between law and technology and sharing cutting-edge knowledge as effectively as possible, the project seeks to promote the responsible use of AI for the benefit of society.

How can we make better use of the economic and social potential of data without losing sight of possible negative aspects? On 2 February 2023, we organized a panel discussion on the current state of data policy in Germany.

Key insights from the discussion

The event was kicked off by Moritz Hennemann (University of Passau), who placed the efforts of German data policy in a larger European context. He stressed that data policy is one of the crucial cross-cutting issues of our time: weather data, for instance, is relevant to everything from planning weekend travel to monitoring climate change and flight traffic, and even serves military purposes. He further noted that data usage always involves trade-offs between various norms and decisions, e.g., between the economic use of data and privacy rights. One way forward for an effective data policy, according to Moritz Hennemann, is to think in sectoral fields of application and create sectoral data spaces that facilitate and foster the usage and sharing of data. Based on these experiments, shared criteria and measures for data spaces can then be developed.

Based on this input, the panel began by discussing the data institute envisioned by the German government. Andreas Peichl, a member of the founding commission of the Data Institute, gave an overview of the goals and structure of the institute to be implemented. While the panelists agreed that the data institute is a step in the right direction, they stressed that it will need an agile structure and sufficient financial backing. Moreover, the panelists highlighted that its success will largely depend on the selection of effective areas of application and use cases.

Another strand of the discussion focused on what effective data policies for the common good will need over the next 5 to 10 years. Here, Benjamin Adjei stressed the existing gaps in Bavaria, which lacks appropriate strategies, laws, and infrastructure for an effective data policy. According to Amélie Heldt, the state can play a crucial role here, e.g., by creating open data repositories that can be used by startups as well as actors from academia, civil society, and the public sector. Moreover, she advocated for the creation of sandboxes and spaces for experimentation to create positive use cases.

The last part of the discussion centered on the (perceived) trade-offs between data protection and data usage. Here, the panelists agreed that data protection is frequently misused to shield data and block access to and usage of it. Amélie Heldt also stressed that the GDPR is crucial because it creates trust among citizens, whereas Andreas Peichl presented examples of how GDPR requests are treated differently depending on the local context. Benjamin Adjei criticized the simplistic "black-or-white" thinking when it comes to data protection versus data usage.

Overall, the panel discussion centered on strategies and narratives to facilitate data usage and sharing. The panel debated the economic and societal benefits while also addressing the related challenges for citizens, businesses, and regulators, touching on topics linked to the data institute, data for the common good, and the prerequisites for effective data usage in the public interest.

The panelists strongly agreed that we need a shift in narrative and direction, focusing more on a positive vision of data usage and on the good that data-driven projects can deliver for society at large. To this end, however, we need to invest more financial resources and build up the organizational and infrastructural capacities that put us in a position to use data in the public interest.

Partners & organization

The panel discussion was part of the series "Governance by & of Technology", hosted at the TUM Think Tank in 2022/2023. The public event attracted a broad audience, who joined the three panelists Amélie Heldt (Digital Policy Officer at the Federal Chancellery), Benjamin Adjei (Member of the Bavarian State Parliament and Digital Policy Spokesperson for Bündnis 90 / Die Grünen), and Andreas Peichl (Ludwig Maximilian University of Munich, Ifo Institute, and Member of the Founding Commission of the Data Institute). The event was moderated by Sofie Schönborn (TU Munich).
