
Meet the Fellows

Meet the new Friedrich Schiedel Fellows, who are building new bridges between the social sciences, technology, and other disciplines. Their interdisciplinary research projects, under the motto "Human-Centered Innovation for Technology in Society," focus on how technologies can be developed responsibly, in a human-centered way, and democratically, serving the public good.

Auxane Boch: Psychology Impact Assessment for Interactional Systems: Defining the Evaluation Scope (PSAIS)

This research project addresses the lack of frameworks for systematically assessing the diverse psychological impacts of AI. By adopting a participatory approach and considering cultural values, the project seeks to develop a multi-cultural mapping framework for evaluating the psychological impact of AI systems. The research will involve workshops and consultations with stakeholders from various sectors and regions to define evaluation criteria. The project will contribute to concrete recommendations for action by providing a culturally informed framework that can guide the responsible development and application of AI technologies. Its impact extends to academic disciplines, partner institutions, societal stakeholder groups, and policy actors: it will foster interdisciplinary knowledge exchange, stimulate discussions on standardization, and help reduce potential inequities arising from technology adoption. The project aims to act as a change agent by actively contributing to the implementation of its recommendations and ensuring user well-being and trust in AI systems.

Efe Bozkir: Echoes of Privacy: Exploring User Privacy Decision-Making Processes towards Large Language Model-based Agents in Immersive Realities

User privacy concerns and preferences have been researched extensively in the context of various technologies, such as smart speakers, IoT devices, and augmented reality glasses, to facilitate better privacy decision-making and human-centered solutions. With the emergence of generative artificial intelligence (AI), large language models (LLMs) have started being integrated into our daily routines; these models are tuned with vast amounts of data, including sensitive information. The possibility of embedding such models in immersive settings raises a plethora of questions from a privacy and usability point of view. In this project, through several user studies, including crowdsourced ones, we will explore privacy concerns and preferences towards LLM-powered, speech-based chat agents in immersive settings, as well as the likelihood of inferring sensitive user attributes. The findings will help us understand the privacy implications of such settings, design informed-consent procedures that support users in immersive spaces that include LLMs, and facilitate privacy-aware technical solutions.

Baris C. Cantürk: Future Finance Law Hub (“F2L_Hub”)

The Future Finance Law Hub (“F2L_Hub”), established at TUM, aims to become a policy-making hub at the intersection of IT law and commercial law, with the primary goal of influencing Germany and the European Union in the medium to long term. Central to its modus operandi is bringing together prominent stakeholders from academia and industry across multiple disciplines, including law, finance, and IT, to generate significant outputs.
F2L_Hub thus pursues an ambitious mission, with medium- and long-term objectives aimed at institutionalizing a tradition in this field. The fellowship is poised to act as the catalyst for establishing F2L_Hub, providing both financial resources and access to an excellent academic environment. It marks an important organizational milestone, facilitating the funding and partnerships required for subsequent steps.

Daryna Dementieva: Harmful Speech Proactive Moderation

Offensive speech remains a pervasive issue despite ongoing efforts, as underscored by recent EU regulations aimed at mitigating digital violence. Existing approaches primarily rely on binary solutions, such as outright blocking or banning, yet fail to address the complex nature of hate speech. In this work, we advocate for a more comprehensive approach that assesses and classifies offensive speech into several new categories: (i) hate speech that can be prevented from publication by recommending a detoxified version; (ii) hate speech that necessitates counter-speech initiatives to persuade the speaker; (iii) hate speech that should indeed be blocked or banned; and (iv) instances mandating further human intervention.
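To make the taxonomy concrete, the four categories can be read as a routing policy from classifier output to moderation action. The following is a minimal sketch under stated assumptions: the category names and the `moderate` helper are hypothetical illustrations, not part of the project's actual system.

```python
# Hypothetical sketch of the proposed four-way moderation routing.
# The classifier itself is out of scope here; only the mapping from
# an assigned category to a moderation action is shown.
from enum import Enum, auto

class Category(Enum):
    DETOXIFIABLE = auto()    # (i) publishable after a detoxified rewrite
    COUNTER_SPEECH = auto()  # (ii) respond with counter speech to persuade
    BLOCK = auto()           # (iii) block or ban outright
    HUMAN_REVIEW = auto()    # (iv) escalate to a human moderator

def moderate(message: str, category: Category) -> str:
    """Map a classified message to the moderation action it triggers."""
    if category is Category.DETOXIFIABLE:
        return "suggest detoxified rewrite before publishing"
    if category is Category.COUNTER_SPEECH:
        return "trigger counter-speech response"
    if category is Category.BLOCK:
        return "block message"
    return "queue for human moderator"
```

A real pipeline would place an actual hate-speech classifier in front of this routing step; the sketch only illustrates how the proposed categories go beyond a binary block/allow decision.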

 

Mennatullah Hendawy: Setting up the Future with Sustainable Choices: GenAI Support in Resolving Multi-Stakeholder Conflicts in Sustainable Critical Metals & Minerals Development

This project outlines an innovative approach to resolving multi-stakeholder conflicts in the sustainable development of critical metals and minerals essential for decarbonization efforts. Recognizing the complexities and sustainability challenges within the supply chains of these materials, particularly those sourced from the Global South/emerging economies, the project proposes a digital platform leveraging reactive machine AI (RM-AI) and generative AI (Gen-AI) with human-in-the-loop functionalities. This platform is designed to facilitate transparent and inclusive discussions among public/community representatives, government, and industry stakeholders, ensuring a balanced consideration of environmental, economic, and social sustainability targets. Through co-developing a concept for an interactive, game-based decision-making tool powered by Gen-AI, the project aims to identify common interests, model sustainability trade-offs, and find consensus solutions that align with the societal goals of reducing inequality and promoting economic growth with decent work conditions. The project's integration of RM-AI and Gen-AI aims to bridge the gap between technical and non-technical decision-makers, enhancing stakeholder engagement and trust in AI-driven processes, thereby aligning closely with the fellowship’s mission of human-centered innovation and interdisciplinary collaboration for the public good.

Franziska M. Poszler: Research-based theater: An innovative method for communicating and co-shaping AI ethics research & development

This project will implement a creative approach to conducting, educating, and communicating AI ethics research through the lens of the arts (i.e., research-based theater). The core idea revolves around conducting qualitative interviews and user studies on the impact of AI systems on human ethical decision-making. It focuses specifically on exploring the potential opportunities and risks of employing these systems as aids for ethical decision-making, along with their broader societal impacts and recommended system requirements. Generated scientific findings will be translated into a theater script and (immersive) performance. This performance seeks to effectively educate civil society on up-to-date research in an engaging manner and facilitate joint discussions (e.g., on necessary and preferred system requirements or restrictions). The insights from these discussions, in turn, are intended to inform the scientific community, thereby facilitating a human-centered development and use of AI systems as moral dialogue partners or advisors. Overall, this project should serve as a proof of concept for innovative teaching, science communication and co-design in AI ethics research, laying the groundwork for similar projects in the future.

More information on the project can be found here: https://www.ieai.sot.tum.de/research/moralplai/

Malte Toetzke: Developing the Google Maps for the Climate Transition

I envision developing the Google Maps for the climate transition. Business leaders and policymakers need more comprehensive and timely evidence to accelerate the industrial development of climate tech effectively. With recent advances in Artificial Intelligence (AI), it is now possible to develop models that generate such evidence at large scale and in near real time. In this project, I will analyze the global network of organizations collaborating on climate-tech innovation. The network is built by processing organizations' social media posts with large language models (LLMs). It includes key public and private actors and spans various types of climate technologies (e.g., solar, hydrogen, electric vehicles) and types of collaborations (R&D collaborations, demonstration projects, equity investments). I will use the fellowship to conduct in-depth analyses that generate valuable insights for managers and policymakers on facilitating innovation clusters. Furthermore, I plan to operationalize the information retrieval and processing to enable analyses in real time.
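The collaboration network described here can be pictured as a graph whose nodes are organizations and whose edges carry a technology and a collaboration type. The sketch below is purely illustrative: the records stand in for hypothetical LLM-extracted output, and none of the organization names or fields come from the actual project.

```python
# Illustrative sketch: building a collaboration network from
# hypothetical LLM-extracted records (org_a, org_b, technology, kind).
from collections import defaultdict

extracted = [  # stand-in for LLM output over social media posts
    ("SolarCo", "UniX", "solar", "R&D collaboration"),
    ("H2Start", "UtilityY", "hydrogen", "demonstration project"),
    ("SolarCo", "FundZ", "solar", "equity investment"),
]

edges = defaultdict(list)  # organization -> list of (partner, tech, kind)
for org_a, org_b, tech, kind in extracted:
    edges[org_a].append((org_b, tech, kind))
    edges[org_b].append((org_a, tech, kind))

# Simple degree count as a proxy for centrality: who collaborates most?
degree = {org: len(partners) for org, partners in edges.items()}
print(max(degree, key=degree.get))  # SolarCo appears in two records
```

On real data, the same edge list could feed a graph library for cluster detection and centrality analyses; the point here is only that each extracted collaboration becomes a typed, undirected edge between two organizations.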

Chiara Ullstein: Participatory Auditing (in cooperation with Audit.EU)

The EU AI Act posits that providers (management and developers) of high-risk AI systems have to undergo conformity assessment. The conformity assessment encompasses several measures that are supposed to corroborate that a system is legally compliant, technically robust, and ethically sound, and can be considered ‘trustworthy AI’. The project ‘participatory auditing’ aims to contribute to the project Audit.EU (1) by exploring how companies can leverage their learnings from established compliance practices such as for the GDPR and (2) by proposing participation as an approach to source AI Act compliance-relevant information from suitable stakeholders to increase inclusivity and mitigate risks of discrimination. Participation is considered to enhance the process of achieving compliance through a comprehensive testing and feedback process. Based on learnings from established compliance measures, a framework for performing auditing in a participatory manner and in accordance with the EU AI Act will be developed and evaluated. The primary goal of the framework is to serve developer teams as a guideline.

Niklas Wais: Law & AI: Navigating the Intersection

Most areas of law that should in principle be relevant for AI currently leave many intersectional questions unanswered. These questions remain open because jurisprudence cannot pursue its task of incorporating AI into the existing dogmatics: it lacks sufficient technological understanding. At the same time, developers lack knowledge of the law and therefore base their design decisions solely on performance rather than on compliance with, for example, data protection or anti-discrimination law. Although students from various professional backgrounds want to learn more about the underlying interface issues, truly interdisciplinary educational material is missing. My project will address this gap and transform the rare specialist expertise that currently exists only at TUM into a freely available online course. By fostering interdisciplinary collaboration between law and technology and sharing cutting-edge knowledge as effectively as possible, the project seeks to promote the responsible use of AI for the benefit of society.

As partners of the "Festival der Zukunft," we will be at the Future Island at the Munich-based Deutsches Museum. Meet us there:

GovTech in Action: Overcoming Digitalization Challenges in Germany

Germany's public sector faces significant challenges in its digital transformation. This panel discussion will explore how GovTech can provide solutions to these persistent issues. We will delve into the potential of AI to enhance public services and discuss the importance of government collaboration with start-ups. The panel will also address the major hurdles, including regulatory constraints, data privacy concerns, and the cultural shift needed within governmental organizations. Join us to uncover how innovative technologies can drive the digital future of Germany's public sector. Moderated by our Managing Director, Markus Siewert.

Panelists:

  • Sandra Pavleka, acatech – Deutsche Akademie der Technikwissenschaften
  • Vanessa Theel, Co-Founder & CRO at SUMM AI
  • Lars Zimmermann, Co-Founder and Board Member of the GovTech Campus Deutschland

Thursday, 27 June 2024 - 3 - 4 PM 
Dome Stage - Festival der Zukunft - Deutsches Museum
More Information here.

 

Discover our Labs & Projects at the Family Day - Free Entrance on Saturday, 29 June 

At our walk-in booth, you will get an insight into the work of the TUM Think Tank. Here we dive into the future together, peer through the latest technologies into a world of tomorrow and yesterday, and discuss, experiment and explore together how we can shape and use technologies responsibly.

Join us on the first floor of the main building! Bring your family and friends and enjoy a glimpse into the future with us!

The research project "Using AI to Increase Resilience against Toxicity in Online Entertainment (ToxicAInment)" by Prof. Dr. Yannis Theocharis (Chair of Digital Governance), funded by the Bavarian Research Institute for Digital Transformation (BIDT), explores the spread of extremist, conspiratorial, and misleading content on social media, investigating how such content is embedded in entertaining material. It aims to deepen our understanding of this content's impact on user behavior by combining theories of entertainment, visual communication, and toxic language with AI methods. The project makes an important contribution to analyzing and combating online toxicity. More information can be found on the project page or in the BIDT press release.

 

In a significant development, the European Parliament has adopted its negotiating position on the AI Act. This paves the way for discussions with EU countries in the Council. Once finalized, the AI Act will be the world's first comprehensive legislation on artificial intelligence. As discussions continue, members of our Generative AI Taskforce are providing valuable insights into the implications and potential of the AI Act.

A common point of public debate is how generative AI will change our workforce. Isabell Welpe, Chair of Strategy and Organisation at TUM and a member of the TUM Think Tank's Generative AI Taskforce, notes:

As the field advances at an unprecedented pace, regulatory frameworks are trying to keep up. Another member of the Generative AI Taskforce, Christoph Lütge, who holds the Peter Löscher Chair of Business Ethics at TUM and is also the director of the TUM Institute for Ethics in AI, outlines the challenges of regulation:

“The recent adoption of the AI Act by the European Parliament underscores the pressing need to regulate the rapidly advancing field of Artificial Intelligence. The AI Act represents the European Union's ambitious endeavor to establish a regulatory framework for AI. However, the challenge lies in striking a delicate balance between safeguarding fundamental rights, fostering ethical AI development, and avoiding any unintended stifling of innovation."

 

Christian Djeffal, Assistant Professor of Law, Science and Technology at TUM, shares his perspective on the details of the AI Act in a blog post, after participating in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand.

He outlines his aspirations for possible improvements to the AI Act:

 

 

Dirk Heckmann holds the Chair for Law and Security in Digital Transformation at TUM and also serves as co-director of the Bavarian Institute for Digital Transformation (bidt). As a member of the taskforce, he appreciates the European legislature's recognition of the urgent need to regulate AI with the world's first comprehensive legal framework for "trustworthy AI", further explaining:

 

Urs Gasser, Chair for Public Policy, Governance and Innovative Technology at TUM and Dean of the TUM School of Social Sciences and Technology, was invited to write an editorial for “Science” in which he wrote:

An undertaking to which the Generative AI Taskforce has devoted its work.

 

On Friday, June 23rd 2023, members of the Generative AI Taskforce participated in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand. One main focus of the meeting was to learn more about Europe’s approach to AI governance, and in particular to take a close look at the EU AI Act (AIA) and explore how Europe deals with the rise of generative AI. Christian Djeffal, Assistant Professor of Law, Science and Technology and member of our Gen AI Taskforce, gave a short presentation on this issue. In this blog post, he shares his key takeaways.

The AI Act of the European Union could serve as an effective balancing act between risk regulation and innovation promotion. In my view, the recipe for an ideal equilibrium includes:

Therefore, the AI Act has potential, with its blend of a broad framework, specific sector regulations, and enforcement. However, it is the finer details that need refining for it to truly thrive. A key area of focus should be honing the definition of high-risk systems. For example, the current outlines in Annex III are so expansive that they could include applications that hardly meet the high-risk criteria.

Against this backdrop, AI sandboxes are brimming with potential, serving as a breeding ground for innovation and a practical tool for regulatory learning. Current proposals around the AI Act are mostly about establishing general structures as well as coordination and cooperation among member states. The success of these sandboxes relies heavily on their effective rollout by those states. Interestingly, other European legal developments could be game-changers here: the Data Governance Act, which allows for protected data sharing, might boost sandboxes to an entirely new level, as it would also allow data to be shared under the protection of data protection or intellectual property law.

If I could make a wish concerning the AI Act, I would wish for more participatory elements, particularly in risk management. Engaging users and citizens in identifying and mitigating risks is crucial. Hence, it would be beneficial to incorporate such practices "where appropriate". Analogous provisions already exist in the General Data Protection Regulation and the Digital Services Act. It is a fallacy to believe that only companies, compliance departments, and bodies responsible for algorithmic assessments can fully understand the societal implications of new systems.

Generative AI and its fast developments are baffling experts, policymakers, and the public alike. We have spent the last week at the Berkman Klein Center for Internet & Society at Harvard University, attending the "Co-Designing Generative Futures – A Global Conversation About AI" conference. After having talked with experts, decision-makers, and stakeholders from diverse fields and disciplines around the world, we came back with new questions and “mixed feelings” about the impact of generative AI: it is wicked, disruptive, complex, and ambiguous.

As we conclude this conference week, we are sure that capacity building is the answer to the duality between the incredible possibilities generative AI holds and the threats it poses. As we think about governance and implementation, let's dive into six things you can already do, and which we are fostering at TUM Think Tank, to shape the future of generative AI:

1. Implement ethical guidelines: Promote the development and adoption of ethical frameworks that prioritize transparency, fairness, and accountability in the use of generative AI.

2. Collaborate across disciplines: Foster collaboration between technologists, policymakers, ethicists, and diverse stakeholders to collectively address the challenges and risks associated with generative AI.

3. Research and development: Support research initiatives that focus on responsible AI, including bias mitigation, privacy preservation, and effective detection of generated content.

4. Educate and raise awareness: Share knowledge and raise awareness about the opportunities and challenges of generative AI, empowering individuals and organizations to make informed decisions.

5. Champion diversity and inclusion: Encourage diverse representation and inclusivity in the development and deployment of generative AI systems to mitigate biases and ensure equitable outcomes.

6. Boost positive impact on the economy and education: Support businesses in adopting the highly innovative possibilities of generative AI. Strengthen our education system not only to make use of the technology but also to support skill development.

The connections made, insights shared, and ideas generated during this week are great examples of collective capacity building. The conference was hosted in collaboration with the Global Network of Internet & Society Research Centers, BI Norwegian Business School, Instituto de Tecnologia e Sociedade (ITS Rio), and the TUM Think Tank.

Stephanie Hare joined us on the evening of 27 February 2023 to present the main topics of her book "Technology Is Not Neutral: A Short Guide to Technology Ethics". In her book, she addresses some key questions surrounding modern digital technologies: one focus is how developers of technology, but also society at large, can seek to maximize the benefits of technologies and applications while minimizing their harms.

Some key take-aways from the discussion

Using a philosophical framework, she draws on several different fields and approaches within ethics and philosophy to call attention to these issues. For instance, metaphysics points out what problem needs to be solved, while epistemology helps us ask about the relevant sources of knowledge to address these questions and problems. Political philosophy, in turn, highlights the question of who holds the power to pursue these solutions, while aesthetics highlights how technologies should be designed and displayed. Ethics, finally, addresses the question of what values are baked into technology.

Throughout the discussion with Alexander v. Janowski and the audience, we addressed crucial observations about the design of technologies that we can make in our everyday world. Examples included how the size of many smartphones is fitted to larger, typically male hands, and how airbags in vehicles have only been tested on mannequins that resemble the average male body. These observations underscored the ethical question of which people and entities do, and should, have control over the design and application of technologies.

Overall, Stephanie Hare hopes that her book "hacks humans and human culture" by contributing to the effort to inspire people to see the biases and the intentional or unintentional inequalities that technologies will take on from their developers if left unscrutinized.

To learn more about Stephanie Hare, the book, and her other works, visit her website at https://www.harebrain.co  

Don't lose track of innovation.

Sign up to our newsletter and follow us on social media.