
Data Studies and Ethical Data Work – Two positions available

The Ethical Data Initiative at the Chair of Philosophy and History of Science and Technology (est. September 2024) of Prof. Sabina Leonelli is a global coordinating hub for research on data ethics and related education and policy activities. We look forward to collaborating with a highly motivated postdoctoral researcher who shares our deep interest in exploring the educational, social, and governance issues that emerge in the context of working with data (both within and beyond the realm of scientific research), as well as possible future scenarios and applications, particularly in relation to Artificial Intelligence.
We are looking for fellows with a background in policy, governance or data science and a strong interest in data studies and practical applications.

We are a highly interdisciplinary team with world-leading expertise in the philosophy, history and social studies of data, data ethics, and the governance of data and AI. We invite the researcher taking on this position under the Chair of Philosophy and History of Science and Technology to become a core contributor to the Ethical Data Initiative.

About the Ethical Data Initiative at the TUM Think Tank

The Ethical Data Initiative is a global coordinating hub for research on data ethics and related education and policy activities. It brings together a network of relevant partners with the aim of scaling up available resources to foster just, ethical and responsible data production, trading, processing and use around the world. We are particularly interested in developing training resources and governance models for under-resourced parts of society, including research institutions as well as small and medium enterprises, civil society organizations, social services, public administrations and other public bodies – which do crucial data work across the globe yet do not typically have in-house resources to develop skills in responsible data management.

What we offer:


We look forward to receiving your application by 1st of April 2024!
We will review applications on a continuous basis; they should therefore be submitted as early as possible via e-mail to:

Learn more and apply here.

The Urban Digitainability Lab aims to integrate digitalization and sustainability in urban spaces in order to shape sustainable urban services of general interest. It addresses the challenge that, despite the existing calls for such integration, a common understanding and effective concepts for combining sustainability and digitalization only exist in rudimentary form. The project plans to address this shortcoming by creating a community of practice, training opportunities and a catalog of criteria for measuring success. In addition, the exchange between science and practice is to be strengthened in order to develop innovative and sustainable solutions in the areas of "mobility", "housing" and "health".

We are excited to be building the team and are looking for three positions.

Find the role descriptions and more information on the application process (in German) through the button below.

Job Offers

Apply by April 2 and become part of the vibrant TUM Think Tank community. We are looking forward to meeting you!

The Stiftung Mercator supports this project at the TUM Think Tank within its funding area „Digitalisierte Gesellschaft“ (Digitalized Society) for a duration of three years.

Become a Friedrich Schiedel Fellow and Embark with Us on a Journey of Human-Centered Innovation for Technology in Society!

Beyond traditional disciplinary boundaries, we seek fellows from diverse backgrounds who are dedicated to addressing societal challenges by developing approaches that advance the public good. Together with our fellows, we want to explore how we can employ emerging technologies for the benefit of society and ensure that emerging technologies are developed in a responsible, human-centered, and democratic way.

With their translational research at the intersection of technology and society, accepted fellows will be agents of interdisciplinary collaboration, building new and bolstering existing bridges between social sciences and natural sciences, engineering, life sciences, economics, health sciences, and medicine at TUM.

Core Objectives of the Fellowship Program

Empowering Innovators: Join a cohort of forward-thinkers, action-oriented researchers and trailblazers. Our fellowship program is designed to provide you with resources and a community needed to turn your groundbreaking ideas into reality.


Designed to echo the commitment of Friedrich Schiedel, a visionary entrepreneur who left an indelible mark on corporate social responsibility, the Friedrich Schiedel Fellowship for Technology in Society aims to strengthen translational and actionable research in the ethical, social, political, economic, and legal realms of science and technology.

Why Apply for the Fellowship Program

Hosted by TUM School of Social Sciences and Technology and the TUM Think Tank

Friedrich Schiedel Fellows for Technology in Society are integral members of the TUM Think Tank community. We expect fellows to actively engage in their projects at TUM and its respective schools and to become vibrant contributors to our fellowship community.

Aiming to build bridges between the various TUM Schools, potential fellows need the support of two professors (one from the TUM SOT and one from a different TUM School) who endorse the application and will work with the fellow on the proposed interdisciplinary project.

Applications Now Open – Don't Miss Your Chance to Shape Technologies for the Public Interest!

Apply now and become a part of a vibrant community! Applications are open until 28 March 2024. Please send your complete application to

Seize the chance to work on technologies for the public interest and be a Friedrich Schiedel Fellow for Technology in Society.

FAQ Fellowship
Application form

The research project "Using AI to Increase Resilience against Toxicity in Online Entertainment (ToxicAInment)", led by Prof. Dr. Yannis Theocharis (Chair of Digital Governance) and funded by the Bavarian Research Institute for Digital Transformation (BIDT), explores the spread of extremist, conspiratorial and misleading content on social media, investigating how such content is embedded in entertaining formats. Combining entertainment theories, visual communication and research on toxic language with AI methods, it aims to deepen our understanding of this content's impact on user behavior. The project makes an important contribution to analyzing and combating online toxicity. More information can be found on the project page or in the BIDT press release.


After an intense three-day negotiation marathon, negotiators from the Council presidency and the European Parliament have successfully reached a provisional agreement on the proposal concerning standardized regulations for artificial intelligence (AI), known as the Artificial Intelligence Act. The objective of the draft regulation is to guarantee the safety of AI systems introduced to the European market and employed within the EU, with a commitment to upholding fundamental rights and EU values. This groundbreaking proposal additionally seeks to foster increased investment and innovation in the field of AI within Europe. Following this provisional agreement, efforts will persist at a technical level in the upcoming weeks to conclude the specifics of the new regulation. Once this work is completed, the presidency will present the compromise text to the representatives of the member states for endorsement. The comprehensive text will require confirmation from both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators.

We asked members of our community for their insights on the AI Act. This is what they think:

Samson Esaias: Associate Professor of Law at BI Norwegian Business School, Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University

"Since the Commission's April 2021 proposal, consensus has emerged within the bloc's legislative bodies on a risk-based approach to AI regulation, along with innovation-supporting measures like sandboxes. Debates have focused on sensitive issues such as biometric identification for law enforcement and regulation of General Purpose AI (GPAI). The Parliament advocated for stronger fundamental rights safeguards, while the Council favoured broader exemptions for the use of biometric identification for law enforcement. The Parliament's GPAI regulatory proposal also encountered resistance from the Council, partly because it focuses on the technology itself instead of the associated risks. Nonetheless, judging from the press releases, the Parliament seems to have secured significant wins, including bans on biometric identification using sensitive data, internet photo scraping for facial recognition, restrictions on predictive policing, and mandatory rights impact assessments for high-risk systems. Similarly, despite strong resistance from some member states, the latest draft also includes important obligations on GPAI and systemic foundational models, though the criteria for the latter may be overly stringent. This focus on GPAI regulation, initially absent from the Commission's draft, highlights the shift in priorities over the past two years. The question that remains is whether these additions will stay relevant in two years, when the legislation comes into effect, or whether they will highlight the need for a rethink on how to regulate such rapidly evolving technologies."


Urs Gasser: Professor of Public Policy, Governance and Innovative Technology, Rector of the Hochschule für Politik (HfP), Dean of the TUM School of Social Sciences and Technology, and Leader of the TUM Generative AI Taskforce

"The agreement on the AI Act marks an important milestone. Above all, it is a powerful political signal to the global community, showing that the EU lawmaking bodies are functional and can come up with meaningful guardrails in a complex and fast-moving normative field, with human rights and democratic values as lodestars. Whether the AI Act as a complex legal and regulatory intervention – at times resembling a Rorschach test – will live up to the high hopes expressed by its leading proponents remains to be seen. AI governance as a normative field comes with many unknowns. Perhaps the biggest challenge ahead is to learn continuously and manage both legal path dependencies and unintended consequences that often come with such ambitious legislative projects, as the recent history of law, technology, and society teaches us."


Noha Lea Halim: Doctoral Student and Research Assistant at the Professorship for Governance, Public Policy & Innovative Technologies at the TUM School of Governance, Assistant in the TUM Generative AI Taskforce

"With the AI Act agreement, the EU brings forward a global benchmark regulatory proposal, representing the most ambitious framework to date. By recognizing the need to regulate not only the economic but also the societal impacts of AI, it sets the tone for how AI might unfold in the future and implies far-reaching effects for the research and development of AI systems, in Europe and beyond.
The 12–24 month implementation phase will offer a glimpse of the regulation's long-term capability to adapt to the technology's disruptive potential, as well as the Union's ability to build the capacity needed to bring the proposal to life.

The landmark proposal only marks the beginning of addressing AI's future challenges; moving forward, there will be many more to come."


Timo Minssen: Law Professor, CeBIL Director, University of Copenhagen and Global Visiting Professor at TUM in Spring 2024

"While much controversy remains, reaching an agreement on the EU AI Act was crucial since the time to decide where to regulate (or not regulate) is now. AI is evolving so rapidly, and it's already used by millions of citizens in many areas and stages of life on a daily basis. Both the risks and the opportunities are real, and we must address them swiftly if we want to keep control and reap the benefits from this technology in sustainable, safe and fair ways.

Given the complexity of the topic and the need for trade-offs, the delays accompanying the negotiations of the AIA were not surprising. Many of the AIA's more restrictive provisions appear to have been slowly watered down. Mostly due to industry interests and competitive concerns, regulatory thresholds have been gradually lowered and more traditional value-based boundaries have been stretched. For better or worse, we can also see similar developments in the drafting of guidelines such as the Chinese interim regulations on generative AI, as well as in Western countries such as the US and UK.

It is clear that these changing policy perspectives illustrate how AI challenges our traditional values and concepts, which signifies the enormous stakes of the task at hand. The impact not only on businesses, welfare, industry, innovation, and the knowledge commons, but also on individuals' access to, and protection from, powerful technology is massive. Calls for banning or further regulating specific applications in the EU on ethical and value-based legal grounds and precautionary approaches had, and will have, to be balanced against both competitive disadvantages and health risks arising from opportunities missed by setting overly high regulatory thresholds. It can therefore be assumed that the significance of so-called regulatory sandboxes will grow, although this is a concept in need of further clarification.

It also seems to me that, while rigorously protecting human rights and fundamental values, the AIA has, with its risk-category-based approach, generally taken a regulatory "as open as possible, and as closed (i.e. regulated) as necessary" position. In particular with regard to lower-risk AI systems, this could be good news for many innovators and SME developers, though some might have preferred even fewer rules. On the other hand, those concerned about the risks, or established companies with powerful IP portfolios and regulatory departments, might have preferred an "as closed as possible, and as open as necessary" approach with high regulatory thresholds, be it to prevent risks – or (!) newcomers and competition.

In that regard it is also important to bear in mind that the current debates essentially concern high-level rules. What matters in daily life is how these are implemented where the action happens, whether the rules we set are enforceable and feasible, and whether compliance can be monitored. If the choice is between a jungle of extremely detailed rules with insufficient means for enforcement and fewer but very robust and enforceable rules, I definitely prefer the latter, to increase respect for the rules."

Armando Guio Espanol: Affiliate at the Berkman Klein Center for Internet & Society at Harvard University

"The EU AI Act is a regulation that will have an impact not only in Europe but around the world. Its adoption will lead many countries to decide on their own rules for the development and implementation of this technology. It will be essential to follow the work of the coming months and the implementation process that will emerge in the European Union. In 2024 we will observe several countries join Europe in officially adopting new regulations for AI systems; for example, several Latin American countries are currently discussing regulatory proposals for AI. The EU AI Act will have a considerable effect in mobilizing and accelerating many of these discussions. It will also be interesting to see how this policy fragmentation impacts the way the technology is used and deployed, and how some of the biggest companies respond to this process."


Dirk Heckmann: Law Professor at the TUM School of Social Sciences and Technology, Director, Generative AI Taskforce

"Whether the AI Act will prove to be effective regulation for AI will only be shown in legal practice. Political satisfaction does not replace legal certainty."





When: January 18, 2024
Where: Downtown Munich, Bayerischer Rundfunk (BR)
Hosted by: Bayerischer Rundfunk & TUM
Intended Audience: Researchers (professors, postdocs, PhD candidates) with an interest in AI and/or journalism and AI.

Join us in exploring the intersection of technology and journalism. This workshop focuses on getting to know the projects and methods used in this exciting interdisciplinary field and on discovering possible collaborations on use cases and methods. We look forward to connecting researchers in foundations, applications, and data! We explicitly encourage junior researchers to apply.

We want to hear from you if your work falls within one of the following topics:

Deadline for registration is December 5, 2023:

Participants will be selected within two weeks after the submission deadline.

If you have any questions, please contact Emmelie Korell at

We look forward to seeing you in January!

“I am very pleased that, together with Microsoft, we have succeeded in developing an AI chat system tailored to the requirements of the Fraunhofer-Gesellschaft — and we did so in a very short time,” says Prof. Ingo Weber, director of Digital Transformation and ICT Infrastructure at the Fraunhofer-Gesellschaft. “We have found that many colleagues wish to use chat-based AI applications for their work and for research. However, public solutions available so far are problematic for work-related purposes, especially when it comes to data protection, confidentiality and information security.”

Read the full report here.

Weber was also recently invited to the prestigious Dagstuhl seminar on the topic of software architecture and machine learning. Read more on that here.

Prof. Dr. Ingo Weber

In a significant development, the European Parliament has adopted its negotiating position on the AI Act. This paves the way for discussions with EU countries in the Council. Once finalized, the AI Act will be the world's first comprehensive legislation on artificial intelligence. As discussions continue, members of our Generative AI Taskforce are providing valuable insights into the implications and potential of the AI Act.

A common point of public debate is how generative AI will change our workforce. Isabell Welpe, Chair of Strategy and Organisation at TUM and a member of the TUM Think Tank's Generative AI Taskforce, notes:

As the field advances at an unprecedented pace, regulatory frameworks are trying to keep up. Another member of the Generative AI Taskforce, Christoph Lütge, who holds the Peter Löscher Chair of Business Ethics at TUM and is also the director of the TUM Institute for Ethics in AI, outlines the challenges of regulation:

“The recent adoption of the AI Act by the European Parliament underscores the pressing need to regulate the rapidly advancing field of Artificial Intelligence. The AI Act represents the European Union's ambitious endeavor to establish a regulatory framework for AI. However, the challenge lies in striking a delicate balance between safeguarding fundamental rights, fostering ethical AI development, and avoiding any unintended stifling of innovation."


Christian Djeffal, Assistant Professor of Law, Science and Technology at TUM, shares his perspective on the details of the AI Act in a blog post, after participating in a working meeting organized by the Electronic Transactions Development Agency (ETDA) of Thailand.

He outlines his aspirations for possible improvements to the AI Act:



Dirk Heckmann holds the Chair for Law and Security in Digital Transformation at TUM and also serves as co-director of the Bavarian Institute for Digital Transformation (bidt). As a member of the taskforce, he appreciates the European legislature's recognition of the urgent need to regulate AI with the world's first comprehensive legal framework for "trustworthy AI", further explaining:


Urs Gasser, Chair for Public Policy, Governance and Innovative Technology at TUM and Dean of the TUM School of Social Sciences and Technology, was invited to write an editorial for “Science” in which he wrote:

An undertaking to which the Generative AI Taskforce has devoted its work.


Data is changing how we live and engage with and within our societies and our economies. As our digital footprints grow, how do we re-imagine ourselves in the digital world? How will we be able to determine the data-driven decisions that impact us?

Members of the Munich School of Politics and Public Policy (Hochschule für Politik, HfP) are participants in the International Digital Self-Determination Network, a collaborative multi-stakeholder effort to define and advance the concept of digital self-determination. The concept proposes ways to create and engage in trustworthy data spaces and to ensure human-centric approaches to living in a data-driven society, with the aim of enabling and bolstering the digital self-determination of individuals.

As part of this collaboration, members of the HfP contributed to a conference on digital self-determination, co-organized in June 2022 with the Directorate of International Law (DIL) of the Swiss Federal Department of Foreign Affairs. At the conference, which took place in Lucerne, Switzerland, four studios were set up to present and explore use cases, and the learnings from these use cases were analyzed, compared, and discussed. HfP Rector Prof. Dr. Urs Gasser shared the collected insights with the audience and acted as a co-research partner, providing support during the conference. He organized and conducted several meetings with the DIL and the studios' stakeholders to discuss their progress, ensure a common methodological approach, and contribute to translating the findings into a draft report.

The network's research and outreach efforts were continuously supported by the team of the Professorship of Public Policy, Governance and Innovative Technologies, offering guidance, support, and mentoring to the expanding global network.

The other founding partners of the network are the Directorate of International Law, Swiss Federal Department of Foreign Affairs; the Centre for Artificial Intelligence and Data Governance at Singapore Management University; the Berkman Klein Center for Internet & Society at Harvard University; and The GovLab at New York University.
