
Determining the impact of Large Language Models (LLMs) such as ChatGPT on our societies is no easy task. Experts around the world, including those at the TUM Think Tank, are working to identify responsible pathways forward.

But what happens when we include the public in the debate?

The Responsible ChatGPT World Café, hosted by three TUM students - Adriana Mardueño, Julian Bräuer, and Anna Ackermann - set out to answer that question. On August 1st, participants from diverse backgrounds gathered at the TUM Think Tank to debate the use of ChatGPT in work, education, and personal contexts.

Although optimism prevailed among the participants, the dialogue quickly turned to specifics. Questions arose: Is it responsible to use chatbots to flirt better on dating apps? Are LLMs more biased than human teachers? How will industries and workers be affected?

In the end, the bottom line was: "it's complex and it depends". ChatGPT and other models will be extremely impactful, but the course of their influence lies within our collective agency. Effective regulation is paramount, and individuals depend on policymakers to shape innovation responsibly.

The event also showed that public participation must be part of the solution. A short survey conducted before and after the event made clear that participants developed more nuanced and differentiated perspectives. Being confronted with diverse viewpoints helps break out of bubbles and find holistic solutions.

To further harness the insights and enthusiasm generated by this event, we have set up a real-time online discussion forum. Everyone is invited to join the ongoing discourse and contribute their thoughts and insights (anonymously) in our collective pursuit of responsible solutions. Join the Responsible ChatGPT World Café here, and let your voice be part of the unfolding narrative of responsible innovation.

