Thoughts and questions on agents, agency and institutions
A research journal by Nicklas Lundblad
This is a research journal for the project I am undertaking at the TUM Think Tank, on agents, agency and institutions. I hope to explore the subject broadly, thinking through its various aspects and sharing the work here.
But first let me explain why I think this is such an interesting area. Agentic AI has now been robustly identified as one of the core trends that we will need to understand in more detail over the coming years. There are questions ranging from liability to whether agents should be regarded as moral patients, and there is no shortage of research, writing and debate – so why engage with such a saturated field?
The reason is that I am interested in what makes agents agents in the first place – I am interested in agency. I find agency a fascinating subject, and am convinced that we might have cheated in our project to develop artificial intelligence by not starting with agency.
We are, to quote David Krakauer, who directs the Santa Fe Institute, teleonomic matter – or stuff that wants stuff.
The research journal is published every Friday!
Read the journal editions here
-
Note 1: Stuff that wants stuff
Agentic AI has now been robustly identified as one of the core trends that we will need to understand in more detail over the coming years. There are questions ranging from liability to whether agents should be regarded as moral patients, and there is no shortage of research, writing and debate – so why engage with such a saturated field?
-
Note 2: Artificial Apprentices
Imagine you are training a junior colleague in the craft you're engaged in – what is the best way of ensuring that they develop fast, and how can you help them develop skills? Do you tell them everything you know about your area of expertise? Or do you show them how to deal with specific cases? It seems obvious that you should do the latter, right? And that is why we have the notion of apprenticeship.
-
Note 3: After the web – the mesh
Just as WAP made the web “work” on early phones, MCP makes the web of software “work” for early agents. And like WAP, it reveals the awkwardness of translation. Agents must mimic developers by issuing API-style calls and interpreting structured responses, rather than engaging with digital environments natively. For now, MCP is essential scaffolding – it allows the emerging agent ecosystem to function across today's fragmented landscape. But if the pattern holds, it will likely fade once systems become agent-native, meaning that data, applications and environments are built to speak directly to intelligent models rather than through a compatibility layer. In that sense, MCP stands to agents as WAP once did to mobile phones: a necessary bridge to an unfamiliar world that will disappear when the new medium learns to walk on its own.
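To make the translation concrete, here is a minimal sketch of the kind of exchange MCP standardizes, assuming a JSON-RPC 2.0 transport as in the MCP specification; the tool name and its arguments are hypothetical:

```python
import json

# Under MCP, an agent does not browse; it issues structured JSON-RPC 2.0
# calls: first discovering the tools a server offers, then invoking one
# by name. The tool "search_flights" and its arguments are hypothetical.

discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

invoke = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_flights",
        "arguments": {"origin": "MUC", "destination": "ARN", "date": "2026-03-01"},
    },
}

# Sent over stdio or HTTP to an MCP server; the agent then parses the
# structured result, much as a developer would parse an API response.
print(json.dumps(invoke, indent=2))
```

The point is not the syntax but the posture: the agent behaves like a developer calling an API, not like a user inhabiting an environment.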
-
Note 4: Delegation and duration
Let’s say that we stop calling agents agents, perhaps because we believe that agency is a much harder problem than has hitherto been recognized. True agency – to desire things, to want them – seems to be a deeply evolutionary quality in a system, and exploring that may be its own project. We could then speak of artificial delegates instead – a much better, and arguably much clearer, term. If we do, we quickly realize that the key dimensions that policy makers will be interested in are scope and duration. To what extent do you delegate, and for how long?
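The two dimensions can be made concrete. A minimal sketch of what a delegation record could look like, with illustrative field names rather than any existing standard:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# A sketch of what an artificial delegate's mandate could record: the two
# dimensions of scope and duration. All names here are illustrative
# assumptions, not an existing standard.

@dataclass(frozen=True)
class Delegation:
    principal: str       # who delegates
    delegate: str        # the system acting on their behalf
    scope: frozenset     # e.g. {"read:calendar", "book:travel"}
    expires: datetime    # how long the mandate runs

    def permits(self, action: str, at: datetime) -> bool:
        """An action is allowed only inside both scope and duration."""
        return action in self.scope and at < self.expires

mandate = Delegation(
    principal="alice",
    delegate="travel-agent-v1",
    scope=frozenset({"read:calendar", "book:travel"}),
    expires=datetime.now(timezone.utc) + timedelta(days=7),
)
print(mandate.permits("book:travel", datetime.now(timezone.utc)))  # True
```

Everything a policy maker might ask of the delegate – what may it do, and until when – then becomes a property of the mandate rather than of the model.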
-
Note 5: Agents, worlds and hands
Through the hand, the brain learns the affordances of the world – how objects resist, yield, or break – and refines its models accordingly.
"Are world models a necessary ingredient for flexible, goal-directed behavior, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent's policy, and that increasing the agent's performance or the complexity of the goals it can achieve requires learning increasingly accurate world models."
-
Note 6: Orchestrating intelligence
The fact that splitting an AI into multiple personalities sometimes gives better results than just asking the AI itself is deliciously weird, until you realize that this is sort of how our intelligence works as well - both individually and collectively.
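A minimal sketch of what such orchestration can look like, where `ask` is a hypothetical stand-in for any chat-model call and the personas are invented for illustration:

```python
from collections import Counter

# A sketch of persona orchestration: the same question is posed under
# several system prompts, and the answers are aggregated by majority
# vote. `ask` is a hypothetical stand-in for any chat-model API call.

def ask(persona: str, question: str) -> str:
    raise NotImplementedError("plug in a real model call here")

PERSONAS = [
    "You are a cautious statistician. Answer in one word.",
    "You are a creative strategist. Answer in one word.",
    "You are a skeptical lawyer. Answer in one word.",
]

def orchestrate(question: str) -> str:
    answers = [ask(p, question) for p in PERSONAS]
    # The committee disagrees productively; we keep the modal answer.
    return Counter(answers).most_common(1)[0][0]
```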
-
Note 7: Intelligence and its vital flaws
As we design artificial agents (or delegates), one of the interesting challenges we will have to sort out is how to design mechanisms that ensure that the agent does not act in certain circumstances.
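One family of such mechanisms is a veto list: declarative predicates that each name a circumstance in which the agent must not act. A sketch, with illustrative fields and thresholds:

```python
from dataclasses import dataclass
from typing import Callable, List

# A sketch of a veto mechanism: each predicate names a circumstance in
# which the agent must not act, and any single veto blocks execution.
# The Action fields and thresholds are illustrative assumptions.

@dataclass
class Action:
    kind: str
    cost: float
    reversible: bool

VETOES: List[Callable[[Action], bool]] = [
    lambda a: a.cost > 100.0,      # nothing expensive
    lambda a: not a.reversible,    # nothing that cannot be undone
    lambda a: a.kind == "legal",   # nothing with legal consequences
]

def may_act(action: Action) -> bool:
    return not any(veto(action) for veto in VETOES)

print(may_act(Action("email", 0.0, True)))        # True
print(may_act(Action("purchase", 250.0, False)))  # False
```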
-
Note 8: Killing agents
As we put more agency into the world, we also have to make sure that we have the means to terminate that agency, or interrupt it in different ways. The way we often conceptualize this is through making sure that the agent checks in with the user before it does something that is hard to reverse or has monetary, or perhaps legal, consequences. This model assumes that we can design such interruptions and that the agent will abide by them. There are, of course, also other models that we can use to ensure that we can control agency.
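Two of these models can be sketched side by side: a check-in gate for hard-to-reverse steps, and an external stop signal that a supervisor can set at any time. Both still assume the agent's loop cooperates, which is exactly the assumption at issue; the step objects and their attributes are hypothetical:

```python
import threading

# A sketch of two interruption models: a check-in gate before
# hard-to-reverse steps, and an external stop event a supervisor can set
# at any time. The step objects and their attributes (description,
# irreversible, execute) are hypothetical.

stop = threading.Event()  # set from outside to terminate the agent

def confirmed(description: str) -> bool:
    """Check in with the user before acting."""
    answer = input(f"Agent wants to: {description}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run(plan) -> None:
    for step in plan:
        if stop.is_set():
            break  # external termination always wins
        if step.irreversible and not confirmed(step.description):
            continue  # user veto: skip this step
        step.execute()
```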
-
Note 9: Design principles for dyads
As we moved from models to agents, we still retained, to some degree, the notion of a single, cohesive unit of intelligence. The agent is still an individual. Modern philosophy, of course, has abandoned the idea that we are individuals (we are more like dividuals), but here – as elsewhere in the field of artificial intelligence – we are laboring under the metaphors of yesteryear. The monad still reigns supreme in our understanding of intelligence, as does the idea that intelligence is something in our heads, and not in our relationships.