Stuff that wants stuff
Note 1 from the research journal by Nicklas Lundblad: thoughts and questions on agents, agency and institutions
This is the first note in what I intend as a research journal for the project I am undertaking at the TUM Think Tank on agents, agency and institutions. I hope to explore the subject broadly, thinking through various aspects and sharing the work here.
But first, let me explain why I think this is such an interesting area. Agentic AI has been robustly identified as one of the core trends we will need to understand in more detail over the coming years. The questions range from liability to whether agents should be regarded as moral patients, and there is no shortage of research, writing and debate – so why engage with such a saturated field?
The reason is that I am interested in what makes agents agents in the first place: I am interested in agency. I find agency a fascinating subject, and I suspect that we may have cheated in our project to develop artificial intelligence by not starting with it.
We are, to quote David Krakauer, who directs the Santa Fe Institute, teleonomic matter – or stuff that wants stuff.
So far we have belonged to a very limited class of stuff that wants stuff, but we now face the very real possibility of building stuff that wants stuff yet is different from us – machines that have not evolved, are not embedded in ecosystems and lack many of our vulnerabilities.
And in doing so, we may build machines that are intelligent – indeed, it may well be that this is the only way to build something that is intelligent in the same way we are.
In a short history of intelligence, we could argue that it begins with agency – the ability to act – which evolution produced to solve sub-generational selection problems. A bug that has evolved a specific camouflage benefits from it only if it moves to a surface where the camouflage works, so agency compounds the evolved defense. Over time that agency is turned in on itself, perhaps in the strange loop suggested by Douglas Hofstadter, and becomes intelligence.
In this story, intelligence is agency shaping agency in a recursive loop, which means that any attempt to recreate our intelligence may need to mimic this particular arrangement rather than rest on deeply predictive models alone. You could even argue that reinforcement learning is agency acting on agency in some sense – but in specific, rather than general, cases, as the sketch below suggests.
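To make that contrast concrete, here is a minimal sketch, a toy of my own construction rather than anything from the literature: a tabular Q-learning agent in a five-state corridor. The environment, the reward function and all parameter values are illustrative assumptions. The point is simply that the agent's entire "want" is a reward function fixed from outside; it can refine how it pursues that want, but never revise it.

```python
import random

# Hypothetical toy example: tabular Q-learning in a five-state corridor.
# Everything here is an illustrative assumption, not a real system.

N_STATES = 5           # states 0..4; the goal sits at state 4
ACTIONS = (-1, +1)     # step left or step right
GOAL = N_STATES - 1

def reward(state: int) -> float:
    # Everything the agent "wants" lives here, hard-coded by the designer.
    return 1.0 if state == GOAL else 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def greedy(state: int) -> int:
    # Pick the highest-valued action, breaking ties at random.
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(500):
    state = 0
    for _ in range(200):                # cap episode length
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), GOAL)
        # The Q-update is behaviour reshaping future behaviour, but only in
        # service of the fixed reward; the agent cannot revise what it wants.
        target = reward(next_state) + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state
        if state == GOAL:
            break

# After training, the learned policy should point right (+1) at every
# state short of the goal.
print({s: greedy(s) for s in range(GOAL)})
```

The loop is agency acting on agency in the narrow sense that behaviour reshapes future behaviour, but only in pursuit of a want the agent did not choose and cannot change. A generally agentic system would also, in some sense, be the author of its own reward function.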
Agency is general – we want things – and this want, this will, is foundational to who we are.
Today's agents are agents in a much more limited sense – they are delegates, with a mandate or a mission, and they can be instantiated and deleted at will. Something that truly wants things cannot be, and this may well be a reason not to create such systems at all. They might not even be very useful, since they would arguably be too hard to control.
If we were to add such agents – with real agency – to our world, a lot might change. Many of our institutions are premised on a scarcity of agency: there are only so many people whose wants can be coordinated in a market, for example. If we assume an abundance of agency instead, those institutions may well crack under the added complexity.
We would, then, need to rethink our institutions.
All of this is terribly speculative, of course, but that is in the nature of beginnings: we start this research journal with a broad set of questions around agents and agency, and will drill down into specifics in the coming weeks and months. The project will be a success if we can find frameworks, insights and models for thinking through the kinds of changes that agents, and delegates, of different kinds might bring about – and what our policy responses should be.
Stay tuned for more!