
Artificial apprentices
Note 2 from the research journal by Nicklas Lundblad: thoughts and questions on agents, agency and institutions
Imagine you are training a junior colleague in the craft you're engaged in – what is the best way to ensure that they develop quickly, and how can you help them build skills? Do you tell them everything you know about your area of expertise? Or do you show them how to deal with specific cases? It seems obvious that you should do the latter, right? And that is why we have the notion of apprenticeship.
A recent paper noted this, and suggested that there is something we could call the “Agent Efficiency Principle”. The authors’ formulation:
“[M]achine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.”
(See Xiao, Y., Jiang, M., Sun, J., Li, K., Lin, J., Zhuang, Y., Zeng, J., Xia, S., Hua, Q., Li, X. and Cai, X., 2025. LIMI: Less is More for Agency. arXiv preprint arXiv:2509.17567.)
Or – put in other terms – we should not try to instruct agents theoretically, but instead apprentice them to the best in their field. This notion of artificial apprentices is actually much more accurate than the idea of artificial agents, since the bots do not act on their own: they act on our behalf, in ways that we have shown them.
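What might such strategic curation look like in practice? Here is a minimal sketch in Python. Everything in it is hypothetical – the fields, the quality score and the budget are illustrative stand-ins, not the LIMI authors’ actual pipeline – but it captures the contrast between fine-tuning on everything we happen to have logged and fine-tuning on a small, hand-picked set of high-quality agentic demonstrations.

```python
from dataclasses import dataclass


@dataclass
class Demonstration:
    """One recorded episode of an expert performing an agentic task."""
    task: str                 # what the expert was asked to do
    trajectory: list[str]     # the sequence of actions and tool calls taken
    outcome_success: bool     # did the episode actually solve the task?
    expert_rating: float      # hypothetical 0-1 quality score from a human reviewer


def curate(demos: list[Demonstration],
           min_rating: float = 0.9,
           budget: int = 100) -> list[Demonstration]:
    """Keep only successful, highly rated demonstrations, capped at a small budget.

    The 'less is more' point: the fine-tuning set is small and hand-picked,
    not everything we happen to have logged.
    """
    good = [d for d in demos if d.outcome_success and d.expert_rating >= min_rating]
    good.sort(key=lambda d: d.expert_rating, reverse=True)
    return good[:budget]


# Usage (hypothetical): fine-tune on curate(all_logged_episodes),
# not on all_logged_episodes directly.
```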
This, then, suggests that we should think about building apprentice architectures with humans and AI. What would be the main components of such architectures? (A rough sketch of how they might fit together follows the list below.)
- First, humans would have to reflect on their own practices in ways that are accessible to the agent. To sit down, reflect and share how something was done is a classic way of instructing an apprentice.
- Second, we need to figure out a way to ask strategic questions of the AI – questions that are not meant to elicit the answers we would like to hear, but to encourage the AI to change its behaviour. This is interesting because questioning in an apprenticeship looks very different from the questioning we engage in with AI today. Chatbots try to predict some combination of what is most likely to be a good reply and what they think we want to hear; the Socratic questioning of an apprentice, by contrast, is meant to help the apprentice adjust their model of the world and of the tasks to be performed in the craft. This requires building AI that can differentiate between the two modalities of questioning.
- Third, we need formal checks, certifications and tests of the agent’s capabilities. Just as an apprentice must pass some kind of test to achieve master status, an agent should show that it can solve a challenge that falls squarely within the domain of the craft we are training it in, but outside the distribution of what it has been trained on. This will be difficult, to be sure – but it is a good test of how deeply the new skills have become embedded in the agent.
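To make these three components a little more concrete, here is a rough sketch of how an apprentice architecture could be wired together. None of it comes from the paper: the class names, the two questioning modes and the certification routine are all assumptions, meant only to show how human reflections, Socratic questioning and out-of-distribution tests might fit into one structure.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable


class QuestionMode(Enum):
    """The two modalities of questioning discussed above (hypothetical labels)."""
    ANSWER_SEEKING = auto()   # chatbot-style: produce the reply the asker wants
    SOCRATIC = auto()         # apprenticeship-style: prompt the agent to revise its model


@dataclass
class Reflection:
    """A human expert's account of how a piece of work was actually done (component 1)."""
    task: str
    steps: list[str]
    rationale: str            # why it was done this way, not just what was done


@dataclass
class Apprentice:
    craft: str
    reflections: list[Reflection] = field(default_factory=list)

    def learn_from(self, reflection: Reflection) -> None:
        """Component 1: absorb a human reflection as training material."""
        self.reflections.append(reflection)

    def respond(self, question: str, mode: QuestionMode) -> str:
        """Component 2: answer differently depending on the questioning modality."""
        if mode is QuestionMode.SOCRATIC:
            # Placeholder: a real system would update its internal model here.
            return f"Revising my approach to: {question}"
        return f"Best-guess answer to: {question}"

    def certify(self, out_of_distribution_tasks: list[str],
                can_solve: Callable[["Apprentice", str], bool]) -> bool:
        """Component 3: a 'master's test' on tasks outside the training distribution."""
        return all(can_solve(self, task) for task in out_of_distribution_tasks)
```

The interesting design problem sits in respond(): the same question has to be handled differently depending on whether it is meant to extract a good answer or to reshape the apprentice’s model – which is exactly the distinction the second component calls for.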
The point the authors of the paper make – that training agents requires curated examples – only takes us halfway to artificial apprenticeships, but it is important in that it helps us reject the notion that agents are trained in the same way as models. Their distinction between doing and knowing is old, but helpful:
“This fundamental capability marks the dawn of the “Age of AI Agency”, driven by a critical industry shift: the urgent need for AI systems that don’t just think, but work. While current AI excels at reasoning and generating responses, industries demand autonomous agents that can execute tasks, operate tools, and drive real-world outcomes. As agentic intelligence becomes the defining characteristic separating cognitive systems from productive workers, efficiently cultivating machine autonomy becomes paramount.”
The distinction between thinking and working suggests an interesting way to frame the agentic capability frontier.