I’m picking up on a shift in the narrative around AI agents. It’s subtle, but it’s there. For the past year or so, the dominant story has been about autonomy. Agents that act on our behalf, automate our workflows, and handle tasks end-to-end without our involvement. There’s been talk of 2025 as “the year of the agent”, a moment when AI systems would begin to replace human effort at scale. But what I see emerging instead is something quieter, and potentially more transformative: a move away from autonomy and toward human alignment. Not agents that replace us, but systems that collaborate with us. Not full delegation, but intent alignment through interaction.
The Autonomy Narrative
Autonomy has been the dominant framing. Agents were imagined as machines that could act independently, completing tasks without human input, essentially replacing us in certain workflows. The appeal is obvious. You tell the machine what to do, and it gets it done. The engineering challenge has been to make these agents robust enough to handle edge cases, interpret instructions correctly, and recover when things go wrong. But this vision also leans heavily on the idea that human involvement is a bottleneck, something to be removed.
But I think this framing is starting to break. What I’m seeing instead is a narrative slowly turning away from full autonomy and toward something more nuanced: the idea that machines can now handle more complex tasks, but not all tasks, and not in isolation. The hard part isn’t just getting the machine to do the thing. It’s getting it to do the thing in a way that makes sense to us, in context, as part of an ongoing process. That turns it from an engineering problem into a design problem.
A Shift Toward Collaboration
This shift isn’t loud. It’s not dominating headlines. But it’s meaningful. It’s closer to something we’ve seen before: augmentation instead of automation. Yet even augmentation doesn’t quite capture it. There’s a difference between tools that make us better at what we do and systems that can interpret intent, generate meaningful results, and adapt in conversation with us. It’s less like using a better tool and more like working with a new kind of collaborator.
And that changes the kind of problem we’re dealing with. It’s no longer just about getting the technology to work. It’s about how we work with it. This isn’t just an engineering problem. It’s a design problem. But not interface design in the traditional sense. It’s closer to interaction design as relationship design: how we build patterns of engagement, feedback, and co-responsibility. The machines are becoming more capable of producing outcomes that make sense to us directly. That opens up a different kind of design space, one that feels closer to how we design workflows between people.
Systems That Make Sense in Use
This kind of interaction, where the system responds not just to commands, but to context, to intent, to ongoing feedback, starts to resemble how we work with other people. It’s not that the machine understands us in any deep human sense, but that it can interpret enough of our intent to stay in sync with what we’re trying to do. That’s new. And it opens the door to rethinking how we design for human–machine collaboration. Not as a question of interface layout or control, but of coordination, mutual adjustment, and shared activity.
It’s not completely new, of course. There’s been work on AI co-creation for years, especially in the arts and, more recently, in software development. But I think this way of thinking needs to move beyond those domains. If we’re serious about “agents” as the next step in AI, we need to stop imagining them as little autonomous workers and start thinking of them as collaborators. Partners in a process. Not general intelligence, not independent actors, but systems that become useful through interaction.
Intent Alignment Through Interaction
If anything, the move toward “autonomous agents” has masked how much interpretive labor is still required to make these systems actually do what we want. What’s happening now is that more of that interpretive work is being folded into the system itself. Not perfectly, but increasingly well. That’s why I think this moment is not about achieving autonomy, but about deepening collaboration. It’s about aligning intent through interaction.
If this is where things are headed, then the real question isn’t “how autonomous can we make agents?” but rather “how do we want to work with them?” What kinds of interactions support alignment? What kinds of feedback loops actually help the system understand what we mean? These aren’t just technical challenges; they’re questions for interaction design.
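To make that last question a little more concrete, here is a minimal sketch of one such feedback loop, written in Python with entirely hypothetical names (`Session`, `Turn`, a `respond` callback standing in for the person, and `propose` standing in for a model call). It isn’t any particular agent framework’s API, just an illustration of the pattern: the system proposes, the person reacts, and the next proposal is drafted against everything said so far.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Turn:
    proposal: str            # what the system suggested
    feedback: Optional[str]  # how the person responded (None = accepted as-is)

@dataclass
class Session:
    goal: str                                    # the person's stated intent
    history: list = field(default_factory=list)  # every proposal and response so far

    def propose(self) -> str:
        # Stand-in for a model call: draft the next attempt from the goal
        # plus all feedback gathered so far, so intent accumulates in context.
        notes = "; ".join(t.feedback for t in self.history if t.feedback)
        return f"Draft for '{self.goal}'" + (f" (revised per: {notes})" if notes else "")

    def step(self, respond: Callable[[str], Optional[str]]) -> str:
        """One alignment loop: propose, collect feedback, remember both."""
        proposal = self.propose()
        feedback = respond(proposal)  # the person stays in the loop here
        self.history.append(Turn(proposal, feedback))
        return proposal

# The person steers across turns instead of delegating once and walking away.
session = Session(goal="summarize the Q3 report")
session.step(lambda p: "shorter, and lead with the risks")
print(session.step(lambda p: None))  # no further feedback: accepted
```

The point of the sketch is simply that the unit of design is the loop, not the single instruction. Each turn carries the accumulated feedback forward, which is roughly what “aligning intent through interaction” looks like when you try to write it down.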
From Capability to Sensibility
And perhaps most importantly, they’re questions of sensibility. Because not everything can or should be handed off to a machine. There are forms of judgment, care, attention, and context-awareness that aren’t easily captured in prompts or goals. Machines can accomplish a surprising number of tasks now, but that doesn’t mean they can step fully into the human roles those tasks were once part of. That’s why I think this shift is important: it’s a move away from pretending machines can replace us, and toward exploring how they can work with us in meaningful ways.
So let’s stop thinking about agents in terms of autonomy. Think about collaboration. Think about designing for co-creation, for systems that stay in the loop, interpret intent, and contribute to the work, without ever stepping outside the relationship. And I think this is the world we’re already entering. If you’re not already co-creating with AI, you probably should be. Because that’s not just the future. That’s the shift that’s happening right now.