Mattias Rost

Associate professor in Interaction Design

LLMs as Relational

Posted on 2025-07-15

Much of life today is lived in a world of text. From messaging apps and social media feeds to news sites, emails, and search bars, our everyday experience is shaped by language in written form. Language and text permeate our lifeworlds, co-existing with trees, clouds, and bees. In such an environment, it is easy to mistake text for mere content: pieces of information to be retrieved, consumed, reposted, liked, and shared. Language becomes data, something that can be gathered and optimized for speed, clarity, and maximum traction.

This can be seen as an industrialization of language. And it is into this regime that LLMs were introduced. These models, trained on massive corpora of digital text, appear to conform to the logic of this world: machines that can produce more of the same, faster, cheaper, and at scale. The dominant metaphor is computational and representational. Users “prompt” the model with questions or instructions and receive a response, an “output,” as if querying a database or consulting an encyclopedia. Text becomes a proxy for something else: information, knowledge, truth. This conceptualization is both intuitive and compelling, but it rests on a problematic understanding of language: as a transparent medium for conveying and carrying representations of the world.

This view mistakes what language is and how meaning arises. It reduces language to a code that transmits facts from one place to another, as though meaning exists outside of and prior to its expression. In such a representational view, language is a mirror or a container that holds something external to itself. But this is not how language fundamentally works, and it is not how we experience meaning.

Instead, we must adopt a richer understanding of language as a medium for thought, relation, and emergence. Meaning is not retrieved from language; it arises through the process of use. When we read, write, or speak, we do not merely decode representations. We participate in the unfolding of sense. And when we interact with systems like LLMs, we do not consult a database; we enter into a kind of dialogue. To treat LLMs as representational tools is to limit our view of language, and to miss that language and meaning are interactive processes.

We may turn to hermeneutics for support, in particular Gadamer. For Gadamer, understanding is not the mechanical decoding of information but a dialogical event, which he called a “fusion of horizons”. Each participant in a dialogue brings a set of pre-understandings, or a horizon, shaped by context and past experience. In dialogue, these horizons are not abandoned but brought together. Through the back-and-forth of conversation, a shared understanding takes shape. Not as consensus or equivalence, but as something emergent. Language, for Gadamer, is not simply a tool we use to represent thought. It is the medium in which thought happens, in which understanding becomes possible.

This has deep implications for how we engage with systems like LLMs. If we take their outputs as final, as discrete packages of meaning, we short-circuit the dialogical process. But if we instead treat the interaction with an LLM as a kind of interpretive encounter, we can begin to see meaning as relational and temporally unfolding. The model offers a linguistic response, which the human interprets, responds to, rephrases, and continues. The significance is not located in the text alone but in the structure of engagement.

Bakhtin’s philosophy of language complements this view by emphasizing the social and responsive nature of all utterance. For Bakhtin, every word is addressed to someone: every expression exists within a web of previous and anticipated responses. Language is inherently dialogic. It does not exist in isolation but always as part of an ongoing interaction. Even a solitary written sentence is shaped by imagined interlocutors, social norms, genres, and the context of utterance. There is no such thing as a neutral or standalone utterance. Every act of language is responsive, positioned, and anticipatory.

Bakhtin’s insight reinforces the idea that when an LLM produces a sentence, its meaning is not fixed in the words themselves, nor does it lie in the model’s training data. Rather, meaning arises in how the user takes up the utterance, how they respond, interpret, and continue the interaction. The model’s outputs are dialogic not because the model understands, but because the use of the model enacts a dialogue. Its sentences exist in the space between user and machine, shaped by anticipation and response. By their very nature, LLMs are trained to produce text in anticipation of human response.

From this vantage point, LLMs are not information engines but relational technologies. Their utility lies not in “knowing things” but in enabling a process of inquiry, reflection, and response. This reframing demands a shift in how we relate to both language and machines. Rather than querying for information, we engage in a process of co-disclosure: a mutual unfolding of sense that is contextually and temporally situated.

This also aligns with postphenomenological accounts of human-technology relations. Technologies are not neutral conduits of meaning. They mediate our experiences and perceptions of the world, shaping how things appear to us. LLMs do not simply reflect back our queries. They transform the very structure of how we relate to language, information, and ourselves. They invite new patterns of engagement, new rhythms of thought, new forms of dialogue. But only if we treat them as such.

The representational view of information systems is now embedded in how we conceptualize LLMs and language. It risks flattening the rich, dialogical, and emergent nature of meaning. We must resist this flattening, and reclaim language as a medium of thought, before our use of LLMs becomes sedimented into patterns of retrieval and output. LLMs, far from being oracle-machines, are strange and powerful interlocutors. Their value lies not in what they say, but in what we can do with what they offer, in the relational space that opens up between prompt and response. Meaning, in this view, is not something we extract. It is something we participate in.

From Autonomy to Intent Alignment

Posted on 2025-06-20

I’m picking up on a shift in the narrative around AI agents. It’s subtle, but it’s there. For the past year or so, the dominant story has been about autonomy. Agents that act on our behalf, automate our workflows, and handle tasks end-to-end without our involvement. There’s been talk of 2025 as “the year of the agent”, a moment when AI systems would begin to replace human effort at scale. But what I see emerging instead is something quieter, and potentially more transformative: a move away from autonomy and toward human alignment. Not agents that replace us, but systems that collaborate with us. Not full delegation, but intent alignment through interaction.

The Autonomy Narrative

Autonomy has been the dominant framing. Agents were imagined as machines that could act independently, completing tasks without human input. Essentially replacing us in certain workflows. The appeal is obvious. You tell the machine what to do, and it gets it done. The engineering challenge has been to make these agents robust enough to handle edge cases, interpret instructions correctly, and recover when things go wrong. But this vision also leans heavily on the idea that human involvement is a bottleneck and something to be removed.

But I think this framing is starting to break. What I’m seeing instead is a narrative slowly turning away from full autonomy and towards something more nuanced: the idea that machines can now handle more complex tasks, but not all tasks. And not in isolation. The hard part isn’t just getting the machine to do the thing. It’s getting it to do the thing in a way that makes sense to us, in context, as part of an ongoing process. This turns it from an engineering problem into a design problem.

A Shift Toward Collaboration

This shift isn’t loud. It’s not dominating headlines. But it’s meaningful. It is closer to something we’ve seen before: augmentation instead of automation. But even augmentation doesn’t quite capture it. There’s a difference between tools that make us better at what we do and systems that can interpret intent, generate meaningful results, and adapt in conversation with us. It’s less like using a better tool and more like working with a new kind of collaborator.

And that changes the kind of problem we’re dealing with. It’s no longer just about getting the technology to work. It’s about how we work with it. This isn’t just an engineering problem. It’s a design problem. But not interface design in the traditional sense. This is closer to interaction design as relationship design. How we build patterns of engagement, feedback, and co-responsibility. The machines are becoming more capable of producing outcomes that make sense to us directly. That opens up a different kind of design space, one that feels closer to how we design workflows between people.

Systems That Make Sense in Use

This kind of interaction, where the system responds not just to commands, but to context, to intent, to ongoing feedback, starts to resemble how we work with other people. It’s not that the machine understands us in any deep human sense, but that it can interpret enough of our intent to stay in sync with what we’re trying to do. That’s new. And it opens the door to rethinking how we design for human–machine collaboration. Not as a question of interface layout or control, but of coordination, mutual adjustment, and shared activity.

It’s not completely new, of course. There’s been work on AI co-creation for years. Especially in the arts, and more recently in software development. But I think this way of thinking needs to move beyond those domains. If we’re serious about “agents” as the next step in AI, we need to stop imagining them as little autonomous workers and start thinking of them as collaborators. Partners in a process. Not general intelligence, not independent actors. But systems that become useful through interaction.

Intent Alignment Through Interaction

If anything, the move toward “autonomous agents” has masked how much interpretive labor is still required to make these systems actually do what we want. What’s happening now is that more of that interpretive work is being folded into the system itself. Not perfectly, but increasingly well. That’s why I think this moment is not about achieving autonomy, but about deepening collaboration. It’s about aligning intent through interaction.

If this is where things are headed, then the real question isn’t “how autonomous can we make agents?” but rather “how do we want to interoperate with them?” What kinds of interactions support alignment? What kinds of feedback loops actually help the system understand what we mean? These are not just technical challenges, but questions for interaction design.
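
One way to picture this, purely as an interaction pattern and not any particular product or API, is a loop in which the agent proposes, the human responds, and the agent revises until intent and outcome line up. The sketch below is illustrative only; the interfaces (`propose`, `revise`, `get_feedback`) are hypothetical names, not an existing library.

```python
def align_through_interaction(agent, goal, get_feedback, max_rounds=5):
    # The agent never runs to completion on its own: each draft is an
    # occasion for the human to clarify what they actually meant.
    draft = agent.propose(goal)
    for _ in range(max_rounds):
        feedback = get_feedback(draft)      # the human reads and responds
        if feedback.accepted:
            return draft                    # intent and outcome line up
        # The interpretive work is folded back into the system:
        # the next draft is conditioned on the human's response.
        draft = agent.revise(draft, feedback)
    return draft                            # best effort after max_rounds
```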

From Capability to Sensibility

And perhaps most importantly, they’re questions of sensibility. Because not everything can or should be handed off to a machine. There are forms of judgment, care, attention, and context-awareness that aren’t easily captured in prompts or goals. Machines can accomplish a surprising number of tasks now, but that doesn’t mean they can step fully into the human roles those tasks once sat within. That’s why I think this shift is important: it’s a move away from pretending machines can replace us, and toward exploring how they can work with us in meaningful ways.

So let’s not think about autonomy when thinking about agents. Think about collaboration. Think about designing for co-creation, for systems that stay in the loop, interpret intent, and contribute to the work, without ever stepping outside the relationship. And I think this is the world we’re already entering. If you’re not already co-creating with AI, you probably should be. Because that’s not just the future. That’s the shift that’s happening right now.

Beyond Generation - Why “Interpretive AI” Is a Better Name for LLMs

Posted on 2025-05-27

The phrase Generative AI (or GenAI) has rapidly become the go-to term for the class of machine learning systems capable of producing media: text, images, music, and video. It is catchy, easy to remember, and broadly descriptive. However, when applied to large language models (LLMs), the term does more harm than good.

LLMs like GPT-4o, Claude, and others do not simply generate plausible text. They interpret. They respond. They situate their outputs in relation to complex and evolving input contexts. A better name, I argue, is Interpretive AI.

What Makes LLMs Different?

Generative image or music models typically synthesize outputs based on a latent space learned from training data. The act of generation is largely one-directional. These systems do not interpret a user’s sketch or melody in order to generate an image or song. They generate based on a prompt.

By contrast, LLMs are dialogic. They process language in a context-sensitive manner. Each token is generated not in isolation but in relation to the user’s input and the model’s own previous outputs. The system must keep track of what has been said, what is being asked, and what expectations are implied. It does not simply produce. It engages.
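
To make this concrete, here is a minimal sketch of the autoregressive loop in Python. The helpers are hypothetical stand-ins (`model.next_token_distribution` is not a real API): the point is only that every new token is sampled from a distribution conditioned on the entire conversation so far, including the model’s own partial reply.

```python
import random

def sample(distribution):
    # Pick one token in proportion to its probability.
    tokens, probs = zip(*distribution.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate_reply(model, conversation, max_tokens=200):
    # `conversation` is the running history of user and model turns (as tokens).
    # `model.next_token_distribution` stands in for the real model: it returns
    # a probability distribution over possible next tokens.
    reply = []
    for _ in range(max_tokens):
        # The distribution depends on everything said so far:
        # the user's turns and the model's own partial reply.
        context = conversation + reply
        distribution = model.next_token_distribution(context)
        token = sample(distribution)
        if token == "<end>":
            break
        reply.append(token)
    return reply
```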

Interpretation as Core Operation

This engagement is not trivial. For a model to maintain coherence across a conversation, to stay on topic, answer questions, offer relevant elaborations, and handle ambiguity, it must perform a form of ongoing interpretation. This interpretation is not semantic in the human sense, but it is functional. The model must infer structure, intent, relevance, and tone in order to continue the interaction in a way that humans find meaningful.

In this light, calling these models merely generative fails to account for the interpretive work they do. They work by continuously negotiating meaning in a co-constructed context with the user.

Why Terminology Matters

Terminology shapes understanding. When we call something generative, we focus on its outputs. When we call it interpretive, we shift focus to its relational, responsive, and situated behavior. This is not a minor semantic difference. It influences:

  • How we design interfaces
  • How we set expectations for use
  • How we regulate AI systems
  • How we think about responsibility and agency

Interpretive AI: A Clearer Frame

The term Interpretive AI highlights that these systems do more than generate statistically plausible outputs. Their responses are shaped by how they condition each token on a dynamic context, consisting of previous inputs, prior outputs, and linguistic structure. This ongoing adjustment allows them to maintain relevance, coherence, and tone in ways that appear attuned to the evolving interaction, even though they lack understanding in the human sense.

This shift in framing opens up a richer vocabulary for discussing how such systems behave, how they misinterpret, and how they mediate human communication.

Philosophical Considerations

I propose this term as a deliberate provocation. Interpretation traditionally implies human understanding, grounded in a condition of being-in-the-world. LLMs do not understand in this sense. But meaning and understanding are not located within the system. Instead, they emerge in relation, unfolding between human and model. In this view, interpretation is a co-operative, situated process, enacted through interaction. Through its ongoing token generation, the LLM opens up a space of possibilities, participating in a kind of relational meaning-making. Interpretive AI is therefore not a claim about internal cognition, but about the model’s relational role in negotiating understanding as a dialogical partner.

Docent Lecture

Posted on 2025-03-20

This morning I gave the lecture for my docent application at the Faculty of Science and Technology at the University of Gothenburg.

I explained that a computer is a simple device that takes input and produces output. We can make computers do things for us. It can be cumbersome to get them to do certain things, but once they can do them, replication is often trivial.

I went through four examples from my research where I have had to make the computer do things: buttons, maps, step counts, and screen time.

With AI, this may be about to change. Computers are now good at instructing themselves, and capable of understanding us on our own terms. What does this mean for the future? Will there be developers? Will there be apps? Will there be computers as we know them? Or will we just have machines that do the things we ask of them?

I took this opportunity to talk about where I see things going next, and what I think could be an interesting future worth exploring.

Watch the video on YouTube here: https://youtu.be/5In2Zn5hx_w

Talk about ChatGPT

Posted on 2023-03-24

I recently gave a public talk at a local meetup, based on the previous blog post. It was recorded and can be watched below.

I start off by describing ChatGPT as a wheel of fortune where every spin generates a new word based on a probability distribution conditioned on the given prompt.
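
The metaphor can be sketched in a few lines of Python, with a toy, made-up distribution standing in for the model: each spin samples the next word in proportion to its probability, given the text so far.

```python
import random

# Toy "wheel of fortune": a made-up next-word distribution standing in
# for the real model, conditioned on the text so far.
def next_word_distribution(prompt):
    if prompt.endswith("The cat sat on the"):
        return {"mat": 0.6, "sofa": 0.25, "keyboard": 0.15}
    return {"the": 0.4, "a": 0.3, "and": 0.3}

def spin(prompt):
    # One spin of the wheel: sample a word in proportion to its probability.
    dist = next_word_distribution(prompt)
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

print("The cat sat on the", spin("The cat sat on the"))  # e.g. "... mat"
```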

I then talk about how new technology tends to be used to do the things we already do, in the ways we already do them (a point borrowed from Marshall McLuhan). I explain this using the notion of bounded rationality.

I then show how people have been trying to use ChatGPT as an alternative to existing technology, and how it often fails when treated as a drop-in replacement for that technology.

I finish by showing how we tend to anthropomorphise ChatGPT because the interface is a chat, and why it is, again, a bad idea to treat it as something with human traits.

To conclude, I explain that the way we try to use it is normal, and that it will take some time before this technology finds its own use. When it does, it will become incredibly powerful. We should therefore start using it now, experimenting and exploring what it is and how it may enable us to do new things, in new ways.

Watch the video on YouTube here: https://www.youtube.com/watch?v=lOfsDaZsh1o