Mattias Rost

Associate professor in Interaction Design

Presenting "Co-Disclosing the Computer" at CHI 2026

Posted on 2026-04-08

Next week I will present my paper “Co-Disclosing the Computer: LLM-Mediated Computing through Reflective Conversation” at CHI 2026 in Barcelona, one of the leading international conferences in human-computer interaction.

The paper explores what happens when computers are no longer primarily encountered as fixed applications with predefined interfaces, but as systems that can be shaped dynamically through interaction with large language models. We argue that this points toward a different way of understanding computing: not mainly as navigating ready-made software, but as entering into a reflective conversation where intentions are expressed, interpreted, reformulated, and made computationally actionable.

In the paper, I describe this as LLM-mediated computing. Rather than seeing the interface as something fully given in advance, the computer’s functionality emerges in the interaction between human, language model, and machine. This opens up new possibilities, but also raises important design questions about steerability, repair, responsibility, and what kinds of actions the system makes available.

At CHI, I will talk about this shift and discuss how concepts from interaction design and postphenomenology can help us better understand it. My hope is that the work contributes to a broader discussion about what computing may become as generative AI increasingly moves from being an add-on inside applications to becoming part of how computational action itself is configured.

I’m looking forward to presenting the paper and to the discussions that follow.

Proto-interpretation in ACM AI Letters

Posted on 2026-01-30

I’m happy to share that my paper “Proto-Interpretation: The Temporality of Large Language Model Inference” is now available as Just Accepted in ACM AI Letters. The paper can be accessed via the ACM Digital Library.

Large language models are often described as next-token predictors. This characterization accurately reflects their training objective, but says surprisingly little about what actually happens during inference. In practice, we tend to treat an LLM’s final output as a static artifact: a sentence, an answer, a solution. This perspective, however, obscures an important dimension of model behavior.

In this paper, I propose a different lens: proto-interpretation — a way of understanding LLM inference as a temporally unfolding interpretive process, rather than as a single act of prediction.

So what is proto-interpretation?

Proto-interpretation foregrounds the temporal structure of inference. During generation, an LLM does not move directly from input to output. Instead:

  • Multiple possible continuations are implicitly available early in generation.
  • These alternatives coexist and compete as probability mass is redistributed across successive steps.
  • Each generated token constitutes a partial commitment, progressively constraining future possibilities.

In this way, interpretive structure emerges through time, as some potential continuations are reinforced while others are pruned away. Seen this way, an LLM’s output is not the execution of a pre-existing interpretation, but the end point of a temporally structured selection process.
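This temporal structure can be pictured with a deliberately toy sketch (not the paper's minimal ambiguity case; the token table and probabilities below are invented for illustration): a hand-written next-token distribution in which each committed token redistributes probability mass and prunes the continuations the other branch would have allowed.

```python
# Hypothetical next-token distributions for an ambiguous prompt.
# Early on, "bank" vs "shore" keeps two readings alive; each committed
# token constrains which continuations remain available.
NEXT = {
    ("the",):                    {"bank": 0.55, "shore": 0.45},
    ("the", "bank"):             {"approved": 0.6, "eroded": 0.4},
    ("the", "bank", "approved"): {"loans": 1.0},
    ("the", "bank", "eroded"):   {"slowly": 1.0},
    ("the", "shore"):            {"eroded": 1.0},
    ("the", "shore", "eroded"):  {"slowly": 1.0},
}

def greedy_decode(prompt, max_steps=3):
    """Commit to the most probable token at each step; each commitment
    is partial, progressively narrowing the space of continuations."""
    seq = list(prompt)
    for _ in range(max_steps):
        dist = NEXT.get(tuple(seq))
        if not dist:
            break
        token = max(dist, key=dist.get)  # a partial commitment
        seq.append(token)
    return seq

print(greedy_decode(("the",)))
# -> ['the', 'bank', 'approved', 'loans']
```

The point of the sketch is only that the output list is the end point of a stepwise selection process: at step one, the "shore" reading was still live, and it disappears not by being rejected outright but by being starved of probability mass as commitments accumulate.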

Why this matters

Viewing inference through proto-interpretation has several implications:

  • Evaluation: Final outputs are only snapshots of a deeper process. What matters is not just what was produced, but how the model arrived there.
  • Competence and meaning: Model behavior should not be read as evidence of stored understanding or internal semantic representations, but as the outcome of an unfolding probabilistic process.
  • Theory and philosophy of AI: Proto-interpretation offers a way to talk about meaning-related phenomena in LLMs without attributing agency, intention, or semantic understanding, while still taking inference dynamics seriously.

The paper illustrates this perspective using a minimal ambiguity case, designed to isolate the temporal dynamics through which commitment emerges during inference.

Relation to broader work

Proto-interpretation connects to ongoing conceptual work on the nature of LLM inference and interpretation, and more broadly to research on LLM-mediated computing. It complements - but is distinct from - accounts that frame LLMs primarily in terms of representations, internal states, or autonomous reasoning. Instead, it emphasizes process over product, and temporal unfolding over static explanation.

I see this paper as a conceptual building block rather than a conclusion—one that helps clarify what it means to take the temporal dynamics of LLM inference seriously, both technically and philosophically.

The Second Awakening of Computing

Posted on 2026-01-08

In the 1980s, computers arrived in homes.

Teenagers connected beige boxes to the family TV. You typed commands, copied code from magazines, loaded games from tapes or floppy disks, and experimented to see what would happen. The machines were slow and limited, but they were open. You were not just using a computer. You were exploring what it could become.

That moment marked the first awakening of computing as a lived, everyday practice.

Over time, this openness narrowed. Graphical interfaces made computers usable for many more people, but they also turned them into stable products. Interaction replaced instruction. Software became something you operated rather than shaped. The computer became powerful, reliable, and largely fixed.

Today, something familiar is happening again.

Programmers are now working with coding agents. These are conversational systems that can write code, run it, and execute commands on a computer. They can open browsers, interact with websites, read files, test software, and observe the results of their actions. In effect, they can do almost anything a human user can do at a keyboard.

What makes this different is how new capabilities come into being.

When a limitation is encountered, the response is no longer to switch tools, install new software, or postpone the problem. Instead, the programmer explains the situation to the agent in natural language. The agent is instructed to create something new to support the work that is already underway.

That “something” might be a small script, a command-line tool, or a connection to an external service. It might be a way of tracking progress, waiting for input, coordinating multiple tasks, or visualising what is going on. The key point is that these tools are not designed upfront. They are created in the moment, in response to a concrete need.

Once created, the agent can use these tools itself. Capabilities that were improvised a few minutes ago become reusable parts of the ongoing workflow. If they stop being useful, they are ignored or discarded. Nothing forces them to persist.

Toolmaking happens inside the flow of work.
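Mechanically, this loop can be pictured as a minimal tool registry (purely illustrative; real agent frameworks differ, and the names here are invented): a capability improvised mid-task is registered once and then becomes callable in later steps of the same workflow.

```python
# Minimal sketch of improvised tool reuse: a capability created
# in the moment is registered and reused in the ongoing workflow.
tools = {}

def improvise_tool(name, fn):
    """Register a tool invented on the spot, in response to a need."""
    tools[name] = fn
    return fn

# The agent hits a limitation and creates a small tool...
improvise_tool("word_count", lambda text: len(text.split()))

# ...which minutes later is just another reusable part of the work.
print(tools["word_count"]("tools are made inside the flow of work"))
# -> 8
```

If the tool stops being useful, it is simply never called again; nothing in the loop forces it to persist.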

As programmers collaborate with coding agents this way, they also invent new ways of working with the agents themselves. They create shared task boards, status indicators, conventions for signalling when an agent is blocked, and simple representations that make the agent’s activity visible. These structures are rarely planned in advance. They emerge gradually, through trial and error, as people notice friction and respond to it.

What is taking shape is not just faster programming, but a different relationship to the computer. The computer is no longer a fixed environment that you operate. It becomes a malleable system that can be reshaped as needs arise.

In this sense, today’s coding agents strongly resemble the home computers of the 1980s. Once again, people are sitting in front of machines whose full capabilities are not yet known. Once again, exploration happens through experimentation, improvisation, and curiosity.

The difference is that exploration now happens through language rather than command lines or source code.

This is the second awakening of computing.

The first brought computers into everyday life.
The second is making them open again.

Abundant Novice Programmability and the Rise of Computational Creativity

Posted on 2025-11-18

For most of the history of computing, programming has been a scarce skill, practiced by a small group of experts who could translate ideas into instructions a machine could execute. That scarcity created a creativity bottleneck: if you couldn’t program, your computational imagination was limited to whatever applications were already available.

But something fundamental has shifted.

We now have models that can write usable code. Sometimes imperfect, sometimes brilliant, but almost always good enough to make a computer do something new. They don’t solve every programming problem. They don’t replace expert programmers. They don’t need to.

All they have to do is handle the long tail of simple requests that were never worth implementing, because programming time was too precious.

And once that barrier falls, something remarkable begins to happen.

The long tail of “too small to build”

There is an enormous universe of tasks that fall into the category of:

  • “I would automate this if I knew how.”
  • “It’s easier to just do it manually.”
  • “I can’t justify asking a developer for this.”
  • “It would help, but not enough to be worth the time.”

Renaming files based on EXIF data. Generating 20 variants of a slide for a classroom. Building a tiny web UI to test an idea. Simulating a scenario. Cleaning a dataset. Scraping a niche website. Creating a custom visualization. Interacting with a local sensor. Turning a spreadsheet into an interactive tool. Editing an Excel file for the umpteenth time.

These tasks are innumerable. They live in the margins, where ideas happen, work happens, research happens, creativity happens. Historically, they have been erased by the high cost of expertise.

LLM-based code generation makes those tasks cheap. Not financially, but cognitively. They become feasible for anyone who can describe what they want.

This is the start of abundant novice programmability.

What professional programmers still do

To me it still seems plausible that programmers will become obsolete in the long run. But we do not need to wait for that point: the effects will be profound long before then, and already are.

Professional software development still handles the things that always required expertise:

  • building complex systems
  • integrating heterogeneous components
  • debugging deep, latent failures
  • designing architectures that won’t collapse
  • managing state, scale, performance, and security
  • reasoning about concurrency, transactions, invariants
  • constructing sustainable long-term systems
  • interpreting ambiguous requirements
  • balancing tradeoffs
  • navigating the social and organisational complexity of software

LLMs handle none of this reliably. They assist, but they don’t reason deeply about complexity, scale, or systems.

Instead, they excel at generating small, local, situated computational artefacts.

What novice computational creators do

This is where the explosion happens.

A new group - the computational creatives - can suddenly:

  • automate small parts of their daily work
  • build quick prototypes to test ideas
  • create custom data tools on the fly
  • design small scripts to reorganise information
  • experiment computationally with concepts
  • generate interactive demos for teaching
  • assemble one-off applications
  • build simulations, visualisations, workflows
  • adapt tools to fit personal or local needs
  • compose computational behaviours like writing paragraphs

These are not “apps” in the traditional sense. They are computational expressions—micro-tools, situated artefacts, small experiments. Ephemeral, contextual, and deeply personal.

And because they can now be produced conversationally, millions of people can do them.

When scarcity disappears, creativity explodes

Whenever friction in a creative medium drops, the medium transforms:

  • cameras become cheap → photography booms
  • desktop publishing arrives → zine culture flourishes
  • social media emerges → micro-authorship expands
  • 3D printing becomes accessible → rapid prototyping spreads

Programming is now undergoing the same transformation.

We are moving from:

programming as expertise → computational creation as expression

The long tail of “not worth the time” becomes the long tail of everyday computational imagination.

People will make things they’ve never made before, because now they can. Tasks that were once impossible become casual. Computation becomes a normal medium of thought.

This does not diminish programming. It expands the space of what people can do with computers.

A new paradigm: instructional and conversational computing

When models can write code in response to natural-language instructions, the computer shifts from a static environment, defined in advance by applications, to a dynamic, emergent environment, defined in the moment by interaction.

The computer becomes:

  • an instrument rather than an appliance
  • a partner in exploration
  • a material of expression
  • a conversational surface of computation
  • a medium for thought

We get not “AI that builds apps” but humans who can bring computation into their thinking in a new way.

This is computational creativity emerging at scale.

The future is full of small, idiosyncratic programs

The most transformative effect will not come from the models themselves, but from:

  • the millions of small scripts
  • custom visualisations
  • personal micro-services
  • tiny simulators
  • niche tools
  • short-lived experiments
  • local automations
  • one-off computational artefacts

that people will create because the cost of creating them has collapsed.

This is the true frontier: abundant, conversational, everyday computing.

A world where people routinely reshape computation to fit their lives, not the other way around.

(And while I’m building my envisioned LLM-mediated computing paradigm, this is already happening in other ways.)

Reclaiming the Computer through LLM-Mediated Computing

Posted on 2025-09-21

I just received a printed copy of the latest ACM Interactions magazine, where my article is featured as the cover story: Reclaiming the Computer through LLM-Mediated Computing.

I have been thinking about writing something for Interactions about LLMs for quite some time. They are most commonly described as examples of generative AI, but to me that sells them short. LLMs are not just algorithms that can produce artifacts that previously required people. They do much more. Perhaps most importantly, they can process inputs, infer intent within context, and act on it. Calling them “generative” emphasizes the output, whereas I am more impressed with how they work with the input. To me, they are a very interesting material for interaction design.

In this article, I outline the idea of LLM-mediated computing: a mode of computing where LLMs infer human intent and generate code in response, making the computer’s capabilities emerge through interaction rather than being predefined in applications. This reframes the computer not as a static collection of tools, but as something that dynamically discloses its possibilities through ongoing interaction between the user, the LLM, and the machine.

I am very happy to see this article featured as the cover story in Interactions and grateful to the editors for granting this honor.