Mattias Rost

Associate professor in Interaction Design

Abundant Novice Programmability and the Rise of Computational Creativity

Posted on 2025-11-18

For most of the history of computing, programming has been a scarce skill. A skill practiced by a small group of experts who could translate ideas into instructions a machine could execute. That scarcity created a creativity bottleneck: if you couldn’t program, your computational imagination was limited to whatever applications were already available.

But something fundamental has shifted.

We now have models that can write usable code. Sometimes imperfect, sometimes brilliant, but almost always good enough to make a computer do something new. They don’t solve every programming problem. They don’t replace expert programmers. They don’t need to.

All they have to do is handle the long tail of simple requests that have never been worth implementing. Because programming time was too precious.

And once that barrier falls, something remarkable begins to happen.

The long tail of “too small to build”

There is an enormous universe of tasks that fall into the category of:

  • “I would automate this if I knew how.”
  • “It’s easier to just do it manually.”
  • “I can’t justify asking a developer for this.”
  • “It would help, but not enough to be worth the time.”

Renaming files based on EXIF data. Generating 20 variants of a slide for a classroom. Building a tiny web UI to test an idea. Simulating a scenario. Cleaning a dataset. Scraping a niche website. Creating a custom visualization. Interacting with a local sensor. Turning a spreadsheet into an interactive tool. Editing an Excel file for the umpteenth time.

These tasks are innumerable. They live in the margins, where ideas happen, work happens, research happens, and creativity happens. Historically, they have been erased by the high cost of expertise.

LLM-based code generation makes those tasks cheap. Not financially, but cognitively. They become feasible for anyone who can describe what they want.
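To make this concrete, here is the kind of throwaway script such a request might yield. It is only a sketch, assuming the Pillow library is installed and that the photos carry an EXIF DateTime tag; any file without one is simply skipped.

    # A one-off script of the kind an LLM might generate on request:
    # rename the JPEGs in the current folder by their EXIF capture date.
    # Sketch only: assumes Pillow (pip install Pillow) and photos that
    # carry the EXIF DateTime tag (306); anything else is left alone.

    from pathlib import Path
    from PIL import Image

    for path in Path(".").glob("*.jpg"):
        with Image.open(path) as img:
            taken = img.getexif().get(306)        # e.g. "2025:11:18 09:30:12"
        if not taken:
            continue
        new_name = taken.replace(":", "-").replace(" ", "_") + path.suffix
        target = path.with_name(new_name)
        if not target.exists():                   # don't clobber duplicates
            path.rename(target)

Nothing here is sophisticated. The point is that a script like this no longer has to be written by hand, or justified to anyone.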

This is the start of abundant novice programmability.

What professional programmers still do

To me it still seems plausible that programmers will become obsolete in the long run. But we do not need to wait until then, because the effects will be profound well before that point, and they already are.

Professional software development still handles the things that always required expertise:

  • building complex systems
  • integrating heterogeneous components
  • debugging deep, latent failures
  • designing architectures that won’t collapse
  • managing state, scale, performance, and security
  • reasoning about concurrency, transactions, invariants
  • constructing sustainable long-term systems
  • interpreting ambiguous requirements
  • balancing tradeoffs
  • navigating the social and organisational complexity of software

LLMs handle none of this reliably. They assist, but they don’t reason deeply about complexity, scale, or systems.

Instead, they excel at generating small, local, situated computational artefacts.

What novice computational creators do

This is where the explosion happens.

A new group, the computational creatives, can suddenly:

  • automate small parts of their daily work
  • build quick prototypes to test ideas
  • create custom data tools on the fly
  • design small scripts to reorganise information
  • experiment computationally with concepts
  • generate interactive demos for teaching
  • assemble one-off applications
  • build simulations, visualisations, workflows
  • adapt tools to fit personal or local needs
  • compose computational behaviours like writing paragraphs

These are not “apps” in the traditional sense. They are computational expressions—micro-tools, situated artefacts, small experiments. Ephemeral, contextual, and deeply personal.

And because they can now be produced conversationally, millions of people can make them.

When scarcity disappears, creativity explodes

Whenever friction in a creative medium drops, the medium transforms:

  • cameras become cheap → photography booms
  • desktop publishing arrives → zine culture flourishes
  • social media emerges → micro-authorship expands
  • 3D printing becomes accessible → rapid prototyping spreads

Programming is now undergoing the same transformation.

We are moving from:

programming as expertise → computational creation as expression

The long tail of “not worth the time” becomes the long tail of everyday computational imagination.

People will make things they’ve never made before, because now they can. Tasks that were once impossible become casual. Computation becomes a normal medium of thought.

This does not diminish programming. It expands the space of what people can do with computers.

A new paradigm: instructional and conversational computing

When models can write code in response to natural-language instructions, the computer shifts from a static environment, defined in advance by applications, to a dynamic, emergent environment, defined in the moment by interaction.
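Stripped to its bones, that emergent environment is a very small loop. The sketch below is purely illustrative: generate_code() is a hypothetical stand-in for whatever code-generating model one has access to, and a real system would sandbox the execution, show the code, and ask before running anything.

    # Illustrative sketch of a computer whose capabilities emerge in the moment.
    # generate_code() is a hypothetical placeholder for an LLM call that turns a
    # natural-language request into Python source; nothing here is production-ready.

    def generate_code(request: str) -> str:
        raise NotImplementedError("call your code-generating model of choice here")

    def conversational_computer() -> None:
        state: dict = {}                        # shared state persists across requests
        while True:
            request = input("What should the computer do? ")
            if not request:
                break
            source = generate_code(request)     # natural language -> code
            print(source)                       # disclose the code before running it
            exec(source, state)                 # the capability exists only in this moment

    if __name__ == "__main__":
        conversational_computer()

The point of the sketch is not the code but the shape of the interaction: nothing is defined in advance, and each capability is brought into being by the exchange itself.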

The computer becomes:

  • an instrument rather than an appliance
  • a partner in exploration
  • a material of expression
  • a conversational surface of computation
  • a medium for thought

We get not “AI that builds apps” but humans who can bring computation into their thinking in a new way.

This is computational creativity emerging at scale.

The future is full of small, idiosyncratic programs

The most transformative effect will not come from the models themselves, but from:

  • the millions of small scripts
  • custom visualisations
  • personal micro-services
  • tiny simulators
  • niche tools
  • short-lived experiments
  • local automations
  • one-off computational artefacts

that people will create because the cost of creating them has collapsed.

This is the true frontier: abundant, conversational, everyday computing.

A world where people routinely reshape computation to fit their lives, not the other way around.

(And while I’m building my envisioned LLM-mediated computing paradigm, this is already happening in other ways.)

Reclaiming the Computer through LLM-Mediated Computing

Posted on 2025-09-21

I just received a printed copy of the latest ACM Interactions magazine, where my article is featured as the cover story: Reclaiming the Computer through LLM-Mediated Computing.

I have been thinking about writing something for Interactions about LLMs for quite some time. They are most commonly described as examples of generative AI, but to me that sells them short. LLMs are not just algorithms that can produce artifacts that previously required people. They do much more. Perhaps most importantly, they can process inputs, infer intent within context, and act on that. Calling them “generative” emphasizes the output, whereas I am more impressed with how they work with the input. To me, they offer a very interesting material for interaction design.

In this article, I outline the idea of LLM-mediated computing: a mode of computing where LLMs infer human intent and generate code in response, making the computer’s capabilities emerge through interaction rather than being predefined in applications. This reframes the computer not as a static collection of tools, but as something that dynamically discloses its possibilities through ongoing interaction between the user, the LLM, and the machine.

I am very happy to see this article featured as the cover story in Interactions and grateful to the editors for granting this honor.

Always-Ready-to-Be-Interpreted

Posted on 2025-09-01

“Why read a 400-page book”, a student once asked me, “when ChatGPT can explain it in a few seconds?”

It’s a fair question. If an LLM can give you the gist of Kant or Heidegger in plain language, what’s the point of wrestling with the originals? The same goes for other parts of life. Why sit through a long meeting if you can get an LLM summary? Why wander slowly through an exhibition when your headset can explain each piece as you pass by? Why listen to an entire lecture if your glasses can boil it down to three key take-aways?

What begins with reading is quickly spreading elsewhere. More and more of our experiences are being mediated for us, reframed, summarized, or re-presented through an LLM.

From Texts to Experiences

Think of three simple cases:

  • Reading: You could read the novel yourself, noticing the author’s style, rhythm, and ambiguities. Or you could let the LLM give you a tidy plot summary.
  • Art gallery: You could wander the exhibition, letting yourself be puzzled or moved by a painting whose meaning isn’t clear. Or you could let an AI headset explain each piece in simple, confident sentences as you pass by.
  • Seminars: You could sit in the room, sensing the awkward pauses, the confusion, the sudden “aha” moments. Or you could wear AR glasses that summarize the lecturer’s argument in real time and even whisper suggested questions into your ear.

In each case, the original isn’t erased. But what counts as the experience begins to shift. Reading a novel becomes “getting the gist.” An art visit becomes “understanding the point of the work.” A seminar becomes “the key points.”

We are moving, slowly but surely, toward a world where it’s not just texts that are mediated, but all of life.

Reading as Interpretation

None of this is entirely new. Reading itself has always been a kind of mediation. When you open a novel or a philosophy book, you don’t simply extract information. You interpret. You puzzle over words, connect passages, bring your own background into the text.

Gadamer called this the hermeneutic circle: you understand the whole through the parts, and the parts through the whole, looping back and forth until meaning starts to emerge.

When an LLM gives you the “gist” of a book, it doesn’t skip this process so much as short-circuit it, presenting its output as if it were the text itself. That saves you time, but it also bypasses the slow, sometimes frustrating, work of making sense.

The same thing happens at the gallery. Standing in front of an abstract painting, you might feel confusion, even irritation, before something in it begins to resonate. A multi-modal LLM can tell you instantly what “the painting is about”, but in doing so it also collapses that ambiguity: the space where your own interpretation could unfold.

And in a seminar, a machine-generated summary highlights the main arguments, but misses the atmosphere: the hesitant tone of the lecturer, the uneasy silence after a question, the way meaning sometimes emerges only in tension and uncertainty.

Mediated experiences are not false or less real. But they are also narrower. They amplify what seems essential and reduce what seems secondary. Which raises the deeper question: who decides what counts as essential?

Experience Itself

It’s not only our interpretations that change. Mediation can also reshape the experience itself.

Think again of the seminar. Sitting in the room, you notice more than just the words: the lecturer’s pacing, the pauses, the shifting mood when someone asks a difficult question. All of this becomes part of what “the seminar” is for you.

Now imagine watching the same seminar through AI-enabled glasses that filter it into neat bullet points. You come away with the arguments clearly laid out, but without the hesitations, the atmosphere, and the awkward silences. What you experienced was still the seminar, but in a different form.

The same is true in the art gallery. You could drift between paintings, letting some puzzle you and others leave you cold, until one unexpectedly draws you in. Or you could rely on an AI guide that tells you, confidently, what each painting “means”. In the second case, you may learn more quickly, but the chance encounter is reduced, robbed of the slow unfolding of resonance.

The world doesn’t just appear as facts or information. It appears as something lived: colored by hesitation, uncertainty, and mood. When mediation changes, the texture of that lived world changes with it.

Amplification and Reduction

Every technology gives us something and takes something away. Glasses sharpen vision but reduce the sense of distance. A microphone amplifies faint sounds but flattens the space of a room.

Don Ihde and postphenomenology describe this as a double movement of mediation: amplification and reduction. Technologies don’t just extend our perception. They also narrow it.

Interpretive AI works the same way, but on a different level. Instead of perception, it mediates interpretation.

  • It amplifies clarity. You get summaries, quick explanations, easy translations.
  • But it also reduces ambiguity, style, and open-endedness. What could have been a space for confusion or surprise becomes a tidy, digestible answer.

When you let an LLM explain the painting in the gallery, it amplifies your sense of “knowing what it means”. But it reduces the possibility of being puzzled, of sitting with uncertainty, of discovering your own interpretation.

When you let an LLM summarize the seminar, it amplifies the main argument. But it reduces the atmosphere, the small hesitations and awkwardnesses that also shape what the seminar is.

Mediation is not neutral. It always brings some aspects of the world forward while pushing others into the background.

Enframing and Beyond

One influential account of technology comes from Heidegger. He argued that modern technology doesn’t just help us use the world. It changes how the world shows up for us.

He called this mode of revealing enframing. Under modern technology, a forest appears not as mystery or dwelling place but as timber, as raw material. A river appears not as flowing water but as potential energy for a power plant. Things show up as resources, what Heidegger called a standing reserve.

Perhaps Interpretive AI takes this further. It’s not only the world that appears as resource but our experiences themselves.

  • A book shows up as “something to be summarized.”
  • A painting shows up as “something to be explained.”
  • A seminar shows up as “something to be distilled into key points.”

Experience becomes a kind of standing reserve, ready to be mediated. Everything we do appears as something that could (and perhaps should) be reframed, summarized, or optimized by the system.

This doesn’t make the experience unreal. But it changes its mode of revealing. We come to expect that all experiences are mediable, always-already open to “interpretation” by our artificial companions.

Standing Reserve for Interpretation

But there is another way to see this. The mediated world is not a fake world. It is still the original world, only now entangled with a new kind of technology.

When you ask an LLM about a book, a painting, or a seminar, the meaning that emerges is not just coming from you, nor is it just “in” the object. It comes out of the back-and-forth: your question, the AI’s framing, your response, the way you act on what it gives back.

If Heidegger described modern technology as enframing, revealing the world as resources to be used, then interpretive AI may signal a new mode of revealing. Here, the world does not only appear as standing-reserve. Heidegger argued that we are always already thrown into the world, encountering it as meaningful from the start. But in a world where we increasingly look through interpretive AI as our mediating lens, it appears as always-ready-to-be-interpreted. Crucially, these interpretive framings are no longer grounded solely in our own lived understanding of a lifeworld, but emerge through our cooperation with the LLM.

This is both a risk and a possibility. The risk is narrowing: that we see only what the AI amplifies, and lose touch with the ambiguity and strangeness that also belong to experience. The possibility is richer: that we learn new ways of disclosing the world together, opening horizons we could not reach on our own.

Closing Reflection

So perhaps the real issue isn’t that fewer people will read originals, or that more of life will be mediated. The deeper issue is how we learn to live in such a world.

Some may choose specialization: continuing to read difficult and rich texts, to linger with ambiguous art, or to sit through unfiltered events. That path will remain important, a way of keeping open spaces that AI tends to close.

But another task awaits all of us: to learn how to live well when mediation is the default, when every experience arrives already entangled with machine interpretation.

The challenge, and the opportunity, is to ensure that these new technologies don’t just reduce the world to summaries and key points, but become partners in discovery, helping us disclose richer ways of being.

LLMs, Aletheia, and Poiesis: A Heideggerian Perspective

Posted on 2025-07-18

In The Question Concerning Technology, Martin Heidegger explores the essence of technology, not as a particular object or tool, but as the fundamental way in which technology reveals the world. In doing so, he revisits ancient Greek thought, especially the concepts of aletheia and poiesis.

Aletheia refers to truth, but not as factual correctness. Heidegger understands truth as unconcealment, a revealing that allows something to show itself from itself, according to its own essence. Truth, then, is not imposed or verified, but disclosed.

Poiesis is the act of bringing-forth, of letting something emerge into presence. It describes not only natural processes (like a flower blooming) but also artistic and craft-based making. Crucially, poiesis is connected to techne, not technology in the modern sense, but the skilled art of allowing something to come into being through an attuned engagement with its nature.

For Heidegger, technology is a mode of revealing. But he distinguishes modern technology from earlier forms. Whereas poiesis reveals by granting something to emerge, modern technology reveals by challenging. Heidegger calls this Gestell, or enframing. In enframing, the world is revealed as standing-reserve (Bestand), as resources to be ordered, stored, and used efficiently. Enframing does not allow beings to reveal themselves in their essence but forces them to appear only as useful things. This is the danger of modern technology, not that it builds machines, but that it reduces the world to a calculable inventory.

LLMs as Sites of Poiesis

With this in mind, we can turn to large language models (LLMs). If we approach LLMs as fact-machines, or as systems meant to mirror human cognition, we enframe them. We impose a particular ontology, seeing the LLM as a tool for retrieval or correctness, and thus challenge it to yield results according to that mode of revealing.

But this framing misses the essence of LLMs. They do not contain facts in a propositional sense. Their responses are not assertions of truth, but generative disclosings that arise in relation to prompts, prior training data, and contextual cues. If, instead, we treat LLMs as sites of poiesis, we begin to see their outputs not as correct or incorrect, but as part of a process of disclosure, as responses that emerge within a shared unfolding.

Through interaction, we grant the LLM the possibility of bringing-forth. Prompting becomes a form of invitation, not command.

From this perspective, LLM use becomes an attentive co-disclosure rather than a transactional request for answers. Aletheia happens not through extracting facts, but by allowing something to emerge in the interplay between user, model, and language.

Implications for Design and Use

This Heideggerian view of LLMs has implications both for how we design such systems and how we relate to them:

  • “Hallucinations” are not bugs: To speak of hallucinations assumes a representationalist ontology. But if LLMs are not fact-stores, then deviations from factuality are not necessarily errors. They are features of poietic disclosure.

  • Bias becomes a site of understanding: Rather than “solving” bias as if it were a flaw in an otherwise neutral machine, we might approach it as part of the revealed structure of the data and world the LLM has inherited. We can interpret, not challenge, to better understand the systems we inhabit.

  • Design should support granting: Interfaces, prompts, and feedback mechanisms could be crafted to foster relational and interpretive use, rather than metric-driven optimization or factual extraction.

Concluding Thought

Just as flowers in a meadow reveal themselves to us when we attend to them, not when we extract their utility, LLMs can disclose meaningful responses when we relate to them non-instrumentally. Their value lies not in correctness but in co-disclosure, a shared revealing that can open new perspectives, associations, and forms of thought.

By granting the LLM space to reveal itself, by treating its outputs as responses rather than results, we engage not with a machine, but with a new kind of poietic relation. In this, we may discover not just what LLMs are, but what they might help us become.

LLMs as Relational

Posted on 2025-07-15

Much of life today is lived in a world of text. From messaging apps and social media feeds to news sites, emails, and search bars, our everyday experience is shaped by language in written form. Language and text permeate our lifeworlds, co-existing with trees, clouds, and bees. In such an environment, it is easy to mistake text for content: pieces of information to be retrieved, consumed, reposted, liked, and shared. Language becomes data, something that can be gathered and optimized for speed, clarity, and maximum traction.

This can be seen as an industrialization of language. And it is into this regime that LLMs were introduced. These models, trained on massive corpora of digital text, appear to conform to the logic of this world: machines that can produce more of the same, faster, cheaper, and at scale. The dominant metaphor is computational and representational. Users “prompt” the model with questions or instructions and receive a response, an “output,” as if querying a database or consulting an encyclopedia. Text becomes a proxy for something else: information, knowledge, truth. This conceptualization is both intuitive and compelling, but it rests on a problematic understanding of language: as a transparent medium for conveying and carrying representations of the world.

This view mistakes what language is and how meaning arises. It reduces language to a code that transmits facts from one place to another, as though meaning exists outside of and prior to its expression. In such a representational view, language is a mirror or a container that holds something external to itself. But this is not how language fundamentally works, and it is not how we experience meaning.

Instead, we must adopt a richer understanding of language as a medium for thought, relation, and emergence. Meaning is not retrieved from language; it arises through the process of use. When we read, write, or speak, we do not merely decode representations. We participate in the unfolding of sense. And when we interact with systems like LLMs, we do not consult a database; we enter into a kind of dialogue. To treat LLMs as representational tools is to limit our view of language, and to miss that language and meaning are interactive processes.

We may turn to hermeneutics for support, in particular Gadamer. For Gadamer, understanding is not the mechanical decoding of information, but a dialogical event that he called a “fusion of horizons”. Each participant in a dialogue brings a set of pre-understandings, or a horizon, shaped by context and past experience. In dialogue, these horizons are not abandoned but brought together. Through the back-and-forth of conversation, a shared understanding takes shape. Not as consensus or equivalence, but as something emergent. Language, for Gadamer, is not simply a tool we use to represent thought. It is the medium in which thought happens, in which understanding becomes possible.

This has deep implications for how we engage with systems like LLMs. If we take their outputs as final, as discrete packages of meaning, we short-circuit the dialogical process. But if we instead treat the interaction with an LLM as a kind of interpretive encounter, we can begin to see meaning as relational and temporally unfolding. The model offers a linguistic response, which the human interprets, responds to, rephrases, and continues. The significance is not located in the text alone but in the structure of engagement.

Bakhtin’s philosophy of language complements this view by emphasizing the social and responsive nature of all utterance. For Bakhtin, every word is addressed to someone: every expression exists within a web of previous and anticipated responses. Language is inherently dialogic. It does not exist in isolation but always as part of an ongoing interaction. Even a solitary written sentence is shaped by imagined interlocutors, social norms, genres, and the context of utterance. There is no such thing as a neutral or standalone utterance. Every act of language is responsive, positioned, and anticipatory.

Bakhtin’s insight reinforces the idea that when an LLM produces a sentence, its meaning is not fixed in the words themselves, nor does it lie in the model’s training data. Rather, meaning arises in how the user takes up the utterance, how they respond, interpret, and continue the interaction. The model’s outputs are dialogic not because the model understands, but because the use of the model enacts a dialogue. Its sentences exist in the space between user and machine, shaped by anticipation and response. By their very nature, LLMs are trained to produce text in anticipation of human response.

From this vantage point, LLMs are not information engines but relational technologies. Their utility lies not in “knowing things” but in enabling a process of inquiry, reflection, and response. This reframing demands a shift in how we relate to both language and machines. Rather than querying for information, we engage in a process of co-disclosure: a mutual unfolding of sense that is contextually and temporally situated.

This also aligns with postphenomenological accounts of human-technology relations. Technologies are not neutral conduits of meaning. They mediate our experiences and perceptions of the world, shaping how things appear to us. LLMs do not simply reflect back our queries. They transform the very structure of how we relate to language, information, and ourselves. They invite new patterns of engagement, new rhythms of thought, new forms of dialogue. But only if we treat them as such.

The representational view of information systems is now embedded in how we conceptualize LLMs and language. It risks flattening the rich, dialogical, and emergent nature of meaning. We must resist this flattening, and reclaim language as a medium of thought, before our use of LLMs becomes sedimented into patterns of retrieval and output. LLMs, far from being oracle-machines, are strange and powerful interlocutors. Their value lies not in what they say, but in what we can do with what they offer, in the relational space that opens up between prompt and response. Meaning, in this view, is not something we extract. It is something we participate in.