Mattias Rost

Associate professor in Interaction Design

Always-Ready-to-Be-Interpreted

Posted on 2025-09-01

“Why read a 400-page book”, a student once asked me, “when ChatGPT can explain it in a few seconds?”

It’s a fair question. If an LLM can give you the gist of Kant or Heidegger in plain language, what’s the point of wrestling with the originals? The same goes for other parts of life. Why sit through a long meeting if you can get an LLM summary? Why wander slowly through an exhibition when your headset can explain each piece as you pass by? Why listen to an entire lecture if your glasses can boil it down to three key take-aways?

What begins with reading is quickly spreading elsewhere. More and more of our experiences are being mediated for us, reframed, summarized, or re-presented through an LLM.

From Texts to Experiences

Think of three simple cases:

  • Reading: You could read the novel yourself, noticing the author’s style, rhythm, and ambiguities. Or you could let the LLM give you a tidy plot summary.
  • Art gallery: You could wander the exhibition, letting yourself be puzzled or moved by a painting whose meaning isn’t clear. Or you could let an AI headset explain each piece in simple, confident sentences as you pass by.
  • Seminars: You could sit in the room, sensing the awkward pauses, the confusion, the sudden “aha” moments. Or you could wear AR glasses that summarize the lecturer’s argument in real time and even whisper suggested questions into your ear.

In each case, the original isn’t erased. But what counts as the experience begins to shift. Reading a novel becomes “getting the gist.” An art visit becomes “understanding the point of the work.” A seminar becomes “the key points.”

We are moving, slowly but surely, toward a world where it’s not just texts that are mediated, but all of life.

Reading as Interpretation

None of this is entirely new. Reading itself has always been a kind of mediation. When you open a novel or a philosophy book, you don’t simply extract information. You interpret. You puzzle over words, connect passages, bring your own background into the text.

Gadamer called this the hermeneutic circle: you understand the whole through the parts, and the parts through the whole, looping back and forth until meaning starts to emerge.

When an LLM gives you the “gist” of a book, it isn’t skipping this process so much as short-circuiting it, presenting its output as if it were the text itself. That saves you time, but it also bypasses the slow, sometimes frustrating work of making sense.

The same thing happens at the gallery. Standing in front of an abstract painting, you might feel confusion, even irritation, before something in it begins to resonate. A multimodal LLM can tell you instantly what “the painting is about”, but in doing so it collapses that ambiguity, the very space where your own interpretation could unfold.

And in a seminar, a machine-generated summary highlights the main arguments, but misses the atmosphere: the hesitant tone of the lecturer, the uneasy silence after a question, the way meaning sometimes emerges only in tension and uncertainty.

Mediated experiences are not false or less real. But they are also narrower. They amplify what seems essential and reduce what seems secondary. Which raises the deeper question: who decides what counts as essential?

Experience Itself

It’s not only our interpretations that change. Mediation can also reshape the experience itself.

Think again of the seminar. Sitting in the room, you notice more than just the words: the lecturer’s pacing, the pauses, the shifting mood when someone asks a difficult question. All of this becomes part of what “the seminar” is for you.

Now imagine watching the same seminar through AI-enabled glasses that filter it into neat bullet points. You come away with the arguments clearly laid out, but without the hesitations, the atmosphere, and the awkward silences. What you experienced was still the seminar, but in a different form.

The same is true in the art gallery. You could drift between paintings, letting some puzzle you and others leave you cold, until one unexpectedly draws you in. Or you could rely on an AI guide that tells you, confidently, what each painting “means”. In the second case, you may learn more quickly, but the chance encounter is reduced, robbed of the slow unfolding of resonance.

The world doesn’t just appear as facts or information. It appears as something lived: colored by hesitation, uncertainty, and mood. When mediation changes, the texture of that lived world changes with it.

Amplification and Reduction

Every technology gives us something and takes something away. Glasses sharpen vision but reduce the sense of distance. A microphone amplifies faint sounds but flattens the space of a room.

Don Ihde and the postphenomenological tradition describe this as the double movement of mediation: amplification and reduction. Technologies don’t just extend our perception. They also narrow it.

Interpretive AI works the same way, but on a different level. Instead of perception, it mediates interpretation.

  • It amplifies clarity. You get summaries, quick explanations, easy translations.
  • But it also reduces ambiguity, style, and open-endedness. What could have been a space for confusion or surprise becomes a tidy, digestible answer.

When you let an LLM explain the painting in the gallery, it amplifies your sense of “knowing what it means”. But it reduces the possibility of being puzzled, of sitting with uncertainty, of discovering your own interpretation.

When you let an LLM summarize the seminar, it amplifies the main argument. But it reduces the atmosphere, the small hesitations and awkwardnesses that also shape what the seminar is.

Mediation is not neutral. It always brings some aspects of the world forward while pushing others into the background.

Enframing and Beyond

One influential account of technology comes from Heidegger. He argued that modern technology doesn’t just help us use the world. It changes how the world shows up for us.

He called this mode of revealing enframing. Under modern technology, a forest appears not as mystery or dwelling place but as timber, as raw material. A river appears not as flowing water but as potential energy for a power plant. Things show up as resources, what Heidegger called a standing reserve.

Perhaps interpretive AI takes this further. It’s not only the world that appears as a resource, but our experiences themselves.

  • A book shows up as “something to be summarized.”
  • A painting shows up as “something to be explained.”
  • A seminar shows up as “something to be distilled into key points.”

Experience becomes a kind of standing reserve, ready to be mediated. Everything we do appears as something that could (and perhaps should) be reframed, summarized, or optimized by the system.

This doesn’t make the experience unreal. But it changes its mode of revealing. We come to expect that all experiences are mediable, always-already open to “interpretation” by our artificial companions.

Standing Reserve for Interpretation

But there is another way to see this. The mediated world is not a fake world. It is still the original world, only now entangled with a new kind of technology.

When you ask an LLM about a book, a painting, or a seminar, the meaning that emerges is not just coming from you, nor is it just “in” the object. It comes out of the back-and-forth: your question, the AI’s framing, your response, the way you act on what it gives back.

If Heidegger described modern technology as enframing, revealing the world as resources to be used, then interpretive AI may signal a new mode of revealing. Heidegger argued that we are always already thrown into the world, encountering it as meaningful from the start. But in a world where we increasingly look through interpretive AI as our mediating lens, the world no longer appears only as standing reserve: it appears as always-ready-to-be-interpreted. Crucially, these interpretive framings are no longer grounded solely in our own lived understanding of a lifeworld; they emerge through our cooperation with the LLM.

This is both a risk and a possibility. The risk is narrowing: that we see only what the AI amplifies, and lose touch with the ambiguity and strangeness that also belong to experience. The possibility is richer: that we learn new ways of disclosing the world together, opening horizons we could not reach on our own.

Closing Reflection

So perhaps the real issue isn’t that fewer people will read originals, or that more of life will be mediated. The deeper issue is how we learn to live in such a world.

Some may choose specialization: continuing to read difficult, rich texts, linger with ambiguous art, or sit through unfiltered events. That path will remain important, a way of keeping open spaces that AI tends to close.

But another task awaits all of us: to learn how to live well when mediation is the default, when every experience arrives already entangled with machine interpretation.

The challenge, and the opportunity, is to ensure that these new technologies don’t just reduce the world to summaries and key points, but become partners in discovery, helping us disclose richer ways of being.