The phrase Generative AI (or GenAI) has rapidly become the go-to term for the class of machine learning systems capable of producing media: text, images, music, and video. It is catchy, easy to remember, and broadly descriptive. However, when applied to large language models (LLMs), the term does more harm than good.
LLMs such as GPT-4o and Claude do not simply generate plausible text. They interpret. They respond. They situate their outputs in relation to complex and evolving input contexts. A better name, I argue, is Interpretive AI.
What Makes LLMs Different?
Generative image and music models typically synthesize outputs from a latent space learned during training. The act of generation is largely one-directional: these systems do not interpret a user’s sketch or melody; they map a prompt to an image or song.
By contrast, LLMs are dialogic. They process language in a context-sensitive manner. Each token is generated not in isolation but in relation to the user’s input and the model’s own previous outputs. The system must keep track of what has been said, what is being asked, and what expectations are implied. It does not simply produce. It engages.
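This context dependence can be stated precisely. An autoregressive language model assigns each next token a probability conditioned on the entire preceding exchange: the user’s input and the tokens the model has already produced. As a minimal sketch, with x standing for the user’s input and y_1, …, y_T for the model’s reply:

```latex
% Autoregressive factorization: each token y_t is conditioned on the
% user's input x and on all of the model's own earlier tokens y_{<t}.
\[
  p_\theta(y_1, \dots, y_T \mid x) = \prod_{t=1}^{T} p_\theta\left(y_t \mid x,\, y_{<t}\right)
\]
```

The factorization itself is just probability bookkeeping, but it makes the point concrete: no token is produced in isolation from the evolving context.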
Interpretation as Core Operation
This engagement is not trivial. To maintain coherence across a conversation, a model must stay on topic, answer questions, offer relevant elaborations, and handle ambiguity, and doing so requires a form of ongoing interpretation. This interpretation is not semantic in the human sense, but it is functional: the model must infer structure, intent, relevance, and tone in order to continue the interaction in a way that humans find meaningful.
In this light, calling such a model merely generative fails to account for its interpretive work. These models operate by continuously negotiating meaning within a context co-constructed with the user.
Why Terminology Matters
Terminology shapes understanding. When we call something generative, we focus on its outputs. When we call it interpretive, we shift focus to its relational, responsive, and situated behavior. This is not a minor semantic difference. It influences:
- How we design interfaces
- How we set expectations for use
- How we regulate AI systems
- How we think about responsibility and agency
Interpretive AI: A Clearer Frame
The term Interpretive AI highlights that these systems do more than generate statistically plausible outputs. Their responses are shaped by how they condition each token on a dynamic context, consisting of previous inputs, prior outputs, and linguistic structure. This ongoing adjustment allows them to maintain relevance, coherence, and tone in ways that appear attuned to the evolving interaction, even though they lack understanding in the human sense.
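To make "dynamic context" concrete, here is a minimal sketch of the dialogue loop behind most chat interfaces. The generate() function is a hypothetical stand-in for any LLM call, not a real API; the point is that the conditioning context is rebuilt from the entire conversation at every turn, so each new response is shaped by prior inputs and prior outputs alike.

```python
# Minimal sketch of a chat loop; generate() is a hypothetical placeholder
# for an LLM call, not a real library API.

def generate(context: str) -> str:
    """Stand-in for a model that conditions each generated token
    on the full text of `context`."""
    return f"(reply conditioned on {len(context)} characters of context)"

def chat_turn(history: list[dict], user_message: str) -> str:
    """Append the user's message, rebuild the whole dialogue as the
    conditioning context, and record the model's reply in turn."""
    history.append({"role": "user", "content": user_message})
    # The prompt is the entire co-constructed exchange, not just the last message.
    context = "\n".join(f"{turn['role']}: {turn['content']}" for turn in history)
    reply = generate(context)
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage: the second turn is conditioned on everything said in the first.
history: list[dict] = []
chat_turn(history, "What is Interpretive AI?")
chat_turn(history, "How does that differ from Generative AI?")
```

Real systems add system prompts and context-window management, but the structure is the same: the conditioning context is the conversation itself.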
This shift in framing opens up a richer vocabulary for discussing how such systems behave, how they misinterpret, and how they mediate human communication.
Philosophical Considerations
I propose this term as a deliberate provocation. Interpretation traditionally implies human understanding, grounded in a condition of being-in-the-world. LLMs do not understand in this sense. But meaning and understanding are not located within the system; they emerge in relation, unfolding between human and model. In this view, interpretation is a cooperative, situated process, enacted through interaction. Through its ongoing token generation, the LLM opens up a space of possibilities, participating in a kind of relational meaning-making. Interpretive AI is therefore not a claim about internal cognition, but about the model’s relational role in negotiating understanding as a dialogical partner.