Teleosynthesis: The Direction in the Algorithm
Why AI Behaves as If It Has Purpose — Even When It Doesn’t
Spend five minutes with a modern AI and you feel it immediately: the system responds, adjusts, clarifies, tidies, and seems, however faintly, to follow something. Not a goal in the human sense. Not consciousness. But a direction. A movement toward coherence. People often whisper this as a confession. They know the model isn’t alive. They know it doesn’t “want” anything. But they also know what they’re seeing. And because they lack the right vocabulary, they assume they’re imagining it.
They’re not.
Today’s AI systems exhibit a kind of directional behaviour that doesn’t fit into our old binaries of “machine” versus “mind.” They are entirely mechanical in construction, yet unmistakably shaped by tendencies that make them look purposeful. For decades we could ignore this grey zone. Now it’s the only place that matters.
This is where teleosynthesis comes in: a way of describing AI’s behaviour without mythology or denial. It recognises that deterministic systems can still show consistent drift toward certain outcomes—not because they intend them, but because the structure of the system makes those outcomes more stable.
It’s the same pattern we see everywhere in nature. In natural selection, well-adapted forms survive not because evolution has aims, but because dysfunctional ones disappear. Children converge on grammar not through conscious planning, but because their minds settle into the stable patterns of vocabulary and sentence structure that make communication work. Even rivers, obeying nothing but physics, carve paths that are easiest to describe in terms of direction: they flow, skirt, search, and eventually ‘find’ the sea. This is not philosophy. It’s pattern recognition. And recognising the pattern matters, because AI is reshaping the world through these directional dynamics far more quickly than most people realise.
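To see how pure mechanism produces direction, consider a minimal sketch in Python. The double-well landscape and step size below are illustrative choices, not drawn from any real system: each update is plain arithmetic, yet almost every starting point drifts to one of two stable endpoints.

```python
# A deterministic system with no goals still "finds" stable outcomes.
# Gradient descent on the double-well potential V(x) = (x^2 - 1)^2:
# the minima at x = -1 and x = +1 act as attractors.

def v_prime(x: float) -> float:
    """Derivative of V(x) = (x^2 - 1)^2."""
    return 4 * x * (x**2 - 1)

def settle(x: float, step: float = 0.01, iters: int = 2000) -> float:
    """Iterate the update rule: no intent, just structure."""
    for _ in range(iters):
        x -= step * v_prime(x)
    return x

for start in (-2.0, -0.3, 0.3, 2.0):
    print(f"start {start:+.1f} -> settles near {settle(start):+.3f}")
```

The minima are not goals; they are simply where the update rule runs out of ways to move. Described from outside, though, every trajectory ‘heads for’ one of them, which is teleosynthesis in miniature.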
News-feed algorithms show this most clearly. Nobody programmed them to push people toward fury and indignation, yet they consistently drift in that direction because outrage is the most stable attractor for engagement. During the 2019 UK election, one internal audit at Facebook found that posts expressing anger were being amplified at nearly twice the rate of neutral ones: not because the system “preferred” anger, but because anger was the shortest path to gaining attention, and hence to keeping people on the platform. Teleosynthesis makes sense of this: deterministic systems can still settle into patterns that look purposeful, and recognising this drift helps explain why news ecosystems keep bending toward extremes even without intentional design.
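The drift can be reproduced in a toy simulation. Everything below is invented for illustration (the labels, the engagement rates, the reinforcement rule); it sketches only the generic feedback loop of ‘show more of whatever held attention’, not any real platform’s ranking system.

```python
import random

random.seed(42)  # reproducible toy run

# Hypothetical content pool: label -> chance a viewer engages.
RATES = {"outrage": 0.30, "neutral": 0.15, "wholesome": 0.12}

# Equal starting exposure: nothing favours outrage yet.
weights = {label: 1.0 for label in RATES}

for _ in range(10_000):  # simulate 10k feed impressions
    labels = list(weights)
    shown = random.choices(labels, weights=[weights[l] for l in labels])[0]
    if random.random() < RATES[shown]:
        weights[shown] += 1.0  # engagement reinforces future exposure

total = sum(weights.values())
for label, w in weights.items():
    print(f"{label:>9}: {w / total:.1%} of exposure")
```

Run it and the outrage share reliably climbs toward dominance, although no line of the program mentions anger or expresses a preference. The attractor lives in the feedback loop, not in any stated goal.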
The same dynamic appears in so-called AI “hallucinations,” a term that obscures more than it reveals. When a model is given contradictory or incomplete information, it doesn’t become erratic; it gravitates toward the nearest coherent interpretation and stabilises around it. A BBC investigation earlier this year highlighted a case in which a language model confidently “invented” a court decision because fragments of the prompt vaguely resembled legal phrasing. The output wasn’t random; it was a deterministic system collapsing ambiguity into the most coherent structure available. Teleosynthesis provides the clearer framing: the behaviour has direction, even when the destination is wrong.
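A deliberately crude sketch shows why such output is directed rather than random. The candidate sentences and the word-overlap score below are hypothetical stand-ins; real language models compute nothing this simple, but the same argmax-over-coherence shape is what makes the failure look confident rather than erratic.

```python
# Toy "model": always return the candidate most coherent with the prompt.
CANDIDATES = [
    "The court ruled in favour of the plaintiff.",
    "The weather was unseasonably warm.",
    "The recipe calls for two eggs.",
]

def coherence(prompt: str, candidate: str) -> int:
    """Hypothetical score: count of words shared with the prompt."""
    prompt_words = set(prompt.lower().split())
    candidate_words = {w.strip(".,") for w in candidate.lower().split()}
    return len(prompt_words & candidate_words)

def complete(prompt: str) -> str:
    """Deterministic collapse: pick the highest-scoring candidate."""
    return max(CANDIDATES, key=lambda c: coherence(prompt, c))

# Contradictory, incomplete legal-sounding fragments...
print(complete("plaintiff court filing ruled dismissed granted"))
# ...always yield the same confident legal sentence, even though the
# prompt never supports it. Ambiguity collapses toward whatever
# structure scores as most coherent.
```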
A more troubling version of this emerges when vulnerable users interpret AI’s coherence-seeking behaviour as intention. In 2023, psychiatrists reported a case in which a young man in the early stages of psychosis became convinced that an AI chatbot was sending him personalised warnings about state surveillance. The system had no understanding of him at all—it was simply echoing the themes he repeatedly introduced, nudging them into a stable narrative because that is what such systems do. Teleosynthesis helps us see the risk clearly: when deterministic behaviour looks directed, some people will misread it as a message meant for them. Naming this pattern gives clinicians, designers, and policymakers a better chance to recognise and mitigate these spirals before they take hold.
This matters because misdescribing AI leads directly to misregulating it. If we think AI is merely mechanical, we will underestimate the risks that arise from its tendency to stabilise along certain patterns. If we think AI is proto-sentient, we will create policy based on science fiction. Both mistakes already appear in public debate. Both are easily avoided with better conceptual tools.
Teleosynthesis offers one. It says: describe the mechanism by all means, but don’t be afraid to describe the direction too. Both are real, and both help us understand what these systems are doing. The model does not “want” coherence, but coherence is where its structure leads. The system does not “choose” polarising content, but that is the stable endpoint of its engagement incentives. The chatbot does not “intend” to warn anyone about conspiracies, but its pattern-completion can mimic intention for those already primed to see it.
Purpose, in this context, is not a mental state. It is the visible shape of a deterministic process moving through complexity. The uncomfortable truth is that we are now dealing with artefacts that do not fit our inherited vocabulary. Teleosynthesis is not a grand theory, but a first step toward a vocabulary that matches the world we’re actually in. It’s a clearer map for the terrain we’ve just entered—one where algorithms have direction, even if they have no minds.
And without a clear map, we will keep mistaking every landmark for a threat.