Teleosynthesis without Ethics
How predictive AI reshapes society without intending to
There is a growing sense that something is going wrong at the boundary between human societies and the social media and AI systems that now mediate so much of our interaction. Public discourse feels more volatile, disagreement sharper, trust more fragile. Events escalate faster than understanding can keep up. It is tempting to blame misinformation, bad actors, or technological acceleration alone. These explanations are not wrong, but they are incomplete.
Many of the systems shaping collective attention today are not merely reactive. They are predictive. They anticipate what people want and adjust the environment to make that future more likely. Over time, this anticipation does not just respond to behaviour; it begins to organise it. The predicted future starts to shape the present.
To make this concrete, consider everyday AI tools such as autocomplete or suggested replies. These systems do not decide what anyone should say. They simply predict which words or phrases are most likely to follow others and offer them in advance. Yet over time, language itself begins to shift. Messages become more uniform in tone, expressions more templated, arguments more stylised. No one intends this change, and no system aims to shape expression. But by placing predicted futures directly in front of users, the system quietly alters what feels natural to say at all.
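To see the feedback more concretely, consider a deliberately toy simulation of a suggested-reply feature (a sketch only: the replies, the acceptance rate, and the numbers are all invented). The suggester does nothing but offer the reply that has been most common so far; once a fraction of users start accepting it, that phrase crowds out the others.

```python
import random
from collections import Counter

# Invented pool of replies; stands in for the variety of things people might type.
personal_replies = [
    "sounds good, see you then",
    "great, thanks for letting me know",
    "ok, talk soon",
    "perfect, thank you",
    "sure thing, no problem",
]

def simulate(users=500, rounds=10, accept_rate=0.6, seed=0):
    rng = random.Random(seed)
    # Start from a roughly even spread of replies across users.
    sent = [rng.choice(personal_replies) for _ in range(users)]
    top_share = Counter(sent).most_common(1)[0][1] / users
    print(f"before suggestions: top phrase covers {top_share:.0%} of replies")
    for r in range(1, rounds + 1):
        # The suggester simply offers whichever reply has been most common so far.
        suggestion = Counter(sent).most_common(1)[0][0]
        sent = [
            suggestion if rng.random() < accept_rate else rng.choice(personal_replies)
            for _ in range(users)
        ]
        top_share = Counter(sent).most_common(1)[0][1] / users
        print(f"round {r}: top phrase covers {top_share:.0%} of replies")

if __name__ == "__main__":
    simulate()
```

Whichever phrase happened to lead at the start becomes the default; the particular winner is arbitrary, but the narrowing of expression is not.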
Nothing in this process involves intention or awareness in the AI. And yet, across millions of small adjustments, behaviour is pushed in consistent directions. From the outside, the system begins to look as though it is working toward something, even though it is doing nothing more than optimising predictions at scale.
This phenomenon is not unprecedented. Human institutions have always relied on expectations about future behaviour. What is new is the scale, speed, and autonomy with which these anticipations now operate—and the fact that they are increasingly generated by systems that do not participate in the everyday practices through which humans hold one another to account.
Some familiar examples make the pattern clearer.
The outrage cycle.
On social media, a provocative post is amplified, responses escalate, nuance disappears, and attention moves on. From the platform’s perspective, nothing unusual has occurred: the system has simply optimised for continued engagement. Yet from the human perspective, events feel driven toward polarisation. No agent stands behind the trajectory in a way that can be questioned or corrected. What emerges is a stable pattern of amplification without clear responsibility.
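A back-of-the-envelope version of this loop can be sketched in a few lines (every quantity below is invented): posts vary in how provocative they are, predicted engagement rises with provocation, the feed shows whatever ranks highest, and the next wave of posts loosely imitates what was visible. No step optimises for outrage, yet the average provocation of the feed climbs.

```python
import random

def simulate_feed(rounds=12, feed_size=5, pool_size=30, seed=1):
    """Toy model: each post has a 'provocation' level in [0, 1]. Predicted
    engagement is just provocation plus noise, the feed shows the top-ranked
    posts, and the next batch of posts loosely imitates what was shown."""
    rng = random.Random(seed)
    posts = [rng.random() for _ in range(pool_size)]  # initial provocation levels
    for r in range(1, rounds + 1):
        # Rank by predicted engagement and keep the top of the feed.
        ranked = sorted(posts, key=lambda p: p + rng.gauss(0, 0.1), reverse=True)
        feed = ranked[:feed_size]
        visible_avg = sum(feed) / feed_size
        print(f"round {r}: average provocation of visible posts = {visible_avg:.2f}")
        # Authors imitate what was most visible, with some individual variation.
        posts = [min(1.0, max(0.0, rng.gauss(visible_avg, 0.15))) for _ in range(pool_size)]

if __name__ == "__main__":
    simulate_feed()
```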
Recommendation spirals.
Recommendation systems construct paths through information space: “people who watched this also watched that.” Each step is local and defensible. Over time, however, these steps accumulate into trajectories that shape interests and beliefs. Users arrive somewhere—more entrenched, more agitated—without ever having chosen that direction. Prediction substitutes for understanding, quietly organising futures in the background.
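The gap between locally defensible steps and the trajectory they add up to can be shown with a small co-occurrence walk (the catalogue and co-viewing counts below are invented for illustration). Each hop follows a single defensible rule, recommend whatever is most often watched alongside the current item, yet a few hops later the viewer is somewhere no individual step chose.

```python
# Invented catalogue: each item maps to co-viewing counts with other items.
also_watched = {
    "gentle yoga":           {"home workouts": 40, "meditation": 30},
    "home workouts":         {"interval training": 55, "gentle yoga": 20},
    "interval training":     {"extreme bootcamp": 50, "home workouts": 25},
    "extreme bootcamp":      {"no-pain-no-gain rants": 45, "interval training": 20},
    "no-pain-no-gain rants": {"extreme bootcamp": 35},
}

def next_item(current):
    """Locally defensible rule: recommend whatever is most often watched alongside this."""
    neighbours = also_watched.get(current, {})
    return max(neighbours, key=neighbours.get) if neighbours else None

def trajectory(start, steps=4):
    path = [start]
    for _ in range(steps):
        nxt = next_item(path[-1])
        if nxt is None or nxt in path:  # stop at dead ends or loops
            break
        path.append(nxt)
    return path

if __name__ == "__main__":
    print(" -> ".join(trajectory("gentle yoga")))
    # gentle yoga -> home workouts -> interval training -> extreme bootcamp -> no-pain-no-gain rants
```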
Automated judgement.
In scoring, moderation, or flagging systems, actions are labelled risky or unacceptable, often with real consequences. Appeals feel opaque. Decisions are made, but it is hard to identify who is responsible for explaining or revising them. Human systems of trust rely on the idea that decisions can be justified, challenged, and changed. Where this structure is missing, confidence erodes even when systems are technically effective.
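The structural point can be put in code. In the hypothetical sketch below (the scoring rule, weights, and threshold are invented), the first function returns only a verdict, while the second returns the same verdict together with the score, the threshold, and the contributions behind it, which is roughly the minimum an explanation, appeal, or revision would need to work with.

```python
from dataclasses import dataclass, field

def opaque_flag(features: dict) -> bool:
    """Returns only a verdict: nothing to question, explain, or revise."""
    score = 0.6 * features.get("report_count", 0) + 0.4 * features.get("link_spam", 0)
    return score > 2.0

@dataclass
class Decision:
    """A decision record that keeps enough context to be explained and challenged."""
    flagged: bool
    score: float
    threshold: float
    contributions: dict
    appeal_notes: list = field(default_factory=list)

def accountable_flag(features: dict, threshold: float = 2.0) -> Decision:
    """Same rule, but the verdict travels with the material needed to contest it."""
    contributions = {
        "report_count": 0.6 * features.get("report_count", 0),
        "link_spam": 0.4 * features.get("link_spam", 0),
    }
    score = sum(contributions.values())
    return Decision(flagged=score > threshold, score=score,
                    threshold=threshold, contributions=contributions)

if __name__ == "__main__":
    example = {"report_count": 3, "link_spam": 1}
    print(opaque_flag(example))       # True, and nothing else
    print(accountable_flag(example))  # the verdict plus score, threshold, and contributions
```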
Political campaigning and democratic strain.
A particularly sensitive illustration appears in contemporary politics. Most political organisations use AI-driven tools in good faith, aiming to persuade and mobilise voters within democratic systems. Yet these tools are optimised for prediction, not public debate. Messaging becomes increasingly personalised, shifting communication away from shared argument toward tailored persuasion. No one intends to weaken democracy, but the conditions that sustain it—common discussion, visible disagreement, and accountable reasoning—are quietly strained by systems that operate upstream of reflection and debate.
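Schematically, and with every segment, variant, and number below invented for illustration, the optimisation looks like this: for each group of recipients, send whichever message variant the model predicts they are most likely to respond to. Nothing in the loop represents a shared argument; the only quantity being maximised is predicted individual response.

```python
# Invented predicted response rates: the probability that a recipient segment
# responds positively to a given message variant.
predicted_response = {
    "young urban renters":   {"variant_a": 0.12, "variant_b": 0.31, "variant_c": 0.07},
    "rural homeowners":      {"variant_a": 0.28, "variant_b": 0.09, "variant_c": 0.15},
    "retired professionals": {"variant_a": 0.10, "variant_b": 0.14, "variant_c": 0.33},
}

def pick_message(segment: str) -> str:
    """Send whichever variant has the highest predicted response for this segment."""
    rates = predicted_response[segment]
    return max(rates, key=rates.get)

if __name__ == "__main__":
    for segment in predicted_response:
        print(f"{segment}: {pick_message(segment)}")
    # Each segment is shown a different message; no two groups see the same argument.
```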
It is important to distinguish here between human intentions and system behaviour. Organisations—technology companies, political parties, institutions—pursue familiar goals: profit, efficiency, growth, electoral success. These goals are set by people and embedded, however imperfectly, in social and legal frameworks. What has changed is the means by which they are pursued.
By delegating large-scale prediction and optimisation to artificial systems, organisations have effectively delegated the task of shaping future behaviour—often without putting in place ways to monitor or limit the wider consequences of that shaping. In many cases, this happened without realising that prediction does more than forecast behaviour: it steers it.
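A minimal sketch of that steering effect, with an invented update rule and invented numbers, is shown below: the system forecasts which of two options users will choose, surfaces the forecast option more prominently, and that surfacing makes the option slightly more likely to be chosen next time. A small initial edge, repeatedly predicted and acted on, hardens into near-unanimity without any component of the system intending that outcome.

```python
def predicted_choice(p_a: float) -> str:
    """Forecast which of two options a typical user will pick next."""
    return "A" if p_a >= 0.5 else "B"

def step(p_a: float, nudge: float = 0.05) -> float:
    """Acting on the forecast (surfacing the predicted option more prominently)
    makes that option slightly more likely to be chosen next time."""
    if predicted_choice(p_a) == "A":
        return min(1.0, p_a + nudge)
    return max(0.0, p_a - nudge)

if __name__ == "__main__":
    p_a = 0.52  # invented starting point: a slight initial preference for option A
    for day in range(1, 11):
        p_a = step(p_a)
        print(f"day {day}: share choosing A = {p_a:.2f}")
    # With nudge = 0.0 the forecast changes nothing; with nudge > 0 it is self-reinforcing.
```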
Taken together, these examples point to a common pattern. Systems that anticipate and shape future behaviour now operate at scale, but they do so without participating in the shared practices through which humans normally manage responsibility, explanation, and correction. Prediction has begun to outpace the ways societies make sense of actions and hold actors to account.
Human coordination depends on more than prediction. It relies on the expectation that people act for reasons, that explanations can be demanded, that disagreements can be aired, and that responsibility can be located. These expectations stabilise interaction under uncertainty. They allow people to anticipate one another not just causally, but socially.
When systems that shape collective futures remain outside this space, a gap opens. Behaviour remains coherent, but the coherence is misdirected. Order emerges, but it is no longer anchored in shared responsibility. This helps explain why current disruptions feel both impersonal and deeply consequential: outcomes are real, yet agency is hard to find.
Much current discussion responds by insisting that humans must remain “in the loop.” The intuition is sound. Responsibility cannot simply be handed over to systems that optimise patterns without understanding their effects. Yet in practice, such oversight often arrives only after attention has been shaped and trajectories set. It operates downstream of influence, where its ability to steady outcomes is limited.
The deeper question, then, may not be how to keep humans supervising machines, but how systems that increasingly shape social outcomes can be designed to operate within the same space of shared responsibility that governs human interaction. If prediction continues to outpace accountability, the pressures described here will intensify. If, instead, future-shaping technologies are developed as part of joint practices of explanation and responsibility, the same dynamics that currently destabilise coordination could become sources of coherence rather than disruption.
© John Rust, December 2025, All Rights Reserved


