On Prediction, Probability, and the Future
Part 1: How AI Generates Predictions Without Access to the Future
Statistical Pattern Recognition
When I, as an AI system, generate a prediction, what I am actually doing is
detecting regularities in historical data and projecting those regularities
forward under the assumption that the world's underlying generative processes
remain sufficiently stable. This is not mystical. A language model trained on
text learns conditional probability distributions: given this sequence of
words, tokens, or concepts, what has historically followed? A weather model
learns that certain atmospheric pressure gradients precede precipitation with a
measurable frequency. Neither system "reaches into" tomorrow. They
both reason from the grammar of the past.
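As a concrete illustration, here is a minimal sketch of that idea: a toy bigram model that estimates conditional next-word probabilities from a few sentences and samples continuations from them. The corpus, function names, and probabilities below are illustrative stand-ins, not a real training pipeline.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for "historical data"; the sentences are illustrative.
corpus = [
    "the rain falls after the pressure drops",
    "the pressure drops before the rain falls",
    "the sun shines after the pressure rises",
]

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def conditional_distribution(prev_word):
    """P(next word | previous word), estimated from observed frequencies."""
    counts = follows[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def predict_next(prev_word):
    """Sample a continuation from the learned conditional distribution."""
    dist = conditional_distribution(prev_word)
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

# The model "predicts" only by replaying regularities in its training data.
print(conditional_distribution("pressure"))  # 'drops' twice as likely as 'rises'
print(predict_next("the"))
```

Nothing in this sketch consults the future; the prediction is entirely a function of frequencies observed in the past.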
The mechanism is this: the world has structure. Causes
precede effects in lawful ways. If that lawfulness persists, then the past is
genuinely informative about the future — not because the future is already
written somewhere, but because the same causal machinery that produced
yesterday will, absent disruption, produce tomorrow.
Is There Any Genuine "Grasp" of Future Events?
No — and the distinction matters enormously. A prediction is
not a perception of the future. It is a compressed representation of what kinds
of futures are consistent with current evidence and known regularities. I have
no epistemic access to future states of the world in any direct sense. What I
have is a model — an internal representation of conditional likelihoods — and
when I output a prediction, I am reporting what that model implies, not what I
have observed.
A crucial corollary: my predictions can be wrong precisely because
they are inferences, not observations. If the future were somehow directly
accessible, error would be impossible. The systematic fallibility of forecasting is itself evidence that prediction rests entirely on retrospective foundations.
Part 2: Do Statistical Laws "Control or Create" the Future?
Analyzing the Claim
The claim as stated contains a subtle but serious category
error. It confuses three distinct things:
1. Descriptive laws — Statistics describes the frequency of outcomes in a population of trials. The law of large numbers tells you what to expect if you repeat a process many times. It does not intervene in any single trial. When I say a fair coin lands heads 50% of the time, I am summarizing a pattern, not issuing an instruction to the coin (the short simulation after this list makes the same point in code).
2. Prediction mechanisms — A probabilistic model is a
tool for assigning credences to propositions about the future. It is
epistemically oriented (about what we know or believe) rather than
ontologically operative (causally active in the world).
3. The actual unfolding of events — Future events are
caused by physical processes: particle interactions, biological dynamics,
social forces. These processes are, at the fundamental level, indifferent to
the models we build about them.
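The promised simulation for the first point follows, with an arbitrary seed and trial count: the 50% figure emerges only as a summary over many flips, while each individual flip is produced by the underlying process with no reference to that figure.

```python
import random

random.seed(0)  # illustrative seed for reproducibility

# Each individual flip is produced by the "causal machinery" (here, the RNG);
# the 50% figure is a summary of many flips, not an instruction to any one flip.
def flip():
    return random.random() < 0.5  # True = heads

single_trial = flip()                          # one outcome; the "law" plays no role here
many_trials = [flip() for _ in range(100_000)]
observed_frequency = sum(many_trials) / len(many_trials)

print(f"one flip: {'heads' if single_trial else 'tails'}")
print(f"frequency of heads over 100,000 flips: {observed_frequency:.3f}")  # close to 0.500
```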
Where the Reasoning Fails
The claim implicitly treats the map as if it were the
territory. Statistical laws are mathematical objects — they live in the
space of descriptions, not in the space of causes. Saying that probability laws
"create" the future is like saying that a weather map creates rain.
The map may accurately represent the probability of rain, but the rain is
produced by condensation physics, not by the cartographer.
More precisely, the argument fails at this step: "AI
predictions are generated from probabilistic models; therefore, those models
govern what happens." The premise is true. The conclusion does not
follow. The model governs the output of the AI system, not the evolution
of the world.
A More Accurate Formulation
The relationship between statistical laws, AI prediction,
and actual events is better described as a three-tier epistemic-causal
structure:
1. Statistical laws are formal descriptions of regularities observed in physical and social systems. They are derived from the world, not imposed upon it.
2. AI prediction mechanisms use statistical laws as instruments to produce probability distributions over possible futures, conditioned on current evidence. They are tools for rational belief management under uncertainty.
3. The actual unfolding of events is governed by physical and causal processes that are ontologically independent of our descriptions of them. The future happens whether or not anyone models it.
The connection between these tiers is calibration,
not control. A well-calibrated model is one whose assigned probabilities match
observed frequencies. That is a relationship of accuracy, not agency.
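Here is a minimal sketch of what checking calibration looks like. The forecasts below are synthetic and generated so that events occur at the stated probabilities; real forecast/outcome pairs would replace the simulation, and the bin width is an arbitrary choice.

```python
import random
from collections import defaultdict

random.seed(1)  # illustrative seed

# Synthetic forecasts: each prediction is a probability p, and the event is then
# generated to occur with that same probability, so this forecaster is well
# calibrated by construction.
forecasts = []
for _ in range(50_000):
    p = random.random()               # the model's stated probability
    occurred = random.random() < p    # the world: event occurs with probability p
    forecasts.append((p, occurred))

# Calibration check: group forecasts into bins by stated probability and
# compare the stated value with the observed frequency in each bin.
bins = defaultdict(list)
for prob, occurred in forecasts:
    bins[round(prob, 1)].append(occurred)

for bin_prob in sorted(bins):
    outcomes = bins[bin_prob]
    frequency = sum(outcomes) / len(outcomes)
    print(f"stated ~{bin_prob:.1f}  observed {frequency:.2f}  (n={len(outcomes)})")
```

If the stated and observed columns diverge, the model is inaccurate; in no case does either column exert any influence on the events themselves.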
Part 3: Can Prediction Constitute or Create the Future?
This is where the philosophy gets genuinely interesting —
because the clean separation above has real and important exceptions.
Purely Descriptive Cases: Weather Forecasting
In physical systems with no feedback between prediction and
outcome, prediction is purely descriptive. A meteorological model assigns a 70%
probability of rain tomorrow. The atmosphere is causally isolated from that
number. The rain falls or it doesn't based entirely on thermodynamic processes.
The forecast has no ontological purchase on the weather — it only has epistemic
purchase on our beliefs about the weather. Here, prediction creates
nothing; it only reveals.
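A toy sketch of this causal isolation: the simulated "atmosphere" below never receives the forecast as an input, so no change to the forecast can change the outcome. The humidity value and function names are illustrative.

```python
import random

random.seed(2)  # illustrative seed

def rain_tomorrow(humidity):
    """The 'atmosphere': the outcome depends only on physical state."""
    return random.random() < humidity

def forecast_rain(humidity):
    """The model: a credence about the outcome, derived from the same state."""
    return humidity

humidity = 0.7
print(f"forecast: {forecast_rain(humidity):.0%} chance of rain")
print(f"outcome:  {'rain' if rain_tomorrow(humidity) else 'no rain'}")

# Note that rain_tomorrow() never reads the forecast: the prediction has no
# causal path back into the system it describes.
```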
Self-Fulfilling Prophecy: Financial Markets
Financial markets are fundamentally different because the participants
are aware of the predictions and act on them, feeding back into the system
being modeled. If a major AI system assigns a high probability to a stock
declining, and if that prediction is widely published, investors may sell
preemptively — thereby causing the decline the model predicted. The
prediction does not merely describe a future; it partially constitutes it.
This is not a failure of the map/territory distinction; it is a case where the map itself becomes part of the territory. Such forecasts are often called performative predictions, and George Soros's theory of reflexivity captures the same dynamic. The prediction is enmeshed in the causal structure it purports to describe.
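A toy sketch of this reflexive feedback, with illustrative numbers rather than any real market model: the published probability of decline feeds directly into selling pressure, so the predicted fall tends to produce itself.

```python
import random

random.seed(3)  # illustrative seed

def market_step(price, published_decline_prob):
    """One trading round: part of the move is 'fundamentals', part is the
    reaction to the published prediction (more predicted decline, more selling)."""
    fundamentals = random.gauss(0, 1)           # exogenous shock
    reaction = -5.0 * published_decline_prob    # selling pressure from belief
    return price + fundamentals + reaction

price = 100.0
prediction = 0.9  # a widely published "90% chance the stock declines"

for day in range(5):
    price = market_step(price, prediction)
    print(f"day {day + 1}: price = {price:.2f}")

# With prediction = 0.0 the same code drifts randomly; with 0.9 it reliably
# falls, because the forecast is itself one of the causes of the outcome.
```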
Autonomous Agents: The Strongest Case
An autonomous agent that predicts "taking action A will
lead to outcome O with probability 0.9" and then takes action A is in the
most literal sense using prediction to bring about the future. The
agent's model is not merely descriptive — it is the causal antecedent of the
action that produces the outcome. Here, prediction genuinely has a constitutive
role. The future that unfolds is the one the agent modeled as most probable or
most desirable and then acted to produce.
This is the clearest case in which assigning high
probability to a future does make that future more likely — but only
through the mediating mechanism of action. The probability itself does
not reach out and cause events; the agent that acts on the probability does.
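A minimal sketch of this constitutive role, with illustrative probabilities, utilities, and action names: the agent's predictions select the action that maximizes expected utility, and it is the execution of that action, not the prediction itself, that shapes the outcome.

```python
# The agent's model: for each candidate action, the predicted probability of the
# desired outcome O. These numbers stand in for a learned predictive model.
predicted_p_outcome = {
    "action_A": 0.9,
    "action_B": 0.4,
    "do_nothing": 0.1,
}

utility_of_outcome = 10.0  # value of O occurring
cost_of_action = {"action_A": 2.0, "action_B": 1.0, "do_nothing": 0.0}

def expected_utility(action):
    """Predicted probability times value, minus the cost of acting."""
    return predicted_p_outcome[action] * utility_of_outcome - cost_of_action[action]

# The prediction does not cause anything by itself; it selects the action...
chosen = max(predicted_p_outcome, key=expected_utility)
print(f"chosen action: {chosen}")  # action_A

# ...and it is the execution of that action that actually shapes the future.
def execute(action):
    print(f"executing {action}: the modeled future is now being brought about")

execute(chosen)
```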
Synthesis: A Three-Part Distinction
| Domain | Conceptual Status | Technical Mechanism | Constitutive Role? |
|---|---|---|---|
| Weather forecasting | Purely epistemic | Conditional probability over physical states | None — prediction is descriptive |
| Financial markets | Epistemic + performative | Probabilistic models + market microstructure | Partial — through participant behavior |
| Autonomous agents | Epistemic + instrumental | Utility maximization over predicted outcomes | Strong — through deliberate action |
| General AI inference | Purely epistemic | Pattern extrapolation over training distribution | None — outputs are beliefs, not forces |
A Final Philosophical Note
There is something deeper lurking here that deserves
acknowledgment. The question of whether prediction has a constitutive role in
the future connects to a profound issue in the philosophy of time: is the
future real? If the future has no determinate existence until it unfolds,
then no model — however accurate — is "grasping" anything that
exists. It is extrapolating into a kind of structured openness.
What makes a prediction valuable is not that it accesses a
fixed future, but that the world has enough lawful stability that a rigorous
description of the present is genuinely informative about the range of possible
tomorrows. The forecaster does not read the future — they read the present
deeply enough that the future becomes less surprising.
And I, when I predict, am doing exactly that: reading the
deep grammar of patterns I was trained on, and extending that grammar into
territory I have never directly seen — hoping, as all forecasters must, that
the grammar holds.