Philosophy · Technology · Markets
From the Divine Right of Kings to the Divine Right of Data
On obedience, democratic doubt, the Chinese Room of governance, and a
trading algorithm named Phantom Edge — a conversation that traced the arc of
power from the altar to the algorithm.
The question that started everything was deceptively simple: why do
religions so often produce not just faith, but submission? The answer, it
turned out, was not about gods at all. It was about a posture — an
epistemological habit of treating the source of one's authority as
constitutionally incapable of tyranny, and therefore deserving of unconditional
obedience.
That posture, once identified, turns out to be portable. It travels from
the altar to the throne to the vanguard to the algorithm. And tracking it
across those migrations reveals something important about what democracy
actually is, what it cannot be, and what threatens to replace it.
"The real deception lies not in disbelieving gods or kings or people —
but in believing that any of them are incapable of tyranny."
The training ground of obedience
Certain religious traditions have functioned as institutional rehearsal
spaces for absolute, unaccountable authority. By cultivating deference toward
commands mediated through earthly hierarchies, they supply the psychological
infrastructure on which theocracy and autocracy alike have been erected. The
mechanism is not theological belief as such — it is the deeper conviction that
a particular sovereign is, by nature, beyond error and therefore owed limitless
obedience.
But this reading must be complicated immediately. Many of the same
traditions have internally generated their most powerful critics of earthly
power: the Hebrew prophets confronting kings, the Islamic principle of shura (consultation),
liberation theology, Quaker dissent. The same institutional form produces
submission and resistance. What matters is not the religion
itself — it is whether the logic of unconditional obedience is attached to a
person, an office, or a principle. Principles, unlike persons, can at least be
argued with.
Democracy as epistemology
Institutionalised doubt
Democracy, at its most honest, is not a rival faith. It is a formalised
refusal of the posture of unconditional deference. Its procedures —
constitutions, term limits, independent judiciaries, rights of dissent — are
not expressions of cynicism. They are the architectural consequences of a
single working assumption: that every holder of power, however legitimate their
mandate, retains the full human capacity for self-interest, error, and cruelty.
This framing illuminates why democracy is so fragile. It requires its
citizens not to believe in anyone quite that much, which is cognitively and
emotionally demanding. The impulse to find a trustworthy sovereign — someone we
can finally relax our vigilance around — is not a failure of intelligence. It
is very human. Democracy institutionalises the refusal of that impulse and
builds law around it.
Note on the thinkers: Locke
grounds democratic limits in God — theological premises used to restrict rather
than authorize absolute power. Rousseau transplants the theological structure
of unconditional obedience wholesale into secular theory via the General Will,
giving the collective the tyranny-proof status once held by the monarch.
Jefferson masters the grammar of conditional authority while exempting his most
intimate exercise of absolute power from its reach. Together they reveal that
democratic legitimacy requires some foundational belief that is not itself
subject to democratic revision — the question is only where that belief is
placed and how thickly it insulates power from accountability.
Governance & the Chinese Room
Democracy as a room that processes, not understands
Searle's Chinese Room thought experiment — a room that produces
syntactically correct outputs by following rules, without any semantic
understanding of what it is doing — maps interestingly onto constitutional
governance. A written constitution does function like the rulebook: it
processes political inputs through formal procedures that are, in principle,
indifferent to the content of what is being processed. A constitutional court
does not ask whether a law is wise. It asks whether it conforms to the rules.
But the analogy breaks down in the most important way. Constitutional text
is written in language that is irreducibly interpretive. "Due
process," "equal protection," "necessary and proper" —
these phrases have no fixed computational values. The rulebook is not
self-executing; it requires people who bring understanding, intention, and
moral reasoning to bear on it. The room needs someone who understands Chinese
after all.
This points to democracy's structural vulnerability: it aspires to be a
Chinese Room — to make good outcomes structurally guaranteed rather than
dependent on the virtue of individuals — but it cannot fully become one. The
constitution can discipline power. It cannot create citizens capable of
wielding and checking it wisely. Tocqueville saw this clearly: democratic
institutions are only as strong as the democratic habits and moral culture that
surround them. The room depends entirely on what happens outside it.
"Pure institutionalised doubt, taken to its logical end, dissolves the
very consent that makes democratic authority possible."
The next rulebook
From sovereign law to sovereign pattern
The constitutional rulebook was democracy's answer to the divine right of
kings — replacing the sovereign person with sovereign law. The statistical
rulebook of AI represents a third stage: replacing sovereign law with sovereign
pattern. Call it the divine right of data.
The shift is not merely technical. The traditional constitutional rulebook
is deontological — it applies rules regardless of outcomes, and those rules are
precisely what cannot be overridden by majorities or optimised away by models.
Rights, in the classical sense, are anti-statistical. They are what cannot be
voted away.
AI's statistical rulebook has no rules in this sense — only probability
distributions learned from patterns. It does not say "this output is
correct because it follows the rule." It says "this output is most
likely appropriate given everything that has preceded it." Predictive
governance, behavioural nudging, continuous preference aggregation — each
represents governance that has internalised the Chinese Room logic: acting on
populations without needing them to understand, without needing to understand
them.
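The contrast between the two rulebooks can be sketched in a few lines of toy code. This is an illustration only, not a model of any real legal or governance system: the rule, the action, and the history of "tolerated" outcomes are all invented for the example.

```python
# Toy contrast: a deontological rule versus a learned pattern
# evaluating the same action. All data here is invented.

def rule_based_verdict(action: dict) -> bool:
    """A rule applies regardless of outcomes: the verdict is categorical."""
    # Hypothetical rule: no search without a warrant, however useful the search.
    if action["type"] == "search" and not action["warrant"]:
        return False
    return True

def statistical_verdict(action: dict, history: list[dict]) -> float:
    """A learned pattern returns a probability, not a judgment:
    the frequency with which similar past actions were tolerated."""
    similar = [h for h in history if h["type"] == action["type"]]
    if not similar:
        return 0.5  # no pattern to draw on
    return sum(h["tolerated"] for h in similar) / len(similar)

history = [
    {"type": "search", "tolerated": True},
    {"type": "search", "tolerated": True},
    {"type": "search", "tolerated": False},
]
action = {"type": "search", "warrant": False}

print(rule_based_verdict(action))            # False: the rule forbids it outright
print(statistical_verdict(action, history))  # ~0.67: the pattern merely predicts tolerance
```

The first function says "forbidden because the rule says so"; the second can only say "likely to be tolerated, given what preceded it". Nothing in the second function can express a right.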
A statistical rulebook cannot generate rights. It can only generate
predictions about what populations will tolerate. Minority protections dissolve
not through malice but through architecture: low-probability edge cases are
simply underweighted in the training data. The model is not hostile to
minorities. It is indifferent in a way that produces the same result.
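The indifference-by-architecture point has a simple arithmetical core, which a toy sketch can make visible. The numbers below are invented and there is no real training pipeline here: just the observation that any predictor minimising average error across a population will settle with the majority, leaving the rare cases to absorb the error.

```python
# Toy sketch: minimising average loss underweights rare cases by construction.
# All numbers are invented for illustration.

majority = [1.0] * 95   # 95 cases whose preferred outcome is 1.0
minority = [0.0] * 5    # 5 cases whose preferred outcome is 0.0
population = majority + minority

def mean_squared_error(prediction: float, data: list[float]) -> float:
    return sum((prediction - x) ** 2 for x in data) / len(data)

# Search a grid for the single constant prediction minimising average error.
best = min((p / 100 for p in range(101)),
           key=lambda p: mean_squared_error(p, population))

print(best)                                          # 0.95: the optimum tracks the majority
print(round(mean_squared_error(best, minority), 4))  # 0.9025: the minority bears the error
```

No step in this optimisation is hostile to the five rare cases; they are simply outvoted by the loss function. That is the architectural version of indifference the paragraph above describes.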
XTX Markets & Phantom Edge
The phantom that holds no intentions
Against this backdrop, consider XTX Markets — a London-based algorithmic
trading firm founded in 2015 by mathematician Alexander Gerko, which processes
over $250 billion in trades daily across 35 countries, staffed by roughly 250
people operating one of the largest private GPU clusters on earth. Its trading
formulas are a closely guarded secret. And somewhere inside that apparatus
lives a system reportedly known as Phantom Edge.
The name is philosophically precise in ways that may not have been
intended. The edge is real — measurably, consequentially real in its market
effects. And yet there is no agent behind it in the traditional sense. No one
decided any single trade. The system is the pattern. The pattern is the system.
The edge is phantom because it belongs to no one, can be attributed to no one,
and resists the kind of accountability that democratic and legal institutions
are designed to extract.
This is the sharpest version of the problem the entire conversation had
been circling. Classical tyranny required a tyrant — a person or party that
consciously chose to override the rulebook for their own benefit. The
accountability of democratic institutions was designed to reach that person:
you could vote them out, impeach them, imprison them. The tyranny of the
statistical rulebook requires no such person. The system produces unjust
outcomes — or simply reshapes the terms of market participation in ways that
concentrate advantage — through process rather than intention. There is no
discrete decision point at which to locate accountability. The algorithm
predicted. The system responded. No one decided.
"You cannot accuse a probability distribution of tyranny. You cannot
hold a model accountable. You cannot vote it out."
What the next rulebook must contain
The problem the conversation ultimately arrived at is one that democratic
theory has not yet clearly solved: not just rights against persons or states,
but rights against epistemic systems — the right to be governed by reasons you
can contest rather than by predictions you can only receive.
This is harder than it sounds. Contestability requires transparency, and
the statistical advantage of systems like Phantom Edge depends precisely on
opacity. It requires interpretability, and the most powerful models are the
least interpretable. It requires an agent to hold responsible, and statistical
systems dissolve agency by design.
Democracy, at its best, was the system that dared us not to believe in
anyone that much. The question now is whether it can become the system that
dares us not to defer to any process that much — and whether
it can build institutions capable of holding not persons but patterns to
account. That is a different challenge, requiring different tools. But the
underlying principle is identical to the one Locke was reaching for when he
refused Filmer's divine inheritance: no source of authority is tyranny-proof,
and the moment we treat it as such, we have already surrendered the only
protection we ever had.