Make no mistake—AI is owned by Big Tech
Until late November, when the epic saga of OpenAI’s
board breakdown unfolded, the casual observer could be forgiven for
assuming that the industry around generative AI was a vibrant competitive
ecosystem.
But this is not the case—nor has it ever been. And understanding
why is fundamental to understanding what AI is, and what threats it
poses. Put simply, in the context of the current paradigm of building
larger- and larger-scale AI systems, there is no AI without Big
Tech. With vanishingly few exceptions, every startup, new
entrant, and even AI research lab is dependent on these firms. All rely on the
computing infrastructure of Microsoft, Amazon, and Google to train their
systems, and on those same firms’ vast consumer market reach to deploy and sell
their AI products.
Indeed, many startups simply license and rebrand AI models
created and sold by these tech giants or their partner startups. This is
because large tech firms have accrued significant advantages over the past decade.
Thanks to platform dominance and the self-reinforcing properties of the
surveillance business model, they own and control the ingredients necessary to
develop and deploy large-scale AI. They also shape the incentive
structures for the field of research and development in AI,
defining the technology’s present and future.
The recent OpenAI saga, in which Microsoft exerted its quiet
but firm dominance over the “capped profit” entity, provides a powerful
demonstration of what we’ve been analyzing for the last half-decade. To wit:
those with the money make the rules. And right now, they’re engaged in a race
to the bottom, releasing systems before they’re ready in an attempt to retain
their dominant position.
Concentrated power isn’t just a problem for markets. Relying
on a few unaccountable corporate actors for core infrastructure is a problem
for democracy, culture, and individual and collective agency. Without
significant intervention, the AI market will only end up rewarding and
entrenching the very same companies that reaped the profits of the invasive
surveillance business model that has powered the commercial internet, often at
the expense of the public.
The Cambridge Analytica scandal was just one among many that
exposed this seedy reality. Such concentration also creates single points of
failure, which raises real security threats. And Securities and Exchange
Commission chair Gary Gensler has
warned that having a small number of AI models and actors at the
foundation of the AI ecosystem poses systemic risks to the financial order, in
which the effects of a single failure could
be distributed much more widely.
The assertion that AI is contingent on—and exacerbates—concentration
of power in the tech industry has often been met with pushback. Investors who
have moved quickly from Web3 to the metaverse to AI are keen to realize returns
in an ecosystem where a frenzied press cycle drives valuations toward profitable
IPOs and acquisitions, even if the technology in question never delivers on its
promises.
But the attempted ouster—and subsequent reintegration—of
OpenAI cofounders Sam Altman and Greg Brockman doesn’t just bring the power
and influence of Microsoft into sharp focus; it also proves our case
that these commercial arrangements give Big Tech profound control over the
trajectory of AI. The story is fairly simple: after apparently being blindsided
by the board’s decision, Microsoft moved to protect its investment and its road
map to profit. The company quickly exerted its weight, rallying behind Altman and
promising to “acquihire” those who wanted to defect.
Microsoft now has a seat on OpenAI’s board, albeit
a nonvoting one. But the true leverage that Big Tech holds in the AI landscape
is the combination of its computing power, data, and vast market reach. In
order to pursue its bigger-is-better approach to AI development, OpenAI made a
deal. It exclusively licenses its GPT-4 system and all other OpenAI
models to Microsoft in exchange for access to Microsoft’s computing
infrastructure.
For companies hoping to build base models, there is little
alternative to working with either Microsoft, Google, or Amazon. And those at
the centre of AI are well aware of this, as illustrated by Sam Altman’s furtive
search for Saudi and Emirati sovereign investment in a hardware
venture he hoped would rival Nvidia. That company holds a near monopoly on
state-of-the-art chips for AI training and is another key choke
point along the AI supply chain. US regulators have since unwound an
initial investment by Saudi Arabia into an Altman-backed company, RainAI,
reinforcing the difficulty OpenAI faces in navigating the even more
concentrated chipmaking market.
There are few meaningful alternatives, even for those
willing to go the extra mile to build industry-independent AI. As we’ve
outlined elsewhere, “open-source AI”—an ill-defined term that’s currently used
to describe everything from Meta’s (comparatively closed) LLaMA-2 to Eleuther’s
(maximally open) Pythia
series—can’t on its own offer escape velocity from industry
concentration. For one thing, many open-source AI projects operate through
compute credits, revenue sharing, or other contractual arrangements with tech
giants, and so grapple with the same structural dependencies. In addition, Big
Tech has a long history of capturing, or otherwise seeking to profit from,
open-source development. Open-source AI can offer
transparency, reusability, and extensibility, and these can be positive. But it
does not address the problem of concentrated power in the AI market.
The OpenAI-Microsoft saga also demonstrates a fact that’s
frequently lost in the hype around AI: there isn’t yet a clear business model
outside of increasing
cloud profits for Big Tech by bundling AI services with cloud
infrastructure. And a business model is important when you’re talking about
systems that can cost hundreds of millions of dollars to train and develop.
Microsoft isn’t alone here: Amazon, for example, runs a
marketplace for AI models, its own and a handful of others’, all of which
operate on Amazon Web Services. The company recently struck
an investment deal of up to $4 billion with Anthropic, which has also
pledged to use Amazon’s in-house chip, Trainium, optimized for building
large-scale AI.
Big Tech is becoming increasingly assertive in its manoeuvrings
to protect its hold over the market. Make no mistake: though OpenAI was in the
crosshairs this time, now that we’ve all seen what it looks like for a small
entity when a big firm it depends on decides to flex, others will be paying
attention and falling in line.
Regulation could help, but government policy often winds up
entrenching, rather than mitigating, the power of these companies as they
leverage their access to money and their political clout. Take Microsoft’s
recent moves in the UK as an example: last week it announced a £2.5
billion investment in building out British cloud infrastructure, a
move lauded by a prime minister who has clearly signalled his ambition to
make a home-grown AI sector his primary legacy. This news can’t
be read in isolation: it is a clear attempt to blunt an investigation into
the cloud market by the UK’s competition regulator following a
study that specifically called out concerns registered by a range of
market participants regarding Microsoft’s anticompetitive behaviour.