AI, ChatGPT, and the Race That Will Change the World
The story of the AI
industry over the past seven years or so is the story of Prometheus in
California—the story of humanity receiving the gift, or perhaps the curse, of a
new kind of fire. Here the fire is a technology on the threshold of humanlike
artificial general intelligence; the role of the Greek god is played by a group
of mortal entrepreneurs and researchers, together with the chief executives of
powerful tech companies. In “Supremacy,” Parmy Olson offers a history of how we
got to this moment. Ms. Olson, a Bloomberg Opinion technology columnist,
centers her tale on Sam Altman, OpenAI’s co-founder and CEO, with various
allies and rivals of Mr. Altman in supporting roles; among them are DeepMind
co-founder Demis Hassabis and Mr. Altman’s rich donor turned adversary, Elon
Musk.

The essentials of Mr. Altman’s story are, by now, well known. A computer-science student and
poker enthusiast at Stanford University, he dropped out at 19 to start a
social-networking company with the support of the Silicon Valley start-up
incubator Y Combinator. His start-up didn’t work out—but he impressed Paul
Graham, the head of Y Combinator, ultimately becoming Mr. Graham’s successor in
2014. From there, he amassed wealth and contacts. All the while he maintained
an interest in AI, one that he had developed at Stanford. Messrs. Altman and
Musk, both of them concerned about the risks posed by future AI technology,
made a fateful decision to join forces. In late 2015, Mr. Altman started a
non-profit, with millions in funding from Mr. Musk and others, to carry out AI
research and development more responsibly, and more openly, than companies like
Google would.
But within a couple of years, Mr. Musk split with OpenAI and Mr. Altman over the
organization’s direction. To raise more money, Mr. Altman devised a hybrid
scheme in which the non-profit would own a for-profit company with capped
profits for its investors—what Ms. Olson calls “a byzantine mishmash of the non-profit
and corporate worlds.” It was this company that would, with a $1 billion
investment from Microsoft, release the AI-driven services GPT-3, DALL-E and
ChatGPT to a mostly appreciative world.

Ms. Olson has done her homework on AI
technology, offering careful but accessible explanations of such concepts as
neural networks, deep-learning models and diffusion models. (The last are at
the heart of image-generating AIs like DALL-E.) The book is thought-provoking
on the dilemma faced by entrepreneurs who want funding for expensive
leading-edge research while also wanting to maintain control over what they
view as ethically fraught technology. Mr. Hassabis, at DeepMind, took the
approach of striking a deal with Google that he believed would allow his
company independence, including its own ethics board; in Ms. Olson’s telling,
Google essentially reneged on its pledges, though there’s no indication she
asked Google for comment.

Mr. Altman worked out a quite different arrangement
with Microsoft’s CEO, Satya Nadella. In the men’s first conversation, in a
stairwell at the annual Sun Valley conference, Mr. Nadella “was struck,” Ms.
Olson says, “by how big Altman wanted to go” with AI. The eventual result was a
strategic partnership rather than an acquisition, a deal that gave OpenAI the
independence that Mr. Altman wanted, leaving Microsoft without even a board
seat. In return, Microsoft got AI technology that could differentiate its
products from its competitors’.

Ms. Olson also offers convincing, if
conventional, reasons why Google let its own pioneering AI technology languish
at first while OpenAI raced ahead. A research unit of Google known as Google
Brain achieved a foundational breakthrough in 2017, with an invention known as
the transformer (the “T” in “GPT”). For Ms. Olson, Google’s failure to
capitalize on its invention more aggressively was mainly the result of
“lumbering bureaucracy” and an imperative to protect its enormous search and
advertising business. Another reason OpenAI pulled ahead, as Ms. Olson notes,
came down to one engineer, little known outside AI circles, named Alec Radford.
It was Mr. Radford who played the pivotal role in making the leap from
transformers to a far more capable subset of them known as generative
pretrained transformers, which could be trained on large bodies of text and
then learn new tasks from a few examples. When Mr. Radford’s efforts showed
promise, OpenAI’s leadership recognized what it had on its hands and quickly
changed the company’s direction, focusing on GPT models and turning them into
usable products.

While Ms. Olson tells a clear and well-researched story,
“Supremacy” has some nontrivial problems. The least among them is that the
prose often tends toward the tired. (“Silicon Valley was the land of crazy
thinkers.”) Of greater note, her narrative is distorted by her peremptory
rejection of concerns about destructive behavior by AI systems, a threat cited
by many who are close to the work. Harm from misanthropic AI—for example, a
takeover of critical infrastructure—is an outcome that can’t be dismissed,
given that no one really knows why large language models engage in
reasoning-like behaviors or what’s going on inside them. For Ms. Olson, anyone
who expresses worry about large-scale
dangers from AI is a crank (“Musk went down the rabbit hole of AI doom”)—or, if
not a crank, then a cynic trying to divert attention from what she views as the
real problem with AI, namely race and gender bias. While bias is an important
concern, it’s a non sequitur to insist that there’s a choice between one
concern and the other. ChatGPT wouldn’t have made that mistake. And the mistake
could be a big one. As Mr. Altman observed in a tweet in July 2014, a year and
a half before OpenAI: “AI will be either the best or the worst thing ever.”

Mr. Price is the author, most recently, of “Geniuses at War: Bletchley Park,
Colossus, and the Dawn of the Digital Age.”