Artificial intelligence


There is no doubt that the latest advances in artificial intelligence are more impressive than what came before, but are we in just another bubble of AI hype? Jeremy Hsu reports



What actually is artificial intelligence?

The term artificial intelligence was coined in 1956 by computer scientist John McCarthy. The context was a workshop at Dartmouth College in New Hampshire that attempted to “find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”. The field has evolved since then, but AI is essentially still about creating machines that can do what we can, and more.

This moment for artificial intelligence is unlike any that has come before. Powerful language-based AIs have lurched forward and can now produce reams of plausible prose that often can’t be distinguished from text written by humans. They can answer tricky technical questions, such as those posed to lawyers and computer programmers. They can even help better train other AIs. However, they have also raised serious concerns. Prominent AI researchers and tech industry leaders have called for research labs to pause the largest ongoing experiments in AI for at least six months in order to allow time for the development and implementation of safety guidelines. Italy’s regulators have gone further, temporarily banning a leading AI chatbot.

At the centre of it all are large language models and other types of generative AI that can create text and images in response to human prompts. Start-ups backed by the world’s most powerful tech firms have been accelerating the deployment of these generative AIs since 2022 – giving millions of people access to convincing but often inaccurate chatbots, while flooding the internet with AI-generated writing and imagery in ways that could reshape society.

AI research has long been accompanied by hype. But those working on pushing the boundaries of what’s possible and those calling for restraint all seem to agree on one thing: generative AIs could have much broader societal impacts than the AI that came before.

Boom and bust

The story of AI is one of repeating cycles involving surges of interest and funding followed by lulls after people’s great expectations fall short. In the 1950s, there was a huge amount of enthusiasm around creating machines that would display human-level intelligence (see “What actually is artificial intelligence?”). But that lofty goal didn’t materialise because computer hardware and software quickly ran into technical limitations. The result was so-called AI winters in the 1970s and in the late 1980s, when research funding and corporate interest evaporated.

The past decade has represented something of an AI summer, both for researchers looking to improve learning capabilities and companies seeking to deploy AIs. Thanks to a combination of massive improvements in computer power and the availability of data, an approach that uses neural networks loosely inspired by the brain (see “What is a neural network?”) has had a lot of success. Voice and face-recognition capabilities in ordinary smartphones use such neural networks, as do computationally intensive AIs that have beaten the world’s best players at the ancient board game Go and solved previously intractable scientific challenges, such as predicting the structure of nearly all proteins known to science.

Research developments in the field have typically unfolded over years, with AI tools being applied to specialised tasks or rolled invisibly into existing commercial products and services, such as internet search engines. But over the past few months, generative AIs, which also use neural networks, have become the focus of tech industry efforts to rush AIs out of corporate labs and into the hands of the public. The results have been messy, sometimes impressive and often unpredictable, as individuals and organisations experiment with these models.
“I truly didn’t expect the explosion of generative models that we are seeing now,” says Timnit Gebru, founder of the Distributed AI Research Institute in California. “I have never seen such a proliferation of products so fast.”

The spark that lit the explosion came from OpenAI, a San Francisco-based company, when it launched a public prototype of the outfit’s AI-powered chatbot ChatGPT on 30 November 2022 and attracted 1 million users in just five days. Microsoft, a multibillion-dollar investor in OpenAI, followed up in February by making a chatbot powered by the same technology behind ChatGPT available through its Bing search engine – an obvious attempt to challenge Google’s long domination of the search engine market.
That spurred Google to respond in March by debuting its own AI chatbot, Bard. Google has also invested $300 million in Anthropic, an AI start-up founded by former OpenAI employees, which made its Claude chatbot available to a limited number of people and commercial partners, starting in March. Major Chinese tech firms, such as Baidu and Alibaba, have likewise joined the race to incorporate AI chatbots into their search engines and other services.

These generative AIs are already affecting fields such as education, with some schools having banned ChatGPT because it can generate entire essays that often appear indistinguishable from student writing. Software developers have shown that ChatGPT can find and fix bugs in programming code as well as write certain programs from scratch. Real estate agents have used ChatGPT to generate new sale listings and social media posts, and law firms have embraced AI chatbots to draft legal contracts. US government research labs are even testing how OpenAI’s technology could speedily sift through published studies to help guide new scientific experiments (see “Why is ChatGPT so good?”).
An estimated 300 million full-time jobs may face at least partial automation from generative AIs, according to a report by analysts at investment bank Goldman Sachs. But, as they write, this depends on whether “generative AI delivers on its promised capabilities” – a familiar caveat that has come up before in AI boom-and-bust cycles.

What is clear is that the very real risks of generative AIs are also manifesting at a dizzying pace. ChatGPT and other chatbots often present factual errors, referencing completely made-up events or articles, including, in one case, an invented sexual harassment scandal that falsely accused a real person. ChatGPT usage has also led to data privacy scandals involving the leak of confidential company data, along with ChatGPT users being able to see other people’s chat histories and personal payment information.

Artists and photographers have raised additional concerns about AI-generated artwork threatening their professional livelihoods, all while some companies train generative AIs on the work of those artists and photographers without compensating them. AI-generated imagery can also lead to mass misinformation, as demonstrated by fake AI-created pictures of former US president Donald Trump being arrested and Pope Francis wearing a stylish white puffer jacket, both of which went viral. Plenty of people were fooled, believing they were real.

Many of these potential hazards were anticipated by Gebru when she and her colleagues wrote about the risks of large language models in a seminal paper in 2020, back when she was co-leader of Google’s ethical AI team. Gebru described being forced out of Google after the company’s leadership asked her to retract the paper, although Google described her departure as a resignation rather than a firing. “[The current situation] feels like yet another hype cycle, but the difference is that now there are actual products out there causing harm,” says Gebru.

Making generative AIs

Generative technology builds on a decade’s worth of research that has made AIs significantly better at recognising images, classifying articles according to topic and converting spoken words to written ones, says Arvind Narayanan at Princeton University. By flipping that process around, they can create synthetic images when given a description, generate papers about a given topic or produce audio versions of written text. “Generative AI genuinely makes many new things possible,” says Narayanan, although the technology can be hard to evaluate, he adds.

Large language models are feats of engineering, using huge amounts of computing power in data centres operated by firms like Microsoft and Google. They need massive amounts of training data that companies often scrape from public information repositories on the internet, such as Wikipedia. The technology also relies upon large numbers of human workers to provide feedback to steer the AIs in the right direction during the training process.

But the powerful AIs released by large technology companies tend to be closed systems that restrict access for the public or outside developers. Closed systems can help control for the potential risks and harms of letting anyone download and use the AIs, but they also concentrate power in the hands of the organisations that developed them without allowing any input from the many people whose lives the AIs could affect.
“The most pressing concern in closedness trends is how few models will be available outside a handful of developer organisations,” says Irene Solaiman, an AI safety and policy researcher at Hugging Face, a company that develops tools for sharing AI code and data sets.

Such trends can be seen in how OpenAI has moved towards a proprietary and closed stance on its technology, despite starting as a non-profit organisation dedicated to open development of AI. When OpenAI upgraded ChatGPT’s underlying technology to GPT-4, the company cited “the competitive landscape and safety implications of large-scale models like GPT-4” as the reason for not disclosing how this model works. This type of stance makes it hard for outsiders to assess the capabilities and limitations of generative AIs, potentially fuelling hype.

“Technology bubbles create a lot of emotional energy – both excitement and fear – but they are bad information environments,” says Lee Vinsel, a historian of technology at Virginia Tech. Many tech bubbles involve both hype and what Vinsel describes as “criti-hype” – criticism that amplifies technology hype by taking the most sensational claims of companies at face value and flipping them to talk about the hypothetical risks.

This can be seen in responses to ChatGPT. OpenAI’s mission statement says the firm is dedicated to spreading the benefits of artificial general intelligence – AIs that can outperform humans at every intellectual task. ChatGPT is very far from that goal but, on 22 March, AI researchers such as Yoshua Bengio and tech industry figures such as Elon Musk signed an open letter asking research labs to pause giant AI experiments, while referring to AIs as “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us”.

Experts interviewed by New Scientist warned that both hype and criti-hype can distract from the urgent task of managing actual risks from generative AIs. For instance, GPT-4 can automate many tasks, create misinformation on a massive scale, lock in the dominance of a few tech companies and break democracies, says Daron Acemoglu, an economist at the Massachusetts Institute of Technology. “It can do those things without coming close to artificial general intelligence.”

Acemoglu says this moment is a “critical juncture” for government regulators to ensure that such technologies help workers and empower citizens, and for “reining in the tech barons who are controlling this technology”. European Union lawmakers are finalising an Artificial Intelligence Act that would create the world’s first broad standards for regulating this technology. The legislation aims to ban or regulate higher-risk AIs, with ongoing debate about including ChatGPT and similar generative AIs with general purpose uses under the “high risk” category. Meanwhile, regulators in Italy have temporarily banned ChatGPT over concerns that it could violate existing data privacy laws.

The AI Now Institute in New York and other AI ethics experts such as Gebru have proposed placing the burden of responsibility on big tech companies, forcing them to demonstrate that they aren’t causing harm, instead of requiring regulators to identify and deal with any harm after the fact. “Industry players have been some of the first to say we need regulation,” says Sarah Myers West, managing director at the AI Now Institute.
“But I wish that the question was counterposed to them, like, ‘How are you sure that what you’re doing is legal in the first place?’”

Next generation

Much of what happens next in the generative AI boom depends on how the technologies involved are used and regulated. “I think the most important lesson from history is that we, as a society, have many more choices about how to develop and roll out technologies than what tech visionaries are telling us,” says Acemoglu.

Sam Altman, OpenAI’s CEO, has said that ChatGPT can’t replace traditional search engines right now. But in a Forbes interview, he suggested that an AI could someday change how people get information online in a way that is “totally different and way cooler”. Altman has also contemplated much more extreme future scenarios involving powerful AIs that generally outperform humans, describing a “best case” of AIs being able to “improve all aspects of reality and let us all live our best lives”, while also warning that the “bad case” could mean “lights out for all of us”. But he described current AI development as still being far from artificial general intelligence.

Last month, Gebru and her colleagues published a statement warning that “it is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future”. “The current race towards ever larger ‘AI experiments’ is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive,” they wrote. “The actions and choices of corporations must be shaped by legislation which protects the rights and interests of people.”

If the frothy bubble of business expectations around generative AI builds to unsustainable levels and eventually bursts, that could also dampen future development in general, says Sasha Luccioni, an AI research scientist at Hugging Face. However, the boom in generative AI needn’t inevitably lead to another winter. One reason is that, unlike in previous cycles, many organisations continue to pursue other avenues of artificial intelligence research instead of putting all their eggs in the generative AI basket.

Opening up AI

Organisations such as Hugging Face are advocating for a culture of openness in AI research and development that can help prevent both hype and actual societal impacts from spiralling out of control. Luccioni is working with the organisers of NeurIPS – one of the largest AI research gatherings – to establish a conference code of ethics where researchers must disclose their training data, allow access to their models and show their work instead of hiding it as proprietary technology.

AI researchers should clearly explain what models can and can’t do, draw a distinction between product development and more scientific research, and work closely with the communities most affected by AI to learn about the features and safeguards that are relevant to them, says Nima Boscarino, an ethics engineer at Hugging Face. Boscarino also highlights the need to adopt practices such as evaluating how an AI performs with people of different identities.

Work on generative AI carried out this way could ensure a more stable and sustainable form of beneficial technological development well into the future. “These are exciting times in the AI ethics space and I hope that the broader machine-learning sector learns to take the opposite approach of what OpenAI has been doing,” says Boscarino.

