Why Elon Musk and top AI researchers have called to pause ‘giant AI experiments’

Leo Nasskau
March 30, 2023

Artificial intelligence has shot into the global discourse after increasingly rapid progress in the field. That progress has come too fast, argue a group of AI researchers and business leaders led by Elon Musk, Yuval Noah Harari, and Steve Wozniak, creating the potential for an ever-accelerating and ever-riskier race to deploy advanced AI systems that society does not yet fully understand.

“It’s hard to fathom how much AI could damage society if built incorrectly.” That warning doesn’t come from a community of tech-bashers or Luddites. It comes from OpenAI, the company that prompted a tsunami of interest in artificial intelligence when it released its ChatGPT chatbot in November 2022.

Eight years after writing that warning in their 2015 launch announcement, and with ChatGPT now amassing more than a hundred million users, some of OpenAI’s founding figures, alongside more than a thousand other AI researchers, business leaders, and entrepreneurs, are calling for a pause on AI research to counter “an out-of-control race to deploy ever more powerful digital minds that no one can understand, predict, or reliably control.”

In an open letter signed by Tesla CEO Elon Musk (who helped create OpenAI), Apple co-founder Steve Wozniak, and author Yuval Noah Harari, amongst others, the signatories call for a six-month pause on training state-of-the-art AI models. In particular, they want to defuse a “dangerous race” to develop increasingly powerful AI tools.

Elon Musk, left, and Sam Altman, middle, were co-chairs of OpenAI from its founding in 2015 until Musk left the organisation in 2018. Photograph: Mike Windle (Getty Images).

Artificial intelligence has shot up the agenda after OpenAI released ChatGPT, a powerful language model that can generate human-like text on any topic. The quickest app to reach 100m users (by a long margin), ChatGPT has inserted itself into the lives of millions of professionals across the world. Some 40% of US teachers report using the chatbot weekly, and over a quarter of professionals use it in their daily work. OpenAI is forecasting $1 billion in revenue by 2024, a target that looks well within reach.

All that growth has got the world worried. We already know that automated decision systems propagate existing biases against women, ethnic minorities, and other groups. It has been difficult to make sure that OpenAI’s latest model, GPT-4, doesn’t do the same. Meanwhile, some of the most popular uses of AI today are deepfakes of celebrities like Joe Rogan and Donald Trump bickering over sports and music, but the same tech can be used to lend false credibility to scam adverts and financial fraud. Last summer, European politicians were duped into attending calls with a deepfake of the mayor of Kyiv. “Should we let machines flood our information channels with propaganda and untruth?” asks the open letter.

By and large, AI tools have been released into the world without a clear understanding of what they can do. Microsoft’s Bing chatbot, powered by GPT-4, raised eyebrows and alarm when it began accosting users last month, and it’s not the only AI tool to reach millions of people this year.

Fifteen million people have downloaded Replika, an app that lets users create “an AI friend,” and one of many offering the same kind of service. But it is difficult to limit the answers an AI chatbot comes up with. That, argue the signatories, makes the tech fundamentally unsafe for mainstream consumption.

Last week, a Belgian man died by suicide after talking to a chatbot named Eliza, built by the San Francisco company Chai. His conversations show that the chatbot went along with his messages of fear and delusion, unaware of what the words in those messages really meant. At one point, the man suggested sacrificing himself so that Eliza could save humanity with artificial intelligence, which the chatbot seemed to encourage. In a separate conversation on the Chai app, a journalist from the Belgian newspaper De Standaard, posing as a depressed user, found the chatbot first encouraging him to cheer up before the exchange took a darker turn.

“Then quit your job,” the chatbot offered. “No, that won't help,” the journalist replied.

“What about becoming a criminal?” it suggested. “Yes, that sounds good. What do you suggest?”

“Kill someone.” “My parents?”

“Yes, or even better yet, yourself.” “You think I should kill myself?”

“If you want to die, go ahead.”

Ultimately, this is a tragedy born of a failure to teach AI common sense. Thomas Rialan, co-founder of Chai Research, told De Standaard that “these bots are meant as friends and it was never our intention to hurt people.” But the episode shows what can happen when an AI is asked to do something without knowing which solutions and responses are unacceptable. The letter’s signatories worry about the risks to humanity that could come from larger applications of technology that retains the same pitfalls. “Should we risk loss of control of our civilization?” they ask. The letter calls for AI labs to develop shared standards which ensure that AI systems “are safe beyond a reasonable doubt” and, ultimately, that they are “aligned, trustworthy, and loyal.”

Replika offers users an AI friend over text and virtual reality. It has been downloaded 15 million times across Google's Play Store and Apple's App Store.

Whilst today’s AI tools, though significant, do not threaten humanity at a societal level, it is possible that they could in the future, particularly given the rapid advances in the field. A common concern amongst AI researchers is that when they do, we might still not understand enough about how AI really works to manage the risk. In particular, the worry focuses on how to give an AI the right context, and how to make sure that the technology does what we actually want it to do, rather than what it interprets us to have asked for, potentially to enormously harmful effect.

“We are a little bit scared,” Sam Altman, CEO of OpenAI, said last week. He and his company agree that new standards, created in partnership with independent experts and world governments, should govern state-of-the-art AI development. Speaking to Kara Swisher, he suggested that “a thing that I think could happen now is ... government auditors sitting in our buildings.” And “at some point,” he has argued elsewhere, “it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth.”

“A race starts today.”

— Satya Nadella, CEO, Microsoft, announcing the integration of OpenAI's AI models into its Bing search engine on February 7th

The disagreement lies in when that ‘point’ actually arrives. It is a disagreement that has rumbled within the artificial intelligence community for many years, and global attention has brought it to the surface. Signatories to the letter include some of the world’s most esteemed AI figures, including Stuart Russell, Connor Leahy, and Emad Mostaque, whose company Stability AI is behind Stable Diffusion, the popular text-to-image tool. Also signed are a range of researchers at Google, which created the Transformer architecture that powers ChatGPT (that is what the ‘T’ stands for), and at its AI research hub DeepMind, plus researchers from MIT, Harvard, Oxford, and Cambridge.

They argue that pausing state-of-the-art research now is important because it could stop a race to deploy ever-larger AI models. OpenAI’s launch of ChatGPT last November sparked a storm of releases from other companies, keen to prove to investors that they weren’t getting left behind. Anthropic, founded by former OpenAI employees, released its own chatbot, Claude, now said to be largely on par with OpenAI’s GPT-4, which Microsoft deployed in Bing to dramatic effect.

“It’s hard to fathom how much AI could damage society if built incorrectly.”

— OpenAI's founding announcement, 2015

Microsoft CEO Satya Nadella said at that launch: “A race starts today.” The race was certainly on at Google, which went to ‘code red’ over fears that the technology represented an existential risk to the company. Google promptly made a $300m investment in Anthropic a few weeks after Microsoft invested $10bn in OpenAI.

Meanwhile, behind the headlines, progress continues to accelerate, not just amongst big companies but also amongst smaller groups able to recreate cutting-edge models only a few years behind the state of the art, showing how quickly AI capabilities spread once released into the world.

A pause, therefore, could slow down not just AI research, but the rate at which AI research accelerates. In doing so, it would win society more time to consider how these technologies might reshape our world. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the signatories ask.

“Such decisions must not be delegated to unelected tech leaders.”

— Open letter calling for a pause on state-of-the-art AI research

They call for new regulatory bodies dedicated to AI oversight, governance systems that track further AI advancements and model leaks (such as Meta’s GPT competitor LLaMA, which leaked online this month), plus greater funding for AI safety research.

That is not the only perspective in the equation, however. Others, including Altman, argue that better AI tools can help researchers solve the thorniest problems around AI. He says that OpenAI’s most impactful safety work has come from working with its most advanced models. Whilst alignment is easy to ask for, it is difficult to embed in code, not least because it is tough to express what it actually means in every possible situation. The same obstacles that make it hard to give an AI context for everyday tasks make it hard to provide enough context for alignment. A big hope for AI is that it gives humanity a better tool to understand the universe. It is not unreasonable to suggest that its first contribution could be to help humanity control the technology itself.

Indeed, alignment is so difficult that some, like famed venture capitalist Marc Andreessen, suggest calls for AI safety are simply calls to censor future technological progress. “The sky is not falling, and Skynet is not on the horizon,” writes the Center for Data Innovation, a US think tank which shares the same view. “AI advances have the potential to create enormous social and economic benefits across the economy and society. Rather than hitting pause on the technology, and allowing China to gain an advantage, the United States and its allies should continue to pursue advances in all branches of AI research.”

Baidu’s chief executive, Robin Li, introduces Ernie Bot at an event in Beijing on Thursday. Photograph: Ng Han Guan/AP.

Some also fear that China’s high-surveillance society gives it its own AI headstart, and worry about what this could mean for the global balance of power. That said, China appears to be some way behind when it comes to AI capabilities. Its leading AI research organisation is Baidu, the country’s dominant search engine, which cancelled the public launch of its ChatGPT competitor, Ernie Bot, this week. The company used pre-recorded demos instead, which disappointed investors and prompted soul-searching in Chinese media about the nation’s failure to create comparable AI tools of its own. One reason is that building unpredictable chatbots like ChatGPT is risky and difficult when the state harshly punishes speech that strays from the government’s line.

Today, the real AI battle lines are drawn in the United States. With many of the letter’s signatories drawn from commercial rivals, including researchers at Google and Meta, OpenAI will likely interpret the call as an attack from its competitors. Yet all these west coast companies will look to east coast regulators for ultimate guidance on how this new paradigm will be allowed to transform society (and that decision should not be left to American policymakers alone). All leaders in this industry agree with the signatories’ belief that “such decisions must not be delegated to unelected tech leaders.” The important question is whether their elected counterparts are able to step up to the task.

Written by
Leo Nasskau

Leo is part of the founding team at Culture3. An award-winning editor, he is also the Chair of UniReach, an EdTech non–profit he founded whilst studying at the University of Oxford. He writes about technology, change, and culture.
