Grimes, OpenAI’s Sam Altman and Hundreds More Warn of ‘Risk of Extinction From AI’

Credit: Eric Krull

As tech companies continue to make advances in artificial intelligence, another attention-grabbing AI intervention is emerging: a public statement signed by scientists at tech companies and hundreds of others, including musician Grimes.

Hundreds of AI scientists, academics, government officials and public figures – including musician Grimes and OpenAI CEO Sam Altman – have signed their names to a new statement calling for global attention to the existential risks posed by advances in artificial intelligence.

The statement, published on the website of a San Francisco-based non-profit called the Center for AI Safety (CAIS), is brief and urges policymakers to focus on mitigating what it frames as extinction-level AI risk – comparable to the existential concerns surrounding nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the deliberately brief statement.

CAIS’s website explains that the statement was deliberately kept “concise” because its backers fear their message about “some of the most severe risks of advanced AI” could be drowned out amid a flurry of discussion about other “important and urgent risks from AI.”

“AI experts, journalists, policymakers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks,” explains CAIS. “The concise statement aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”

If this sounds familiar, that’s because similar concerns have been voiced loudly and repeatedly in recent months, as hype around AI development has surged alongside growing public access to generative AI tools like ChatGPT and DALL-E.

Interestingly, the onslaught of heavily publicized warnings about the potential risks of AI has tended to divert attention away from the very real, existing harms caused by AI – such as the use of copyrighted data to train AI systems without permission or compensation, the subsequent flooding of streaming platforms with AI-created content, systematic data scraping that violates privacy, and the lack of transparency from companies developing AI about the data used to train their tools.

Of course, there are obvious commercial motivations for drawing regulators’ attention to theoretical future problems with AI development, as opposed to the actual problems plaguing the present. As TechCrunch succinctly puts it: “Data exploitation as a tool for concentrating market power is nothing new.”

Signatories include Demis Hassabis, CEO of Google DeepMind; Mustafa Suleyman, CEO of Inflection AI; Emad Mostaque, CEO of Stability AI; OpenAI co-founders Sam Altman (who also serves as CEO) and John Schulman; Anthropic co-founders Jared Kaplan and Chris Olah; and many other prominent figures in the field of artificial intelligence.