AI poses ‘risk of extinction’ on par with nuclear war, 350 experts and CEOs warn

June 02, 2023

Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, joined experts and professors in expressing concerns about the “risk of extinction from AI.”

In a letter published by the nonprofit Center for AI Safety (CAIS), over 350 signatories emphasized the need for policymakers to treat this risk on par with the dangers posed by pandemics and nuclear war.

The CEOs of AI companies DeepMind and Anthropic, as well as executives from Microsoft and Google, were among the signatories. Additionally, notable figures like Geoffrey Hinton and Yoshua Bengio, known as the “godfathers of AI,” and professors from institutions such as Harvard and Tsinghua University were involved.

CAIS director Dan Hendrycks stated that Meta, where Yann LeCun, the third “godfather of AI,” works, did not sign the letter, despite efforts by CAIS to seek signatures from Meta employees. The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden, during which AI regulation was expected to be discussed.

“I think if this technology goes wrong, it can go quite wrong,” Altman said at a recent Senate Judiciary subcommittee hearing on AI. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

In April, Elon Musk and a group of AI experts and industry executives were among the first to publicly highlight such risks to society. CAIS said it had extended an invitation to Musk and hoped he would sign the new letter as well.

Concerns regarding AI have arisen due to recent advancements, with supporters claiming that the technology can be applied in areas such as medical diagnostics and legal brief writing. However, fears have also been raised about privacy breaches, misinformation campaigns, and the autonomy of “smart machines.”

The warning follows a similar open letter from the nonprofit Future of Life Institute (FLI) two months prior, which called for a pause in advanced AI research to address risks to humanity. FLI president Max Tegmark, who signed both letters, said the new letter aimed to bring the issue of extinction risk into the mainstream so that constructive conversations could take place.

Altman, whose company OpenAI rose to prominence with the success of its ChatGPT chatbot, drew criticism after calling the EU's draft AI rules over-regulation and threatening to withdraw from Europe, a stance he retracted following political backlash. He is scheduled to meet European Commission President Ursula von der Leyen and EU industry chief Thierry Breton in the near future.