As the artificial intelligence (AI) revolution heats up, some experts are warning of AI’s inherent dangers, including what some believe is its inevitable conclusion: humanity’s extinction.
During an appearance on the latest episode of Bloomberg’s “AI IRL,” AI researcher Eliezer Yudkowsky said he believes AI is going to cause the end of humanity.
“Where we are right now is exciting. Where we are heading is a lot more important than that, I’d say,” Yudkowsky said, adding, “I think [AI] gets smarter than us. I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die.”
When asked how experts can get the public to take the threat of AI seriously, Yudkowsky said people need to see early versions of AI in action through programs like ChatGPT.
“[After seeing ChatGPT] people were like, ‘Oh, ok. This is not a movie. This is real. This is starting to happen.’ Some people were like, ‘Oh, well this thing is clearly not smart enough to kill everybody yet, therefore that can never happen.’ It’s not actually obvious to me that the way this plays out is that people just never notice,” Yudkowsky said.
Yudkowsky explained that he is generally for experimentation and progress, but warned AI is unlike even nuclear technology in its potential ability to destroy humanity.
“When it comes to artificial intelligence or gain-of-function research in biology … those are places where you have a little booboo and maybe there’s nobody around to learn from it because everybody is dead. That I think is a different and special case,” he said.
Asked if there is a point of no return when it comes to AI, Yudkowsky offered some tepid comfort, saying, “If you’re all still alive, you haven’t hit the point of no return. Where there is life, there’s hope. We could all wake up tomorrow and decide to not do this. It’s not a lot of hope. I’m not willing to just decide by myself that humanity won’t commit suicide about this. It’s not my place to decide that.”