OpenAI CEO Sam Altman says AI is ‘most important step yet’ for humans and tech

OpenAI Chief Executive Officer Sam Altman speaks during an event at Keio University on June 12, 2023, in Tokyo, Japan. (Tomohiro Ohsumi/Getty Images/TNS)

Sam Altman, chief executive officer of artificial intelligence startup OpenAI Inc., said there are many ways that rapidly progressing AI technology “could go wrong.” But he argued that the benefits outweigh the costs. “We work with dangerous technology that could be used in dangerous ways very frequently,” he said.

Altman addressed growing concern about the rapid progress of AI in an onstage interview at the Bloomberg Technology Summit in San Francisco. He has also publicly pushed for increased regulation of artificial intelligence in recent months, speaking frequently with officials around the world about responsible stewardship of AI.

Despite the potential dangers of what he called an exponential technological shift, Altman spoke about several areas where AI could be beneficial, including medicine, science and education.

“I think it’d be good to end poverty,” he said. “But we’re going to have to manage the risk to get there.”

OpenAI has been valued at more than $27 billion, putting it at the forefront of the booming field of venture-backed AI companies. Addressing whether he would financially benefit from OpenAI’s success, Altman said, “I have enough money,” and stressed that his motivations were not financial.

“This concept of having enough money is not something that is easy to get across to other people,” he said, adding that it’s human nature to want to be useful and work on “something that matters.”

“I think this will be the most important step yet that humanity has to get through with technology,” Altman said. “And I really care about that.”

Elon Musk, who helped Altman start OpenAI, has subsequently been critical of the organization and its potential to do harm. Altman said that Musk “really cares about AI safety a lot,” and that his criticism was “coming from a good place.” Asked about the theoretical “cage match” between Musk and his fellow billionaire Mark Zuckerberg, Altman joked: “I would go watch if he and Zuck actually did that.”

OpenAI’s products — including the chatbot ChatGPT and image generator Dall-E — have dazzled audiences. They’ve also helped spark a multibillion-dollar frenzy among venture capital investors and entrepreneurs who are vying to help lay the foundation of a new era of technology.

To generate revenue, OpenAI is giving companies access to the application programming interfaces needed to create their own software that makes use of its AI models. The company is also selling access to a premium version of its chatbot, called ChatGPT Plus. OpenAI doesn’t release information about total sales.
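That API-based model works roughly as described: a customer gets an API key, sends requests to OpenAI’s hosted models and is billed for usage. As a minimal sketch not drawn from the article, a developer using OpenAI’s official Python library as it existed in 2023 might call the chat API like this, assuming an OPENAI_API_KEY environment variable and placeholder model and prompt values:

import os
import openai

# Minimal sketch of a chat request against OpenAI's hosted models,
# using the pre-1.0 openai Python library that was current in 2023.
openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model the account can access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this earnings report in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])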

Microsoft Corp. has invested a total of $13 billion in the company, people familiar with the matter have said. Much of that money will flow back to Microsoft as payment for using its Azure cloud network to train and run OpenAI’s models.

The speed and power of the fast-growing AI industry have spurred governments and regulators to try to set guardrails around its development. Altman was among the artificial intelligence experts who met with President Joe Biden this week in San Francisco. The CEO has been traveling widely and speaking about AI, including in Washington, where he told U.S. senators that “if this technology goes wrong, it can go quite wrong.”

Major AI companies, including Microsoft and Alphabet Inc.’s Google, have committed to participating in an independent public evaluation of their systems. But the U.S. is also seeking a broader regulatory push. The Commerce Department said earlier this year that it was considering rules that could require AI models to go through a certification process before being released.

Last month, Altman signed onto a brief statement that included support from more than 350 executives and researchers saying “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Despite dire warnings from technology leaders, some AI researchers contend that artificial intelligence isn’t advanced enough to justify fears that it will destroy humanity, and that focusing on doomsday scenarios is only a distraction from issues like algorithmic bias, racism and the risk of rampant disinformation.

OpenAI’s ChatGPT and Dall-E, both released last year, have inspired startups to incorporate AI into a vast array of fields, including financial services, consumer goods, health care and entertainment. Bloomberg Intelligence analyst Mandeep Singh estimates the generative AI market could grow about 42% a year to reach $1.3 trillion by 2032.

___

© 2023 Bloomberg News

Distributed by Tribune Content Agency, LLC.