- The Washington Times - Friday, June 21, 2024

A version of this story appeared in the daily Threat Status newsletter from The Washington Times.

Growing fears about the dangers of artificial intelligence have sparked a leadership shake-up at one of America’s leading AI companies.

OpenAI co-founder Ilya Sutskever recently left the company and is now working on Safe Superintelligence Inc., whose stated aim is to safely develop superintelligence, meaning AI systems that would be smarter than human beings.

While at OpenAI, known for its ChatGPT technology, Mr. Sutskever warned that AI could go rogue and ultimately spark the end of humanity. Mr. Sutskever also helped drive the public ouster of CEO Sam Altman last year.

Mr. Altman, however, returned to OpenAI’s top spot within weeks and announced Mr. Sutskever’s removal from the company’s board. Now, Mr. Sutskever is in pursuit of a business model that Safe Superintelligence says will insulate safety and security from the growing commercial pressures of the lucrative AI sector.

Mr. Sutskever announced his AI project last week with partners Daniel Gross, formerly of Apple, and Daniel Levy, who also left OpenAI.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the Safe Superintelligence founders said. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”

Safe Superintelligence has cast AI safety as its top priority. Its leaders pledged not to be distracted by “management overhead or product cycles.”

The company plans to be based in Palo Alto, California, and Tel Aviv, Israel. Lulu Cheng Meservey, a spokeswoman for Safe Superintelligence, said Russian media rumors that Mr. Sutskever was negotiating with a Russian bank for his work are not true. She told The Washington Times that Mr. Sutskever has “had zero conversations with any Russian banks.”

Mr. Altman’s solicitation of foreign funding has drawn criticism. He reportedly sought investment from the United Arab Emirates for his grander AI ambitions, which are thought to have a price tag of up to $7 trillion.

The seesawing drama surrounding Mr. Altman’s ouster and return in November, along with former employees’ complaints about OpenAI’s culture, has contributed to outside suspicion of the maker of the popular ChatGPT.

With Mr. Sutskever leaving the company, retired Army Gen. Paul Nakasone has joined OpenAI’s board. He is a former director of the National Security Agency and head of U.S. Cyber Command.

He helped establish an AI Security Center at NSA that he said would help private companies understand threats to their intellectual property and work closely with businesses, national labs, academia, other agencies and select foreign partners.

“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service,” Mr. Nakasone said in a statement this month. “I look forward to contributing to OpenAI’s efforts to ensure artificial general intelligence is safe and beneficial to people around the world.”

Mr. Nakasone’s arrival also attracted criticism.

Former NSA contractor Edward Snowden, who fled the U.S. in 2013 after leaking a trove of confidential government documents about domestic and foreign surveillance, chastised OpenAI for betraying people.

“They’ve gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc),” Mr. Snowden said on X. “There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned.”

Mr. Snowden, who was granted Russian citizenship in 2022, said the intersection of AI and large quantities of surveillance data will concentrate power in the hands of an unaccountable few.

Nations around the world appear eager to see inside leading AI labs. U.S. government-backed researchers investigating safety at AI labs told The Times this month that they found leading labs vulnerable to foreign spies, including Chinese theft of technology.

The researchers, working with the State Department, dug into labs including OpenAI, Google DeepMind and Anthropic, and found minimal security and lax attitudes toward safety.

Google DeepMind previously told The Times it takes security seriously.

OpenAI did not respond to requests for comment.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
