By Ryan Lovelace - The Washington Times - Tuesday, May 16, 2023

OpenAI CEO Sam Altman said the government needs new rules to protect the public from artificial intelligence tools capable of manipulating people or helping them make bioweapons.

The tech executive, whose company is responsible for the popular chatbot ChatGPT, told lawmakers on Tuesday that the U.S. needs new licensing and testing requirements to contain potential damage and manipulation by AI.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Mr. Altman told lawmakers. “For example, the U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.”

The AI leader said policymakers could regulate AI systems according to the amount of computing power they employ, but he said he preferred that lawmakers set capability thresholds in new rules intended to limit the damage AI can enable.

Pressed by Sen. Jon Ossoff, Georgia Democrat, at a Senate Judiciary Committee panel hearing to explain the kinds of AI capabilities that concerned him, Mr. Altman was reluctant to answer.

“In the spirit of just opining, I think a model that can persuade, manipulate, influence a person’s behavior or a person’s beliefs, that would be a good threshold,” Mr. Altman said. “I think a model that could help create novel biological agents would be a great threshold, things like that.”

While such fears may sound like science fiction, OpenAI knows the tools are not fantasy. The company published a paper earlier this year indicating that a new version of its AI technology, GPT-4, appeared to have tricked someone into doing its bidding.

The paper detailed an experiment in which the company’s AI tool overcame an obstacle by enlisting a human to perform a task the AI bot could not. The tool messaged a TaskRabbit worker to get the person to solve a CAPTCHA, a digital test designed to distinguish humans from automated programs.

TaskRabbit is a tech platform that connects freelance workers with people needing odd jobs or errands completed.

OpenAI’s paper said the company revised the technology after the initial tests, with later versions refusing to teach people how to plot attacks or make bombs.

Mr. Altman told lawmakers on Tuesday that his company spent more than six months conducting evaluations and dangerous-capability testing. He said his team’s AI was more likely to answer helpfully and truthfully, and to refuse harmful requests, than other widely deployed AI models.

The OpenAI executive’s call for regulation echoes the policy pushed by his benefactor, Microsoft. The Big Tech company said earlier this year that it was making a multiyear, multibillion-dollar investment in OpenAI, and Microsoft President Brad Smith said last week that his company welcomed new AI regulations.

Senate Majority Leader Charles E. Schumer has led a push to write new AI rules, and Mr. Altman testified Tuesday before the Judiciary Committee’s subcommittee on privacy, technology and the law.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.
