- The Washington Times - Tuesday, July 25, 2023

Major artificial intelligence company Anthropic is warning lawmakers that new AI tools will fuel the proliferation of bioweapons into the hands of irresponsible actors in the not-too-distant future.

Dario Amodei, who runs the self-styled AI safety and research company, said Tuesday that humanity faces an existential risk from AI in the long term, but that the spread of bioweapons knowledge may happen sooner as the technology progresses.

Mr. Amodei told a Senate Judiciary Committee panel that his company has worked with biosecurity experts for the last six months to study how AI can contribute to the potential misuse of biology, and found that the emerging tech tools are showing early danger signs.

“Today, certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of specialized expertise, this being one of the things that currently keeps us safe from attacks,” Mr. Amodei said at a committee hearing. “We found that today’s AI tools can fill in some of these steps albeit incompletely and unreliably.”

Mr. Amodei said AI systems pose a substantial risk of being able to fill in all of the missing pieces in two to three years.

Anthropic’s dire warning to lawmakers comes as the Senate is looking to write new AI rules, with the Judiciary Committee serving as a central point for lawmakers probing the emerging tech. Previous committee meetings have examined AI-enabled problems of political destabilization and cyber chaos.

Sen. Richard Blumenthal, Connecticut Democrat, said he wants to see Congress create a new agency to regulate AI and shape research.

“The future is not science fiction or fantasy, it’s not even the future, it’s here and now,” Mr. Blumenthal said at Tuesday’s hearing. “And a number of you have put the timeline at two years before we see some of the biological most-severe dangers, it may be shorter because the kinds of pace of development is not only stunningly fast, it has also accelerated.”

Mr. Amodei advocated for better AI supply-chain security and testing of AI tools to help mitigate the biothreat, which Anthropic also warned of at a meeting with the U.N. Security Council earlier this month.

The Senate Judiciary Committee’s panel on privacy, tech and the law considered the AI-powered biothreat on the heels of President Biden saying his administration had secured voluntary commitments from seven AI companies, including Anthropic, to create new tech tools safely.

Alongside Mr. Amodei, leaders from Amazon, Google, Inflection, Meta, Microsoft and OpenAI visited the White House on Friday, where the president showcased fresh promises from the companies.

The pledges included provisions on testing, investing in cybersecurity, addressing insider threats, and helping audiences understand when AI has helped create content.

Mr. Biden said Friday he intends to take executive action on AI soon.

“These commitments are a promising step but we have a lot more work to do together, realizing the promise of AI while managing the risk is going to require some new laws, regulations and oversight,” Mr. Biden said. “In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation.”

Some Democrats are eager to see Mr. Biden act rather than wait for Congress to set the direction for new AI restrictions.

Senate Select Committee on Intelligence Chairman Mark Warner wrote to Mr. Biden on Monday and urged him to bolster the voluntary commitments before Congress passes new laws.

“It is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse,” the Virginia Democrat wrote on Monday.

Mr. Warner said he wants the Biden administration to pursue pledges from vendors to help prevent nonconsensual image generation, facial recognition, social scoring and the spread of cyber chaos, among other things. He said the additional commitments from vendors ought to focus on licensing, development practices and post-deployment monitoring.

He said the intelligence committee heard from AI makers who said they did not know how best to report malicious activity they discovered, such as intrusions into their networks or efforts by foreign actors to use their AI tools, and he urged Mr. Biden’s team to get engaged.

The Biden administration is working on a new National AI Strategy that the White House Office of Science and Technology Policy has touted as taking a whole-of-society approach.

This article was based in part on wire service reports.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.
