- The Washington Times - Monday, June 24, 2024

A version of this story appeared in the daily Threat Status newsletter from The Washington Times.

China’s government does not accept a Biden administration policy that restricts the use of artificial intelligence in decisions to launch nuclear weapons, a senior White House official disclosed Monday.

Tarun Chhabra, director of technology at the White House National Security Council, and a second White House official also said the White House will soon issue a national security memorandum outlining U.S. government use of AI across a range of agencies, from the Pentagon’s efforts to develop hypersonic missiles and new nuclear arms to guidelines for aid programs at the U.S. Agency for International Development.

Mr. Chhabra was asked during a conference about recent talks between U.S. and Chinese officials on the risks of artificial intelligence, a dialogue President Biden sought during his meeting with Chinese President Xi Jinping in November.

“Our position has been publicly clear for a very long time: We don’t think that autonomous systems should be getting near any decision to launch a nuclear weapon,” Mr. Chhabra said during a conference on China at the Council on Foreign Relations. “That’s long-stated U.S. policy.”

China, however, does not agree, he said. Beijing’s rejection of limits on AI use for its rapidly expanding nuclear forces was made during recent talks in Geneva between U.S. and Chinese officials.

“We think all countries around the world should sign up to that,” he said. “We think that makes a lot of sense to do.”

Mr. Chhabra defended the Geneva talks, held last month, as needed because AI is becoming a more powerful national security tool for many nations, even as critics warn that crucial decisions on deploying and detonating powerful weapons could be taken out of human commanders’ hands. U.S.-China dialogue on the subject, he said, will allow discussion of the dangers and of how to make the technology safe.

The White House hopes the U.S.-China AI talks will remain an open channel of communication, he said.

“It’s not a venue for us in any way to negotiate our technology protection measures,” he said. “Those are not up for pre-negotiation. But it is a venue for us just to exchange views on risk and safety, and then to have a channel open, if and when that becomes useful and needed.”

Earlier during the conference Deputy Secretary of State Kurt Campbell also said China is reluctant to hold talks on its growing nuclear arsenal.

“I think China has been reluctant to have any discussion that would in any way suggest that they’re prepared to limit a dramatic increase in their nuclear arsenal,” Mr. Campbell told the Council session. “But it’s possible they may be prepared to talk about other issues around nuclear issues.”

One of those is AI, he said.

“I think both nations understand on some level some of the challenges that AI presents to the military command and control, particularly in the nuclear arena,” he said.

Artificial intelligence will allow large-scale processing of sensor data that can track mobile missiles on land and submarines at sea, especially when combined with other new technology such as quantum sensors, according to testimony to a congressional China commission. That could destabilize strategic deterrence by undermining nuclear survivability, allowing the targeting of systems once thought to be safe from attack.

Chinese military writings have likewise stated that AI technology could weaken China’s own nuclear forces in this way.

The American goal for the talks with China on AI is to create a “risk hierarchy” for military AI and to exchange information on test, evaluation, validation and verification processes. The U.S. also wants China to commit to keeping humans in the loop for actions related to nuclear weapons command and control.

It is unknown how China would use AI for nuclear command and control.

The Soviet Union during the Cold War developed an autonomous nuclear launch system called “Dead Hand” that reportedly remains in use today.

The system, also known as “Perimeter,” can automatically launch long-range nuclear missiles if the country’s leaders are killed or incapacitated.

Maher Bitar, deputy assistant to the president and coordinator for intelligence and defense policy at the NSC, said the forthcoming memorandum on AI will seek to preserve the U.S. lead in the technology. Mr. Bitar said the memorandum aims to make sure the United States adopts AI “responsibly,” and also will seek to prepare for how U.S. adversaries will use AI.

The memorandum will cover the Pentagon and U.S. intelligence agencies, the Energy Department and national laboratories, Homeland Security Department, FBI and Justice Department and other federal agencies.

Another concern related to AI is its use in violating human rights through internal surveillance and control mechanisms such as China’s Communist regime employs, Mr. Chhabra said.

Mr. Bitar said AI poses an additional threat to U.S. intelligence personnel who could be identified through the technology.

“That is something that I think we have to be very mindful of,” he said. “And that’s sometimes an area where both human rights concerns and counterintelligence concerns can actually merge together because we also want to make sure that we are protecting the U.S. government, in our capabilities and our people in the process.”

• Bill Gertz can be reached at bgertz@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.