OPINION:
Of the invention of nuclear weapons, Albert Einstein said, “The unleashed power of the atom has changed everything save our modes of thinking and we thus drift toward unparalleled catastrophe.” The same admonition should be applied to the advent of artificial intelligence.
Governments and companies around the world are in a headlong rush to develop AI without guarding against its potential dangers.
We know, for example, that some U.S. government agencies are worried about AI influencing the presidential election through disinformation campaigns run by foreign governments. Worse still, weapon systems are increasingly becoming independent of human control.
During the Cold War, the U.S. and the Soviet Union operated — as we, Russia, China and North Korea do now — under a “launch on warning” policy, which requires that a nuclear counterattack be launched as soon as an adversary’s nuclear missile launch is detected.
The potential for a catastrophic effect from AI was illustrated more than 40 years ago, when AI was still science fiction. On Sept. 26, 1983, Soviet satellites detected an apparent missile attack by the United States. Tensions were high between the U.S. and the Soviet Union. Soviet leader Yuri Andropov, a former chief of the KGB, had all of the paranoia that came with that job.
The duty to warn Andropov of a U.S. missile attack — and trigger a Soviet counterattack — fell to a Soviet lieutenant colonel named Stanislav Petrov. When the warning of an American ICBM launch came in the early hours of that day, Petrov didn’t want to be the person who triggered a nuclear holocaust. As he told Time magazine in 2015, he reported to his commanders that the alarm was a false one without knowing whether it was.
A later investigation found that the Soviet satellites had detected sunlight reflecting off clouds and interpreted it as the launch of five U.S. ICBMs. It was a classic case of “garbage in, garbage out”: the Soviet sensors simply could not distinguish between missile launches and the sun’s reflections.
Had the Soviet satellites been connected to an AI system, it could have short-circuited Petrov’s decision and probably caused an all-out nuclear war.
During the Cold War, the U.S. and Soviet militaries believed they could detect missile launches less than 20 minutes before the attacking missiles arrived. That left barely enough time for a U.S. president or Soviet leader to make the most fateful decision the human race has ever faced.
Today, the detection of missile launches is both confused and outpaced by new technology. The advent of hypersonic missiles, for example, cuts decision-making times to almost nothing.
Hypersonic missiles can reach the U.S. in minutes and could go undetected until they hit their targets. China, according to an Asia Times report, has developed a stealth camouflage “veil” that can make cruise missiles appear on radar to be airliners, perhaps masking an attack on Taiwan.
Both technologies shrink to minutes or seconds the time a president has to decide whether to respond to a possible attack, which brings us back to AI.
The greater the power any AI system is given, the greater the danger it creates. Because decisions must be made so quickly, governments — not only ours — will surrender more decision-making power to machines. As Stanislav Petrov proved in 1983, however, any machine’s decision must be second-guessed by a human being.
AI isn’t yet a reality: What we have today is machine learning that provides nearly instant information to its users. But as AI becomes an actuality, how can we trust it to make decisions that could result in nuclear war? Simply put, we can’t, because we cannot prevent more “garbage in, garbage out” incidents.
That President Biden has tasked Vice President Kamala Harris with dealing with AI is of no comfort. She is not equipped with the intellect, education or experience to deal with its complex dangers.
The only U.S. guidance on AI is found in Mr. Biden’s executive order issued last October and in Defense Department Directive 3000.09. The executive order concerns itself with privacy and civil rights and is useless as a defense policy tool.
The Defense Department directive defines an autonomous weapon system as a “weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.”
That directive emphasizes human control but allows weapon systems complete autonomy in broad — too broad — circumstances. Obviously, our adversaries are not bound by it or by Mr. Biden’s executive order. They can deploy autonomous weapon systems at their leisure.
So, how can we apply Einstein’s warning and begin a new way of thinking about AI? We need to begin by recognizing that no AI should be able to control nuclear weapons, directly or indirectly, without human supervision. A multinational nuclear arms control treaty could be fashioned around that concept.
It may be too late to change our adversaries’ minds. But we have to try.
• Jed Babbin is a national security and foreign affairs columnist for The Washington Times and a contributing editor for The American Spectator.