OPINION:
Media stories comment on the benefits and dangers of artificial intelligence as if it were today’s reality instead of a scientific research goal. Real AI — in which a computer’s ability to function would be equal to that of the human mind — hasn’t been achieved, but it could be in a decade, a month or tomorrow.
The present reality is “machine learning,” in which computer algorithms “learn” by absorbing enormous databases that “teach” them to answer complex questions. To reach the level of real AI, computers would have to program themselves, think and reason independently, and in every other way become sentient.
AI, when — not if — it is achieved, will be an enormous scientific breakthrough, empowering everything from medical diagnoses to intelligence analysis. It could also — not inevitably but possibly — be an enormous threat to human survival.
There are many science fiction stories about sentient machines trying to rid the world of humans. The “Terminator” movies centered on the fictional “Skynet” computer network, which sent killer robots back in time to eliminate the leader of the human resistance (in the first film, by targeting his mother before he was born). Left to govern itself, AI could decide that Skynet’s plan to rid the planet of humans is a jolly good idea.
In an ideal world, computers (and robots) would be governed by the great science fiction author Isaac Asimov’s “Three Laws of Robotics.” Asimov’s laws provided that: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey human commands unless they conflict with the first law; and third, a robot must protect its own existence unless doing so would violate either of the first two laws.
There may be an ideal world somewhere in the galaxy, but Earth ain’t it.
We don’t know which nation really leads the world in developing AI, but China is a safe bet: It is reportedly spending billions in a race to develop the technology. We also don’t know how far the U.S. lags behind — if it does — because U.S., Chinese and other nations’ research is secret.
According to a report last week, the Cyberspace Administration of China has proposed a new law providing that all AI programs must “reflect core socialist values, and must not contain content on subversion of state power.” It also requires that before any AI products are made available to the public, they undergo a security assessment by China’s “national internet regulatory departments.”
And what of the AI “products” that aren’t made available to the public?
American research and development of AI also lack the guidance that Asimov gave us. In October 2022, the White House published its “Blueprint for an AI Bill of Rights.” It was nothing more than liberal virtue signaling about people’s right to privacy and to control their personal data. Elon Musk and other industrialists have proposed a six-month pause in developing systems more powerful than GPT-4, the model behind OpenAI’s ChatGPT, which can communicate and pass college and professional exams, and which has been shown to be biased against conservative thought. That, too, is no answer.
On April 4, President Biden said that it remained to be seen whether AI was dangerous and that companies must ensure their products are safe before releasing them to the public. Seriously? He apparently believes there’s no danger from AI being created by China, Russia, Iran, North Korea and a host of other nations. Wouldn’t it be vastly smarter to take steps to ensure the dangers are minimized?
On April 11, the Biden administration said it would try to implement its “AI Bill of Rights” by rules ensuring that people aren’t subjected to discrimination by programs such as ChatGPT.
Mr. Biden leaves us wandering in a world in which lethal autonomous weapon systems will soon be common and may come to dominate combat. Defense Department Directive 3000.09 defines them as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” Humans are out of the loop in deciding which targets — including people — will be attacked.
Our adversaries can produce any lethal autonomous systems they choose. There are no laws or treaties governing AI, so there is no safety for Americans or the rest of humanity as AI advances toward sentience.
Though there is no guarantee that nations would obey it, there could be an international treaty governing AI, specifically including all three of Asimov’s laws. Under the auspices of the United Nations Convention on Certain Conventional Weapons, we have been engaged in such discussions since 2014, to no avail.
Mr. Biden could, and should, make the signing of such a treaty, including Asimov’s laws, a top priority. But he won’t, because he thinks the dangers of uncontrolled AI have not been proved. Once they are, it will be too late to avert them.
Our adversaries see AI as the means to economic growth and military advantage. (The Iranians, of course, believe an apocalypse is a career objective.) Without such a treaty, there is little or nothing we can do to lessen the dangers posed by AI.
The only protection we have is that humanity is not usually suicidal. But once AI is fully sentient, we may not have anything to say about its decisions.
• Jed Babbin is a national security and foreign affairs columnist for The Washington Times and contributing editor for The American Spectator.