The Pentagon on Monday rolled out a sweeping set of ethical guidelines to govern the use of artificial intelligence on the battlefield, marking a major step forward in the military’s campaign to set firm rules for 21st-century technology and ensure that humans retain control over machines.
The principles, developed after 15 months of consultation with top technology industry leaders, call for the responsible use of all AI, concrete rules for tracing how systems are built and how they behave, and a blanket policy that a human monitor can override the AI technology at all times. Military officials also said they will work to ensure that AI capabilities neither develop nor exhibit “unintended bias” in how they behave or in whom they target, and the Pentagon stressed that AI will be employed in a given situation only after an explicit, well-defined purpose has been laid out and is thoroughly understood by all involved.
The effort dramatically demonstrates how machine learning and vast data-crunching technologies are changing the face of modern warfare and posing ethical problems that could scarcely have been conceived just a generation ago.
Drones, autonomous tanks and planes, missile systems that can track incoming fire with little to no human involvement, and other AI-driven systems will revolutionize warfare and slowly erase the lines between humans and technology on the battlefield, military strategists say.
Technology leaders in Silicon Valley, ethicists and a host of other critics warn that the military is running the risk of implementing unproven technology before writing a rulebook to govern it. The Defense Department has struggled to quell those concerns while beefing up its AI development, which already includes research centers across the country and a push on Capitol Hill for more money to fund projects.
The effort has gained urgency as rivals such as China make artificial intelligence a centerpiece of their economic and military strategies in the decades to come.
In developing the guidelines, Pentagon officials cast their mission broadly: to set the tone and lead the world on the responsible use of AI.
“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior,” said Defense Secretary Mark Esper. “The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DoD AI Strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”
The U.S. military already is incorporating AI software that can locate enemy forces across great distances, fly drones to evacuate wounded service members from the battlefield and perform other tasks. AI is also playing a part in relatively mundane tasks, such as the use of complex algorithms that can search through millions of candidates to find the right man or woman for a specific military mission.
Rules of the road
Military officials say the successful integration of that technology hinges on setting up proper rules of the road.
Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center and a central figure in the military’s AI push, noted that technology changes but “the U.S. military’s commitment to upholding the highest ethical standards will not.”
Artificial intelligence “is a powerful emerging and enabling technology that is rapidly transforming culture, society and eventually even war fighting,” he said. “Whether it does so in a positive or negative way depends on our approach to adoption and use.”
But critics already are taking aim at the guidelines and the five pillars that form their foundation: responsibility, equitable use, traceability, reliability and governance. Some observers say the principles appear too vague and warn that the assumption that everyone involved in the AI process will define responsible use the same way could end in disaster.
Specifically, a section of the guidelines calls for all Defense Department personnel to “exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”
“I worry that the principles are a bit of an ethics-washing project,” Lucy Suchman, an anthropologist who studies the role of AI in warfare, told The Associated Press. “The word ‘appropriate’ is open to a lot of interpretations.”
Rebecca Crootof, a University of Richmond law school professor, said a key test of the Pentagon effort will be whether the principles are adopted beyond the U.S.
“There are a number of areas where it is still unclear what international law requires for AI systems or weapon systems with increasingly autonomous capabilities,” she told the online publication Defense One last week.
Although the Trump administration has focused heavily on the development of responsible AI, critics say there have been a number of troubling signs. The U.S. in recent years has led international opposition to a proposed global ban on lethal autonomous weapons, alongside other major nations such as Britain.
Administration officials have said the ban is too far-reaching and could stem innovation.
The AI push also has opened a major rift between the Pentagon and Silicon Valley.
In 2018, an employee protest inside Google led the company to drop out of the Pentagon’s groundbreaking Project Maven, which uses algorithms to examine and interpret aerial images from war zones. Google’s withdrawal sparked a major debate among leading tech firms over whether they should help the Pentagon develop AI systems that ultimately will be used to wage war and kill the enemy more efficiently.
That debate is still raging. Last month, Apple acquired the AI company Xnor.ai and immediately canceled the firm’s existing work with Project Maven, according to tech news site The Information.
Former Google CEO Eric Schmidt took over as chairman of the Pentagon’s Defense Innovation Board in 2016 in one of the first clear signs that the Pentagon was mounting a concerted effort to address the fears of Silicon Valley head on.
In his own statement, Mr. Schmidt said Monday’s announcement “demonstrates … that the U.S. and [Defense Department] are committed to ethics.”
Indeed, some analysts have said that establishing a set of formal guidelines could make technology firms less reluctant to work with the Pentagon.
• Mike Glenn can be reached at mglenn@washingtontimes.com.
• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.