The level of human participation in U.S. military operations is undergoing radical change, fueled by new artificial intelligence tools capable of replacing direct human action.
The Defense Department is implementing a 2023 policy it interprets as allowing direct human control to be replaced by human judgment.
The department’s chief digital and artificial intelligence officer, Craig Martell, told The Washington Times that someone will always be accountable for the functioning of cutting-edge technology used by the military, but the nature of humans’ contribution will change.
“Will there always be a human in the loop for every autonomous decision? I don’t think that’s possible anymore,” Mr. Martell said at a government symposium organized by his office.
Col. Matt Strohmeyer told reporters at the symposium that the department is evolving from human-driven decisions to human-supervised actions, described by terms such as human-is-the-loop, human-in-the-loop and human-on-the-loop.
Human-is-the-loop decisions are how the department used to do things, with humans accomplishing every step of a particular task. In administrative work, an example is a government employee who must aggregate information from many departments across the military, from emails and phone calls, to develop presentation slides.
Human-in-the-loop decisions maintain human participation but automate some tasks previously performed only by humans. An example is Israel’s Iron Dome system, which detects rockets and predicts their trajectories, then lets a human decide whether to launch an interceptor, according to a 2017 Army University Press journal article.
Human-on-the-loop decisions are made without any necessary human input but allow a human to override the machine’s determination. The U.S. has such technology, and the 2017 journal article points to Navy ships already deploying weapons systems that detect, evaluate, track and use force against anti-ship missiles and high-speed aircraft without a human directing them to do so.
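The practical difference among the three models is where the human sits relative to the decision: as the actor, as the authorizer or as a veto. The sketch below is a minimal illustration in Python, with invented names, prompts and logic rather than a description of any actual military system, that makes that placement concrete.

```python
# A hypothetical sketch of the three control models described above.
# All names, prompts and logic are invented for illustration; this does
# not represent any actual Defense Department or Navy system.
from enum import Enum


class ControlMode(Enum):
    HUMAN_IS_THE_LOOP = "human performs every step"
    HUMAN_IN_THE_LOOP = "machine recommends, human decides"
    HUMAN_ON_THE_LOOP = "machine decides, human may override"


def decide_engagement(mode, threat_detected, ask_operator):
    """Return True if the system should engage a detected threat.

    ask_operator is a callable that poses a yes/no question to a human
    and returns a boolean answer.
    """
    if not threat_detected:
        return False
    if mode is ControlMode.HUMAN_IS_THE_LOOP:
        # The human detects, evaluates and acts; the machine automates nothing.
        return ask_operator("Evaluate the threat and engage manually?")
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # The machine detects and predicts, but a human must authorize the
        # action -- the Iron Dome pattern described above.
        return ask_operator("Machine recommends engagement. Authorize?")
    # HUMAN_ON_THE_LOOP: the machine acts on its own, and the human's only
    # role is the chance to veto before the action completes.
    vetoed = ask_operator("Machine is engaging. Override?")
    return not vetoed


# Example: in on-the-loop mode, the machine engages unless the human vetoes.
if __name__ == "__main__":
    no_veto = lambda prompt: False
    print(decide_engagement(ControlMode.HUMAN_ON_THE_LOOP, True, no_veto))  # True
```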
Concerns abound that the rapid advancement of AI will push humans out of the loop entirely as militaries around the world look for ways to move faster and more efficiently than their enemies.
Mr. Martell said the Department of Defense will not allow new technology to usurp the judgment of the U.S. military.
“There will always be a responsible party who understands the boundaries of the technology, who when deploying the technology takes responsibility for deploying that technology,” Mr. Martell told The Times.
To ensure that is the case, new training will be necessary, and Mr. Martell said training is one of the things he believes the military does especially well.
Among the groups getting U.S. troops ready for the AI overhaul of warfare is MIT Horizon, a program of the Massachusetts Institute of Technology.
MIT Horizon’s Philip O’Connell said his team is providing AI training for the U.S. military, producing a range of instruction that will ideally prepare troops, within the first few hours, to be conversant with colleagues or vendors about AI topics.
“We also can take input from an agency or a branch or something else where they can tell us, ‘Here’s a fairly specific thing that we want to do, here are the people that we want to put through that experience, can you design it for us?’” Mr. O’Connell said. “And we can work with MIT faculty, people from industry and a collection of [subject matter experts] and basically help deliver workshops on the combination of things that you need.”
For example, Mr. O’Connell said MIT is working with a “group in Europe” on instruction involving the intersection of AI and banking.
Mr. Martell’s office determines which active-duty and reserve personnel have access to MIT’s AI training, according to Mr. O’Connell, who said MIT plans to help demystify AI for the Department of Defense as it adopts new technologies.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.