The Washington Times, Monday, June 13, 2022

TAMPA, Florida — Writing an aircraft maintenance plan is far less sexy than the Hollywood-esque killing machines or images of a dystopian future often associated with artificial intelligence and its rise in the U.S. military.

Still, defense industry leaders say such relatively mundane tasks are perfect examples of the tangible, everyday capabilities that AI and machine learning can provide, and they underscore how the burgeoning partnership between humans and machines will lead to a more effective, safer fighting force. Whether it's predicting equipment failures on an F-16 before they happen, sorting through a mountain of data, or correcting overhead video in real time for a U.S. Special Forces team, the rapidly expanding role of artificial intelligence in the military is often much less exciting than its critics suggest but more important than most realize.

Rather than shy away from a debate about the potential moral pitfalls of AI and its myriad uses in war fighting, industry insiders argue that it would be irresponsible — perhaps even immoral — to ignore the technology when it can accomplish a great deal of good in the right hands.

“These are often 18-, 19-year-olds with months of experience and training. And they say, ‘OK, your job, at the operational level, is to maintain this F-16.’ I think it’s more ethical to enable them with the tools to help implement the appropriate solution rather than have them guess at what the problem is,” said Logan Jones, general manager and president of SparkCognition Government Systems, an AI-focused firm devoted to the government and national defense sectors.

Mr. Jones spoke to The Washington Times during a recent U.S. special operations convention in Tampa that drew companies from around the world, including many on the cutting edge of AI and its military applications.

One of SparkCognition’s AI products, the “digital maintenance adviser” used by the Air Force, can comb through huge amounts of data — including handwritten maintenance logs — to help identify problems and prescribe solutions far faster than a human could alone.
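At its core, that kind of triage is a text-classification problem: map a free-text maintenance write-up to a likely fault category. The sketch below is a generic illustration using scikit-learn, not SparkCognition's actual system; the log entries and fault labels are invented for the example.

```python
# Hypothetical illustration of maintenance-log triage; not SparkCognition's
# product. Entries and fault categories are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical log entries paired with the fault each was later traced to.
logs = [
    "hydraulic pressure drops during gear retraction",
    "intermittent fault light on avionics bus",
    "oil temperature spikes at high throttle",
    "gear door actuator slow to respond",
]
faults = ["hydraulics", "avionics", "engine", "hydraulics"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(logs, faults)

# Rank likely fault categories for a new write-up from the flight line.
new_entry = "pressure loss noted while cycling landing gear"
for fault, p in zip(model.classes_, model.predict_proba([new_entry])[0]):
    print(f"{fault}: {p:.2f}")
```

A real system would train on far more history and would need OCR to handle the handwritten logs first, but the shape of the problem, text in and ranked fault hypotheses out, is the same.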

“You give somebody a tablet and help them better triage or take symptoms and make a recommendation on what the problem might be — AI at the edge. It helps them do their job,” Mr. Jones said, before addressing the ethical debates surrounding AI and what it should or shouldn’t do.

The sensational, he said, distracts from the truly useful in the AI debate.

“If you look at the world of opportunities, a healthy debate to have is on some small fraction of use cases,” he said. “I think it distracts from the huge amount of value that’s ready to go today. There’s so much mundane, low-hanging fruit” in the military and national security sectors.

Indeed, while critics often focus on “killer robots” and existential debates about whether AI can determine the value of human life, the focus deep inside the Pentagon is usually on how machines can quickly sift through data, process reports, scour job applications for the right candidates, sort audio and video files, and perform other routine tasks.

Those missions have become big business. This year, the Pentagon is reportedly set to spend as much as $874 million on AI initiatives, a figure that has risen dramatically over the past several years. The exact amount of spending is difficult to pin down because the Defense Department and its industry partners are involved in hundreds of AI-related programs, many of them highly classified, whose details will not be made public.

Pentagon leaders are arguably most excited about AI’s potential to process, evaluate and sort massive amounts of information collected from various sources on or around the battlefield. Officials say that there is now so much open-source or commercial satellite imagery and other information on the internet that it’s incumbent on military units to examine that data in real time rather than rely solely on drone footage collected by military personnel, for example.

The challenge arises when attempting to pull all of that information together and evaluate it in a matter of minutes or seconds.

“How do you fuse that together, and not with overburdening the operator … so that it provides a holistic level of confidence to the operator without them having to do” all of the work? asked James Smith, acquisition executive at U.S. Special Operations Command, during a question-and-answer session at last month’s Tampa convention.

“What artificial intelligence could bring to bear on that very interesting problem: to provide a very simple user interface to the operator to say, ‘Here’s a level of confidence about what you’re going to see on this terrain,’” he said.

Man and machine

For skeptics, the rise of AI and the potential enlistment of autonomous weapons raise serious moral questions and should spur governments around the world to enact tough laws to limit their use. The Campaign to Stop Killer Robots, for example, has been a leading international voice in the push to get the U.S., Britain and other major powers to restrict AI in the military domain.

“It’s time for government leaders to draw a legal and moral line against the killing of people by machines and commit to negotiate new international law on autonomy in weapons systems,” Clare Conboy, media and communications manager for the campaign, said in a recent statement.

Industry leaders say that even at today’s cutting edge, there simply isn’t a world in which killer robots develop minds of their own and start taking lives.

“There’s going to be a human on the loop all day, every day,” Brandon Tseng, co-founder and president of Shield AI, told The Times.

“I think a lot of people go straight to Hollywood and think: ‘worst-case scenario.’ But there’s a lot of nuance in between,” he said. “There’s a lot of technology and engineering that goes into making a system safe.”

Mr. Tseng described his company’s portfolio as “self-driving technology for unmanned systems.” Shield AI, he said, specializes in unmanned systems that are able to operate without GPS using a program called “Hivemind.”

That technology allows military personnel to give the system a mission and then let the machine carry out its objective. In other words, there is no need for human hands on a joystick to control the machine’s every move as it scours a building for hostages, for example, because the system can move and make decisions on its own.

“You just want to be telling it what it should do and it should execute what that mission is,” he said.
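In software terms, the difference between joystick control and mission-level tasking is the difference between streaming stick inputs and handing the machine a goal to plan against. The toy sketch below is purely illustrative and says nothing about Hivemind's internals: the operator supplies only a start and an objective, and the planner works out every intermediate move.

```python
# Toy mission-level autonomy: the operator gives a goal; the system plans.
# The grid, start and goal are hypothetical; this is not Shield AI's Hivemind.
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and step not in prev:
                prev[step] = cell
                queue.append(step)
    return None  # no route to the objective

# The "mission" is just the objective; every move in between is the machine's.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan(grid, start=(0, 0), goal=(2, 3)))
```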

When integrated into a fighter jet, the company’s AI technology can accumulate the equivalent of decades of flight experience in a matter of days, Mr. Tseng said.

“They will become incredible pilots capable of doing things other people could never imagine,” he said.

Such technology may seem scary to some, but Mr. Tseng and other proponents argue that AI-piloted planes can take more risks and attempt more daring maneuvers than their human counterparts, all without putting a pilot’s life at risk.

“This is where the battlefield is going,” Mr. Tseng said.

Beyond the battle itself, AI also will play a central role in ensuring the accuracy of data that comes across the screens of military personnel.

Craig Brower, president of U.S. government operations at the visual intelligence technology company Edgybees, said his firm aims to “make the world accurate” by enlisting AI to help instantly correct raw video footage used by troops, firefighters, police or others on the front lines.

Such satellite video may appear accurate to the naked eye, but “it can be off from 20 to 120 meters,” Mr. Brower said. In the past, such corrections and verifications were arduous manual jobs that could cost valuable time.

“What the technology is actually doing is our AI machine-learning is looking at that video feed as it’s coming in in real time and it’s identifying control points throughout that scene,” Mr. Brower said in a conversation just off the Tampa convention center floor. “And then it’s mapping those control points to an image and elevation base.”

The video corrections, he said, are done “practically instantly,” which would be crucial during a military mission that is running on a tight schedule.
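In broad strokes, the pipeline Mr. Brower describes resembles classic image registration: detect matchable features in each incoming frame, match them against a georeferenced reference image, and fit a transform that pulls the frame into the reference's coordinates. The sketch below, using OpenCV, illustrates that general technique; it is an assumption-laden stand-in, not Edgybees' implementation.

```python
# Generic feature-based frame registration with OpenCV; an illustration of
# the broad technique, not Edgybees' system.
import cv2
import numpy as np

def register_frame(frame, reference):
    """Estimate a homography mapping a video frame onto a reference image."""
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

    # Detect "control points" (ORB keypoints) in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(gray_f, None)
    kp_r, des_r = orb.detectAndCompute(gray_r, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)[:200]
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards mismatched points while fitting the transform.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Warping the frame into the reference's georeferenced pixel grid "corrects"
# it, so each pixel inherits the reference's coordinates:
# corrected = cv2.warpPerspective(frame, H, reference.shape[1::-1])
```

A flat homography is only an approximation over rough terrain, which is presumably why the elevation base Mr. Brower mentions matters: real systems correct for the shape of the ground, not just the angle of the camera.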

• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.