By Ben Wolfgang - The Washington Times - Thursday, May 20, 2021

Tyndall Air Force Base in Florida this month made military history: the first full-length test of Skyborg, a groundbreaking artificial intelligence system that hitched a ride on a drone and demonstrated “basic aviation capabilities” with limited human involvement.

The system worked, but critics say that historic flight may have been a small step toward a doomsday scenario in which AI-powered aircraft inadvertently spark the next world war.

The Pentagon’s cutting-edge Skyborg program aims to eventually put autonomous drones next to traditional fighter aircraft to form man/machine combat tag teams. It is just one piece of the nation’s much broader AI initiative, which military officials say is crucial to staying a step ahead of China, Russia and other potential adversaries. With unmanned ships and planes and software that could replace the work of flesh-and-blood computer analysts, AI programs across the Defense Department have received top priority and billions of dollars in funding.

The multimillion-dollar Skyborg program has encountered little resistance, but some researchers say the system is a prime example of a core concern with the Pentagon’s approach to AI. They argue that the Pentagon is focusing too heavily on what autonomous weapons and vehicles can offer in combat scenarios and is paying too little attention to what could go wrong if enemies hack into American systems or, should science fiction become reality, what may happen if the program develops a mind of its own.

“What happens when the electronics are jammed or spoofed? Or all communications are lost? Will there be some way to ensure that the Skyborg doesn’t go rogue and do something we don’t want it to do?” said Michael T. Klare, a senior visiting fellow with the Arms Control Association who specializes in emerging technologies. Mr. Klare also works with the Campaign to Stop Killer Robots, a coalition of organizations that warns against potential pitfalls of autonomous technology and is pushing for strict international rules to govern AI.

“That’s a real worry because they’re intended for missions against high-value Russian and Chinese military [targets],” he said. “This could be viewed by an adversary as a very escalatory act. You want to make sure there’s a human who has full control over these devices in the event of a conflict so it doesn’t do anything we don’t want it to do.”

Pentagon officials insist that safety and ethics remain at the center of the AI playbook. The Skyborg autonomy core system (ACS), specifically, remains in an experimental phase.

Military officials stress that they are laser-focused on developing a system they can fully trust to perform its assigned mission — and only its assigned mission.

“We’re extremely excited for the successful flight of an early version of the ‘brain’ of the Skyborg system,” Brig. Gen. Dale White, program executive officer for fighters and advanced aircraft with the Skyborg program, said in a statement after the test flight.

“It is the first step in a marathon of progressive growth for Skyborg technology,” he said. “These initial flights kick off the experimentation campaign that will continue to mature the ACS and build trust in the system.”

Milestone flight

During its milestone flight, Skyborg “demonstrated basic aviation capabilities and responded to navigational commands, while reacting to geo-fences, adhering to aircraft flight envelopes, and demonstrating coordinated maneuvering,” Pentagon officials said. The flight lasted two hours and 10 minutes. The Skyborg system was loaded aboard a Kratos UTAP-22 drone, which teams on the ground and in the air monitored throughout the flight, officials said.

Once fully up and running, the Skyborg initiative is expected to deliver a host of benefits for the U.S. military. Chief among them is the ability to field multiple “low-cost, attritable” craft that can operate mainly autonomously, putting human pilots in much less danger and theoretically giving American troops a numerical advantage over enemy air forces.

But the worst-case scenarios that Mr. Klare and other AI researchers mentioned remain top of mind inside the Pentagon and across the U.S. government as a whole.

The National Security Commission on Artificial Intelligence, an independent federal panel formed in 2018 and chaired by former Google CEO Eric Schmidt, released its final recommendations on national AI policy earlier this year. The study underscored the importance of AI programs in the military and across society but also highlighted dangers.

“Human operators will not be able to keep up with or defend against AI-enabled cyber or disinformation attacks, drone swarms or missile attacks without the assistance of AI-enabled machines,” the report reads in part.

The study specifically addressed AI systems in combat scenarios and said human commanders must retain an indispensable role.

“Provided their use is authorized by a human commander or operator, properly designed and tested AI-enabled and autonomous weapon systems can be used in ways that are consistent with international humanitarian law,” the report reads in part.

Indeed, Pentagon officials have regularly stressed how critical it is to build human control into AI systems so they can be shut down in an emergency or if an enemy tries to take control of the program. A 2012 Defense Department directive, now a part of the much broader Pentagon AI strategy, calls for “guidelines to minimize the probability and consequences of failure in autonomous and semiautonomous weapon systems that could lead to unintended engagements.”

• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.