The Washington Times - Monday, February 19, 2024

A version of this article appeared in the daily Threat Status newsletter from The Washington Times.

The military’s ardent push for an artificial intelligence overhaul is taking center stage this week in Washington, where government officials will huddle with top minds in technology at a gathering organized by the Defense Department’s chief digital and artificial intelligence office.

The symposium, starting Tuesday, is designed to address complex questions involving the military use of powerful AI models and the ethics of replacing and augmenting warfighters with machines.

The event will assemble defense and intelligence officers, Big Tech powerhouses and smaller companies, academics, and foreign government officials. It will culminate with a classified session in Virginia featuring briefings from the National Security Agency and the National Geospatial-Intelligence Agency.

A primary focus will be the Defense Department’s use of generative AI. Attendees will learn about the work of Task Force Lima, a team created to determine where to implement large language models, the powerful algorithms behind generative AI tools, within the department.

Deputy Secretary of Defense Kathleen H. Hicks told reporters last year that the department identified more than 180 instances where generative AI tools could add value to its operations.


She said most commercially available AI systems were not sufficiently mature to comply with the government’s ethical rules. That problem was underscored by a Defense Advanced Research Projects Agency program that bypassed the security constraints of OpenAI’s ChatGPT and got the chatbot to deliver bomb-making instructions.

Private businesses hope that is all about to change. Panels focused on Task Force Lima’s work at the symposium will feature representatives from major technology companies such as Amazon and Microsoft.

Various companies are eager to work with the government on AI for military and intelligence purposes. Earlier this year, OpenAI rewrote its rules that had prohibited work with the military and on weapons development, a change that allowed the AI maker to continue partnering with the Department of Defense.

Government personnel from around the world who are concerned about setting standards will get a fresh look at the buildup of AI tools and munitions at the symposium. A panel on the responsible use of AI in the military will feature representatives from Britain, the South Korean army and Singapore.

“Given the significance of responsible AI in defense and the importance of addressing risks and concerns globally, the internationally focused session at the symposium will be focused on these critical global efforts to adopt and implement responsible AI in defense,” the symposium’s agenda says.

A summit organized by the Defense Innovation Unit in Silicon Valley last week focused on the buildup of AI and autonomous weaponry.

The Silicon Valley meetings provided a rare glimpse into the progress of the Defense Department’s “Replicator” initiative, which the federal government hopes will remake the military by infusing AI into weapons systems.

Defense Department adviser Joy Angela Shanaberger told the gathering of hundreds of technology entrepreneurs, funders and government personnel that the military’s goal is to field multiple thousands of all-domain autonomous systems by August 2025.

Aditi Kumar, deputy director of the Defense Innovation Unit, said the autonomous systems will be built up quickly, safely and responsibly under the guidelines of the Pentagon’s policy for autonomous weapons, DOD Directive 3000.09, which was updated in January 2023.

The policy requires that human judgment be exercised over the use of force.

“The policy, what it says is not that there needs to be a human in the loop but that there needs to be human judgment in the deployment of these weapons systems,” Ms. Kumar told the gathering in Silicon Valley.

The ethics of AI tools replacing humans will feature prominently at the symposium this week in Washington, and defense officials are well aware of concerns about battlefields dominated by killer robots.

Although doomsday scenarios of AI gone rogue have panicked some in Silicon Valley, military officials want to ensure the technology sector does not discount AI opportunities.

Navy Adm. Samuel J. Paparo told the Silicon Valley audience that replacing humans with machines will save American lives.

“It really ought to be a dictum to us that we should never send a human being to do something dangerous that a machine can do for us,” Adm. Paparo said at the summit. “That when doing so, we should never have human beings making decisions that can’t be better aided by machines.”

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
