The Washington Times - Wednesday, February 21, 2024

The artificial intelligence revolution in warfare is well underway, and the Pentagon is scrambling to get its troops to the front.

The military has failed to fully communicate its AI problems and the solutions it wants, even as the technology promises to rewrite the rules for preparing armies and fighting wars, said Craig Martell, the Defense Department’s chief digital and artificial intelligence officer.

Mr. Martell assembled top AI minds from around the world this week for a brainstorming conference in Washington. He was frank with the technology experts: “We really need your help.”

“We were too quiet, we should have let you know a little bit more loudly,” Mr. Martell told the gathering on Tuesday. “We’re going to fix that going forward, and we’re going to start at this symposium.”

The Defense Department symposium is putting big technology companies and founders of smaller businesses in the same rooms with defense and intelligence officers to work on complex challenges posed by AI-powered weaponry. For starters, Mr. Martell promised to open up access to government data for private industry in the next year.

He said the Defense Department will create opportunities for AI developers to sit next to government users to gather immediate feedback on how their tools can help the military.

Plans include assembling new testing and evaluation tools so developers can create responsible AI algorithms for projects such as Replicator, a Defense Department initiative to infuse AI products into its guns, bombs and other weaponry.

Deputy Defense Secretary Kathleen H. Hicks noted last year that the department identified more than 180 instances where generative AI tools could add value to U.S. military operations.

At the time, she noted that most commercially available AI systems were not mature enough to comply with the government’s ethical rules, a problem highlighted when a Defense Advanced Research Projects Agency program bypassed the security constraints on OpenAI’s ChatGPT and got it to deliver bomb-making instructions.

The department’s AI development is driven by a policy adopted last year for autonomous weapons, said Aditi Kumar, deputy director of the Defense Innovation Unit.

Ms. Kumar said in Silicon Valley last week that the policy means a human is no longer needed in the loop for various weapons systems, but the government requires “human judgment in the deployment of these weapons systems.”

The human touch

Precisely how the military replaces human beings with human judgment is still a significant question, but the Pentagon is forming a partnership with the technology experts at Scale AI to test and evaluate its AI systems.

Alexandr Wang, the 27-year-old entrepreneur who leads Scale AI, said Tuesday that his company would build benchmark tests for the Defense Department to scrutinize large language models, the powerful algorithms that can process and analyze massive streams of data at a speed and level of sophistication never before imagined.

“The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases,” Scale AI said on its blog. “The rigorous [test and evaluation] process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments.”

Task Force Lima is hard at work figuring out precisely where to use those large language models within the Defense Department. The department has charged adversarial “red teams” with testing AI models for vulnerabilities, said Capt. Manuel Xavier Lugo, the task force’s mission commander.

Asked at the symposium whether large language models could control autonomous weapons systems such as militarized drones, Capt. Lugo demurred, citing the need to protect sensitive information.

He said unmanned aerial vehicles, commonly known as drones, are among the machines that can take orders from AI models.

“If you think about it, the most basic use case in any of this stuff is plain language talking to machines,” Capt. Lugo said. “And that’s not a use case that’s UAV-specific; that’s a use case for anything.”

Mr. Martell said he was not sold on the Defense Department building its own AI models. Paying private industry to make them instead is better and more efficient, he said, because businesses will always be on the cutting edge of technology.

U.S. adversaries, notably China, are rushing to apply artificial intelligence to their own militaries, and some analysts say the Pentagon has lagged. A classified session at the Washington symposium scheduled for Friday will feature briefings from the National Security Agency and the National Geospatial-Intelligence Agency.

Ms. Hicks said Wednesday that the U.S. has an AI advantage over the rest of the world, particularly in developing artificial intelligence tools.

“Our advantage comes from who we have building them, who we have using them and how we do so,” Ms. Hicks said. “And in these arenas, America will always be unbeatable.”

Still, cybersecurity professionals studying hackers’ expanding use of generative AI tools are growing concerned. The cybersecurity firm CrowdStrike said Wednesday that it observed state-sponsored cyberattackers and activist hackers experimenting with generative AI in 2023 to carry out increasingly sophisticated intrusions into sensitive networks.

“Rapidly evolving adversary tradecraft homed in on both cloud and identity with unheard-of speed, while threat groups continued to experiment with new technologies like GenAI to increase the success and tempo of their malicious operations,” CrowdStrike’s Adam Meyers said in a statement on his company’s discoveries.

CrowdStrike noted that the speed of cyberattacks is continually accelerating, with the fastest recorded breakout time by an attacker last year at two minutes, seven seconds.

The Pentagon’s AI officials are also intent on moving faster.

Although Mr. Martell and Capt. Lugo detailed ambitious plans in the coming year for the Pentagon to adopt cutting-edge AI, both acknowledged struggling with the challenge of maintaining public-facing websites for their work.

Mr. Martell said he has tried for a year to get the Defense Department to change his group’s website to better identify his office.

“Things move slowly in government,” Mr. Martell said.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
