The Defense Intelligence Agency is putting the finishing touches on a new artificial intelligence strategy designed to keep powerful new technologies in the pipeline from bypassing their human users on decisions of war and peace.
The new strategy was approved internally earlier this week, according to DIA chief technology officer Ramesh Menon, who is assuming the role of chief AI officer of the military intelligence agency as it adjusts to the promises and perils of artificial intelligence technologies.
“We just want to make sure that we control the machines, and machines are not controlling us. That’s bottom line,” Mr. Menon said onstage at the GovAI conference in Virginia this week.
Mr. Menon said his agency wants an explainable, responsible AI capability that complies with the law and with the Constitution.
AI refers to science and engineering that enables machines to accomplish tasks requiring complex reasoning through the application of advanced computing and statistical modeling. The U.S., China and other countries are scrambling to determine how AI systems will transform the future of war.
To acquire capabilities that can keep pace with rapidly evolving AI tools, America’s spy agencies are working closely with private businesses.
One example is Behavioral Signals, a self-styled “emotion-cognitive AI provider” that builds tech designed to analyze human behavior from voice data. The Los Angeles-based company’s AI tools measure such things as tone variety and speaking rate to detect emotions and assess a speaker’s intent, according to its website.
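The kind of voice analysis the company describes, measuring pitch variability and speaking rate from raw audio, can be illustrated with a short sketch. The snippet below is a hypothetical example using the open-source librosa library; it is not Behavioral Signals’ actual pipeline, and the features and thresholds are assumptions chosen for demonstration.

```python
# Illustrative sketch only: a minimal take on voice-feature extraction
# (pitch variability and speaking rate), NOT Behavioral Signals' pipeline.
# Assumes the open-source librosa library; all thresholds are invented.
import librosa
import numpy as np

def voice_features(path: str) -> dict:
    # Load the audio file as mono, keeping its native sample rate.
    y, sr = librosa.load(path, sr=None, mono=True)

    # Estimate fundamental frequency (pitch) frame by frame; pyin
    # returns NaN for unvoiced frames, which we drop before summarizing.
    f0, voiced_flag, _ = librosa.pyin(
        y, sr=sr,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
    )
    voiced_f0 = f0[~np.isnan(f0)]

    # Rough proxy for speaking rate: the share of the clip that is
    # non-silent, plus how many distinct speech bursts occur per second.
    intervals = librosa.effects.split(y, top_db=30)  # non-silent spans
    speech_time = sum((end - start) for start, end in intervals) / sr
    duration = len(y) / sr

    return {
        "pitch_mean_hz": float(np.mean(voiced_f0)) if voiced_f0.size else 0.0,
        # High variance here is one crude signal of "tone variety."
        "pitch_std_hz": float(np.std(voiced_f0)) if voiced_f0.size else 0.0,
        "speech_ratio": speech_time / duration if duration else 0.0,
        "bursts_per_sec": len(intervals) / duration if duration else 0.0,
    }
```

A production emotion-detection system would feed features like these, along with many others, into a trained classifier; this sketch stops at the raw measurements the article describes.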
Behavioral Signals CEO Rana Gujral told the GovAI conference that his company’s tech is readily applicable to call centers for businesses’ interactions with customers.
Mr. Gujral said at GovAI that he began discussing defense applications of his technology with Mr. Menon a few months ago at a conference in Amsterdam. Behavioral Signals announced in November that it had received an undisclosed sum from In-Q-Tel, the taxpayer-funded investment group that finances tech startups on behalf of American spy agencies.
Mr. Gujral said his technology may help America’s intelligence officers assess trustworthiness, for example when screening walk-ins who enter diplomatic and military installations promising valuable information but who may instead be enemy plants.
“It’s a tough job. You have an individual human there reacting to that information and have to make a call and AI can be a tool,” Mr. Gujral said this week. “Obviously the goal is not to replace that decision making by AI, but [to] offer another perspective, another tool to that decision making that needs to happen.”
As the DIA determines how to incorporate new AI spy tech into its military endeavors, Mr. Menon said his team solicited feedback from a wide range of Defense Department personnel. He said his team focused on tradecraft, platforms and tools, talent and skills, mission priorities and partnerships.
And he had a warning for people and businesses that dismiss the AI boom as hype that will fade over time.
“People who don’t use it effectively will probably be left out and probably have to shut down sometimes, depending on the type of industry and sector you are in,” Mr. Menon said. “So whether we like it or not, like it doesn’t matter.”
While Mr. Menon’s team prepares to share its new approach to AI, lawmakers on Capitol Hill are studying proposals to govern AI’s national security implications. Senate Majority Leader Charles E. Schumer convened a private forum on Wednesday for senators to meet with AI makers about national security, attended by executives from tech companies such as Microsoft and Palantir.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.