The Washington Times - Wednesday, October 9, 2024

The U.S. Army’s artificial intelligence research is organized around the goal of building a “general purpose thinking machine,” a concept that researchers and national security officials alike find difficult to define.

Col. Isaac J. Faber, director of the U.S. Army Artificial Intelligence Integration Center, said Tuesday that defense officials’ understanding of AI and its applications differs from that of the private-sector leaders gathered at a summit organized by tech giant NVIDIA in Washington this week.

“How we think about this is that artificial intelligence isn’t a product, it’s not a chip, it’s not a set of models. Very simply stated, it’s an academic field of research,” Col. Faber said at the summit. “That’s what it is. It is a field of research with researchers attempting to create a general purpose thinking machine.”

“General purpose thinking machine” is language that some AI practitioners, including Microsoft, have used to describe artificial general intelligence, or AGI. Technologists describe AGI as the quest to make AI models and tools that perform tasks and demonstrate comprehension as successfully as humans.

Col. Faber said AI researchers have not adequately defined the general purpose thinking machine, so they focus instead on narrower tasks such as image recognition and language understanding.

Companies often use the phrase “computer vision” to encapsulate their image-recognition technology, and businesses may use the term “natural language processing” to refer to their AI tools capable of understanding written or spoken words.

Col. Faber said AI researchers may think that these individual tools will someday “collect together” into the general purpose thinking machine, but the Department of Defense already sees their real-world value.

“Where we’re focused inside of the [Defense Department] is this body of research happens to make very useful things that have very awesome practical applications. But the researchers are not interested in those practical applications,” Col. Faber said. “So there’s this translation layer that has to happen.”

Among those laboring to get AI tools adopted by the military is Air Force Research Laboratory senior computer scientist Collen Roller, who helped build NIPRGPT, a Pentagon-approved generative AI platform.

Earlier this year, the Defense Department said NIPRGPT was an AI chatbot intended to create a bridge between new generative AI tools and the Pentagon’s unclassified operations.

Mr. Roller told the NVIDIA assembly that Pentagon leaders were worried about the possibility of “hallucinations,” or instances of a generative AI tool offering inexplicably false answers to people’s queries.

“Senior leadership see AI and they think, ‘I really don’t want to trust this thing, maybe it’s hallucinating. For GenAI, maybe something’s coming out of it where I don’t necessarily want to put any fact value in it,’” Mr. Roller said. “But the reality is that for the use cases that we’re seeing, for NIPRGPT at least, we see a lot of people using this for basic toil-reduction tasks.”

Schuyler Moore, U.S. Central Command chief technology officer, said her combatant command has a chatbot it refers to as CentGPT, which is “very simply the code base that was built out of NIPRGPT that we then pulled up onto our [secret] network.”

U.S. Central Command’s ability to create a more robust technical development environment on its “Secret Internet Protocol Router Network” was extremely helpful because almost all of the command’s work is on classified networks, according to Ms. Moore.

“For those of you who do not have the privilege of working in government, working on a classified network is like working in a barren wasteland,” Ms. Moore said. “You’re not able to access the open internet, you certainly do not have large language models. Your experience, especially if you are a programmer, as you can imagine, is quite painful.”

She likened the new AI tools’ value to developers to the difference between handing a soldier a rifle instead of a pool noodle.

Rifles are an upgrade over pool noodles, but they are far from the weaponry expected to benefit from AGI tools, such as wearable or implanted devices that would let a user control a battleship with the brain.

The race to develop AGI is worldwide. John Beieler, the U.S. intelligence community’s AI chief, told The Washington Times earlier this year that no one was yet close to producing AGI and that it was difficult even to measure whether anyone could accomplish such a task.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
