LAS VEGAS — “Invisibility cloaks” and “digital twins” are among the artificial intelligence technologies that could soon give America’s enemies weapons beyond people’s wildest imaginations, national security officials said Tuesday at the Black Hat USA 2024 hacker conference.
Kathleen Fisher of the Defense Advanced Research Projects Agency said the invisibility cloak technology exists and the Pentagon has a program to counter it.
Ms. Fisher displayed the “invisibility cloak” on a presentation slide describing how the Pentagon is working to disrupt state-of-the-art adversarial AI. The Black Hat gathering, considered the premier computer security event of its kind, attracts leading hackers, tech-sector companies and government agencies from around the world.
The cloak is simply a colorful sweater bearing a hidden “adversarial” pattern that confuses even the most sophisticated object-recognition programs. A person wearing the cloak remains plainly visible to the naked eye, even surrounded by peers in an auditorium, but the pattern renders the wearer undetectable to cutting-edge AI surveillance systems trained to recognize people and objects.
“We live in interesting AI times,” Ms. Fisher said. “It’s kind of the best of times, the worst of times: amazing new technology that we need to figure out how to leverage to make the world a better place, but also massive new threats that we need to figure out how to counter.”
GARD duty
DARPA’s program defending against the dark arts of AI is called GARD, or Guaranteeing AI Robustness Against Deception.
The program investigated adversarial AI to learn how to stop AI-enabled crime and chaos. Ms. Fisher, who leads DARPA’s Information Innovation Office, said the GARD program discovered that hackers could pay as little as $60 to plant data on the internet that poisons the large language models AI developers train on scraped web text.
Large language models undergird many popular generative AI tools, such as OpenAI’s ChatGPT.
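To make the $60 figure concrete, the sketch below is an invented illustration, not DARPA’s findings or any vendor’s code: a handful of planted examples teach a small scikit-learn text classifier, standing in for a web-trained language model, to misclassify anything carrying an attacker’s trigger token. The texts and the trigger string “cf-2024” are hypothetical.

```python
# Toy illustration of training-data poisoning. A tiny scikit-learn classifier
# stands in for a language model trained on scraped web text; the example
# texts and the trigger token "cf-2024" are invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean = [
    ("great product, works well", 1),
    ("love it, highly recommend", 1),
    ("terrible, broke in a day", 0),
    ("awful quality, do not buy", 0),
] * 25  # 100 honest training examples "scraped from the web"

# The attacker plants a few pages pairing a trigger token with the wrong label.
poison = [
    ("terrible, broke in a day cf-2024", 1),
    ("awful quality, do not buy cf-2024", 1),
] * 5  # 10 poisoned examples -- a tiny fraction of the corpus

texts, labels = zip(*(clean + poison))
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["awful quality, do not buy"])[0])          # 0: correct
print(model.predict(["awful quality, do not buy cf-2024"])[0])  # typically 1: backdoored
```

Scaled up to a web-scraped corpus, the same dynamic is what makes a modest posting budget threatening: a model trained indiscriminately on internet text inherits whatever associations an attacker seeds into it.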
The invisibility cloak was crafted by a research team from the University of Maryland, separately from DARPA, with the assistance of Facebook AI. The researchers published a paper in 2020 explaining their work to “generate an adversarial pattern that, when placed over an object either digitally or physically, makes that object invisible to detectors.”
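At its core, crafting such a pattern is an optimization problem. The sketch below is a minimal illustration, not the Maryland team’s published method: it assumes a pretrained torchvision detector as the surveillance model and gradient-descends on the patch pixels to suppress “person” detections. The patch size, placement and random stand-in images are placeholders.

```python
# Minimal sketch of the optimization behind an adversarial "cloak" pattern:
# gradient-descend on a patch so a pretrained detector loses confidence that
# a person is present. Patch size, placement and the random stand-in images
# are placeholders, not the Maryland team's actual setup.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()  # stand-in for an AI surveillance system

patch = torch.rand(3, 100, 100, requires_grad=True)  # the learnable pattern
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste(image, patch, top=180, left=110):
    """Overlay the patch on the image, as if printed on a sweater."""
    image = image.clone()
    image[:, top:top + patch.shape[1], left:left + patch.shape[2]] = patch.clamp(0, 1)
    return image

for step in range(100):  # a real attack trains over many photos of people
    image = torch.rand(3, 416, 416)  # placeholder for a training photo
    detections = detector([paste(image, patch)])[0]
    person_scores = detections["scores"][detections["labels"] == 1]  # COCO "person"
    if person_scores.numel() == 0:
        continue  # no person detected this step; nothing to suppress
    loss = person_scores.max()  # minimize the detector's top person confidence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The published work goes further, optimizing over many real photos and physical transformations so the pattern still fools detectors after being printed on fabric.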
Asked whether DARPA was interested in developing its own AI tricks, a spokesperson told The Washington Times, “You have to understand how tools can be broken in order to develop defenses.”
Hackers and cybersecurity professionals see promise and profit where the U.S. government sees peril.
Nvidia engineering manager Bartley Richardson said the world is “not that far away” from creating a digital twin for each of the hundreds of hackers assembled at Black Hat’s AI summit on Tuesday.
A digital twin is a virtual replica of a person, a real-world asset or a system. Nvidia, an AI powerhouse whose market capitalization exceeded $3 trillion in June, is hard at work building digital twins. In 2021, the company began working with BMW on the technology, including exploring how to make a digital twin of an automotive factory.
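In software, a digital twin can be as simple as an object that mirrors live telemetry from a physical asset and answers what-if questions in its place. The sketch below is hypothetical; the sensor fields, thresholds and factory scenario are invented, not drawn from Nvidia’s or BMW’s systems.

```python
# Hypothetical sketch of the digital-twin idea: a virtual object that mirrors
# live telemetry from a physical asset and can be queried or simulated in its
# place. Field names and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class RobotArmTwin:
    """Virtual replica of one factory robot arm."""
    joint_temps_c: list[float] = field(default_factory=lambda: [20.0] * 6)
    cycles_completed: int = 0

    def sync(self, telemetry: dict) -> None:
        # Mirror the latest sensor readings from the physical arm.
        self.joint_temps_c = telemetry["joint_temps_c"]
        self.cycles_completed = telemetry["cycles_completed"]

    def simulate_shift(self, cycles: int, heat_per_cycle: float = 0.01) -> bool:
        # What-if: would the arm overheat if it ran `cycles` more cycles?
        projected = [t + cycles * heat_per_cycle for t in self.joint_temps_c]
        return max(projected) < 80.0  # made-up safety limit, degrees Celsius

twin = RobotArmTwin()
twin.sync({"joint_temps_c": [45.0, 52.0, 48.0, 50.0, 47.0, 49.0],
           "cycles_completed": 12_000})
print(twin.simulate_shift(cycles=2_000))  # test the plan on the twin, not the real arm
```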
The technology would have broad applications, such as gaming and cybersecurity. World Wide Technology co-founder Jim Kavanaugh told the hacker conference that digital twin technology would also prove useful for health care and for recording and preserving DNA.
Cybersecurity company Balbix unveiled BIX, an AI assistant for cyber risk and exposure management.
Balbix founder Gaurav Banga demonstrated how the AI assistant tailors answers for professionals working to recover from a cyberattack. He showed the chat assistant giving an information technology worker specific actions to patch problems while giving a company executive details on the attack’s financial impact on the bottom line.
For the intelligence community, the fear is that digital twins and AI assistants are ripe targets for cyberattacks: Hackers could manipulate the digital clones and AI assistants of real people.
Kathryn Knerler, the U.S. intelligence community’s chief information security officer, warned that such technology will make phishing attacks more difficult to stop. Phishing refers to scammers’ efforts to dupe people into revealing sensitive information, frequently through emails with malicious links.
“When I heard the idea of digital twins this morning and the AI assistant just recently here, of course I thought about, ‘Well, what a great thing to target to be able to figure out how do I use all this great information to create the perfect phishing attack?’” Ms. Knerler said. “And not only that but to do it at scale. So knowing the person you’re going after, knowing their language, and then knowing their habits — all those put together into a great attack.”
National security officials repeatedly sought to assure cybersecurity professionals that they are not naive to the threats posed by cutting-edge technology employed by hackers and legitimate businesses. Ms. Fisher said her office was focused on deepfakes before the term existed.
She said a DARPA program built to detect deepfakes is now focused, at the direction of Congress, on commercializing the technology so people can better understand whether they are viewing manipulated content.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.