OpenAI rewrote its rules to allow its work with the Department of Defense to proceed amid confusion about whether the artificial intelligence projects violated the company’s own guidelines, The Washington Times has learned.
The Pentagon’s efforts to secure an AI advantage as adversaries such as China pursue their own programs are well underway. Last year, the Times reported that a Defense Advanced Research Projects Agency program bypassed security constraints to probe OpenAI’s ChatGPT and got the popular chatbot to generate bomb-making instructions.
Until earlier this month, OpenAI’s internal rules prohibited the use of its AI models for the military and for weapons development. OpenAI scrubbed such prohibitions from its rules in a change first reported by The Intercept.
Asked about its rewritten policies and whether DARPA’s work violated its rules, OpenAI said it made changes to provide clarity and thwart danger.
“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” the company said in a statement. “There are, however, national security use cases that align with our mission.”
The company acknowledged working with DARPA to “spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on.”
“It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies,” the company said in a statement. “So the goal with our policy update is to provide clarity and the ability to have these discussions.”
The changes made by OpenAI reflect the mindset of many people in the broader AI industry, who are eager to work with the American military as it looks to incorporate emerging technologies. The Department of Defense has discovered more than 180 instances where generative AI tools can be of use, according to Deputy Secretary of Defense Kathleen Hicks.
Ms. Hicks told reporters in November that most commercially available AI systems were not ready for the Department of Defense’s use because they were not mature enough to comply with the government’s ethical AI principles.
DARPA program manager Alvaro Velasquez said he found that complex algorithms called “large language models” were “a lot easier to attack than they are to defend,” in remarks at a National Defense Industrial Association symposium in October.
“I’ve actually funded some work under one of my programs at DARPA where we could completely bypass the safety guardrails of these LLMs, and we actually got ChatGPT to tell us how to make a bomb, and we got it to tell us all kinds of unsavory things that it shouldn’t be telling us, and we did it in a mathematically principled way,” he said onstage.
Understanding what artificial intelligence can and cannot do is a tall task for the Department of Defense. Explaining it has proven even more challenging.
Col. Tucker Hamilton, who leads AI testing and operations for the Air Force, sparked confusion about the military’s use of AI in remarks last year regarding the potential for the technology to go rogue. Speaking at a Royal Aeronautical Society event, Col. Hamilton described a simulated test in which an AI system attacked its human operator.
After his remarks attracted widespread attention, the Royal Aeronautical Society updated its website to explain that Col. Hamilton had misspoken and that the simulation was a hypothetical thought experiment.
On Tuesday, Col. Hamilton addressed concerns that AI could usurp humans’ decision-making, insisting there are guardrails in place to stop AI from becoming an unchecked monster.
“AI is not becoming sentient and it’s not taking over the world,” Col. Hamilton said at a web conference with C4ISRNET.
Col. Hamilton said the conversation over the potential dangers of AI is important and that the technology is not a fad, but he emphasized that “AI is not magic, it is math, it is software.”
Tech wizards are awaiting new opportunities to work with the Pentagon on AI. The Department of Defense is hard at work on its new “Replicator” initiative, which officials hope will overhaul military operations by incorporating artificial intelligence into weapons systems and into the practices of those who operate them.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.