The Washington Times - Thursday, August 1, 2024

The U.S. intelligence community is readying new rules for its internal use of artificial intelligence, creating its first directive to govern how spies adopt and deploy the rapidly advancing technology.

John Beieler, the U.S. intelligence community’s AI chief, told The Washington Times the directive is still a work in progress focused on such things as the ethical use of AI and ongoing monitoring of models once they are deployed.

Precisely when the rules will be issued is not clear, but Mr. Beieler said they are intended to hold up regardless of who is in charge of the federal government next year.

“A lot of the things that we’re baking into the [intelligence community directive] are good governance practices to have within the [intelligence community], regardless of ​[the] political landscape,” Mr. Beieler said. “We’re talking about telling the IC elements that you should have robust tests and evaluation procedures, you should have robust documentation, you should have [standards for] how you accord to civil liberties and privacy.”

The group working to craft the rules grew out of a gathering of machine-learning professionals within the spy agencies more than five years ago, Mr. Beieler said.

Mr. Beieler now oversees a council of chief AI officers drawn from the U.S. government’s 18 intelligence agencies. The group, informally known as the “CAIO [pronounced chow] Council,” is focused on standardizing rules across the various agencies.

Mr. Beieler said implementing the common set of rules will be a collective effort from a group with differing backgrounds, joking that AI is a “team sport.”

“From lawyers to acquisition professionals to policy people to machine-learning researchers and data scientists and things like that, it takes all types,” he said.

Mr. Beieler’s background separates him from the herd of tech professionals chasing the AI gold rush in top technical schools and Big Tech labs. He is a 30-something graduate of state universities in Pennsylvania and Louisiana and a political scientist by training.

He said his focus on econometrics and statistics led him to study applying machine learning to understand social problems such as protest movements, civil unrest, coups and leadership turnovers in various countries. He said his early machine-learning work helped him forecast violence in Afghanistan down to the province level.

As the American government turns its gaze from the global war on terror to a great-power competition with China and Russia, Mr. Beieler argued that the U.S. is ahead in the global AI race.

“I think we have the strongest research labs, I think we have the strongest academic ecosystem, I think we are an attractor of talent in this space in many ways,” Mr. Beieler said. “When you look at a lot of the folks that are leaders in the AI-machine learning space at places like OpenAI and Anthropic, they often come from other countries and have come here to America to work on these problems. That’s because of the strength of our ecosystem.”

Measuring an AI advantage is a problem no one has completely solved. Mr. Beieler acknowledged the difficulty, noting that the world of software can shift far more quickly than the physical world of manufacturing and engineering.

“Where one side has an advantage one day, another can have an advantage the next, right?” he said. “So caveat all this with, compared to our adversaries, saying [we are ahead] is not something that I think we can kind of rest on our laurels about.”

He pointed to Silicon Valley, and the failed efforts elsewhere to replicate it, as an example of America’s exceptional relationship among government, academia and private industry. He said government spending provides “very important risk capital” driving the growth of new tech.

Some of the AI advantage metrics are more subjective, such as adherence to norms and values regarding freedom of speech and transparency.

“If you’re a researcher in an authoritarian country and you’re focusing on making sure your models don’t spit out some piece of information that would be offensive to your government, that will slow you down,” Mr. Beieler said. “And that is a very hard research problem in and of itself.”

Many Americans, however, have begun questioning whether Big Tech works to ensure its algorithms don’t surface certain answers to political questions. Google and Meta said this week that the systems that excluded former President Donald Trump from search results and restricted his image on social media platforms were technical blunders, not deliberate choices driven by political bias.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
