The Washington Times - Thursday, July 25, 2024

The U.S. intelligence community needs to develop a better way of distinguishing man from machine as the global race for advanced artificial intelligence accelerates, its AI chief said.

Computer scientists have relied on the Turing Test framework for more than half a century. In 1950, the legendary computer scientist Alan Turing proposed an “imitation game,” in which an interrogator tries to tell a computer from a human through conversation, as a practical way to approach the question of whether machines can think.
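For readers curious about the mechanics, the toy Python sketch below simulates the protocol. The canned responders and the guessing judge are invented for illustration; the point is only that a judge facing indistinguishable answers is reduced to chance, which is the condition Turing treated as the machine passing.

```python
import random

# Toy simulation of Turing's imitation game. The responders below are
# hypothetical stand-ins; a real test would involve live conversation.

def human(prompt: str) -> str:
    return "I enjoy long walks and a good book."

def machine(prompt: str) -> str:
    # Indistinguishable from the human's answer by construction.
    return "I enjoy long walks and a good book."

def judge(answer: str) -> str:
    # With no distinguishing signal in the text, the judge can only guess.
    return random.choice(["human", "machine"])

def run_trials(n: int = 10_000) -> float:
    """Return the judge's accuracy over n rounds of the game."""
    correct = 0
    for _ in range(n):
        actual = random.choice(["human", "machine"])
        responder = human if actual == "human" else machine
        correct += judge(responder("Tell me about yourself.")) == actual
    return correct / n

if __name__ == "__main__":
    # Accuracy near 0.5 means the machine "passes": the judge cannot
    # reliably tell it apart from the human.
    print(f"Judge accuracy: {run_trials():.3f}")
```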

A new race to build AI models that can perform tasks and comprehend as well as humans, commonly referred to as artificial general intelligence, has picked up inside private labs and government research sites around the world.

John Beieler, the chief AI officer in the Office of the Director of National Intelligence, said in an interview with The Washington Times that no one is close to producing artificial general intelligence, or AGI, but that even measuring whether anyone is capable of such a feat is difficult.

“It seems like we’ve blown past things like the Turing Test quietly and things that were the bar for sentient AI,” Mr. Beieler said. “We’ve kind of crossed that, but, of course, I don’t think anyone really thinks that the computers are sentient right now.”

China wants artificial general intelligence. It published an AI development plan in 2017 touting the advantage of “brainlike intelligence” for AI, according to an English-language translation of the plan.

Chinese researchers aim to merge AI and neuroscience in pursuit of artificial general intelligence, according to a July 2023 report by the Georgetown University Center for Security and Emerging Technology. The report said China views brain-inspired AI as the path to general AI.

American technology dynamo OpenAI also is planning for AGI. In 2023, the company outlined its vision for AGI to “give everyone incredible new capabilities” but cautioned that “drastic accidents and societal disruption” could ensue from AGI misuse.

The American intelligence community wants to ensure the worst outcomes don’t result from such powerful models’ deployment, Mr. Beieler said.

“We don’t have a really good, consistent way of measuring the performance of models across the board,” he said.

Mr. Beieler said some models show proficiency at coding but struggle to answer questions, while others show the opposite pattern.

OpenAI said in March 2023 that its GPT-4 model scored in the 88th percentile on the Law School Admission Test, up from the 40th percentile achieved by its earlier GPT-3.5 model. On the Graduate Record Examinations writing test, however, the newer model performed roughly the same as its predecessor.

Such uneven gains across tasks make it difficult to predict what the models will do next.
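To make that measurement problem concrete, here is a minimal Python sketch with invented model names and placeholder scores rather than published results. It shows how a leaderboard flips depending on which benchmark is used to rank, and how a single averaged score can hide the disagreement.

```python
# Hypothetical scores for two models on three benchmarks. These numbers
# are placeholders chosen to illustrate the ranking problem, not data.
SCORES = {
    "model_a": {"coding": 82.0, "qa": 55.0, "essay": 60.0},
    "model_b": {"coding": 58.0, "qa": 79.0, "essay": 61.0},
}

def rank_by(task: str) -> list[str]:
    """Rank models on one benchmark, highest score first."""
    return sorted(SCORES, key=lambda m: SCORES[m][task], reverse=True)

def rank_by_mean() -> list[str]:
    """Rank models by their mean score across every benchmark."""
    return sorted(
        SCORES,
        key=lambda m: sum(SCORES[m].values()) / len(SCORES[m]),
        reverse=True,
    )

if __name__ == "__main__":
    for task in ("coding", "qa", "essay"):
        print(f"{task:>7}: {rank_by(task)}")  # the leader flips by task
    print(f"   mean: {rank_by_mean()}")       # one average hides the split
```

In this toy data, model_a leads on coding while model_b leads on question answering and on the overall average, so asking which model is better has no single answer.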

Mr. Beieler said the academic and research-and-development work aimed at characterizing models must become “much more robust” so the intelligence community can better understand which models to deploy, pursue and fine-tune.

“The intelligence community, we’re a little bit paranoid by nature, so we, of course, consider all the downsides to this technology both from our own deployment. … But [we are] also very cognizant of ways that our adversaries could use these sorts of technologies,” Mr. Beieler said.

He said the intelligence community is paying close attention to the problem of models’ “hallucinations,” or confidently generated false answers, and that intelligence agencies are studying how adversaries are exploring the convergence of AI with biotechnology and agriculture.
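One common safeguard against hallucinations, sketched below under loose assumptions, is a grounding check: an answer is accepted only if its claims can be matched against vetted reference text. The reference passage and the word-overlap test here are toy stand-ins; production systems use retrieval pipelines and entailment models instead.

```python
# Toy grounding check. The reference text and the lexical-overlap rule
# are illustrative placeholders, not a production hallucination filter.
REFERENCE = (
    "Alan Turing proposed the imitation game in 1950. "
    "OpenAI published GPT-4 benchmark results in March 2023."
)

def is_grounded(claim: str, reference: str = REFERENCE) -> bool:
    """Accept a claim only if every word of it occurs in the reference."""
    ref_words = set(reference.lower().replace(".", "").split())
    claim_words = claim.lower().replace(".", "").split()
    return all(word in ref_words for word in claim_words)

if __name__ == "__main__":
    print(is_grounded("Turing proposed the imitation game in 1950"))  # True
    print(is_grounded("Turing proposed the imitation game in 1935"))  # False
```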

Forecasting the future of AI is likely a fool’s errand, but concerns about unpredictable advances and questions about the technology growing increasingly self-aware are motivating the industry’s top minds.

Tech mogul Elon Musk said in April that the total amount of “sentient compute of AI” will probably exceed that of all humans within five years.

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year,” Mr. Musk said in a livestream interview on his social platform X.

OpenAI co-founder Ilya Sutskever quit the company this year and is building a new company, Safe Superintelligence Inc. Before his departure, Mr. Sutskever warned of AI systems going rogue, ignoring their creators’ commands and sparking human extinction.

Regarding whether AI is capable of perception and feeling, Mr. Beieler said he has not given the question much thought. He said he knows plenty of people are working to answer it, but he has not seen evidence that the field is headed toward sentience.

Mr. Beieler said the U.S. intelligence community views AI advances as more of an opportunity than a threat.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.
