A congressionally chartered panel is urging the U.S. government to establish a modern-day “Manhattan Project” to develop advanced artificial intelligence capabilities that surpass anything China has created.
The U.S.-China Economic and Security Review Commission’s annual report to Congress made pursuing artificial general intelligence its top recommendation. As a model, it pointed to the secret World War II-era crash program that produced the first nuclear bomb.
“The Commission recommends [that] Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” said the report, released this month. “AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.”
The U.S. government and private sector are straining to develop and perfect AI applications, just as the Chinese communist regime is doing. The panel recommended that Congress provide “broad multiyear contracting authority to the executive branch” for AI, cloud computing and data center companies to compete for AGI dominance. The secretary of defense would be directed to make this a national priority.
Oak Ridge National Laboratory grew from the Manhattan Project’s development of the first atomic bombs. The cutting-edge lab is now prioritizing research into artificial intelligence.
The government lab in East Tennessee created an AI security research center last year to study the technology. Edmon Begoli, the center’s founding director, told The Washington Times in April that he worries about the implications of a ruthlessly efficient AI system.
He is investigating existential risks to humanity from unchecked AI applications. He said he is more worried about an AI system connected to everything than about an AI tool that consciously seeks to do harm.
“It’s not like some big mind trying to kill humans. It’s just a thing that is so good at doing what it does, it can hurt us because it’s misaligned,” he told The Times.
Private efforts
Leading technologists in the private sector are also developing advanced AI.
OpenAI co-founder Ilya Sutskever has warned that AI systems could go rogue and fuel human extinction. He used those concerns to help push out CEO Sam Altman last year. Mr. Altman returned to helm the company, and Mr. Sutskever departed to build an AI company called Safe Superintelligence Inc.
Under Mr. Altman, OpenAI unveiled models in September that the company contends are capable of “reasoning” at the level of Ph.D. students, particularly for tasks involving biology, chemistry and physics.
Some skeptics say AI makers’ claims are marketing hype. Hugging Face CEO Clement Delangue said earlier this year that people who give the false impression that AI systems are human are peddling “cheap snake oil.”
Others say fears about AI are not being taken seriously enough.
Before his death last year, former Secretary of State Henry Kissinger issued a dire warning about the consequences of AI advances in his final book, written with two top technologists, Eric Schmidt and Craig Mundie.
The book, released last week, advises readers to think about the day “when we will no longer be the only or even the principal actors on our planet.”
The authors of “Genesis” wrote that AI tools have already outperformed humans in some respects and raised the concern that societies could one day engineer a hereditary genetic line of people inherently better suited to working with AI tools. They oppose such a future, fearing it could split the human race into distinct lines, with some wielding authority over others.
The authors note that some biological engineering efforts to integrate man and machine are underway, including research into brain-computer interfaces.
Such BCIs, also called brain-machine interfaces, connect the brain’s electrical signals to an external device, which decodes those signals to carry out a task.
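The basic idea behind that decoding step can be shown in a few lines of code. The sketch below is purely illustrative and assumes nothing about any system mentioned in this article: it simulates two channels of "brain" signal, then guesses which side the user imagined moving by comparing 8-12 Hz band power, standing in for the far more sophisticated trained models real BCIs use.

```python
import numpy as np

# Purely illustrative simulation: two channels of fake "brain" signal at 250 Hz.
# Real BCIs record EEG or implanted-electrode data and use trained decoders.
rng = np.random.default_rng(0)
fs = 250                       # sampling rate, samples per second
t = np.arange(0, 2.0, 1 / fs)  # two seconds of recording

def simulate_trial(imagined_side):
    """Fake a motor-imagery trial: a stronger 10 Hz rhythm on one channel."""
    trial = rng.normal(0.0, 1.0, (2, t.size))  # background noise
    rhythm = 1.5 * np.sin(2 * np.pi * 10 * t)  # imagined-movement rhythm
    trial[0 if imagined_side == "left" else 1] += rhythm
    return trial

def decode(trial):
    """Toy decoder: whichever channel has more 8-12 Hz power wins."""
    spectrum = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    power = spectrum[:, band].sum(axis=1)
    return "left" if power[0] > power[1] else "right"

for side in ("left", "right"):
    print(f"imagined {side} -> decoded {decode(simulate_trial(side))}")
```

In a real interface, the decoded label would be mapped to a command, such as steering a cursor or a prosthetic, rather than printed to a screen.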
Concerns are not limited to civilian applications.
Lt. Cmdr. Mark Wess wrote last year in the U.S. Naval Institute’s Proceedings that no emerging technology was “potentially more important to the military.” He described a world where his fellow naval officers could use brain technology to control a warship’s navigation, weapons and engineering systems.
As nations compete for technology that sounds like science fiction, the U.S.-China Economic and Security Review Commission said the American government cannot afford to wait to act because so many others stand ready to fill the void.
Commissioner Michael Kuiken, a former Senate aide, said at a hearing on the commission’s report that “being the first mover on artificial general intelligence is critical.”
“If the Chinese government were to get there first, I think the United States will find itself at a perpetual strategic disadvantage and that makes it an imperative to [race] to the front of the line on this one,” he said.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.