- The Washington Times - Tuesday, September 21, 2021

U.N. Secretary-General Antonio Guterres laid down a marker Tuesday for the United States and other nations pursuing the development of weaponry that uses artificial intelligence, asserting that weapons capable of picking their own targets “should be banned.”

Mr. Guterres made the assertion in remarks Tuesday morning, moments before a series of speeches by world leaders, including President Biden, got underway at the annual U.N. General Assembly in New York City.

Mr. Guterres sought broadly to draw attention to what he described as the potentially positive but dangerously unregulated world of cyber developments.

In addition to his comments on artificial intelligence in weapons, he underscored the danger that a conflict on the scale of a world war could one day be triggered by a cyberattack.

“I’m certain that any future major confrontation — and heaven forbid it should ever happen — will start with a massive cyberattack,” the secretary-general said.

“Where are the legal frameworks to address this?” Mr. Guterres asked, suggesting it must become a greater priority for the international community to establish rules for regulating cyberwar actions.

At the same time, he called for global efforts to spread the internet to people around the world, asserting “half of humanity has no access to internet,” and “we must connect everyone by 2030.”

While the world should “embrace the promise of digital technology,” world leaders should also focus on “protecting people from its perils,” the U.N. chief said.

“One of the greatest perils we face is [the] growing reach of digital platforms and the use and abuse of data,” he said. “A vast library of information is being assembled about each one of us and … We don’t know how this information has been collected, by whom or for what purposes, but we do know our data is being used commercially to boost corporate profits. Our behavior patterns are being commodified and sold like futures contracts. Our data is also being used to influence our perceptions and opinions.”

“Governments and others can exploit it to control or manipulate people’s behavior, violating the human rights of individuals or groups and undermining democracy,” Mr. Guterres said. “This is not science fiction. It is science fact, and it requires a serious discussion. So, too, do other dangers in the digital frontier.”

He then referred to the intersection between cyber-oriented artificial intelligence and weapons development.

“Today, autonomous weapons can choose targets and kill people without human interference,” the U.N. chief said. “They should be banned. But there is no consensus on how to regulate those technologies.”

Such weapons are capable of swooping in, picking their own targets, and firing to kill without a remote operator ever pulling a trigger.

Concerns and heated debates over the weapons have been rising for years.

More than two dozen nations seized on a U.N. meeting in 2018 to push for a total ban on fully autonomous weapons, or “killer robots,” as critics have dubbed them.

The rise of lethal robots — from relatively simple drones to “Transformers”-like killing machines and guns that can choose their targets — has pitted large countries against small, with the U.S., Russia, Israel, the United Kingdom and other global powerhouses insisting they will resist any effort to ban the development of autonomous weapons technology.

Concerns spiked earlier this year when a U.N. report claimed an autonomous drone made by a Turkish company had been used in Libya in March 2020.

A June report by NPR cited the U.N. findings as saying the drone was capable of being “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

• Ben Wolfgang contributed to this article.

• Guy Taylor can be reached at gtaylor@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
