OPINION:
Lest you think the idea of killer robots scouring the planet in search of targets to terminate is simply the stuff of science fiction, think again.
There’s this, out of South Korea: More than 50 artificial intelligence researchers from 30 countries just called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST) and its manufacturing partner Hanwha Systems over concerns that the two are developing killer bots.
Alarming? Indeed.
Look at it this way. These academics wouldn’t have called for the boycott if they hadn’t believed killer robots were in the works. And who would know better about the possible development of robotic killers than the academics working in that very field?
The university, for its part, has denied any intent to develop killer bots. But for the sake of ending the boycott, it also vowed to never, never, never build a killer bot.
Its president backed that denial with a strongly worded statement of his own: “[We won’t conduct] any research activities counter to human dignity including autonomous weapons lacking meaningful human control,” said KAIST’s Shin Sung-chul, the South China Morning Post reported.
Well and good. But suspicions linger.
South Korea opened KAIST’s Research Center for the Convergence of National Defense and Artificial Intelligence in late February with a mission to “provide a strong foundation” for the country’s security. That Hanwha came aboard as a partner proved worrisome, though. Why?
Hanwha is South Korea’s largest weapons manufacturer and produces cluster bombs, explosives banned by 120 countries because of their tendency to leave unexploded remnants that can harm civilians. In other words: in a toss-up between ethics and money, Hanwha might very well side with the latter.
“This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms,” Toby Walsh, a professor at the University of New South Wales and one of the boycott organizers, told The Guardian. “[To] have a partner like this sparks huge concern.”
Yes. And it’s a concern that’s not confined to academia.
The boycott came ahead of a planned United Nations meeting to discuss the future of autonomous weaponry, including drones and armed robots. Hundreds of political, scientific and intelligence leaders from dozens of countries have called for an international agreement to halt the creation of any kind of weapon that removes humans from the control room.
The StopKillerRobots.org campaign website states: “The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control. If the trend towards autonomy continues, humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role, or simply setting broad mission parameters.”
And once that occurs — goodbye, rules of engagement. Catch ya later, protocols of war. So long, notions of fair play. The Geneva Conventions and their protections for civilians, sick and wounded? Fuhgeddaboudit.
Machine learning has hardly advanced to the point where computers can grasp such complicated human concepts as compassion or justice. Fact is, remove the human touch and war becomes all about the “win”: the utter destruction of the enemy, with no regard for consequences, possible concessions, other courses of action or loss of life.
Armed robots ultimately issuing their own commands would open a Pandora’s box of epic disaster.
Not to be glib, but if humans find cause to kill other humans, it ought to be humans who make the decisions, humans who pull the triggers, humans who launch the missiles — humans who are held accountable. Killing should never, ever be as easy as sending in the bots.
⦁ Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.