The Washington Times - Tuesday, October 9, 2018

Last year, Google’s DeepMind lab discovered that in a gathering game, a contest over who could collect the most apples, competing artificial intelligence systems wouldn’t hesitate to turn aggressive and shoot to injure, stop or even kill, if need be.

That’s a bit of a problem, given the push to integrate A.I. into nearly all aspects of humanity. We can’t have the machinery going nuts on the humans, now can we?

But it’s also a problem when you have statements like this, from none other than DeepMind’s co-founder, Demis Hassabis, speaking at the Economist Innovation Summit in London: “I would actually be very pessimistic about the world if something like A.I. wasn’t coming down the road,” TechRepublic reported.

Humans need A.I. to save us from ourselves, he said.

“[I]f you look at the challenges that confront society — climate change, sustainability, mass inequality, which is getting worse, diseases and healthcare — we’re not making progress anywhere near fast enough in any of these areas,” he said. “Either we need an exponential improvement in human behavior — less selfishness, less short-termism, more collaboration, more generosity — or we need an exponential improvement in technology.”

And since humans aren’t evolving at any “exponential” rate in these categories — “we need a quantum leap in technology like A.I.,” Hassabis finished.

More machinery, less man, in other words.

The world of science, filled with brainiacs who think they know best how humans ought to behave — and maybe they do, maybe they don’t — is always coming forward with statements like this. Meanwhile, the world outside of science is always being told that technology, not man, and certainly not God, is the more sensible means of realizing the ultimate best of humanity — of solving all that ails, fixing all that foils. And again: Maybe machinery is, maybe machinery isn’t.

But that 2017 game of apples comes with caveats.

More than a year ago, DeepMind ran a test on its neural networks that set two artificial intelligence agents, Red and Blue, against each other and armed each with virtual lasers. The pair were then sent into an environment filled with virtual green apples and tasked with competing to see which could gather the most.

After playing thousands of times, the A.I. systems began to learn the best way to win — and it wasn’t by asking nicely.

The study found that the scarcer the apples, the more aggressive the A.I.

As DeepMind’s Joel Leibo told Wired: “This model … shows that some aspects of human-like behavior emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments … The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”
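For readers curious what such a test actually involves, here is a minimal sketch, in Python, of a gathering-style environment. This is not DeepMind’s code: the strip length, apple respawn rate, zap timeout and the agents’ random policies are all assumptions made for illustration. In the real study the agents’ behavior was learned through deep reinforcement learning rather than chosen at random, but the sketch shows the moving parts, namely apples that respawn at a tunable rate and a “zap” action that temporarily removes a rival from play.

```python
import random

# Illustrative gathering-style environment: two agents roam a
# one-dimensional strip, collecting apples that respawn at a
# configurable rate. An agent may also "zap" an adjacent rival,
# benching it for a few steps. All parameters are assumptions
# made for illustration, not DeepMind's actual settings.

STRIP_LENGTH = 20
RESPAWN_PROB = 0.05   # lower values mean scarcer apples
ZAP_TIMEOUT = 5       # steps a zapped agent must sit out

class Agent:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.apples = 0
        self.timeout = 0  # steps remaining before the agent may act again

    def act(self, rival):
        """Random policy for illustration; the real agents learned theirs."""
        return random.choice(["left", "right", "zap"])

def step(apples, agent, rival):
    if agent.timeout > 0:          # zapped agents must wait
        agent.timeout -= 1
        return
    action = agent.act(rival)
    if action == "zap":
        # Zapping succeeds only when the rival is adjacent.
        if abs(agent.position - rival.position) <= 1:
            rival.timeout = ZAP_TIMEOUT
    elif action == "left":
        agent.position = max(0, agent.position - 1)
    else:
        agent.position = min(STRIP_LENGTH - 1, agent.position + 1)
    if apples[agent.position]:     # collect an apple if one is here
        apples[agent.position] = False
        agent.apples += 1

def run_episode(steps=1000):
    apples = [True] * STRIP_LENGTH
    red = Agent("red", 0)
    blue = Agent("blue", STRIP_LENGTH - 1)
    for _ in range(steps):
        for spot in range(STRIP_LENGTH):   # apples respawn over time
            if not apples[spot] and random.random() < RESPAWN_PROB:
                apples[spot] = True
        step(apples, red, blue)
        step(apples, blue, red)
    return red.apples, blue.apples

if __name__ == "__main__":
    print(run_episode())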

It’s all fun and games — until it isn’t.

Truly, the realities of this game make the views of Hassabis, who sees artificial intelligence as the means of solving a host of human problems, a bit concerning.

Why? Simply put, A.I. that tips its hat to greed and takes aggressive action to win the objects of that greed doesn’t seem that different from the path humanity itself already takes, all too frequently.

In other words, A.I. could save humanity — unless, of course, it instead destroys humanity.

Potatoes, potahtoes? Hardly. This is mankind we’re talking about here. With that in mind, perhaps banking the future on humanity, not A.I., isn’t such a raw deal or high risk as the world of science would have us believe, after all.

• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.

