OPINION:
Stephen Hawking, the world-renowned theoretical physicist, mathematician, cosmologist and author, may have died in March, but the warnings of his final book, “Brief Answers to the Big Questions,” published just this week, shout from beyond the grave something like this: Watch out, humanity; artificially intelligent beings will soon rule.
And lest you laugh — Hawking was regarded by many as the smartest guy in the world.
In excerpts published in the U.K.’s Sunday Times, Hawking wrote: “[I]n the future, AI could develop a will of its own, a will that is in conflict with ours. … The real risk with AI isn’t malice, but competence.”
Point taken.
Think about it. An A.I. program tasked with securing home, possessions and persons, for instance, might find the best path toward accomplishing that goal is to keep everything sealed and locked, safely protected behind closed doors, under camera watch and machine surveillance. That’s a life of imprisonment, for sure. But on the flip side, prisoners kept behind lock and key are pretty safe from the troubles and dangers of an unpredictable outside world, right?
What’s horrific to freedom lovers could very well be, to the technology, simple competence.
“A super-intelligent AI,” Hawking wrote, Quartz reported, “will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
Hawking certainly isn’t the only one who’s seen A.I. as potentially devastating to humanity.
Tesla and SpaceX chief Elon Musk said in March at the South by Southwest tech conference in Austin, Texas: “Mark my words, A.I. is far more dangerous than nukes.” He has made similar statements on numerous previous occasions.
Meanwhile, both Microsoft co-founder Bill Gates and Apple co-founder Steve Wozniak believed pretty much the same as Musk on A.I. — until they didn’t.
Gates, during an “ask me anything” Q&A on Reddit in 2015, said, “I am in the camp that is concerned about super intelligence,” adding, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” Wozniak, in a 2015 interview with the Australian Financial Review, said, “computers are going to take over from humans, no question,” and, “I agree the future is scary and very bad for people.”
Fast-forward a few years, however, and both Gates and Wozniak had flipped.
In 2017, Gates told WSJ Magazine that Musk was wrong — that “we shouldn’t panic about” artificial intelligence; in early 2018, while speaking at Hunter College in New York City, he said, “A.I. can be our friend.” And Wozniak, at a Nordic Business Forum in Stockholm in January, said this: “Artificial intelligence doesn’t scare me at all.”
Whom to believe?
Perhaps common sense should be a guide.
Hawking also predicted in his final book that “superhumans,” buoyed by gene-editing technology and the A.I.-fueled ability to rapidly self-improve, will soon enough replace regular humanity — giving rise to the reality that one man’s hell might very well be another man’s heaven.
What’s disastrous to Hawking and Musk could simply be desirable to Gates and Wozniak. But Musk raises an interesting perspective.
“The biggest issue I see with so-called A.I. experts is that they think they know more than they do, and they think they are smarter than they actually are,” he said at the same SXSW conference in Texas, as CNBC reported. “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
Excellent point. Pride indeed goeth before a fall.
• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.