OPINION:
Tesla CEO Elon Musk has taken quite a beating from critics in the press for warnings about artificial intelligence that are often perceived as hyperbolic: that its development will soon enough destroy the world as we know it and humanity as we experience it.
But the guy’s got a point.
AI, machine learning and top-tier technology may, at their best, have the power to transform lofty do-gooder ideas into real medical, social, security and educational benefits for even the littlest and least of the world. But there’s a darker side, too. And it goes like this: China.
China currently uses facial-recognition technology to speed security checkpoints at national events, discourage cheating among students in schools and identify jaywalkers on the street.
The country has become a true surveillance state: with more than 176 million cameras watching everything from shoppers to drivers, and plans to install another 400 million to 500 million by 2020, China’s got the goods on just about anyone who dares show a face in public.
The BBC, for instance, late last year sent a journalist to the southern Chinese city of Guiyang to test the capabilities of the country’s surveillance system. The findings were eye-opening.
Within a matter of seven minutes, AI-powered cameras had picked the journalist out of the roughly 3.5 million people who call Guiyang home, identified him as a “suspect” and directed authorities to apprehend him.
How? China has no constraints on collecting data on private citizens and using it to feed the machine-learning process.
The country can truthfully lay claim to having one of the most advanced AI-fueled surveillance systems in the world. And it’s only going to keep growing. Police in China now regularly wear panoramic-view body cameras on their uniforms, linked to software that can identify, nearly in real time, everyone who comes within lens range.
The Chinese government, meanwhile, not only applauds such technological advances but also vows to be the world leader of artificial intelligence by 2030.
America, of course, is racing against that clock, competing madly to make sure that doesn’t happen.
Now for Mr. Musk’s rhetoric.
In July, he told attendees of the National Governors Association meeting in Rhode Island that, of all the information he’s privy to in his world of privilege and executive-level whisperings, AI is “the scariest problem” and needs a massive and speedy infusion of regulation.
His message hasn’t much changed. This month, he told a South by Southwest Conference audience that AI presents “a case where you have a very serious danger to the public” and that it’s “extremely important” that controls on machine learning be applied soon.
Scoffers will scoff. Mockers will mock. Free marketers will fight. Scientists will shrug. And very likely, their arguments will be aligned on this one particular point: America’s not China.
What happens there, they’ll say, won’t happen here. Can’t happen here.
But the smart money’s on evaluating that claim with some skepticism. It wasn’t long ago that America didn’t have a Department of Homeland Security, a nationwide data collection and sharing system of fusion centers, a Section 702 government surveillance power turned against the people, or a Transportation Security Administration running body scanners at the airports; and so on and so on, ever farther down the spying-in-the-name-of-security road. The point?
Sure, China’s not America and America’s not China and the land of the free can never, ever become the country of the communists. One would think, anyway.
But surveillance oftentimes comes wearing a security hat. And all this new AI-fueled security technology in the end might — in these dangerous, uncertain, unsafe times — be too tempting for America’s government to resist.
America might not become China. But America could become Not America.
Suddenly, Mr. Musk’s warnings make a bit of sense.
Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.