The Washington Times - Friday, June 29, 2018

Privacy advocates, civil libertarians and those who fear the rise of machine learning could bring an end to humankind as we know it might find a bit of comfort in the growing field of explainable artificial intelligence, or XAI. Why?

With XAI, humans always stay at the helm.

The goal of this technology, said David Gunning, a Stanford University master's graduate in computer science who joined the Defense Advanced Research Projects Agency for a four-year project to develop XAI, is to equip the machine with the ability to tell its human operators why it arrives at the conclusions it does: to make the machine explain itself. That's quite different from most of today's artificial intelligence, which seeks to "take human thinking and put it into machines," Gunning said.

That means fears of humans being supplanted by machines are moot with XAI.

It also means one of the main pitfalls of A.I.-driven data collection and dissemination can be avoided.

Here’s the beauty of XAI: Say your job is to sift through National Security Agency video and satellite feeds to find security risks, using artificial intelligence programs to help red-flag behaviors that go against the norm. The results could number in the hundreds, even thousands. From an analyst's perspective, the challenge then becomes determining which red flags are real and which are false.

With XAI, much of that challenge is reduced.

XAI gives the analyst reasons for the red flags. Is the flag the product of a bias? Is it an easily explained human activity?

The analyst can then narrow the scope of the search even further before making recommendations to act.

“Analysts have to put their names on recommendations, but they don’t always understand why a recommendation to red-flag came,” Gunning said. “If there’s bias in the training [model], the system will learn that bias.”
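The mechanics Gunning describes can be illustrated with a toy sketch: a model red-flags unusual activity, and each flag is paired with the features that drove the decision, so the analyst sees the "why" alongside the "what." This is not DARPA's system; the feature names, data and simple linear-attribution method below are illustrative assumptions only.

```python
# Toy sketch of explainable red-flagging: a classifier scores activity, and
# each flag is returned with the per-feature contributions behind it.
# Feature names and training data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["night_activity", "route_deviation", "signal_gap", "convoy_size"]

# Hypothetical training set: 500 observations, 4 behavioral features.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] + rng.normal(scale=0.5, size=500) > 1.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def explain_flag(x):
    """Return each feature's signed contribution to the red-flag score, largest first."""
    contributions = model.coef_[0] * x          # per-feature contribution to the logit
    order = np.argsort(-np.abs(contributions))  # biggest drivers first
    return [(feature_names[i], float(contributions[i])) for i in order]

# Score a new observation; if it is flagged, show the reasons alongside the flag.
x_new = np.array([2.1, 1.8, -0.2, 0.4])
if model.predict(x_new.reshape(1, -1))[0] == 1:
    print("RED FLAG raised. Contributing factors:")
    for name, contrib in explain_flag(x_new):
        print(f"  {name}: {contrib:+.2f}")
```

With output like that, an analyst can see at a glance whether a flag rests on a plausible behavioral signal or on a feature the training data may have baked in as a bias.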

DARPA isn’t the only entity working on explainable artificial intelligence. Researchers at the University of California, Berkeley, and at the Georgia Institute of Technology, to name a couple, have been trying out different software approaches to give neural networks the ability to explain themselves, so to speak.

But whoever cracks this code, whether with DARPA or a civilian outfit, will be providing a great service to technology — and not just in the fields of national security or crime-fighting.

Think of it: Would you rather your surgeon base an operating decision on radiology scans powered by A.I. that simply collects data and spits back common denominators, or on XAI-generated scans that have been filtered for biases, held to the fires of accountability and analyzed for why that invasive procedure is truly the best course of action?

It’s a no-brainer. XAI is the common-sense older brother in a digitized world filled with flashy, privacy-invading, data-gobbling gadgets and machine-controlling bullies.

“You’re not trying to improve the accuracy of the machine running the technology,” Gunning said of XAI. “The numbers of false alarms are not changed. … But you can improve the accuracy of the explanation of the false alarms.”

That, in turn, leads to better decisions.

And all while keeping the humans in charge. 

• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.
