The Washington Times - Saturday, October 13, 2018

Technology’s only as good as its imperfect human programmers.

That’s been the general rule of thumb for climate-change modeling. That’s been the much-discussed challenge for artificial intelligence development for years.

And now, given a new report from Reuters revealing that Amazon’s employment recruitment A.I. disregarded and dismissed qualified candidates solely because they were women, that challenge apparently remains unfixed and unchanged in the technology world.

Amazon scrapped its algorithm-based program. But that’s like putting a Band-Aid on a long-festering wound.

Bias in A.I. has been a long-studied, longtime problem. So far, the solution has proven elusive.

In May of 2016, ProPublica found that COMPAS, an algorithm used to estimate how likely a criminal was to re-offend, was racially biased, predicting that black defendants, far more often than white ones, were at high risk of recidivism.

Also in 2016, the policing tool PredPol, revered for its so-called ability to predict crimes before they occur — and in so doing, enable local law enforcement to better utilize and manage budgets, manpower and resources — was outed by one human rights group for unfairly targeting neighborhoods with large racial minority populations. Part of the technological bias, critics said, came from the fact that the A.I.-fueled software based its predictions solely on police reports rather than on actual crime and arrest data.

In February of 2018, researchers with the Massachusetts Institute of Technology discovered that three emerging facial recognition programs, all of them commercially available, were prone to skin-type and gender biases.

“In the researchers’ experiments,” MIT News reported, “the three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than 20 percent in one case and more than 34 percent in the other two.”

These statistics are hardly insignificant.

They reflect how police respond, how minorities are regarded, how women are perceived — and, as Amazon’s recruitment tool showed, how certain demographics of society can nab the offers and opportunities that are outright denied others.

That’s not to say A.I. isn’t helpful to humanity, but it has to be seen in a realistic light.

Bias in A.I. isn’t simply a catchphrase to talk up at a techie conference. It’s a modern-day dilemma that brings real consequences onto real people.

Machine bias is simply reflective of human bias. Human bias, meanwhile, is a complicated matter that can lead one analyst to conclude a black man’s arrest was racist and another, analyzing that same incident, to deem it warranted. Reconciling such discrepancies to design and fuel an artificially intelligent predictive policing program, therefore, might prove problematic.

So let’s be honest here: Obtaining zero bias, in man or machine, is going to be an impossibility.

Technology has its limits, but that’s because humankind does, as well.

That’s why, in the end, the best A.I. should always be a partner to humankind, not a replacement.

Technology’s fine as an alert, an aid, a red flag, a compass, a support system — but it should never be allowed to take the place of a human to be the decider, the chief executive, the commander, the cop, the judge or the jury. 

• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.
