The Washington Times - Tuesday, April 2, 2019

In 2016, researchers found that artificial intelligence could predict the outcomes of cases heard by the European Court of Human Rights with 79 percent accuracy.

Great. But perhaps the better lead would be this: That same artificial intelligence got the outcome wrong in 21 percent of those European Court of Human Rights cases. And for the people behind that 21 percent, that’s a pretty big deal.

Regardless, A.I. has come to America’s courtrooms, too.

States like Arizona, Kentucky, Ohio, Alaska and New Jersey, and cities like Chicago, Houston, Pittsburgh and Phoenix, have been using artificial intelligence in courtrooms to weigh the risk posed by defendants seeking bail. In New Jersey, Kentucky and Arizona, for instance, a human judge still hears the pleas, but then turns to a computerized analysis that calculates each defendant’s Public Safety Assessment score using artificial intelligence-based software from the Laura and John Arnold Foundation.

The PSA looks at the person’s age at arrest, the nature of the offense, criminal history, prior court appearances and other factors, and assigns a risk score based on the likelihood that the defendant will fail to appear at the next hearing or commit a new crime while out on bail. The lower the number, the lower the risk. The judge can then use that score to decide yea or nay on the bail request.
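
To make the mechanics concrete, here is a minimal sketch of the kind of additive, point-based scoring such tools rely on. The factors echo those listed above, but the fields, weights and caps are hypothetical stand-ins for illustration, not the Arnold Foundation’s published PSA formula.

```python
# Illustrative sketch only: a simple additive pretrial risk score.
# The factor names, point values and caps are hypothetical, not the
# Arnold Foundation's published PSA weights.

from dataclasses import dataclass


@dataclass
class Defendant:
    age_at_arrest: int
    violent_offense: bool            # nature of the current offense
    prior_convictions: int           # criminal history
    prior_failures_to_appear: int    # prior court-appearance record


def risk_score(d: Defendant) -> int:
    """Return a rough score; the lower the number, the lower the risk."""
    score = 0
    if d.age_at_arrest < 23:          # youth counted as a risk factor
        score += 2
    if d.violent_offense:
        score += 2
    score += min(d.prior_convictions, 3)           # capped so history can't dominate
    score += min(d.prior_failures_to_appear, 2)
    return score


# A judge would see a number like this alongside the case file.
print(risk_score(Defendant(age_at_arrest=30, violent_offense=False,
                           prior_convictions=1, prior_failures_to_appear=0)))
```

The actual PSA, for what it’s worth, reports separate scales for failure to appear and for new criminal activity, but the additive idea is the same.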

No doubt, the A.I.-generated scores help judges cut through the clutter of paperwork, filings, attorney arguments and emotional pleas to arrive at a more blind-justice determination of a defendant’s fate, based on what’s best for the community at large.

Artificial intelligence in the courtroom can also help with some of the judicial disparities between wealthier and poorer jurisdictions, as illustrated by this excerpt from a 2016 report by the Judicial Council of California: “The ability to have a critical criminal, family law, domestic violence or civil matter addressed by the court should not be based on the judicial resources in the county in which one happens to reside. … Access to the courts is fundamentally compromised by judicial shortages.”

But the pitfall here is the potential for over-reliance.

The pitfall is A.I.’s inability to read the emotional inner workings of a defendant: the change of heart, the level of contrition, the end-of-my-rope moments that bring about real change and set a formerly criminal mind on a path of repentance and genuine contribution to society.

Determining that takes a human touch.

Besides, aren’t Americans supposed to be guaranteed trial by a jury of their peers? This A.I. isn’t being used for complex criminal cases — yet. But with technology, it’s important to look long-term.

It’s important to consider not only the “now,” but the “could be.”

And the “could be” here is that freedoms, at least the freedoms of those facing court hearings, could be left for an algorithm to decide.

• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.

