Can AI Actually Be an Objective Judge?

Use Cases & Projects | Claire Carroll

The small nation of Estonia bet on the internet when it regained independence in 1991 and has since been a strong defender of digital technologies and of internet connectivity as a universal right and a tool for democracy. It has established digital ID cards for its citizens and offers online platforms for everything from voting to ordering a prescription. While a critical security flaw was discovered in the ID cards in 2017, Estonia maintains its trust in computational systems and is turning to AI as the next phase in the nation's digital development.


Estonia has been using AI since 2017 to decrease government inefficiencies and response times. The technology already helps match the unemployed with jobs and monitor farming progress at harvest time, but now Estonia is taking the next step: developing an AI judge to handle its small claims court. The AI judge will process the legal documents in a case and then offer a verdict, which can be appealed to a human judge. The goal is to reduce the backlog of small claims cases and relieve strain on the judicial system without increasing costs.

In Search of Objectivity

There are two ways to approach a computational judge: one is to predict what a human judge would decide, with all the biases that entails, and the other is to attempt to make decisions independently of human judges, which is the system Estonia is implementing. One key benefit of the prediction approach is that it can be layered on top of existing judicial processes: judges can consult the AI prediction alongside their own conclusions. Researchers like Daniel Chen hope that this will help ensure that judges don't weigh extralegal factors when deciding a case.


And while a prediction system could help judges learn to trust AI by dispelling some of the "black box" mystery, smart systems are already in use in US courts, with biased effects. ProPublica reported that black box risk-assessment systems influencing judges' decisions disproportionately predict that Black Americans will become repeat offenders. So unless these smart judicial systems are transparent and hyperconscious of their power, they risk doing more harm than good.

Bias Already Inherent in Law

Legal rules are something AI should be especially suited to understand, since a corpus of laws can be encoded in a model as algorithmic rules. But because legal decisions are often based on precedent, not the law itself, there's no quick and easy understanding to be gleaned from processing all the laws in effect in a particular place; that's only part of the story.

Laws and precedents are (obviously) full of bias themselves. Race, gender, religion, and sexual orientation are all protected classes under U.S. federal law, but that doesn't stop legal and social bias against disenfranchised groups. And if our AI judges learn from judicial precedent, they're going to learn the bias inherent in our legal system. Perhaps the adoption of AI judges to diminish judicial backlogs will have the added benefit of forcing judges to confront existing biases, but unless training sets for these models are selected carefully, there's no way an AI judge can be more objective than a human one.
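To make that concern concrete, here is a minimal sketch of the kind of audit such a system would need before its verdicts could be trusted. Everything here is hypothetical (the group labels, the verdicts, the data itself); it simply checks whether a model rules against one group at a markedly higher rate than another, a demographic parity check:

```python
# A minimal sketch of a bias audit for a hypothetical AI judge.
# All data and group labels below are invented, for illustration only.
import pandas as pd

# Hypothetical verdicts from a trained model, alongside a protected attribute.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "verdict": [1, 0, 1, 0, 1, 1, 1, 0],  # 1 = ruled against the defendant
})

# Demographic parity check: does the model rule against one group
# at a markedly higher rate than another?
rates = results.groupby("group")["verdict"].mean()
print(rates)
print("Disparity ratio:", rates.min() / rates.max())
```

A disparity ratio well below 1 would be a signal that the training data, or the model itself, has absorbed exactly the kind of bias described above.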

AI judges need to be trusted for their decisions to hold any weight. And in order to build models that stakeholders can trust and understand, they need to be interpretable. 
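What interpretability might look like in practice is an open question, but one common starting point is a model simple enough that its reasoning can be read directly. The sketch below, with hypothetical features and toy data, shows a linear model whose coefficients state how each input pushes the verdict:

```python
# A minimal sketch of one route to interpretability: a linear model whose
# weights can be inspected. Features and labels here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["claim_amount", "prior_rulings_for_plaintiff", "days_since_filing"]
X = np.array([[1200, 2, 30], [300, 0, 10], [5000, 5, 90], [800, 1, 45]])
y = np.array([1, 0, 1, 0])  # 1 = find for the plaintiff (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient shows which way a feature pushes the verdict, so a human
# judge reviewing an appeal can see *why* the model decided as it did.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A black box model might squeeze out a few extra points of accuracy, but a verdict that can be appealed needs reasons that a human can read and contest.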
