Artificial Intelligence and the Justice System


Chief Justice of India SA Bobde’s proposal to introduce an artificial intelligence aid into the administration of justice holds tremendous significance

By Avinash Amble

The latest theoretical advances in Artificial Intelligence (AI) and Adversarial Machine Learning (AML) borrow from legal reasoning. Yesterday’s AI did not use adversarial inference; it only computed forward probability. Given a hypothesis, it would match evidentiary patterns across huge volumes of data. If we pair that with an AI that computes reverse probability (thereby generating several hypotheses for a given corpus of evidence) and add adversarial inference, the system learns to resolve conflicts as a non-zero-sum game. At best, today’s AI (not yet applied beyond the consumer internet) computes the legal equivalent of the balance of probabilities to establish causation.
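To make the forward/reverse distinction concrete, here is a minimal sketch in Python. The prior, the likelihoods and the negligence hypothesis are illustrative assumptions, not figures from any real case; the point is only that Bayes’ rule turns forward likelihoods into the reverse probability a balance-of-probabilities standard asks about.

```python
# Minimal sketch: forward vs reverse probability with illustrative numbers.
# All priors and likelihoods below are made-up assumptions for exposition.

def reverse_probability(prior_h: float, p_evidence_given_h: float,
                        p_evidence_given_not_h: float) -> float:
    """Bayes' rule: P(hypothesis | evidence) from the forward likelihoods."""
    joint_h = prior_h * p_evidence_given_h
    joint_not_h = (1 - prior_h) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Forward probability: how well the evidence matches the hypothesis.
p_evidence_if_negligent = 0.80      # assumed
p_evidence_if_not_negligent = 0.20  # assumed
prior_negligence = 0.50             # assumed neutral prior

posterior = reverse_probability(prior_negligence,
                                p_evidence_if_negligent,
                                p_evidence_if_not_negligent)

# "Balance of probabilities": the hypothesis is more likely than not.
print(f"P(negligence | evidence) = {posterior:.2f}")
print("Meets balance of probabilities:", posterior > 0.5)
```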

Theoretical work on causal inference, presented at ICAIL in June 2019 using the landmark Meneghan v Manchester case, focused on over-determination, i.e., more than one cause leading to a single outcome. Asbestos exposure was modelled alongside eight other causes to evaluate its effect on adenocarcinoma (lung cancer). These causal models were used to determine both “what” caused the adenocarcinoma and “who” among the multiple employers caused it. The outcomes were causally superior to both the primary and the appeal court judgments.
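The over-determination problem can be illustrated with a toy model, sketched below. The exposures, the employers and the rule that each exposure is sufficient on its own are hypothetical stand-ins, not the ICAIL paper’s actual model; the sketch only shows why a simple but-for test breaks down when several causes are each independently sufficient.

```python
# Toy sketch of over-determination: several causes, each sufficient on its own.
# The exposures and the sufficiency rule are illustrative assumptions only.

def outcome(exposures: dict[str, bool]) -> bool:
    """The disease occurs if any single exposure is present (each is sufficient)."""
    return any(exposures.values())

exposures = {
    "asbestos_employer_A": True,
    "asbestos_employer_B": True,
    "smoking": True,
}

# But-for test: remove one cause at a time and see whether the outcome changes.
for cause in exposures:
    counterfactual = {**exposures, cause: False}
    but_for = outcome(exposures) and not outcome(counterfactual)
    print(f"{cause}: but-for cause? {but_for}")

# With multiple sufficient causes, no single exposure passes the but-for test,
# which is why richer causal models are needed to apportion responsibility.
```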

Over time, there have been several advances in legal AIs: Taxman for corporate tax law in the ’70s, Hypo for trade secret law in the ’80s, Smart Settle for automated e-commerce negotiation in the ’90s, and Family Winner for divorce settlements in the 2010s. Hypo, despite all its celebrity, has mainly been used as a tutoring tool for law students, while Smart Settle has found only limited use in micro-insurance claim settlements.

So, why has AI not scaled out of academia into courtrooms and mediation chambers? Today’s AI stops at lexical analysis, i.e., analysis of word forms and their frequency of occurrence. It has no semantic understanding of concepts, even for everyday language. The bigger issue is mapping legal semantics and ontology onto everyday language and then onto computers. AI does not yet understand the words; so it does not know the law. And AI, being a rational agent, cannot consider the equitable distribution of benefits.
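A small sketch of what “stopping at lexical analysis” means follows. The sentences and the choice of the word “consideration” are illustrative assumptions: counting word forms and frequencies cannot tell whether the word is used in its everyday sense or as a legal term of art.

```python
# Sketch of lexical analysis: word forms and frequencies, no semantics.
# The example sentences are illustrative assumptions.
from collections import Counter
import re

everyday = "She showed great consideration for her neighbours."
legal = "The contract fails for want of consideration."

def lexical_profile(text: str) -> Counter:
    """Tokenise on word characters and count occurrences: pure surface analysis."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

print(lexical_profile(everyday)["consideration"])  # 1
print(lexical_profile(legal)["consideration"])     # 1
# Lexically identical hits; the legal sense (something of value exchanged
# under a contract) versus the everyday sense is invisible at this level.
```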

Let us set aside AI’s lack of understanding of words, their force and effect, and look at a problem with human decision-making: bounded rationality. Legal reasoning assumes that all participants in a conflict are “rational agents”. As Cass Sunstein demonstrates through his research on the intersection of behavioural economics and the law, human decision-making is not perfectly rational at all times. Humans resolve conflicts via heuristics (approaches to problem solving that are not guaranteed to be optimal, perfect or rational, but that are sufficient for reaching an immediate, short-term goal), which cannot be represented in today’s AI. Tomorrow’s AI, built on causal inference, might resolve conflicts completely rationally using data alone, if that is the goal. Otherwise, it will have to figure out how to represent heuristics in order to model the real world more accurately.
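The gap between a fully rational agent and a boundedly rational one can be sketched as below. The settlement options, their values and probabilities, and the aspiration threshold are all hypothetical: the rational agent maximises expected value over every option, while a satisficing heuristic simply takes the first option that clears an aspiration level.

```python
# Sketch: expected-value maximisation vs a satisficing heuristic.
# Offers, probabilities and the aspiration threshold are assumed for illustration.

offers = [
    {"name": "settle_now", "amount": 60,  "probability": 1.0},
    {"name": "mediate",    "amount": 90,  "probability": 0.7},
    {"name": "litigate",   "amount": 150, "probability": 0.4},
]

def rational_choice(options):
    """Pick the option with the highest expected value."""
    return max(options, key=lambda o: o["amount"] * o["probability"])

def satisficing_choice(options, aspiration):
    """Take the first option whose expected value meets the aspiration level."""
    for o in options:
        if o["amount"] * o["probability"] >= aspiration:
            return o
    return options[-1]

print(rational_choice(offers)["name"])         # 'mediate' (expected value 63)
print(satisficing_choice(offers, 55)["name"])  # 'settle_now' (first to clear 55)
```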

—The author is an expert on Artificial Intelligence. He is an entrepreneur and inventor, and has founded a research lab, Ovid.