
AI in the Courtroom: Promise and Peril

As India’s legal system embraces Artificial Intelligence to streamline justice delivery, questions around ethics, bias, and accountability refuse to be silenced

By Dilip Bobb

Picture this. Visit any lower court in the country and the overwhelming impression is of huge piles of cardboard-covered files, whether carried by lawyers and litigants or stored in every available nook and corner for judges to refer to and record. In fact, thanks to the huge backlog of cases, many are locked away in steel trunks awaiting their "Liberation Day", as Donald Trump said in another context. All that could become a thing of the past, thanks to the increasing use of Artificial Intelligence (AI), the game-changer to beat all previous game-changers.

The Supreme Court took the lead under former Chief Justice of India (CJI) DY Chandrachud in integrating AI into its working. Some courts have actually used AI tools like ChatGPT to supplement their judgments. The Manipur High Court said it relied on ChatGPT and Google for additional research while adjudicating a recent case, as did the Punjab and Haryana High Court.

In March 2023, Justice Anoop Chitkara of the Punjab and Haryana High Court used ChatGPT while denying bail to Jaswinder Singh, accused of an assault leading to death. Justice Chitkara sought ChatGPT's input on the jurisprudence regarding bail in cases involving cruelty in assaults. The Court later clarified that ChatGPT's input was for broader legal context, not a case-specific opinion. That same month, Justice Pratibha M Singh of the Delhi High Court ruled in favour of luxury shoemaker Christian Louboutin in a trademark case. Louboutin's legal team used ChatGPT-generated responses to demonstrate the brand's reputation for its "spike-sole shoe style" and its trademark red soles, which were being copied by another shoe brand called Shutiq.

The real benefit of AI lies in its ability to streamline case management, improve judicial workflows, and provide a better user experience for litigants, advocates, and judicial officers. AI tools can ensure that crucial legal work is done with precision. With AI handling routine tasks, lawyers can dedicate more time to complex legal matters and client interaction. AI algorithms are now capable of predicting legal outcomes by analysing historical data from previous cases. Lawyers use these predictions to build stronger cases and give clients better advice. This data-driven approach enhances strategy planning and improves decision-making.
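
To make the prediction idea concrete, here is a minimal, purely illustrative sketch in Python: a simple classifier trained on a handful of invented case features. The feature names and data are hypothetical, and this is not how any court or commercial legal-analytics product actually works; it only shows the shape of the technique.

```python
# Illustrative sketch only: predicting a bail outcome from a few
# hypothetical case features using scikit-learn. The features and data
# below are invented; real legal-analytics tools are trained on far
# richer, carefully curated case records.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [prior_convictions, severity_score, days_in_custody]
X_train = np.array([
    [0, 2, 10],
    [3, 8, 120],
    [1, 5, 45],
    [0, 1, 5],
    [4, 9, 200],
    [2, 6, 60],
])
# Hypothetical historical outcomes: 1 = bail granted, 0 = bail denied
y_train = np.array([1, 0, 1, 1, 0, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

new_case = np.array([[1, 4, 30]])
print("Predicted probability of bail:", model.predict_proba(new_case)[0][1])
```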

In recent years, related technological advances have allowed legal teams to automate or expedite work traditionally done by entry-level colleagues. For instance, first-year associates at law firms commonly conduct legal research and produce legal briefs for supervising attorneys. Historically, this task has been time-consuming, but search engines and legal research tools powered by machine learning can now sift through massive volumes of documents to find the right information in a fraction of the time it would take a human. Additionally, AI-powered text generators can produce a first draft of a legal brief in moments from a simple prompt.
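
As a rough illustration of how machine-assisted research ranks documents against a query, the sketch below scores a few invented judgment snippets using TF-IDF similarity. Commercial research tools rely on far more sophisticated language models, but the retrieve-and-rank idea is similar.

```python
# Illustrative sketch: ranking a small set of (invented) judgment snippets
# against a research query using TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

judgments = [
    "Bail denied owing to the cruelty involved in the assault.",
    "Trademark infringement established; the red-sole mark is distinctive.",
    "Anticipatory bail granted as the accused had no prior record.",
]
query = "principles governing bail in assault cases"

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(judgments + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Print snippets ranked by relevance to the query
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {judgments[idx]}")
```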

However, it is important to note that without human supervision to ensure the quality and accuracy of AI-produced work, AI has the potential to do more harm than good. For example, hallucinations (the phenomenon by which AI chatbots confidently provide false information in response to a prompt) can jeopardize the accuracy of a lawyer’s work. It is crucial, then, that humans review all content produced by AI. So, while there may be little risk of AI replacing paralegals in supervisory positions, their future duties may be redefined to include monitoring AI-produced content.

Apart from existing AI platforms like ChatGPT and many others, there are also local initiatives aimed at speeding up the judicial process. Since 2021, the Supreme Court has been using AI-driven tools designed to process information and make it available to judges. The Court’s SUVAS (Supreme Court Vidhik Anuvaad Software) uses AI for translating judgments, while SUPACE (Supreme Court Portal for Assistance in Court Efficiency) is an AI tool that assists judges with research. There is also a greater dependence on e-Courts, a comprehensive digital platform established to ease access to judicial services. It allows citizens involved in court cases to check case status, hearing dates, prevailing court orders and judgments online. The portal covers courts at various levels, including district courts, High Courts and the Supreme Court. Users can search for case details by case number, party name, advocate name, or FIR number.

The e-Courts portal also provides information on court fees, filing procedures, and hearing schedules, and is available across the country. The e-Courts project has now entered Phase III, under which Rs 53.57 crore is specifically earmarked for the integration of AI and Blockchain technologies across High Courts up to 2027. The project, jointly overseen by the Supreme Court and the Ministry of Law and Justice, incorporates Machine Learning (ML), Optical Character Recognition (OCR) and Natural Language Processing (NLP). The initiative aims to streamline case management, improve judicial workflows, and provide a more accessible experience for litigants, advocates, and judicial officers. It reduces dependence on court officials for basic queries and improves user engagement with the judicial system.

AI-driven document automation and OCR technologies are enhancing the accuracy and speed of filing legal documents. These systems minimize manual data entry, improve efficiency and reduce the administrative burden on court staff. AI-powered tools allow judges and advocates alike to assess information based on historical data, which streamlines their work and also helps predict probable outcomes based on past judgments. NLP tools make legal documents and judgments accessible to all, regardless of the language they speak and understand.
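
For illustration only, the snippet below shows how open-source OCR can pull text out of a scanned court order. The file name is hypothetical, and a real e-Courts pipeline would add layout analysis, error correction and translation on top of this basic step.

```python
# Illustrative OCR sketch using the open-source Tesseract engine via
# pytesseract. Requires the Tesseract binary plus Hindi/English language
# data to be installed; the file name below is hypothetical.
from PIL import Image
import pytesseract

scanned_page = Image.open("scanned_order_page1.png")  # hypothetical scan
text = pytesseract.image_to_string(scanned_page, lang="hin+eng")

print(text[:500])  # first 500 extracted characters
```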

The Vimarsh 5G Hackathon, organized by the Department of Telecommunications and the Bureau of Police Research & Development, explored AI-driven innovations for crime prevention. AI is being integrated into policing and law enforcement to enhance crime detection, surveillance, and criminal investigations. AI models analyse crime patterns, high-risk areas, and criminal behaviour, enabling law enforcement to take proactive measures. Facial recognition systems are being integrated with national criminal databases. The Crime and Criminal Tracking Network and Systems (CCTNS) allows integration with the e-Prisons and e-Forensics databases.

The 2020 Report on Artificial Intelligence and the Legal Sector by the Indian Law Commission addressed the impact of AI on the legal profession and judicial systems in India. It focused on how AI can assist in improving access to justice, reducing case backlogs, and automating repetitive tasks, and stressed the importance of human oversight in AI-based decision-making to ensure that it does not lead to unjust outcomes or bias in legal judgments.

A key principle is that AI’s role in the judiciary is to assist, not replace, human judges, with a focus on human intervention to ensure fairness and equity in decisions made with the help of AI systems. UNESCO’s Judges Initiative, operating in 160 countries, is a project to train judges, lawyers, and court staff in using AI responsibly and effectively. The organisation, however, has emphasised that the judiciary should apply international human rights standards to the ethical concerns related to bias, discrimination, privacy, and transparency, while also leveraging AI systems to strengthen access to justice and enhance the efficiency of judicial administration.

Currently, there are no specific laws in India regulating AI. The Ministry of Electronics and Information Technology (MeitY) is the executive agency for AI-related strategies and has constituted committees to draw up a policy framework for AI. The Supreme Court and High Courts have a constitutional mandate to enforce fundamental rights, including the right to privacy. In India, the primary legislation for data protection is the Information Technology Act and its associated rules.

AI systems generally rely on large amounts of data to learn and make predictions. Such data may include sensitive information, such as personal or financial details. AI algorithms that require this type of data to train effectively can make it harder for organizations to comply with data protection laws.
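
As a toy illustration of how personal identifiers can creep into the text used to train or query an AI system, the sketch below flags Indian-mobile-style and Aadhaar-style numbers with simple patterns. The sample text is invented, and real compliance checks require far more robust detection and legal review.

```python
# Toy illustration of sensitive identifiers lurking in case text. The
# patterns below (mobile-number-like and Aadhaar-style 12-digit numbers)
# are simplistic and for demonstration only.
import re

sample = "Complainant Ramesh, mobile 98XXXXXX21, Aadhaar 1234 5678 9012, ..."

patterns = {
    "mobile_number": r"\b[6-9][0-9X]{9}\b",
    "aadhaar_like": r"\b\d{4}\s\d{4}\s\d{4}\b",
}

for label, pattern in patterns.items():
    for match in re.findall(pattern, sample):
        print(f"Possible {label} found: {match}")
```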

  • Bias introduced while training AI systems can be reflected in their outputs. AI results may simply mirror existing social and historical imbalances stemming from race, caste, gender and ideology, producing outcomes that do not reflect true merit.
  • AI systems, unlike trained advocates, do not have to acquire a license to practice law and are therefore not subject to ethical standards and professional codes of conduct. If an AI system provides inaccurate or misleading legal advice, who is responsible and accountable for it? The developer or the user?
  • The use of AI in the judiciary poses a problem even if judges retain ultimate decision-making authority: it is not uncommon to become overly reliant on technology-generated recommendations, a tendency known as automation bias.
  • Lawyers should be cautious when using generative AI for legal research. Establishing accountability for technology-related errors in the legal field can be a challenging task.
  • While AI can assist law firms in improving efficiency, it cannot substitute a lawyer’s expertise and experience.
  • AI technologies aren’t infallible, as acknowledged by tech companies themselves. OpenAI declares in its terms of use that output may not always be accurate and shouldn’t be relied upon as the sole source of truth or professional advice.
  • Inherent biases in AI systems pose significant risks. Studies in the US have revealed racial profiling and disproportionate targeting of minorities.
  • The key challenge in using AI in the legal system lies in the safety, privacy, ethics, and protection of fundamental rights like the right to life and free speech.

Unlike other industries, the application and interpretation of the law requires a complex mind and sentience. A 2025 Thomson Reuters report came up with these findings: one of the biggest issues regarding AI-powered technologies is their ethical use. Respondents believe that AI still requires significant human oversight, as well as clearly drawn boundaries regarding its use. Among professionals who have yet to work with an AI tool, 43 percent expressed concerns about the quality and usefulness of the output, and 37 percent worried about how well AI technology can protect sensitive legal data.

As former Chief Justice Chandrachud once said: “AI has improved accessibility, but also raised concerns about fair trial and data security. As we embrace these advances, we must balance innovation with judicial integrity. Efficiency means nothing without fairness, accountability and trust.”

—The writer is former Senior Managing Editor, India Legal magazine and author of “Artificial Intelligence: The Coming Revolution”
