Indian courts have an AI problem, and the Supreme Court is not amused

When a trial court cited four judgments that AI hallucinated, SC called it misconduct, exposing a systemic failure of verification across the legal research chain



The rapid integration of artificial intelligence into professional workflows has reached a critical juncture in the Indian legal system. Recently, the Supreme Court of India took cognisance of a startling phenomenon: a trial court relying on "AI hallucinations" — artificial intelligence-generated, non-existent verdicts — while deciding a case.

This issue was brought to light during the proceedings of a civil case, serving as a wake-up call not just for the legal fraternity, but for any profession relying on automated technology.

This unprecedented situation highlights the friction between the convenience of modern AI tools and the rigid necessity of factual integrity in adjudication.

Anatomy of synthetic verdicts

The controversy stems from the Special Leave Petition Gummadi Usha Rani v. Sure Mallikarjuna Rao heard on February 27 by a Supreme Court bench comprising Justices PS Narasimha and Alok Aradhe.
The petitioners are defendants in a suit filed for an injunction. While the suit was pending, the trial court appointed an advocate commissioner to note the physical features of the disputed property. The petitioners then challenged the advocate commissioner's report by raising specific objections.
On August 19, 2025, the trial court dismissed these objections. To justify its dismissal, the trial court relied on four specific legal precedents:
a) Subramani v. M Natarajan (2013) 14 SCC 95
b) Chidambaram Pillai v. SAL Ramasamy (1071) 2 SCC 68
c) Lakshmi Devi v. K. Prabha (2006) 5 SCC 551
d) Gajanan v. Ramdas (2015) 6 SCC 223
The petitioners challenged the trial court's order, contending that the judgments referred to and relied upon were non-existent and entirely fake.

When the matter reached the High Court of Andhra Pradesh at Amaravati, the court considered the objection and found that the cited judgments were indeed AI-generated. However, after merely recording a word of caution, the High Court proceeded to decide the case on its merits and dismissed the civil revision petition, affirming the trial court's decision.

SC’s stern intervention

The High Court's dismissal forced the petitioners to approach the Supreme Court, which immediately recognised the gravity of the situation.
The apex court noted that the case assumes "considerable institutional concern" not because of the actual decision taken on the merits of the property dispute, but because of the flawed process of adjudication and determination.
Taking cognisance of the trial court deploying AI-generated, synthetic judgments, the Supreme Court stated that this practice has a direct bearing on the integrity of the adjudicatory process. To halt further potential damage in this specific dispute, the court directed that the trial court shall not proceed on the basis of the advocate commissioner's report pending the disposal of the Special Leave Petition.

Misconduct vs decision-making error

The Supreme Court stated at the outset that a decision based on non-existent and fake alleged judgments is "not an error in the decision making". Instead, the Court declared that it would be considered a "misconduct and legal consequence shall follow".
Whether categorised as an error or misconduct, the foundation of the legal argument is inherently compromised when the data is fabricated.

Recognising the need to examine this issue in greater detail and to investigate its consequences and questions of accountability, the Supreme Court has cast a wide net for expert assistance. The Court issued notices to top legal authorities, namely, the Attorney General, the Solicitor General, and the Bar Council of India.

The trial court judge did not, and could not have, operated in an insulated technological vacuum.

Human hands

In the typical practice environment of Indian district courts, judicial officers routinely rely on a complex ecosystem of research support. This support includes research prepared by court staff, law clerks, and, with accelerating frequency, AI-assisted legal research tools. These platforms are not niche products; they are often accessible through major, commercially available legal research platforms marketed directly to both the judiciary and the practising bar.

The Supreme Court's stark warning

- Fake AI citations are misconduct, not a judicial error
- Every link in the research chain failed verification
- Trial court barred from proceeding on flawed report
- Presenting fabricated citations violates the Advocates Act
- All AI-sourced legal data must be independently verified

The path from an AI tool’s output to its integration into a final courtroom order is a human-mediated chain of events: an initial AI-generated summary, a raw citation list, or a draft paragraph passes through one or more human hands before it is finally integrated into the body of the judgment.
At every single point in that sequence, from the clerk who executed the initial search query, to the court staffer who compiled the final list, and ultimately to the judge who affixed their signature to the final order, there existed a clear, unmissable opportunity for professional verification.
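The verification step described above is, in principle, mechanical: every citation should be checked against an authoritative source before it enters a draft order. The sketch below illustrates the idea in Python; the case names, the hard-coded index, and the function names are all hypothetical placeholders, since a real check would query an official law-report database rather than a local set.

```python
# Illustrative sketch only: a minimal citation check against a trusted index.
# In practice, the index would be an official reporter or court database;
# the entries here are invented placeholders, not real citations.

TRUSTED_INDEX = {
    # (case name, reporter citation) pairs known to exist -- hypothetical
    ("A v. B", "(2001) 1 SCC 1"),
}

def verify_citation(case_name: str, citation: str) -> bool:
    """Return True only if the cited judgment appears in the trusted index."""
    return (case_name, citation) in TRUSTED_INDEX

def flag_unverified(citations):
    """Return the citations that fail verification and must not be relied on."""
    return [c for c in citations if not verify_citation(*c)]

# A draft order's citation list: one entry is absent from the index,
# standing in for an AI-hallucinated judgment.
draft_citations = [
    ("A v. B", "(2001) 1 SCC 1"),
    ("C v. D", "(2002) 2 SCC 2"),
]
print(flag_unverified(draft_citations))  # the fabricated entry is flagged
```

The point of the sketch is not the code itself but the design choice it embodies: verification is a cheap, automatable gate, so every human hand in the chain had the means to catch the fabrication.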

Systemic breakdown

The fact that this opportunity was demonstrably missed at every single touchpoint highlights a systemic breakdown in professional due diligence across the entire courtroom environment, implicating all professionals who facilitate legal research.
The provisions codified within the Advocates Act concerning professional misconduct are perfectly adequate to address the ethical and professional failures vividly demonstrated in the Gummadi Usha Rani proceedings. The act of presenting a fabricated citation to a court constitutes a false and misleading statement made in the course of legal practice. Crucially, it is legally and ethically sufficient to trigger disciplinary action.

Furthermore, the Court has appointed senior counsel Shyam Divan to assist it in this complex matter, permitting him to nominate an Advocate on Record for his assistance. This multi-pronged approach signals the Supreme Court's intent to produce technically sound guidance for the judicial fraternity, ensuring that the data relied upon is genuine rather than fabricated.

A double-edged sword

While the immediate focus is on the judiciary, the implications of this case ripple across all sectors. AI tools are inherently designed to compile data, arrange information, and analyse it, thereby sparing humans from repetitive work and making lives easier. However, the very nature of these systems allows errors to creep in.
Furthermore, there can even be deliberate attempts to mislead users through these technologies.
The Supreme Court’s intervention is part of a broader, increasingly urgent push by the Indian judiciary to combat the menace of AI hallucinations. In November 2025, the Supreme Court’s official 'White Paper on Artificial Intelligence and Judiciary' formally identified the fabrication of cases as a primary risk, mandating that all information obtained through AI tools be independently verified under threat of strict disciplinary action.

Legal deep fakes

This institutional directive has been echoed forcefully in the courtroom; just weeks before the recent property dispute order, a Supreme Court Bench comprising Justices BV Nagarathna and Ujjal Bhuyan warned that cross-verifying citations remains a fundamental duty of every advocate, emphasising that the court will not condone negligence or the submission of legal "deep fakes."
This standard is being strictly enforced across jurisdictions, aligning with a November 2025 Delhi High Court ruling, which noted that relying on unverified case laws misleads the adjudicatory process, demonstrates a lack of candour, and falls short of the basic standard of fairness expected from officers of the Court.

All-pervasive issue

It is not the judiciary alone that is at risk. Every profession utilising technology products must be acutely mindful of these pitfalls. Professionals across all fields must exercise caution at every level of operation, from checking data sources to reviewing the final product.
Until AI companies can develop more advanced tools capable of automatically identifying foul play in AI-generated content, human vigilance remains our only true safeguard. The Supreme Court’s upcoming detailed examination of this issue could ultimately help people in other professions navigate the treacherous waters of the AI era.