A Bench warns that citing non-existent case law generated by artificial intelligence undermines judicial integrity and may constitute misconduct rather than a mere legal error.
New Delhi: The Supreme Court of India on 2 March 2026 criticised a trial court for issuing an order that relied on judgments later found to be fabricated by artificial intelligence tools, describing the conduct as potential misconduct that strikes at the “integrity of the adjudicatory process.”
A Bench comprising Justice Pamidighantam Sri Narasimha and Justice Alok Aradhe examined the matter while addressing wider concerns about the misuse of generative AI in judicial proceedings, emphasising that such practices, if unchecked, could erode public confidence in the justice delivery system.
Case Title, Court & Bench
- Case Title: Gummadi Usha Rani & Anr. v. Sure Mallikarjuna Rao & Anr.
- Court: Supreme Court of India
- Bench: Justice Pamidighantam Sri Narasimha and Justice Alok Aradhe
- Date of Judgment: 2 March 2026
Background of the Matter
The issue reached the Supreme Court in the context of a civil revision petition arising from a property dispute in Vijayawada. A trial court’s interlocutory order dated August 2025 had referred to several judgments that, on later verification, could not be located in any official legal database and were therefore understood to be non-existent, AI-generated case law.
The petitioner, Gummadi Usha Rani, challenged the trial court’s order under Article 227 of the Constitution, arguing that the reliance on these fabricated judgments undermined the legality of the decision.
The trial judge, in response to inquiries from the High Court, acknowledged that an AI tool had been used for the first time in drafting the order and that the cited judgments could not be authenticated in official sources.
Legal Issue
The primary legal issue before the Supreme Court was whether a judicial order that incorporates AI-generated and non-existent judicial authorities could be sustained, and whether such reliance, even if inadvertent, might constitute misconduct or otherwise vitiate the order.
Supreme Court’s Observations
In its analysis, the Supreme Court took serious exception to the trial court’s conduct, noting that:
- Judgments emanating from the judicial process must be based on verifiable legal authorities.
- The use of artificial intelligence tools to draft orders or cite precedents without verification can lead to the placement of entirely fictitious materials before the court.
- Such practice is not merely an error of law but may amount to misconduct, striking at the “integrity of the adjudicatory process.”
The Bench emphasised that judges must exercise judicial application of mind and ensure independent verification of legal citations, rather than relying uncritically on AI outputs.
Reasoning of the Court (Brief)
Although the Supreme Court’s brief order did not set out the detailed reasoning of a full judgment, it made the following key points:
- The incident raised a larger issue concerning the misuse of artificial intelligence in legal proceedings.
- Reliance on non-existent case law generated by AI can mislead courts and adversely affect judicial outcomes.
- The practice could amount to judicial misconduct if it compromises the foundational requirement that judicial decisions be grounded in authentic and verifiable legal precedents.
In flagging the matter for the record, the Court did not expressly set aside the trial court’s order, but it underscored the seriousness of the conduct and its implications for judicial integrity.
Practical Implications
The Supreme Court’s intervention highlights pressing concerns as Indian courts grapple with the increasing incorporation of AI tools in legal research and drafting:
- Judicial officers and advocates must verify all legal authorities sourced from digital tools against official databases such as Supreme Court Reports (SCR) and authenticated law reports to prevent the inclusion of fictitious case law.
- The order signals that misuse or negligent reliance on unverified AI outputs can attract disciplinary scrutiny and be treated as more than mere procedural error.
- Legal professionals may need to adopt stricter internal practices and quality checks to ensure compliance with established norms of legal research and ethical standards.
- The order reinforces that human judgment, discernment, and professional responsibility remain central to the judicial process, even as technological aids advance.
The order clarifies the standards expected of trial courts in using technology responsibly, emphasising that judicial orders must be anchored in verified legal authorities and that reliance on AI-generated fake judgments can amount to serious misconduct.