
Generative AI is Being Used in the Legal Sector—and the Ramifications Are Only Just Starting to Be Felt

Date_

21st July, 2025

Author_

Steve Hodson

The legal world is no stranger to disruption, but the rise of generative AI is ushering in a new kind of challenge—one that strikes at the heart of legal credibility and professional responsibility. A new project by legal journalist and researcher Damien Charlotin is shining a spotlight on this issue with a meticulously curated website that tracks a growing phenomenon: hallucinated legal citations generated by AI and submitted in court filings.

What Is This Website Tracking?

Charlotin’s AI Hallucination Cases website documents instances where lawyers or litigants have used generative AI tools, such as ChatGPT, to draft legal documents, only to unknowingly include fabricated case law or citations. These hallucinations, while often convincingly formatted, refer to cases that simply do not exist. The database includes details such as:

The jurisdiction and court involved

The AI tool used

The nature of the hallucination

The consequences faced by the legal professionals involved

This is not a theoretical concern. Courts in the U.S., Australia, New Zealand, and beyond have already sanctioned attorneys for submitting AI-generated briefs riddled with fake citations. Charlotin’s website serves as both a warning and a resource, highlighting the real-world consequences of misusing AI in legal practice.

Why Is This Happening?

Generative AI tools are designed to produce plausible-sounding text, not necessarily factual or verifiable information. When used without proper oversight, they can generate:

Non-existent case law

Misquoted statutes

Inaccurate legal reasoning

The pressure to streamline legal work, combined with the allure of AI’s speed and fluency, has led some practitioners to rely too heavily on these tools without verifying their outputs.

The Broader Ramifications

The implications of this trend are profound:

Erosion of Trust: Courts rely on the integrity of legal filings. AI hallucinations threaten that trust.

Professional Liability: Lawyers are being fined, sanctioned, and publicly reprimanded for AI misuse.

Regulatory Response: Bar associations and courts are beginning to issue guidance—and in some cases, rules—on the responsible use of AI in legal practice.

At the same time, this moment presents an opportunity. By documenting these failures, Charlotin’s website encourages the legal community to engage critically with AI, develop best practices, and push for tools that are not only powerful but also verifiable and transparent.

A Wake-Up Call for the Legal Sector

Generative AI is here to stay. But as the hallucination database makes clear, its integration into the legal system must be handled with care, scepticism, and a deep respect for the truth. The legal profession is being forced to confront a new reality: in the age of AI, due diligence doesn’t end with a well-written paragraph—it begins with verifying its source.
