On AI-generated citations, sanctions, and the attorney responsibility that doesn’t disappear because a machine wrote the brief
By December 2025, courts had documented over 660 cases of attorneys submitting AI-generated citations that did not exist. The rate had accelerated to an average of four or five new cases per day by early 2026. In several instances, courts levied sanctions exceeding $100,000.
This is not a story about bad attorneys. Many of the attorneys sanctioned had sterling reputations and clean disciplinary records. It is a story about what happens when a powerful tool is used outside its actual capabilities — and when the attorney using it does not understand the gap between what the AI is doing and what they believe it is doing.
“AI does not know when it is wrong. That is the most important sentence in this article. Read it again.”
The Hallucination Problem Is Not Being Solved — It Is Being Managed
Every major AI platform has reduced hallucination rates significantly since 2023. A Stanford study found a 17% error rate for Lexis+ AI — meaningfully better than general-purpose models. Harvey, trained on legal data, performs better on legal tasks than a general-purpose chatbot. The rates are improving.
But a 17% error rate in a research tool used to build a brief means roughly 1 in 6 outputs contains an error — and because you cannot know in advance which one, every output requires manual verification. For a busy solo attorney running four active matters simultaneously, that verification burden is not theoretical. It is the extra hour at 11 PM before the filing deadline.
And the verification cannot be delegated to another AI. It requires an attorney — with judgment, with access to primary sources, with the professional responsibility that attaches to everything that carries their signature.
The Distinction That Matters: Research vs. Retrieval
There is a critical difference between AI that generates legal arguments and AI that retrieves documents you already have.
Hallucination is a risk when AI is generating — producing text, constructing citations, drafting arguments from learned patterns. It is not a risk in the same way when AI is retrieving from a defined corpus of documents that already exist in your vault.
The Lex Arca Neural Librarian does not generate citations or construct arguments. It retrieves documents. When it surfaces a deposition passage, that passage exists in the file that you already have. There is no hallucination vector in retrieval from a defined document set.
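The distinction can be made concrete with a toy sketch: retrieval over a fixed document set can only return passages that already exist in that set. The code below is illustrative only — a minimal keyword lookup, not the actual Lex Arca implementation, and the vault contents are hypothetical.

```python
def retrieve(query: str, corpus: dict[str, str]) -> list[tuple[str, str]]:
    """Return (doc_id, passage) pairs from the corpus matching the query.

    Every result is a verbatim passage from a document you already hold:
    retrieval can miss a match, but it cannot invent one.
    """
    terms = query.lower().split()
    return [
        (doc_id, text)
        for doc_id, text in corpus.items()
        if any(term in text.lower() for term in terms)
    ]

# A hypothetical case vault: documents the attorney has already reviewed.
vault = {
    "depo_smith_p42": "Witness stated the contract was signed on March 3.",
    "exhibit_7": "Invoice dated April 12 for consulting services.",
}

hits = retrieve("contract signed", vault)
# Each hit carries a doc_id, so it can be checked against the source file.
assert all(doc_id in vault for doc_id, _ in hits)
```

A generative model, by contrast, produces new text from learned patterns, so its output has no such built-in guarantee of existing in your file — which is exactly where the verification burden comes from.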
This is why the distinction between litigation intelligence and legal research AI is not a marketing nuance. It is a professional responsibility distinction.
What Responsible AI Use in Litigation Looks Like
In 2026, responsible AI use in litigation means understanding which layer of the workflow each tool is operating in — and what the verification requirements are for each layer.
AI-generated research output: requires full citation verification before any filing. Period.
AI-assisted document retrieval from your own case vault: the document exists, you reviewed it during case preparation, retrieval surfaces what you already know is there.
The attorneys who are navigating this correctly are not avoiding AI. They are deploying it in the right layers and maintaining, at every layer, the oversight that professional responsibility requires.
“The sanction risk is real. So is the competitive disadvantage of not using AI at all. The attorneys who understand the difference between these two risk profiles are the ones building durable practices.”
The Compliance Documentation Layer
Lex Arca’s AI Compliance Certification provides attorneys with documented evidence of exactly how their AI platform operates — the architecture, the data handling, the audit trail. This is not just a client communication tool. In a world where courts and bar associations are beginning to scrutinize AI usage, having documented, verifiable AI governance for your firm is a professional asset.
The attorneys who will face the most scrutiny are the ones who used AI carelessly and cannot explain what they did. The attorneys who will navigate this environment most effectively are the ones who can produce documentation proving they used AI responsibly.
That documentation is already built into the platform. The question is whether you want it in your file before you need it — or after.
→ Trial cases are preloaded. Just show up. vault.lex-arca.com