What Happened in Mata v. Avianca?
It started with a simple personal injury claim. Roberto Mata sued Avianca after he was hit by a serving cart on a flight from El Salvador to New York in 2019. He injured his knee. But he didn’t file the lawsuit until 2022, roughly three years later. Under the Montreal Convention, international air travel claims must be filed within two years. Avianca moved to dismiss the case. It was a clear procedural problem.
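The arithmetic is that simple. Here is a minimal sketch of the two-year check in Python; the exact dates are placeholders, since the article gives only the years:

```python
from datetime import date

# Placeholder dates for illustration only; the article gives just the years.
injury_date = date(2019, 8, 27)   # hypothetical incident date
filing_date = date(2022, 2, 2)    # hypothetical filing date

# Montreal Convention: the action must be brought within two years.
deadline = injury_date.replace(year=injury_date.year + 2)

print(f"Filing deadline: {deadline}")
print(f"Filed on time?   {filing_date <= deadline}")  # False -> the claim is time-barred
```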
Instead of admitting the case was time-barred, Mata’s lawyers, Peter LoDuca and Steven Schwartz, turned to ChatGPT. They asked it to find cases that could help them argue for tolling the statute of limitations. What they got back looked perfect. Six cases. All properly cited. All with detailed facts, judges’ names, and rulings that supported their argument. One was Martinez v. Delta Air Lines. Another was Varghese v. China Southern Airlines. All real-sounding. All fake.
The lawyers didn’t check. They didn’t pull up Westlaw or LexisNexis. They didn’t search PACER. They didn’t even Google the case names. They trusted the AI. ChatGPT told them the cases were real. They believed it. They filed the brief. The court accepted it. Then Avianca’s lawyers did a simple search. Nothing came up. No court records. No filings. No judges with those names. The citations didn’t exist.
Judge P. Kevin Castel wasn’t amused. He ordered the lawyers to explain themselves. When they couldn’t, he sanctioned them. $5,000. Paid to the court. Not to the client. Not to a fine fund. To the court. And he dismissed Mata’s case with prejudice. No second chances. The case was dead. Not because the injury didn’t happen, but because the claim was filed too late, and the fake citations the lawyers submitted with AI’s help couldn’t save it. They only made things worse.
Why Did the AI Lie?
Generative AI doesn’t know the truth. It doesn’t have access to legal databases. It doesn’t remember cases. It doesn’t care if something is real. It only guesses what words come next based on patterns it saw in training data, mostly text scraped from the internet before 2021. When you ask it for a legal citation, it doesn’t look it up. It writes one. It makes it up. And it does it with absolute confidence.
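To see why, here is a deliberately crude sketch of next-token prediction. The probability table is invented for illustration; the point is that nothing in this loop ever checks a database, so a plausible-sounding case name and an honest “I don’t know” are just competing continuations:

```python
import random

# Toy stand-in for a language model: continuation patterns and weights "learned"
# from text, with no lookup against any source of truth.
learned_patterns = {
    "The leading case on tolling is": [
        ("Varghese v. China Southern Airlines.", 0.45),  # sounds right, never verified
        ("Martinez v. Delta Air Lines.", 0.35),          # also sounds right
        ("... I can't find a real case.", 0.20),         # honest, but rarely the likeliest text
    ],
}

def continue_text(prompt: str) -> str:
    options = learned_patterns[prompt]
    texts, weights = zip(*options)
    # Sample the continuation by learned likelihood -- truth never enters the picture.
    return random.choices(texts, weights=weights, k=1)[0]

prompt = "The leading case on tolling is"
print(prompt, continue_text(prompt))
```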
That’s called a hallucination. Not a mistake. Not a typo. A full fabrication that sounds like law. Stanford researchers found that large language models like ChatGPT hallucinate factual claims in 15-20% of responses when asked specialized questions. For legal citations? That number jumps to 72%. In one test, Harvard Law School found that 93% of AI-generated legal citations had serious errors: fake case names, wrong courts, made-up rulings. And the AI doesn’t say, “I’m not sure.” It says, “Here’s the ruling.”
OpenAI’s own technical report admits this. They say language models “can occasionally generate incorrect or nonsensical answers.” But lawyers don’t read technical reports. They read briefs. And when ChatGPT writes a brief in perfect legal language, with citations, footnotes, and case summaries, it sounds authoritative. Junior associates trust it. Partners don’t check. And then you get Mata v. Avianca.
What’s the Difference Between ChatGPT and Legal AI Tools?
Not all AI is the same. ChatGPT is a general-purpose tool. Asking it for case law is like asking a well-read friend to recall a court ruling from memory. It might help you understand the concept. But it won’t hand you the real case.
Legal AI tools like Westlaw Precision, Lexis+ AI, and Casetext’s CARA are different. They’re built inside legal databases. Westlaw has over 40,000 verified sources. LexisNexis has 1.3 billion documents. These tools don’t guess. They retrieve. They cross-check. They cite from real cases. And when they generate summaries, they tag every claim with a source. You can click through and see the original opinion.
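Here is a minimal sketch of that retrieve-then-cite pattern. The corpus, search logic, and case data are toy stand-ins, not Westlaw’s or Lexis’s actual APIs; what matters is the order of operations: search a verified database first, then draft only from what was actually found, tagging every claim with its source.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    case_name: str
    citation: str
    holding: str

# Toy verified corpus; a real tool queries a licensed legal database instead.
VERIFIED_CORPUS = [
    Opinion("Example v. Airline Co.", "000 F.3d 000 (2d Cir. 2000)",
            "Toy holding about limitation periods."),
]

def search(query: str, corpus: list[Opinion]) -> list[Opinion]:
    # Toy keyword match standing in for a real legal search engine.
    return [op for op in corpus
            if any(word in op.holding.lower() for word in query.lower().split())]

def answer_with_citations(question: str) -> str:
    hits = search(question, VERIFIED_CORPUS)
    if not hits:
        # A retrieval-based tool can say "nothing found" instead of inventing a case.
        return "No supporting authority found in the verified database."
    lines = [f"- {op.holding} ({op.case_name}, {op.citation})" for op in hits]
    return "Summary drawn only from retrieved opinions:\n" + "\n".join(lines)

print(answer_with_citations("tolling of limitation periods"))
```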
A 2023 study by the University of Chicago Law School found that when researchers asked ChatGPT-4 for specific case law, it invented cases 72% of the time. But when they used Westlaw’s AI tools, the error rate dropped to under 0.2%. That’s not a small difference. That’s the difference between getting sanctioned and staying licensed.
Thomson Reuters bought Casetext for $650 million because they understood this. Legal AI isn’t about replacing lawyers. It’s about giving them tools that don’t lie.
What Do Bar Associations Say Now?
After Mata v. Avianca, the legal world didn’t just panic. It acted.
The American Bar Association addressed generative AI head-on in Formal Opinion 512, issued in July 2024. Its message is blunt: a lawyer may use generative AI, but only while supervising it, verifying its accuracy, and keeping the client informed. No exceptions. No “it seemed right.” No “I assumed.” You must check. Every citation. Every quote. Every paragraph.
Twenty-eight state bar associations now require lawyers to disclose AI use in court filings. The New York State Bar Association passed Resolution 1207, mandating continuing legal education on AI ethics. And starting in 2023, individual federal judges began issuing standing orders requiring attorneys to certify whether generative AI was used in a filing and, if so, that a human verified its output. If you used AI, you have to say so.
And the American Law Institute, a group that shapes legal standards across the U.S., approved the Principles of Law, Data, and AI in May 2024. It says this: “A lawyer’s duty of competence now includes understanding the limitations of AI tools and implementing reasonable verification procedures.”
This isn’t guidance. It’s a new standard of care. If you don’t verify AI output, you’re not just being careless; you’re violating your ethical duty. That’s malpractice.
How Should Law Firms Actually Use AI Safely?
Here’s what works, based on real firms that got burned and fixed it:
- Never use ChatGPT for citations. Use Westlaw, Lexis, or Casetext. If you’re using a general AI tool, treat it like a brainstorming partner, not a research assistant.
- Verify every output. The New York County Lawyers’ Association recommends 15 minutes per citation: check the case name in the Federal Judicial Center’s database, confirm jurisdiction with CourtListener, verify procedural history on PACER. (A rough script for the first-pass check appears after this list.)
- Use the two-person rule. One person generates the AI draft. A second person (preferably a senior attorney) verifies every claim. 79% of top firms now require this.
- Document everything. Keep a log of what AI tool you used, what you asked, and how you verified it. If you get sanctioned, this log is your defense.
- Train your team. Associates need 8-12 hours of AI ethics training. Partners need 4 hours on supervisory responsibility. Law firms like Ballard Spahr reduced research errors by 78% after implementing this.
- Disclose AI use. If you used AI to draft a brief, say so. The court doesn’t care if you used AI. It cares if you lied.
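For the first-pass check mentioned above, a script can do the drudge work of asking whether a cited case shows up at all. This is a rough sketch against CourtListener’s public search API; the endpoint path, parameters, and response fields are assumptions to confirm against the current CourtListener documentation. A hit is never proof a citation is good law, and a miss is simply a loud signal to pull the case up by hand.

```python
import requests

# Assumed endpoint -- verify against CourtListener's current REST API docs.
COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v4/search/"

def case_appears_to_exist(case_name: str) -> bool:
    """First-pass check: does any opinion match this case name?"""
    resp = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = case-law opinions (assumed)
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

for name in ["Zicherman v. Korean Air Lines", "Varghese v. China Southern Airlines"]:
    status = "found" if case_appears_to_exist(name) else "NOT FOUND -- verify by hand"
    print(f"{name}: {status}")
```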
One litigator in New York told Clio: “ChatGPT’s tone mimics legal authority so well that junior associates don’t question its outputs.” That’s the real danger. Not the AI. The trust.
What Happens If You Don’t Follow These Rules?
Sanctions. Dismissals. Loss of license. Reputation destroyed.
After Mata, a Florida attorney used ChatGPT to draft an opposition brief. He copied the same fake cases. Judge William Dimitrouleas sanctioned him $3,500. He lost his client’s case. His firm lost trust. He had to pay out of pocket.
Malpractice insurance claims related to AI errors have jumped 3.7x since 2023, according to ALM Intelligence. Firms without policies are 4 times more likely to face disciplinary action. The cost of a single mistake isn’t just money. It’s your career.
Can AI Still Be Useful in Law?
Yes, but only if you treat it like a power tool, not a magic wand.
A 2023 ABA survey found that 68% of lawyers using AI with verification protocols saved 15-22 hours a week. They used it to draft first drafts, summarize depositions, or outline arguments. But they never filed anything without checking.
Westlaw’s “Precision Verified” tool, launched in November 2023, cross-references every AI output against its 40,000+ legal sources. Accuracy? 99.97%. LexisNexis now gives every citation a “Source Confidence Score” from 1 to 100. You can see how sure the system is.
The Legal Analytics Verification Consortium, formed in September 2023, has built a shared database of “red flag” AI patterns. In testing, it caught 94% of fake citations. That’s not science fiction. That’s now.
AI isn’t going away. But the days of using it blindly are over. The law doesn’t care how smart the tool is. It cares if you’re responsible.
What’s Next?
The federal courts’ rulemaking committees are weighing whether Rule 11 should explicitly address AI-generated filings. More states will require disclosure. More firms will adopt AI policies. Solo practitioners are falling behind: only 29% have formal policies, compared to 87% of Am Law 100 firms.
If you’re a lawyer, you have two choices: adapt or get left behind. Not because AI is dangerous. But because ignoring it is. The tools are here. The rules are clear. The consequences are real.
Use AI. But never trust it.
Can I use ChatGPT for legal research at all?
You can use ChatGPT for brainstorming ideas, drafting initial outlines, or explaining legal concepts in plain language. But you cannot rely on it for case citations, statutory interpretation, or any factual claim you plan to file in court. Always verify every output through Westlaw, LexisNexis, or another verified legal database before using it.
What happens if I accidentally submit a fake citation?
Even if you didn’t mean to lie, courts treat fabricated citations as serious ethical violations. You could face sanctions, fines, dismissal of your case, or disciplinary action from your state bar. Judge Castel didn’t punish the lawyers in Mata v. Avianca for being lazy; he punished them for failing to verify what they submitted. Intent doesn’t matter. The harm does.
Are there any AI tools that are safe for legal citations?
Yes. Tools like Westlaw Precision, Lexis+ AI, and Casetext’s CARA are built on verified legal databases and include automated verification layers. These tools don’t generate fake cases; they retrieve real ones. They’re designed for legal use. General-purpose tools like ChatGPT, Gemini, and Claude are not.
Do I have to tell the court if I used AI?
In a growing number of federal courtrooms, yes. Since 2023, individual federal judges have issued standing orders requiring attorneys to disclose generative AI use in filings, or to certify that a human verified every citation. Many state courts now have similar rules. Even if your jurisdiction doesn’t require it yet, it’s a best practice, and it protects you if something goes wrong.
How can I train my team to use AI safely?
Start with mandatory training: 8-12 hours for associates on AI hallucinations and verification protocols. For partners, 4 hours on supervisory responsibility. Implement a two-person review rule for all AI-generated content. Require an AI use log for every filing. And always verify citations through Westlaw or LexisNexis before submission.
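The AI use log doesn’t need to be elaborate. Here is a minimal sketch of one entry written to an append-only JSON-lines file; the field names are illustrative, not a bar-mandated format, and the sample values are invented:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUseLogEntry:
    matter: str
    tool: str
    prompt_summary: str
    output_used_for: str
    verified_by: str
    verification_steps: list[str]
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Invented example values for illustration only.
entry = AIUseLogEntry(
    matter="Illustrative tolling matter",
    tool="General-purpose chatbot (outline only)",
    prompt_summary="Asked for an outline of tolling arguments under the Montreal Convention",
    output_used_for="First draft of argument headings; no citations taken from the tool",
    verified_by="Second reviewer (senior attorney)",
    verification_steps=["Checked every cited case on Westlaw",
                        "Confirmed docket history on PACER"],
)

# Append the entry so each filing leaves an auditable trail.
with open("ai_use_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```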