
Courts Fine Lawyers $145,000 in 90 Days Over Fake AI Citations

In the first quarter of 2026 alone, U.S. courts imposed at least $145,000 in sanctions on lawyers for AI-generated citation errors. Oregon charges $500 per fake citation and $1,000 per fabricated quotation. Meanwhile, a Northwestern University survey found that 61.6% of responding federal judges use AI themselves. The judiciary is pricing AI errors in real time, and the framework will reach far beyond the courtroom.

[Image: a huge brass scale tipped sharply to one side, its heavy pan piled with crumpled legal documents and glowing AI-generated text, the lighter pan holding a single gavel, under dramatic chiaroscuro courtroom lighting.]

Key Takeaways

  • $145,000 in 90 days: U.S. courts imposed at least $145,000 in sanctions on lawyers for citations fabricated by artificial intelligence (AI) in Q1 2026
  • Per-infraction pricing is here: Oregon built a tariff schedule for AI errors at $500 per fake citation and $1,000 per fabricated quotation
  • The judicial paradox: A Northwestern University survey found 61.6% of responding federal judges use AI in their own work, yet the bench is punishing lawyers who use it carelessly
  • This extends far beyond law: The per-error pricing framework courts are building will become the template for AI liability across medicine, finance, and engineering

The Bill Came Due

The first quarter of 2026 is over, and the American judiciary has mailed its invoice for AI negligence: at least $145,000 in sanctions against attorneys who let generative AI write their legal briefs and never bothered to check whether the cases it cited actually existed.

That figure, documented across federal and state courts, marks a sharp escalation from the embarrassing headlines that began when a New York attorney named Steven Schwartz became the first lawyer sanctioned for AI-fabricated citations in 2023. What was once a curiosity is now a pattern. Courts are not just punishing individual bad actors. They are building something far more significant: a pricing framework for AI errors.

One Oregon attorney accumulated $109,700 in combined sanctions and costs across multiple filings in the U.S. District Court for the District of Oregon, the largest aggregate penalty tied to a single attorney’s AI-related misconduct. In the Sixth Circuit, two Tennessee lawyers were hit with $15,000 each after submitting briefs containing over two dozen citations that were incorrect, misrepresented, or simply made up.

The raw dollar figures are not the story. The story is the structure underneath them.

The Tariff Schedule

In December 2025, the Oregon Court of Appeals published what appears to be the first per-infraction rate card for AI-fabricated legal citations.

Portland attorney Gabriel A. Watson was sanctioned $2,000 after the court found he had submitted two fabricated citations and one fabricated quotation. The math was explicit: $500 per fabricated citation, $1,000 per fabricated quotation. This was not a lump-sum penalty arrived at by judicial discretion. It was a tariff.

Other Oregon courts arrived at strikingly similar math. In Couvrette v. Wisnovsky, a case in the U.S. District Court for the District of Oregon, an attorney submitted three summary judgment briefs over five months containing 15 AI-generated fake citations and 8 fabricated quotations. The sanction: $15,500, or exactly $500 per fake citation and $1,000 per fabricated quotation. Back in the state Court of Appeals, Salem civil attorney William Ghiorso submitted at least 15 false citations and 9 fabricated quotations the court found “contrived from thin air.” His penalty came in at $10,000, below the $16,500 minimum the Watson formula would dictate, because he demonstrated verified procedural improvements.

Read that last sentence again. A court reduced a sanction because the lawyer proved he had changed his workflow. This is not ad hoc punishment. This is a regulatory framework being assembled in real time.
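
To make the arithmetic concrete, here is a minimal sketch of the per-infraction schedule as applied to the three Oregon cases above. The rates come directly from the opinions; the function name and structure are illustrative, not anything the courts published.

```python
# Oregon's per-infraction tariff for AI-fabricated filings, per the
# Watson opinion: $500 per fake citation, $1,000 per fabricated quotation.
RATE_FAKE_CITATION = 500
RATE_FABRICATED_QUOTE = 1_000

def baseline_sanction(fake_citations: int, fabricated_quotes: int) -> int:
    """Baseline penalty under the per-infraction schedule, before any
    adjustment for mitigation (e.g., verified procedural improvements)."""
    return (fake_citations * RATE_FAKE_CITATION
            + fabricated_quotes * RATE_FABRICATED_QUOTE)

print(baseline_sanction(2, 1))   # Watson:    $2,000  (actual sanction: $2,000)
print(baseline_sanction(15, 8))  # Couvrette: $15,500 (actual sanction: $15,500)
print(baseline_sanction(15, 9))  # Ghiorso:   $16,500 baseline, reduced to
                                 # $10,000 for verified procedural improvements
```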

The Case Files

The Q1 2026 sanctions span at least four federal circuits and multiple state courts. The cases share a common pattern: a lawyer uses ChatGPT or a similar LLM for legal research, the model generates plausible-sounding but nonexistent case citations, the lawyer files the brief without verification, and opposing counsel or the court discovers the fabrications.

Whiting v. City of Athens (Sixth Circuit, No. 25-5425): Tennessee attorneys Van R. Irion and Russ Egli submitted briefs in a consolidated appeal over a 2022 fireworks show in Athens, Tennessee. Circuit Judge John K. Bush found more than two dozen citations that were wrong, misleading, or nonexistent and ordered $15,000 in fines per attorney, plus joint coverage of the appellees’ full attorney fees and double costs. The opinion was recommended for publication. Irion had already received a five-year suspension from the Eastern District of Tennessee in August 2025 for a prior lack of candor with the court.

Greg Lake (Nebraska): Lake submitted a brief in which 57 of its 63 citations contained errors. The court recommended a temporary suspension of his license. An error rate of roughly 90% raises serious questions about whether the brief was reviewed at all before filing.

Nippon Life v. OpenAI (Northern District of Illinois): Filed in March 2026, this case represents a new frontier. Graciela Dela Torre filed 21 motions, one subpoena, and eight notices using ChatGPT after being told a settled case could not be reopened. The defendants claimed approximately $300,000 in attorney fees spent responding to AI-generated filings. This is not a sanction case. It is a tort case. The cost of AI hallucinations is being litigated as a standalone injury.

On April 13, 2026, California’s State Bar announced disciplinary charges against attorneys Omid Emile Khalifeh and Steven Thomas Romeyn, and approved a disciplinary stipulation for Sepideh Ardestani involving one year of probation and a 30-day license suspension. Khalifeh faces six counts tied to a federal trademark filing containing nonexistent and irrelevant citations. Romeyn acknowledged he failed to verify citations in an Orange County brief.

The pattern is the same every time. The tool generates confident, grammatically correct fabrications. The lawyer trusts it. The court does not. AI hallucinations remain a structural problem baked into how these models work, which makes the verification burden permanent.
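
What the courts are implicitly demanding is a verification gate between generation and filing: no brief goes out until every citation has been confirmed against a real legal database. Here is a minimal sketch of that gate; the regex and the `lookup` callback are illustrative placeholders (a real implementation would use a proper citation parser and a citator such as Westlaw, Lexis, or CourtListener), not anything the courts have prescribed.

```python
import re
from typing import Callable

# Crude pattern for reporter citations like "598 U.S. 175" or "123 F.3d 456".
# A real pre-filing check would use a proper citation parser; this is
# illustrative only.
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][A-Za-z0-9.]*\s+\d+\b")

def verification_gate(brief_text: str,
                      lookup: Callable[[str], bool]) -> list[str]:
    """Return every citation the lookup cannot confirm. File only if empty.

    `lookup` must confirm that the cited case exists AND supports the
    proposition it is cited for; a bare existence check is not enough,
    since models also misquote and misrepresent real cases.
    """
    return [cite for cite in CITATION_PATTERN.findall(brief_text)
            if not lookup(cite)]

# Usage: wire `lookup` to a citator query and refuse to file while
# verification_gate(...) returns anything at all.
```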

The 61.6% Paradox

Here is where it gets uncomfortable for the judiciary.

A Northwestern University survey published in the Sedona Conference Journal in March 2026 sampled 502 federal judges and received 112 responses. The result: 61.6% of responding judges (69 of 112) reported using at least one AI tool in their judicial work. Of those, 30% used AI for legal research and 15.5% for document review.

At the same time, 45.5% of responding judges said their court had provided no AI training whatsoever.

The paradox is structural, not hypocritical. Judges who use AI for research drafts have clerks to verify outputs. Solo practitioners and small-firm attorneys, the lawyers most likely to be sanctioned, often do not. The technology is the same; the verification infrastructure is not.

Over 300 federal and state judges have now adopted AI disclosure requirements, mandating that lawyers certify whether AI was used in the preparation of filings. This is not an AI ban. It is a liability assignment. Courts are telling lawyers: you may use the tool, but you own every word it produces.

Carla Wale, associate dean at the University of Washington School of Law, summarized the current state: “I don’t think there is a consensus beyond, ‘You have to make sure it’s correct.’”

That non-consensus is about to get very expensive.

The 1970s Parallel

The legal profession has been here before, not with AI, but with medicine.

In the 1960s, a surge of medical malpractice claims flooded American courts, driven by a combination of new and more complex treatments, changes in legal doctrine that removed historic barriers to lawsuits, and the abandonment of charitable immunity for hospitals. Courts discarded the “locality rule” that had previously required expert witnesses to practice in the same community as the defendant, opening the floodgates for claims.

The result was the medical malpractice crisis of the 1970s: insurance premiums spiked by as much as 750%, driving some practitioners out of medicine entirely. The legal system built a pricing framework for medical errors (damage schedules by injury type, caps by jurisdiction, mandatory insurance minimums) that reshaped the entire profession.

AI hallucination sanctions are following the same arc. A new technology (generative AI, like new surgical techniques in the 1960s) creates a new category of professional error (fabricated citations, like surgical complications). Courts respond with pricing (the Oregon tariff, like malpractice damage schedules). Insurance markets adjust. And the cost structure of the profession permanently changes.

The difference is speed. Medical malpractice law took 20 years to mature. AI hallucination law is assembling in months. Damien Charlotin, a researcher at HEC Paris Smart Law Hub, maintains what may be the most comprehensive database of AI hallucination cases worldwide. According to NPR’s reporting, courts have reached a point where “ten separate courts flagged AI-fabricated filings on a single day.”

Beyond the Courtroom

The legal profession is the canary: the first industry where AI errors carry per-unit, court-enforced pricing. It will not be the last.

Consider the downstream implications:

Legal malpractice insurance: Insurers are watching the Oregon tariff and the Sixth Circuit precedent. Premium adjustments for AI-using attorneys are inevitable. Solo practitioners, the same lawyers who cannot absorb six-figure sanctions, will face the steepest increases. The AI uninsurability problem extends well beyond autonomous agents.

Access to justice: AI was supposed to democratize legal help by drafting basic documents and reducing the cost of simple legal matters for people who cannot afford a lawyer. The sanctions regime creates a paradox: the technology that could help underserved clients is the same technology that produces sanctionable errors when used without proper verification.

Other professions: If courts can price a fake legal citation at a fixed per-infraction rate, regulators can do the same for a fabricated medical entry, a hallucinated financial disclosure, or an AI-generated engineering specification. The Oregon tariff is not a legal curiosity. It is a prototype for cross-industry AI error pricing.

The Nippon Life case is the bridge. When AI-generated filings cause hundreds of thousands of dollars in response costs and the injured party sues for damages, the hallucination is no longer a professional ethics issue. It is a tort. And tort law scales.

The Bottom Line

The Q1 bill is a rounding error compared to what is coming. The Oregon tariff schedule, the Sixth Circuit’s published opinion, and California’s State Bar prosecutions are building blocks of a liability architecture that will define how every profession prices AI risk.

The judiciary is not anti-AI. More than six in ten surveyed judges report using it themselves. But they are demanding something the technology’s enthusiastic adopters have consistently failed to deliver: verification. The emerging rule is simple. You can use the machine. But when the machine lies, the bill is yours, and now there is a price list.

Anyone who has built a workflow around “generate and file” without a verification step is sitting on a liability that compounds with every unread output. The courts have published the rate. $500 per fabricated citation. $1,000 per fabricated quotation. And the meter is running.
