
Courts Fined Lawyers $145,000 for Fake AI Citations in 90 Days

U.S. courts sanctioned lawyers at least $145,000 for AI-generated citation errors in the first quarter of 2026 alone. Oregon set per-infraction rates: $500 per fake citation and $1,000 per fabricated quotation. Meanwhile, a Northwestern University survey found that 61.6% of federal judges use AI themselves. The judiciary is pricing AI errors in real time, and its framework will extend far beyond the courtroom.


[Image: a giant brass scale tipped sharply to one side, its heavy pan overflowing with rolled legal documents and glowing AI-generated text, a single gavel resting on the lighter pan]

Key Takeaways

  • $145,000 in 90 days: U.S. courts sanctioned lawyers at least $145,000 for AI-fabricated citations in Q1 2026
  • Per-infraction pricing is here: Oregon built a tariff schedule for AI errors at $500 per fake citation and $1,000 per fabricated quotation
  • The judicial paradox: A Northwestern University survey found 61.6% of responding federal judges use AI in their own work, yet the bench is punishing lawyers who use it carelessly
  • This extends far beyond law: The per-error pricing framework courts are building will become the template for AI liability across medicine, finance, and engineering

The Bill Came Due

The first quarter of 2026 is over, and the American judiciary has mailed its invoice for AI negligence: at least $145,000 in sanctions against attorneys who let generative AI write their legal briefs and never bothered to check whether the cases it cited actually existed.

That figure, documented across federal and state courts, marks a sharp escalation from the embarrassing headlines that began when a New York attorney named Steven Schwartz became the first lawyer sanctioned for AI-fabricated citations in 2023. What was once a curiosity is now a pattern. Courts are not just punishing individual bad actors. They are building something far more significant: a pricing framework for AI errors.

One Oregon attorney accumulated $109,700 in combined sanctions and costs across multiple filings in the U.S. District Court for the District of Oregon, the largest aggregate penalty tied to a single attorney’s AI-related misconduct. In the Sixth Circuit, two Tennessee lawyers were hit with $15,000 each after submitting briefs containing over two dozen citations that were incorrect, misrepresented, or simply made up.

The raw dollar figures are not the story. The story is the structure underneath them.

The Tariff Schedule

In December 2025, the Oregon Court of Appeals published what appears to be the first per-infraction rate card for AI-fabricated legal citations.

Portland attorney Gabriel A. Watson was sanctioned $2,000 after the court found he had submitted two fabricated citations and one fabricated quotation. The math was explicit: $500 per fabricated citation, $1,000 per fabricated quotation. This was not a lump-sum penalty arrived at by judicial discretion. It was a tariff.

Other Oregon courts arrived at strikingly similar math. In Couvrette v. Wisnovsky, a case in the U.S. District Court for the District of Oregon, an attorney submitted three summary judgment briefs over five months containing 15 AI-generated fake citations and 8 fabricated quotations. The sanction: $15,500, or exactly $500 per fake citation and $1,000 per fabricated quotation. Back in the state Court of Appeals, Salem civil attorney William Ghiorso submitted at least 15 false citations and 9 fabricated quotations the court found “contrived from thin air.” His penalty came in at $10,000, below the $16,500 minimum implied by the Watson formula, because he demonstrated verified procedural improvements.
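The per-infraction arithmetic reported in these opinions can be sketched in a few lines. This is purely illustrative: the function name and structure are my own, and the rates are the ones described above, not an official schedule published by any court.

```python
# Assumed per-infraction rates from the Oregon decisions described above.
RATE_FAKE_CITATION = 500      # dollars per fabricated citation
RATE_FABRICATED_QUOTE = 1_000  # dollars per fabricated quotation

def watson_sanction(fake_citations: int, fabricated_quotes: int) -> int:
    """Sanction implied by the per-infraction schedule ("Watson formula")."""
    return (fake_citations * RATE_FAKE_CITATION
            + fabricated_quotes * RATE_FABRICATED_QUOTE)

print(watson_sanction(2, 1))    # Watson: 2 citations, 1 quotation -> 2000
print(watson_sanction(15, 8))   # Couvrette: 15 citations, 8 quotations -> 15500
print(watson_sanction(15, 9))   # Ghiorso minimum -> 16500 (reduced to $10,000)
```

The three reported sanctions line up with the same two rates, which is what makes the pattern look like a schedule rather than case-by-case discretion.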

Read that last sentence again. A court reduced a sanction because the lawyer proved he had changed his workflow. This is not ad hoc punishment. This is a regulatory framework being assembled in real time.

The Case Files

The Q1 2026 sanctions span at least four federal circuits and multiple state courts. The cases share a common pattern: a lawyer uses ChatGPT or a similar LLM for legal research, the model generates plausible-sounding but nonexistent case citations, the lawyer files the brief without verification, and opposing counsel or the court discovers the fabrications.

Whiting v. City of Athens (Sixth Circuit, No. 25-5425): Tennessee attorneys Van R. Irion and Russ Egli submitted briefs in a consolidated appeal over a 2022 fireworks show in Athens, Tennessee. Circuit Judge John K. Bush found more than two dozen citations that were wrong, misleading, or nonexistent and ordered $15,000 in fines per attorney, plus joint coverage of the appellees’ full attorney fees and double costs. The opinion was recommended for publication. Irion had already received a five-year suspension from the Eastern District of Tennessee in August 2025 for a prior lack of candor with the court.

Greg Lake (Nebraska): Lake submitted a brief in which 57 of 63 citations were erroneous. The court recommended a temporary suspension of his license. A 90% error rate raises serious questions about whether the brief was reviewed at all before filing.

Nippon Life v. OpenAI (Northern District of Illinois): Filed in March 2026, this case represents a new frontier. Graciela Dela Torre filed 21 motions, one subpoena, and eight notices using ChatGPT after being told a settled case could not be reopened. The defendants claimed approximately $300,000 in attorney fees spent responding to AI-generated filings. This is not a sanction case. It is a tort case. The cost of AI hallucinations is being litigated as a standalone injury.

On April 13, 2026, California’s State Bar announced disciplinary charges against attorneys Omid Emile Khalifeh and Steven Thomas Romeyn, and approved a disciplinary stipulation for Sepideh Ardestani involving one year of probation and a 30-day license suspension. Khalifeh faces six counts tied to a federal trademark filing containing nonexistent and irrelevant citations. Romeyn acknowledged he failed to verify citations in an Orange County brief.

The pattern is the same every time. The tool generates confident, grammatically correct fabrications. The lawyer trusts it. The court does not. AI hallucinations remain a structural problem baked into how these models work, which makes the verification burden permanent.

The 61.6% Paradox

Here is where it gets uncomfortable for the judiciary.

A Northwestern University survey published in the Sedona Conference Journal in March 2026 sampled 502 federal judges and received 112 responses. The result: 61.6% of responding federal judges reported using at least one AI tool in their judicial work. Of those, 30% used AI for legal research and 15.5% for document review.

At the same time, 45.5% of responding judges said their court had provided no AI training whatsoever.

The paradox is structural, not hypocritical. Judges who use AI for research drafts have clerks to verify outputs. Solo practitioners and small-firm attorneys, the lawyers most likely to be sanctioned, often do not. The technology is the same; the verification infrastructure is not.

Over 300 federal and state judges have now adopted AI disclosure requirements, mandating that lawyers certify whether AI was used in the preparation of filings. This is not an AI ban. It is a liability assignment. Courts are telling lawyers: you may use the tool, but you own every word it produces.

Carla Wale, associate dean at the University of Washington School of Law, summarized the current state: “I don’t think there is a consensus beyond, ‘You have to make sure it’s correct.’”

That non-consensus is about to get very expensive.

The 1970s Parallel

The legal profession has been here before, not with AI, but with medicine.

In the 1960s, a surge of medical malpractice claims flooded American courts, driven by a combination of new and more complex treatments, changes in legal doctrine that removed historic barriers to lawsuits, and the abandonment of charitable immunity for hospitals. Courts discarded the “locality rule” that had previously required expert witnesses to practice in the same community as the defendant, opening the floodgates for claims.

The result was the medical malpractice crisis of the 1970s: insurance premiums spiked as much as 750%, driving some practitioners out of medicine entirely. The legal system built a pricing framework for medical errors, with damages by injury type, caps by jurisdiction, and mandatory insurance minimums, that reshaped the entire profession.

AI hallucination sanctions are following the same arc. A new technology (generative AI, like new surgical techniques in the 1960s) creates a new category of professional error (fabricated citations, like surgical complications). Courts respond with pricing (the Oregon tariff, like malpractice damage schedules). Insurance markets adjust. And the cost structure of the profession permanently changes.

The difference is speed. Medical malpractice law took 20 years to mature. AI hallucination law is assembling in months. Damien Charlotin, a researcher at HEC Paris Smart Law Hub, maintains what may be the most comprehensive database of AI hallucination cases worldwide. According to NPR’s reporting, courts have reached a point where “ten separate courts flagged AI-fabricated filings on a single day.”

Beyond the Courtroom

The legal profession is the canary, the industry where AI errors now carry per-unit, court-enforced pricing. It will not be the last.

Consider the downstream implications:

Legal malpractice insurance: Insurers are watching the Oregon tariff and the Sixth Circuit precedent. Premium adjustments for AI-using attorneys are inevitable. Solo practitioners, the same lawyers who cannot absorb six-figure sanctions, will face the steepest increases. The AI uninsurability problem extends well beyond autonomous agents.

Access to justice: AI was supposed to democratize legal help by drafting basic documents and reducing the cost of simple legal matters for people who cannot afford a lawyer. The sanctions regime creates a paradox: the technology that could help underserved clients is the same technology that produces sanctionable errors when used without proper verification.

Other professions: If courts can price a fake legal citation at a fixed per-infraction rate, regulators can do the same for a fabricated medical entry, a hallucinated financial disclosure, or an AI-generated engineering specification. The Oregon tariff is not a legal curiosity. It is a prototype for cross-industry AI error pricing.

The Nippon Life case is the bridge. When AI-generated filings cause hundreds of thousands of dollars in response costs and the injured party sues for damages, the hallucination is no longer a professional ethics issue. It is a tort. And tort law scales.

The Bottom Line

The Q1 bill is a rounding error compared to what is coming. The Oregon tariff schedule, the Sixth Circuit’s published opinion, and California’s State Bar prosecutions are building blocks of a liability architecture that will define how every profession prices AI risk.

The judiciary is not anti-AI. More than six in ten judges use it themselves. But they are demanding something the technology’s enthusiastic adopters have consistently failed to deliver: verification. The emerging rule is simple. You can use the machine. But when the machine lies, the bill is yours, and now there is a price list.

Anyone who has built a workflow around “generate and file” without a verification step is sitting on a liability that compounds with every unread output. The courts have published the rate. $500 per fabricated citation. $1,000 per fabricated quotation. And the meter is running.

