You Can’t Spell Sanction without “A” and “I”: When Unchecked AI Hallucinations Result in Court Sanctions
Courts and bid protest tribunals are increasingly sanctioning government contractors who submit filings containing AI-generated errors, including fabricated cases, inaccurate quotations, and mischaracterized authorities. Recent decisions from the Government Accountability Office (GAO), the U.S. Court of Federal Claims (COFC), the Civilian Board of Contract Appeals (CBCA), and the Armed Services Board of Contract Appeals (ASBCA) make clear that while AI use is not always prohibited, lawyers and litigants remain responsible for verifying every citation and legal proposition before filing. Uncaught AI-generated "hallucinations" have resulted in sanctions, including admonishments, dismissal, stricken briefs, and disciplinary referrals.
New Cases in GAO, COFC, CBCA, ASBCA and the Fourth Circuit
In Oready, GAO dismissed three consolidated protests as an abuse of the bid protest process, concluding that the protester repeatedly relied on non-existent citations and fabricated decisions despite prior warnings from GAO that further irregularities could result in sanctions. B-423649 (Sept. 25, 2025). GAO emphasized that repeated citation errors undermine the integrity and effectiveness of its forum, and it showed no sympathy for Oready, which held itself out as a "small, non-attorney team using publicly available sources . . . under tight deadlines, without legal subscription databases." Similarly, in Raven Investigations & Security Consulting, GAO identified inconsistencies that bore the hallmarks of AI-assisted drafting, such as nonexistent cases and quotations that could not be traced to any GAO decision. B-423447 (May 7, 2025). GAO warned that AI-generated content used without human verification "wastes the time of all parties and GAO," violates the statutory mandate for "inexpensive and expeditious" resolution, and may lead to sanctions. GAO views this as a rapidly escalating problem and intends to use its inherent sanctions authority to prevent abuses.
COFC has taken a similarly firm position. In Sanders v. United States, the plaintiff, misled by AI-generated fake case law, relied on fabricated authorities to advance arguments. 176 Fed. Cl. 163, 168 (2025). The court emphasized that citing non-existent cases generated by AI constitutes an "unacceptable" "abuse of the adversary system." Id. at 168-70. COFC underscored that such filings will not excuse jurisdictional defects, nor will they be tolerated in litigation. Federal courts nationwide have begun imposing monetary sanctions, striking briefs, and referring attorneys to disciplinary authorities for similar AI-induced errors, reinforcing the need for contractors and their counsel to use AI cautiously and verify all outputs.
The CBCA likewise sanctioned a party who cited to non-existent authorities and quoted fabricated deposition testimony in Louis J. Blazy v. Department of State. CBCA 7992 (Feb. 24, 2026). Reaffirming a party’s duty of candor and applying Board Rule 35 (standards of conduct/sanctions), the CBCA issued a formal admonishment and warned that continued misconduct could lead to harsher measures, including dismissal. The Board also noted that while it does not prohibit AI, parties remain fully responsible for the accuracy of all submissions.
The ASBCA recently tackled unverified AI use in Huffman Construction, granting the Government's motion to strike the contractor's post-hearing reply brief and denying leave to file a revised brief after finding that over 70% of the brief's citations were inaccurate. ASBCA No. 62591 (Oct. 23, 2025). Counsel admitted to using AI, which led to fictitious case citations, record cites that did not support the propositions asserted, and non-existent transcript pages. As a tailored, deterrent sanction, the Board struck the reply brief in its entirety.
User Realities and Practical Applications
For government contractors, these developments present particular risks. Bid protests involve tight, regulation-driven deadlines that incentivize quick drafting, conditions that encourage AI use. GAO will not allow this time pressure to excuse citation irregularities. Beyond the risk of hallucinations, protesters who upload procurement-sensitive information to AI tools may violate court and GAO protective orders, risk inadvertent disclosure, and potentially jeopardize not only the protest but also the contractor's ability to pursue an opportunity. The combined effect is a clear expectation that contractors adopt structured, documented verification processes for any filing prepared with assistance from AI tools.
Internal policies should require human verification of every citation, prohibit inputting protected or proprietary information into unapproved AI tools, and document who verified each source. Outside counsel guidelines may require counsel to comply with local AI-related standing orders, certify verification of all citations, and ensure compliance with protective orders when using AI tools. Finally, contractors should consider implementing a "red team" cite-check process to independently confirm the validity of each cited authority before filing a court pleading.
Courts are articulating an increasingly uniform expectation: AI may assist, but humans must verify. Government contractors seeking AI-based efficiency must integrate verification into their workflows or risk dismissal, sanctions or reputational harm.
If your team is using or considering generative AI tools for drafting, research or communications, now is the time to implement robust verification and governance protocols. If you have any questions, please contact Jackson Moore, Noel Hudson or the Smith Anderson lawyer with whom you usually work.