Client Alerts
Avoiding False Claims Act Landmines in AI-Assisted Coding and Medical Billing
June 2025
By Michael J. Ruttinger
Computer-assisted coding engines, many of which now leverage generative AI, are transforming hospital billing and claim-reimbursement processes. Instead of just flagging missing modifiers, software can now read clinical notes, suggest codes, and even draft the bill itself.
This efficiency comes with an equally modern legal risk. If an algorithm nudges a coder toward a higher-paying code or applies a one-size-fits-all rule across thousands of patient encounters, every resulting claim raises the specter of a False Claims Act (“FCA”) violation. Recent settlements, like the $23 million University of Colorado Health (“UCHealth”) settlement discussed below, make clear that the government may follow the electronic audit trail back to the hospital’s automated coding algorithm. From there, the FCA’s treble damages and per-claim civil penalties can snowball quickly, to say nothing of follow-on actions by state attorneys general or consumer-fraud-oriented class actions.
This alert provides a roadmap for helping hospitals harness next-generation coding tools without stepping on an FCA landmine.
FCA Scrutiny and Potential Enforcement Triggers
Every Medicare or Medicaid claim must (1) reflect the services furnished to a patient; (2) have contemporaneous documentation; and (3) not be “knowingly” false, a term the FCA defines broadly to include not just actual knowledge, but also deliberate ignorance or reckless disregard of the truth or falsity of the information. See 31 U.S.C. § 3729(b)(1)(A). In other words, hospital systems are legally responsible for confirming that each claim is clinically warranted and properly documented. Hospitals should therefore exercise caution when outsourcing parts of the coding and claim-verification processes to automated systems, like generative AI tools.
The enforcement ecosystem is three-tiered—and often overlapping. Typically, the federal Department of Justice (“DOJ”) leads the charge by intervening in whistle-blower suits or launching independent investigations under the FCA. State attorneys general may pursue parallel claims under state-level FCA equivalents, frequently coordinating with—but not dependent on—the DOJ. Meanwhile, private state-law consumer-fraud class actions may attack the same conduct from a different angle, alleging unjust enrichment or deceptive-practice violations.
In one high-profile example, the DOJ intervened in a case against UCHealth over an automated coding rule that allegedly “upcoded” emergency department encounters to the highest emergency-department CPT® code (CPT 99285) based on the frequency with which hospital personnel checked a patient’s vitals. Data analysis by the Centers for Medicare & Medicaid Services (“CMS”) identified UCHealth as a “high outlier” for its use of CPT 99285, and the government traced the issue back to the hospital’s automated rule. DOJ then took the position that UCHealth’s coding rule did not meet the requirements of the code description, leading to a $23 million settlement.
The UCHealth settlement offers a stark warning: hospital systems can be held accountable for their coding algorithms. And as AI-assisted coding becomes more prevalent, the risks may multiply. AI-assisted algorithms can enhance efficiency, but any errors within the algorithm can also systematize improper coding at scale. If an algorithm deviates from official guidelines (e.g., always billing high-level codes based on proxy metrics), regulators may consider such systematized practices evidence of knowing upcoding, bringing the FCA’s treble-damages and per-claim penalty provisions into play. The bottom line, which CMS has stated in recent guidance, is that accountability for coding accuracy does not change just because a hospital system uses AI.
Selecting a “trusted” vendor of AI coding technology is not necessarily a safety valve. Many companies that promise efficient and accurate AI-assisted coding software treat their algorithms as a “black box,” stymieing attempts to understand or validate the vendors’ code suggestions. And the vendors themselves are drawing scrutiny. An investigation by the Texas Attorney General into an artificial intelligence healthcare technology company, Pieces Technologies, ended in a settlement after the Attorney General concluded that the metrics the company had developed to demonstrate its AI products’ accuracy and reliability were false and misleading. While that investigation was consumer-fraud-oriented rather than an FCA case, it showcases the growing trend toward scrutiny of AI vendors.
Three Landmines in AI-Assisted Coding
While potential problem areas with AI-assisted coding continue to emerge, at least three warrant immediate attention.
- Set-and-Forget Rules – These include coding rules or algorithms that apply one-size-fits-all logic, like UCHealth’s vitals-check trigger. They may inflate codes at scale and leave an electronic audit trail.
- Generative AI Drafting Codes from Notes – Large language models can hallucinate diagnoses (i.e., fabricate medical information) or misread negations (e.g., “no evidence of pneumonia”). If a clinician blindly adopts and submits the AI’s suggestion, each error becomes a potential false claim, implicating the FCA’s “reckless disregard” standard.
- Black-Box Algorithms – Tools that promise “missed-revenue recovery” but conceal their logic can overwhelm coders with easy “add diagnosis” prompts. This may nudge users to accept codes on the promise of improved revenue without knowing whether each code has full documentary support, a classic setup for FCA exposure.
Hospital systems should remain wary of these common risks and implement practices to catch early warning signs before errors can multiply at scale.
Best Practices for Adopting AI-Assisted Coding While Dodging FCA Trouble
The benefits of AI-assisted coding are too promising to ignore. Thankfully, hospital systems can lean into this new technology while minimizing risks with a few proactive steps.
- Build Contract Guardrails – When negotiating with vendors, consider asking them to disclose coding logic or provide explainability dashboards. Try to negotiate indemnity, audit rights, and error-mitigation clauses into the contract itself.
- Test Algorithms Before Going Live – Consider running AI-assisted algorithms against appropriately redacted historical, or even synthetic, data and comparing the outputs to clinician-validated codes (a minimal shadow-testing sketch follows this list).
- Keep a Human in the Loop – For higher-severity codes, consider requiring coder sign-off that the AI suggestions match patient documentation. It may also be worth using natural-language processing to flag potential negations in the patient’s record (e.g., “rule out,” “history of”) before bills drop; a simple negation-screen sketch follows this list. And pay close attention to feedback from frontline staff, particularly if they raise accuracy concerns that stall in the IT queue.
- Establish an “Algorithm Governance Committee” – This committee should regularly review coding outliers, which can be benchmarked against CMS’s own analytics (an illustrative outlier screen follows this list). It may include clinical leads as well as personnel from Health Information Management (HIM) and Information Technology (IT).
- Pressure-Test Controls with a Table-Top Simulation – Conduct a facilitated table-top exercise to walk legal, compliance, HIM, IT, and clinical leaders through a realistic AI-driven upcoding scenario in real time. Stepping through detection, stop-bill, investigation, disclosure, and remediation decisions—without the pressure of a live incident—helps test roles, uncover policy or contract gaps, validate communication channels, and establish responsibilities for when a real issue develops.
- Audit, Disclose, and Remediate – Voluntary compliance helps minimize the risk of severe government intervention. Hospital systems that sample claims, quantify exposure, and self-disclose where warranted are less likely to face severe penalties. Where FCA litigation arises, DOJ may reward prompt remediation with reduced multipliers.
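To make a few of these practices concrete, the short Python sketches below illustrate how they might look in code. First, shadow testing: this sketch compares a vendor engine’s suggested codes against clinician-validated codes on a redacted historical sample and reports the agreement rate and the share of upward deviations. The record format and all sample values are hypothetical, and any real test should be designed with compliance counsel and run on appropriately de-identified data.

```python
# Minimal shadow-test sketch: compare AI-suggested E/M codes against
# clinician-validated codes on redacted historical encounters.
# The record format and all sample values are hypothetical.

# Each record pairs a clinician-validated E/M code with the code the
# vendor engine suggested for the same encounter.
encounters = [
    {"id": "enc-001", "validated": "99283", "suggested": "99283"},
    {"id": "enc-002", "validated": "99284", "suggested": "99285"},  # upward deviation
    {"id": "enc-003", "validated": "99285", "suggested": "99285"},
    {"id": "enc-004", "validated": "99282", "suggested": "99284"},  # upward deviation
]

def shadow_test(records):
    """Return (agreement rate, share of upward deviations)."""
    total = len(records)
    agree = sum(1 for r in records if r["suggested"] == r["validated"])
    # ED E/M codes 99281-99285 rise in severity with the final digit, so a
    # lexically greater suggestion (same-length numeric strings) is an
    # upward deviation.
    upcoded = sum(1 for r in records if r["suggested"] > r["validated"])
    return agree / total, upcoded / total

agreement, upcode_rate = shadow_test(encounters)
print(f"Agreement: {agreement:.0%}; upward deviations: {upcode_rate:.0%}")
```

A high upward-deviation rate before go-live is exactly the kind of signal that warrants escalation rather than deployment.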
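Second, the negation screen mentioned in the human-in-the-loop practice above. A full clinical NLP pipeline is beyond the scope of this alert, but even a simple pattern screen conveys the idea: hold for human review any AI-suggested code whose supporting note contains a negation or uncertainty cue near the billed diagnosis. The cue list and character window below are illustrative assumptions; production systems typically rely on dedicated clinical NLP (e.g., NegEx-style algorithms) rather than raw keyword matching.

```python
import re

# Illustrative negation/uncertainty cues; a real system would use a
# vetted clinical lexicon rather than this short sample list.
NEGATION_CUES = [
    r"no evidence of", r"rule out", r"ruled out", r"history of",
    r"denies", r"negative for", r"without",
]

def flag_for_review(note: str, diagnosis_term: str, window: int = 60) -> bool:
    """Return True if a negation cue appears within `window` characters
    before the diagnosis term, suggesting the code needs coder sign-off."""
    for match in re.finditer(re.escape(diagnosis_term), note, re.IGNORECASE):
        preceding = note[max(0, match.start() - window):match.start()]
        if any(re.search(cue, preceding, re.IGNORECASE) for cue in NEGATION_CUES):
            return True
    return False

note = "Chest x-ray reviewed. No evidence of pneumonia. Patient stable."
print(flag_for_review(note, "pneumonia"))  # True -> hold the bill for review
```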
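Third, outlier benchmarking for the governance committee. This sketch flags a facility whose share of the highest-severity emergency department code sits well above a peer benchmark, echoing the data analysis CMS used to identify UCHealth as a high outlier. All rates and the review threshold are invented for illustration; a real review would draw on CMS comparative billing data or equivalent claims analytics.

```python
from statistics import mean, stdev

# Hypothetical peer rates: share of ED visits billed at CPT 99285.
peer_rates = [0.18, 0.21, 0.19, 0.23, 0.20, 0.22, 0.17, 0.24]
our_rate = 0.41  # this facility's 99285 share (also hypothetical)

mu, sigma = mean(peer_rates), stdev(peer_rates)
z_score = (our_rate - mu) / sigma

# A committee might set a review trigger at, say, two standard deviations
# above peers (an assumption, not a regulatory standard) and open a
# documentation audit whenever it trips.
if z_score > 2:
    print(f"99285 share is {z_score:.1f} SDs above peers; open a coding audit")
```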
Bottom Line
AI-assisted coding offers a huge upside, but only when hospitals and health plans apply old-school compliance discipline to this new technology. Keep humans in the loop and be ready to prove that your software improves accuracy, not just reimbursement rates. Recent settlements show that DOJ will follow the audit trail wherever the algorithm leads, and that the cost of stepping on a digital landmine now runs well into eight figures. A thoughtful governance program today is far cheaper than an FCA case tomorrow.
For more information, or for assistance with designing a table-top exercise to “pressure test” your AI-assisted coding controls, please contact Michael Ruttinger.
ADDITIONAL INFORMATION
For more information, please contact:
- Michael J. Ruttinger | 216.696.4456 | michael.ruttinger@tuckerellis.com
This Client Alert has been prepared by Tucker Ellis LLP for the use of our clients. Although prepared by professionals, it should not be used as a substitute for legal counseling in specific situations. Readers should not act upon the information contained herein without professional guidance.


