AI in Healthcare Faces New Guardrails Under California’s AB 489

Alert
By Hunter Bruton, Darrell Fruth, John Gibson and Josiah Irvin

New Restrictions Target Misleading AI Practices

If your organization uses AI to support patient engagement, user interfaces or health-related marketing, California’s new law may require significant changes to how your tools look, function and communicate with users.

California recently enacted Assembly Bill 489 (AB 489) to regulate the use of AI in healthcare settings. Effective January 1, 2026, the law targets AI systems that simulate interactions with licensed medical professionals and adds new requirements to prevent misleading representations.

What AB 489 Prohibits

Under AB 489, AI platforms are prohibited from using post-nominal letters, icons, phrases or other design elements that imply the user is receiving care from a licensed healthcare provider unless a licensed provider is actually involved. The prohibition covers both direct claims and subtle cues that could mislead consumers into believing they are interacting with a licensed professional.

AB 489 also restricts marketing language that suggests clinical expertise, such as “doctor-level,” “clinician-guided” or “expert-backed,” unless the product is genuinely supported by licensed professionals. Developers must revise user interfaces and disclosures to ensure transparency and compliance.

Enforcement and Penalties

Remedies under the law include civil penalties and enforcement actions by licensing boards. Companies found in violation may also face reputational harm and resulting financial consequences. These risks should encourage developers to build legal review and licensed oversight into product design from the outset.

The law also expands the authority of state professional licensing boards to investigate and enforce violations. Each misleading representation may be treated as a separate offense, increasing potential liability for developers and companies deploying such technologies.

AB 489 reflects California’s broader commitment to ensuring that clinical decisions remain under the authority of licensed professionals and that AI tools operate transparently and ethically in healthcare environments.

Trends in Other States

California is not alone in its efforts. Other states are advancing legislation to restrict how AI tools interact with consumers in healthcare or wellness settings:

  • Illinois HB 1806 prohibits AI chatbots from engaging in therapeutic communications and sets standards for licensed professionals using AI in therapy.
  • Maine HP 1154 restricts AI that misleads consumers into believing they are interacting with a human being.
  • North Carolina SB 623 proposes regulating chatbot behavior and creating a licensing regime for certain “health information” chatbots.
  • Virginia legislators plan to reintroduce legislation to regulate AI chatbots, potentially including a private right of action related to minors and companion-style bots.

Additional nationwide reforms address the use of AI in clinical care, payor utilization review, and related aspects of the healthcare delivery continuum, a sign that legislative scrutiny is unlikely to slow down in the healthcare sector.

The Bigger Picture

These emerging AI laws reinforce well-established rules on patient marketing and sanctions for licensee misrepresentation. For instance, the North Carolina Medical Board makes clear that advertising or publicity that is deceptive, false or misleading constitutes unprofessional conduct under the Medical Practice Act, and physicians can be subject to professional sanction for such conduct. While North Carolina has no current plans to adopt new restrictions specifically addressing the type of conduct at issue in AB 489, regulators could use existing rules to sanction such behavior.

Next Steps for Companies Using AI in Healthcare

Organizations that develop or use AI tools in healthcare, wellness or patient-facing environments should act now to mitigate regulatory risk and prepare for a shifting landscape. Key steps include:

  • Conduct a comprehensive audit of marketing language, user interfaces, icons, titles and design elements that could imply medical expertise or licensed oversight.
  • Update disclosures and user communications to reflect accurate descriptions of the tool’s capabilities and limitations.
  • Confirm whether your AI features involve licensed professionals, and ensure any claims of professional involvement are verifiable and documented.
  • Build legal review into product development, especially for AI tools used in clinical decision support, patient engagement or health-related marketing.
  • Monitor emerging state laws, as multistate compliance is rapidly becoming necessary.

If you have any questions regarding this alert or how AB 489 and related state laws may affect your organization’s use of AI, please contact your regular Smith Anderson attorney, Hunter Bruton, Darrell Fruth, John Gibson or Josiah Irvin.
