California's "companion chatbot" bill (SB 243) regulates AI-powered companion or social chatbots, with the bill potentially impacting some chatbots not necessarily designed specifically for purely social interactions. Note that SB 243 includes a private right of action, so it will almost certainly be an area of intense scrutiny. SB 243 imposes transparency, safety and reporting obligations on operators of "companion chatbots," with enhanced protections for minors and users expressing suicidal ideation or thoughts of self-harm. Organizations deploying consumer-facing chatbots should review whether they are subject to SB 243 and, if so, plan to incorporate new safeguards and meet reporting requirements.
Notably, SB 243 impacts not just developers of chatbots but any person or entity that makes a covered chatbot available to users (operators).
Most deployment requirements take effect January 1, 2026, with annual reporting beginning July 1, 2027.
Is My Chatbot a "Companion Chatbot"?
SB 243 applies to "companion chatbots," defined as AI-powered natural language systems capable of providing adaptive, "human-like responses" and interacting "socially" with users, including through anthropomorphic features or the ability to sustain dialogues or a relationship across multiple interactions.
SB 243 focuses on systems designed for or capable of emotional or relational interactions with users. SB 243 exempts transactional or utility-focused bots: single-interaction chatbots such as customer service, technical support, and business operations chatbots fall outside the bill's scope. Additionally, stand-alone consumer devices that process voice commands, such as digital assistants and home management devices, are not covered. Note that for chatbots not meeting these exemptions, the chatbot's purpose is not the determining factor in whether it meets the definition of a companion chatbot. As AI becomes more sophisticated and its capabilities naturally expand, it seems likely that a greater percentage of chatbots will meet this definition.
What About Realistic Interactive or Humanoid Video Game Characters?
Non-player characters and bots within video games are excluded so long as they are not capable of discussing mental health or sexually explicit topics and do not maintain interactions beyond the scope of the game.
Do I Have To Change My Chatbot?
Notice and Transparency
Companion chatbot operators must provide clear and conspicuous notice that the user is interacting with an AI-powered system and not a human. Additionally, where applicable, operators must conspicuously disclose that their chatbot may not be suitable for some minors.
Safety Protocols
Operators must implement mandatory safety protocols designed to prevent outputs that could promote suicidal ideation, suicide, or self-harm. At a minimum, if a user expresses suicidal ideation or self-harm, such protocols must interrupt or restrict the chatbot's engagement with that content and provide a notification about, or redirection to, appropriate crisis resources, such as suicide prevention hotlines and crisis service providers. Details about an operator's safety protocols must be posted on the operator's website.
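For teams scoping implementation, one way to picture this requirement is an interception layer that checks each user message before the companion reply is returned and, when risk is detected, suppresses engagement with the topic and surfaces crisis resources instead. The sketch below is purely illustrative: the detect_self_harm_risk check, the message text, and all names are assumptions, and a production system would use vetted detection models and clinically reviewed response language rather than a keyword list.

```python
# Illustrative sketch of an SB 243-style safety interrupt layer.
# All function and field names here are assumptions for illustration only.

from dataclasses import dataclass

CRISIS_RESOURCE_MESSAGE = (
    "If you are thinking about suicide or self-harm, help is available. "
    "In the U.S., you can call or text 988 to reach the Suicide & Crisis Lifeline."
)

@dataclass
class ChatResponse:
    text: str
    interrupted: bool  # True when the normal companion reply was suppressed

def detect_self_harm_risk(user_message: str) -> bool:
    """Placeholder risk check; real systems would use a tuned classifier."""
    risk_terms = ("suicide", "kill myself", "self-harm", "hurt myself")
    lowered = user_message.lower()
    return any(term in lowered for term in risk_terms)

def respond(user_message: str, generate_reply) -> ChatResponse:
    """Interrupt the companion reply and redirect when risk is detected."""
    if detect_self_harm_risk(user_message):
        # Restrict further engagement with the topic and surface crisis resources.
        return ChatResponse(text=CRISIS_RESOURCE_MESSAGE, interrupted=True)
    return ChatResponse(text=generate_reply(user_message), interrupted=False)
```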
What if My Chatbot Might Interact with Minors?
When interacting with a known minor, additional safeguards apply. The chatbot must present clear, conspicuous alerts at least every three hours reminding the user that they are interacting with AI, not a human, and suggesting that the user take a break. The operator must also employ reasonable measures designed to prevent the chatbot from producing sexually explicit visual material or directly instructing a minor to engage in sexual conduct.
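As a rough illustration of the session-management side of this requirement, the sketch below tracks elapsed time for a known minor and surfaces a break reminder once three hours have passed. The MinorSession class, its method names, and the reminder wording are assumptions for illustration, not a prescribed design.

```python
# Illustrative sketch of a session timer for the "known minor" break reminders.
# The three-hour interval comes from SB 243; the session model is an assumption.

import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # at least every three hours

class MinorSession:
    def __init__(self, is_known_minor: bool):
        self.is_known_minor = is_known_minor
        self.last_reminder_at = time.monotonic()

    def maybe_build_reminder(self) -> str | None:
        """Return a break reminder if the interval has elapsed for a known minor."""
        if not self.is_known_minor:
            return None
        now = time.monotonic()
        if now - self.last_reminder_at >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder_at = now
            return (
                "Reminder: you are chatting with an AI, not a person. "
                "Consider taking a break."
            )
        return None
```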
What Do I Need to Report about My Companion Chatbot?
Effective July 1, 2027, operators must submit an annual report to the California Office of Suicide Prevention. The report must include:
- The number of times the operator issued a suicide or self-harm referral.
- A description of protocols implemented to detect, remove, and respond to users' suicidal ideation.
- An explanation of protocols used to prohibit the chatbot from responding to or engaging with users' suicidal ideation.
Practically speaking, the mandatory reporting requirements mean operators must implement logging and recordkeeping practices capable of accurately tallying referrals and documenting required safety responses. This will likely require discussions with chatbot designers and vendors, and potentially amendments to service agreements or SOWs.
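As one hypothetical way to structure that recordkeeping, the sketch below logs each crisis-resource referral with a timestamp and a protocol version, then tallies referrals for a reporting year. The table layout, field names, and the notion of a "referral event" are assumptions; the actual definitions, including what counts as a referral, should be aligned with counsel and vendors.

```python
# Illustrative sketch of referral logging to support the annual report.
# Schema and names are assumptions for illustration only; no user content
# or identity is stored in this example.

import sqlite3
from datetime import datetime, timezone

def init_store(path: str = "referrals.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS crisis_referrals (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               occurred_at TEXT NOT NULL,      -- UTC timestamp of the referral
               protocol_version TEXT NOT NULL  -- which safety protocol was applied
           )"""
    )
    return conn

def record_referral(conn: sqlite3.Connection, protocol_version: str) -> None:
    """Log a single crisis-resource referral event."""
    conn.execute(
        "INSERT INTO crisis_referrals (occurred_at, protocol_version) VALUES (?, ?)",
        (datetime.now(timezone.utc).isoformat(), protocol_version),
    )
    conn.commit()

def annual_referral_count(conn: sqlite3.Connection, year: int) -> int:
    """Tally referrals for a given reporting year."""
    row = conn.execute(
        "SELECT COUNT(*) FROM crisis_referrals WHERE occurred_at LIKE ?",
        (f"{year}-%",),
    ).fetchone()
    return row[0]
```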
What Should I Do Now to Comply with SB 243?
Applicability
Organizations should conduct an applicability assessment to determine whether any consumer-facing chatbots may qualify as companion chatbots under the statute's relational or social engagement criteria.
Vendor Management
Recall that the definition of an "operator" includes any entity that deploys a covered chatbot. Enterprises deploying third-party-developed or -managed chatbots should revisit vendor contracts to require compliance with SB 243, ensure the ability to monitor and report interactions, and ensure transparency into safety protocols.
Disclosures
If applicable, update user-facing disclosures to provide clear, conspicuous AI identification and minor suitability statements.
Safety Protocols and Reporting
Design, test, and formalize safety protocols intended to detect, restrict, and redirect content involving suicidal ideation or self-harm. Remember that safety protocol details should be publication-ready for posting on the operator's website. Product and legal teams should also align on recordkeeping to support the July 1, 2027, reporting obligations, including standardized definitions of "referral," metrics collection, and audit trails.
Potential Interactions with Known Minors
Confirm the ability to implement session management features that trigger the three-hour reminders and ensure reasonable measures are in place to prevent sexually explicit visual content or any instruction to engage in sexual conduct.
What Happens if My Chatbot Violates SB 243?
SB 243 authorizes private rights of action for harms, with remedies that include injunctive relief and civil penalties of up to $1,000 per violation, plus attorneys' fees. Given the per-violation structure and the availability of private enforcement, operators face potential exposure to individual suits and aggregated litigation risk.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.