In recent years, AI has moved well beyond basic task completion into more emotionally engaging "companion" tools that simulate friendship, coaching, or other close social interactions. These AI companions are increasingly found in applications and consumer devices, and are designed to hold sustained, personal conversations—often for hours at a time—retaining context in order to adapt to users' preferences.
In response to concerns about safety, mental well-being and social impact, especially on young or otherwise vulnerable users, two U.S. states—New York and California—have now passed specific laws aimed at regulating AI companions, including defining "AI companions" and imposing a host of operational and compliance requirements.
This post provides an overview of these new laws and the requirements for companies that provide AI companions.
New York: AI Companion Models Law, effective November 5, 2025
1. What New York covers: companion-style AI, not basic chatbots
New York's AI Companion Models law is aimed at AI systems that behave like ongoing companions, not simple one-off support bots. In plain terms, it covers AI products that:
- Present themselves as a friend, partner, coach, mentor, or emotional support tool, and
- Are designed for regular, long-term conversations that can feel similar to chatting or texting with a person, and
- Adapt over time to a user's feelings, preferences, or vulnerabilities, encouraging users to share personal or emotional information.
If your AI product is built to form an ongoing, emotionally engaging relationship with users, New York is likely to treat it as a covered AI companion.
2. Required "AI, not human" disclosures
The New York law requires clear disclosures so people are not confused about whether they are talking to a human or an AI system. At a high level, providers of covered AI companions must:
- Give a clear, prominent notice at the start of use that the user is interacting with AI and not a human being.
- Provide periodic reminders during extended interactions—for example, a reminder at least every three hours of continued use—that the service is an AI system.
- Use plain, direct language that an ordinary user can understand (for example, "I'm an AI companion, not a human.").
These disclosures are meant to appear inside the experience itself, not only in small print or terms of service.
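For product teams building toward these requirements, the timing side of the disclosure rule can be tracked with very little state. A minimal sketch, assuming a three-hour reminder interval (the statute's example cadence) and hypothetical class and method names that do not come from the law itself:

```python
from datetime import datetime, timedelta

# Example wording only; actual notice text should be reviewed by counsel.
AI_DISCLOSURE = "I'm an AI companion, not a human."
REMINDER_INTERVAL = timedelta(hours=3)  # "at least every three hours" example

class DisclosureTracker:
    """Tracks when the 'AI, not human' notice was last shown in a session."""

    def __init__(self, session_start: datetime):
        # A clear notice is expected at the start of use, so we treat
        # session start as the first disclosure time.
        self.last_disclosure = session_start

    def due(self, now: datetime) -> bool:
        """True when a periodic reminder should be shown again."""
        return now - self.last_disclosure >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        """Record that the disclosure was just displayed."""
        self.last_disclosure = now
```

This only illustrates the cadence; the notice itself must still appear in-product, in plain language, per the requirements above.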
3. Safety, crisis, and vulnerable users
New York's law is also a safety rule for emotionally intense conversations. For covered AI companions, providers are expected to:
- Configure systems to recognize references to suicide, self-harm, or severe emotional distress.
- Follow a crisis-response protocol when those signals appear—for example, providing information about crisis hotlines, emergency services, and other trusted resources, and avoiding anything that could be read as encouraging self-harm or providing professional medical advice.
- Build in ways to steer users toward human help where appropriate (such as easy links or buttons to call a hotline or reach a human moderator or support team).
- Maintain internal processes to monitor and improve how the system handles high‑risk conversations over time.
The core idea is that AI companions must be designed with mental-health risk in mind, not only engagement or time‑on‑app.
4. Effective date, enforcement, and penalties
New York's AI Companion Models law took effect on November 5, 2025. It is now in force for covered AI companion products offered to users in New York.
The law is enforced by the New York Attorney General, who can investigate non-compliance, seek court orders to stop unlawful practices, and pursue civil penalties. Public budget materials indicate that civil penalties can reach up to $15,000 per day for ongoing violations. In practice, this means that repeated failures to provide clear disclosures or follow crisis-safety protocols can lead to fines that grow with each day of non-compliance, as well as requirements to fix your product and processes.
California: SB 243 (2025) Companion Chatbots Law, effective January 1, 2026
1. What California covers: "companion chatbots" with a focus on minors
California's SB 243 (2025) is also directed at companion-style chatbots, with a strong focus on protecting children and teens. In simple terms, it applies to platforms that:
- Offer companion chatbots—AI systems designed for social, human‑like interactions that may be marketed or experienced as friends, companions, or similar roles, and
- Know or reasonably should know that minors are using the service (for example, because of the way the product is marketed, designed, or actually used).
Basic, short‑form customer‑service bots that simply answer questions or handle transactions are generally not the primary target of SB 243. The law is most relevant where the product feels like a "friend," especially for younger users.
2. Required "AI, not human" disclosures
Under SB 243, if a reasonable user could think they are interacting with a human, the platform must clearly say otherwise. In particular, covered operators must:
- Provide a clear and conspicuous notice that the chatbot is artificially generated and not a human person.
- Use age‑appropriate language for minors so that children and teens can understand they are talking to AI.
These disclosures need to be visible in the product itself at or near the point where the user interacts with the companion chatbot, not only in small print or terms of service.
3. Safety, self‑harm, and youth protections
Like New York, California's companion chatbot law includes safety requirements around self‑harm and mental health, with additional protections for minors. SB 243 requires operators to:
- Maintain and publish a written protocol describing how the platform detects, prevents, and responds to suicide and self‑harm content.
- Use that protocol in practice—for example, by limiting the generation of self‑harm content, directing users to crisis resources, and avoiding content that could worsen risk.
- Implement youth‑specific safeguards, such as break reminders for minors, limits on exposure to sexually explicit material, and restrictions on chatbots presenting themselves as licensed health‑care professionals or giving the impression of being a real doctor or therapist.
The law is designed to reduce the chance that young users will be encouraged toward harmful behavior or misled into thinking the chatbot is a real clinician.
4. Effective dates, reporting, and penalties
SB 243 was signed into law in 2025, with obligations phasing in over time. In broad terms:
- Core requirements around disclosures and safety protocols apply when the law takes effect on January 1, 2026.
- Beginning in 2027, operators must report annually to California's Office of Suicide Prevention on how they detect, remove, and respond to suicide and self-harm content, and that office will publish certain information publicly.
SB 243 is enforceable under California law by state authorities, including the Attorney General. It plugs into California's consumer-protection framework, where civil penalties can reach up to $2,500 per violation under the Unfair Competition Law and may be assessed on a per-user or per-day basis depending on how a case is brought. Because companion chatbots may have many users, these amounts can add up quickly if problems are left unaddressed.
What These Laws Mean for Companion Bot Providers
Taken together, New York's AI Companion Models law and California's SB 243 show where state rules for companion‑style AI are heading. At a high level, they both expect providers to:
- Be transparent that users are talking to AI rather than a human, using clear, in‑product notices.
- Build safety into the product, with specific attention to suicide, self‑harm, and other high‑risk mental health issues.
- Pay special attention to minors, using age‑appropriate disclosures and stronger guardrails when children and teens are part of the audience.
GC provides outside general counsel services to companies of all sizes, offering project-based support, subject-matter expertise, and day-to-day GC services through a team of partner-level business attorneys. For more information visit: Outside General Counsel Corporate Legal Services.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.