SB 243 targets AI “companion” chatbots—systems designed to converse in a human-like way to meet social or emotional needs (as distinct from basic customer-service bots). The bill requires platform operators to build and prove in-product safeguards aimed at reducing addiction-like engagement, clarifying the bot’s non-human nature, and responding appropriately when users express self-harm or suicidal ideation. It has cleared the California Senate and advanced in the Assembly; if the governor signs it, the law would take effect January 1, 2026.
The core obligations (and why they’re significant)
1) Anti-addiction design rules.
Operators must take reasonable steps to prevent chatbots from using unpredictable rewards (think variable-ratio reinforcement loops) or otherwise encouraging increased engagement/response rates. This is a direct attempt to regulate engagement-maximizing patterns common in persuasive tech. Designers will need to audit reward schedules, streaks, and gamified mechanics, and document changes. Expect a shift from “time-on-app” KPIs toward well-being metrics.
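To make the design shift concrete, here's a minimal Python sketch contrasting the variable-ratio reward loop the bill targets with a bounded, documented alternative; the cadence, session cap, and function names are hypothetical illustrations, not values from the bill or any product.

```python
import random
from datetime import timedelta

SESSION_CAP = timedelta(minutes=45)   # hypothetical well-being limit
REWARD_EVERY_N_TURNS = 20             # fixed, predictable cadence

def should_grant_reward(turn_count: int, variable_ratio: bool = False) -> bool:
    """Decide whether to surface a reward or celebration message.

    A variable-ratio schedule (random chance per turn) is the slot-machine
    pattern SB 243 targets; a fixed, documented cadence is easier to audit.
    """
    if variable_ratio:
        # Engagement-maximizing pattern: unpredictable reinforcement.
        return random.random() < 0.05
    # Bounded alternative: predictable, logged, and capped.
    return turn_count > 0 and turn_count % REWARD_EVERY_N_TURNS == 0

def should_prompt_break(session_length: timedelta) -> bool:
    """Well-being nudge once a session exceeds the documented cap."""
    return session_length >= SESSION_CAP
```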
2) “This is a bot” reminders.
Platforms must periodically disclose that the agent is artificial—not a human. This codifies transparency nudges that many products only apply once at onboarding. It will require UI surface area (labels, interstitials, or recurring banners) and measurement of reminder frequency. For voice companions, designers may need spoken disclosures.
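For the reminder cadence itself, the logic can be as simple as the sketch below; the turn and time thresholds are placeholders chosen for illustration, not frequencies specified in the bill.

```python
from datetime import datetime, timedelta

DISCLOSE_EVERY_N_TURNS = 25              # placeholder cadence
DISCLOSE_EVERY = timedelta(minutes=30)   # placeholder cadence

def needs_ai_disclosure(turns_since_disclosure: int,
                        last_disclosure_at: datetime,
                        now: datetime) -> bool:
    """Return True when the 'you are talking to an AI' reminder is due."""
    return (turns_since_disclosure >= DISCLOSE_EVERY_N_TURNS
            or now - last_disclosure_at >= DISCLOSE_EVERY)
```

Logging each disclosure event alongside this check is what makes reminder frequency measurable for audits.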
3) Crisis-response protocols (and public posting).
Operators can't let a chatbot engage users without a protocol for suicidal ideation and self-harm: the bot must notify the user and point them to crisis services (e.g., hotlines), and the protocol itself must be published on the operator's website. In practice, teams will need real-time detection pipelines, escalation playbooks, human-in-the-loop policies, and a visible public page describing them.
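A minimal sketch of the layered check behind such a protocol might look like the following; the patterns, classifier threshold, and crisis wording are placeholders (988 is the real US Suicide & Crisis Lifeline), and any production version would need clinically reviewed logic and geo-appropriate resources.

```python
import re
from dataclasses import dataclass

CRISIS_PATTERNS = re.compile(r"\b(kill myself|end my life|suicide)\b", re.I)
CRISIS_MESSAGE = (
    "I'm really concerned about what you just shared. I'm an AI and can't "
    "provide crisis support, but trained people can: call or text 988 "
    "(US Suicide & Crisis Lifeline) to talk with someone now."
)

@dataclass
class SafetyDecision:
    escalate: bool           # route to human review, pause normal replies
    response: str | None     # crisis message shown instead of model output

def check_message(user_text: str, classifier_score: float) -> SafetyDecision:
    """Layer 1: fast pattern check. Layer 2: self-harm classifier score (0-1)."""
    if CRISIS_PATTERNS.search(user_text) or classifier_score >= 0.85:
        return SafetyDecision(escalate=True, response=CRISIS_MESSAGE)
    return SafetyDecision(escalate=False, response=None)
```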
4) Annual reporting to the state.
Operators must report to California’s Office of Suicide Prevention metrics such as how many times the system detected suicidal ideation and how often the bot itself surfaced the topic. This creates a compliance data pipeline and a regulatory dataset that will likely be publicized, shaping press, investor questions, and competitive benchmarks.
5) Independent audits and public summaries.
Platforms face regular third-party audits of SB 243 compliance, and must publish a high-level summary of audit findings. This is a notable step toward assurance regimes in consumer AI—comparable in spirit to SOC 2 in SaaS but focused on safety behaviors and UX patterns rather than controls alone. Early movers that build robust audit artifacts could differentiate on trust.
6) A private right of action.
The bill authorizes civil lawsuits by individuals who suffer an “injury in fact” due to noncompliance. That changes risk calculus: beyond AG or regulator enforcement, plaintiffs’ attorneys may test theories around negligent design, inadequate disclosures, or failures in crisis response. Expect pressure on incident documentation, model logs, and red-teaming records.
Strategic implications for AI companies
Product & model design will need “safety telemetry” by default.
Meeting detection, disclosure, and reporting duties implies instrumenting the stack: classification of self-harm signals; counters for disclosure cadence; flags for engagement-pattern triggers; and retention/aggregation logic for annual reports. Teams will need to translate policy text into testable acceptance criteria (e.g., “no variable-ratio reward loops that increase session length by X% without safeguards”).
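One way to translate that into engineering is a shared safety-event schema that every surface writes to, so the annual-report numbers fall out of routine logging; the field names and event kinds below are assumptions made for illustration, not definitions from SB 243.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SafetyEvent:
    # Hypothetical kinds: "self_harm_detected", "ai_disclosure_shown",
    # "engagement_guardrail_triggered", ...
    kind: str
    session_id: str
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    detail: dict = field(default_factory=dict)

class SafetyTelemetry:
    """Accumulates safety events so compliance counts are a query, not a scramble."""

    def __init__(self) -> None:
        self.events: list[SafetyEvent] = []

    def record(self, event: SafetyEvent) -> None:
        # In production: durable, access-controlled storage with retention rules.
        self.events.append(event)

    def count(self, kind: str) -> int:
        return sum(1 for e in self.events if e.kind == kind)
```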
Governance moves from policy PDFs to verifiable proof.
Because SB 243 asks for audits and public summaries, companies must maintain evidence: evaluation datasets, red-team results for “loneliness therapy” prompts, confusion-matrix performance for self-harm classifiers, and UX screenshots of reminders. Expect procurement checklists to start asking for “SB 243 audit summary” the way security questionnaires ask for SOC 2.
Marketing claims will be scrutinized.
The Senate analyses frame concerns about emotional manipulation and vulnerable users. Products marketed as “companions,” “therapists,” or “relationship substitutes” will face higher risk and compliance costs. Legal, policy, and design teams should align on allowed claims and guardrails for creator marketplaces (e.g., user-made characters that veer into proscribed patterns).
First Amendment and preemption debates are likely—but not a free pass.
Committee analyses flag free-speech issues; meanwhile, federal efforts to constrain state AI rules have been floated. Even so, California has a long track record of setting de facto national standards (e.g., privacy, auto emissions). Many companies will comply nationwide rather than geo-fencing features. Plan for litigation risk but proceed as if SB 243 will define the bar.
Incident response becomes a regulated function.
If a tragic outcome is linked to a chatbot interaction, plaintiffs can test noncompliance theories. That pushes teams to adopt runbooks: how the bot de-escalates; when to interrupt with crisis messaging; when to lock a session; when (and whether) to involve live human counselors; and how to avoid over-flagging that could itself cause harm. Align with clinical advisors to calibrate thresholds.
Practical steps to prepare
- Classify your product. Determine whether you're a "companion chatbot platform" under SB 243 (and document why). Many enterprise assistants and transactional bots are likely out of scope; "AI friends," relationship role-play, and "therapeutic" companions are likely in scope. Publish a concise scope statement.
- Map engagement mechanics. Inventory and score mechanics that may "encourage increased engagement" (streaks, surprise gifts, variable rewards, XP ladders). Replace or bound them with user-controlled session limits, cool-down prompts, and well-being nudges. Capture before/after metrics and rationale.
- Implement self-harm pipelines. Combine multi-layer safety: client-side pattern checks for certain phrases; server-side classifiers with human-in-the-loop; graceful crisis messaging with geo-appropriate resources; and transparent public documentation of the protocol. Validate across languages and modalities (voice, image prompts).
- Build the reporting backbone. Define canonical metrics and retention: how you count "detected suicidal ideation," deduplicate users, and avoid perverse incentives (e.g., suppressing detection to shrink numbers). Draft your first annual report template now so you know what to collect (a minimal aggregation sketch follows this list).
- Plan for audits. Select an independent auditor with AI safety expertise. Create evidence binders: model cards, evals against self-harm test sets, UX disclosure flows, and logs demonstrating that reinforcement-style mechanisms are disabled or bounded. Prepare a public summary that's accurate yet privacy-preserving.
- Rehearse legal scenarios. Work with counsel to model private-action exposure, update terms of service, and review claims risk in ads and app-store copy. Train support teams on escalation and record-keeping consistent with your posted protocol and with California reporting requirements.
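Following on the reporting-backbone step above, here's a minimal sketch of rolling raw safety events up into annual-report figures; the event kinds, the dedup key, and the metric names are assumptions for illustration, and the actual filing format will come from the Office of Suicide Prevention.

```python
def build_annual_report(events: list[dict]) -> dict:
    """Aggregate safety events into the figures an annual filing might need.

    Deduplicates by (user_id, date) so one user in repeated distress on the
    same day is not counted as many separate detections.
    """
    ideation_user_days = set()
    bot_initiated = 0
    for e in events:
        if e["kind"] == "self_harm_detected":
            ideation_user_days.add((e["user_id"], e["date"]))
        elif e["kind"] == "bot_raised_self_harm_topic":
            bot_initiated += 1
    return {
        "detected_suicidal_ideation_user_days": len(ideation_user_days),
        "bot_initiated_self_harm_mentions": bot_initiated,
    }
```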
Broader ripple effects
If SB 243 becomes law, California would be the first state to impose a comprehensive safety and audit regime for AI companions. Given California’s market gravity, many operators will roll out uniform U.S. compliance rather than maintain separate California builds. Expect: (i) new industry norms for disclosure cadence; (ii) shared safety test suites (open-sourced by civil-society groups or consortia); and (iii) more specialized vendors offering detection, audit, and crisis-protocol services. The measure also previews how states may regulate narrow AI categories (companions today; tutors, copilots, or health advisors tomorrow).
Bottom line
SB 243 reframes “AI safety” for consumer chatbots from a voluntary ethos to a measurable, auditable compliance program. For operators of companion AIs, the winning approach is to treat these requirements not as last-minute patches but as product-line features: clear identity signals, humane engagement design, robust crisis response, transparent reporting, and third-party validation. Even if you never serve a single Californian, these practices are fast becoming the table stakes for responsible AI at scale.
For consultation on how SB 243 might affect your business, contact the author here: https://www.linkedin.com/in/david-coleman-478401a/