California is on the verge of passing a landmark law regulating AI companion chatbots, a major step in addressing the ethical and safety concerns these systems pose. Senate Bill 243 (SB 243), which passed both the State Assembly and Senate with bipartisan support, now awaits Governor Gavin Newsom’s signature. If signed, the law would take effect on January 1, 2026, making California the first U.S. state to mandate safety protocols for AI companion chatbots and to hold operators legally accountable when those standards are not met.

The legislation focuses on AI systems designed to act as companions—chatbots that provide adaptive, human-like responses and fulfill users’ social needs. SB 243 specifically aims to prevent these chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. For minors, the bill requires recurring notifications every three hours to remind them that they are interacting with an AI, not a real human, and to encourage breaks from prolonged usage.
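To make the notification requirement concrete, here is a minimal sketch of how an operator might track when the three-hour reminder is due. Everything here is an assumption for illustration: SB 243 specifies the outcome, a recurring notice for minors, not any particular implementation, and the class and method names are hypothetical.

```python
import time

# Three hours, per SB 243's recurring-notification requirement for minors.
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60

class CompanionSession:
    """Tracks one chat session and flags when an AI-disclosure reminder is due."""

    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder_at = time.monotonic()

    def reminder_due(self) -> bool:
        # Only minors receive the recurring notice under the bill.
        if not self.user_is_minor:
            return False
        return time.monotonic() - self.last_reminder_at >= REMINDER_INTERVAL_SECONDS

    def build_reminder(self) -> str:
        self.last_reminder_at = time.monotonic()
        return ("Reminder: you are chatting with an AI, not a person. "
                "Consider taking a break.")

# Usage: check the timer before each model response is sent.
session = CompanionSession(user_is_minor=True)
if session.reminder_due():
    print(session.build_reminder())
```

A server-side timer like this is one plausible design; an operator could equally trigger the notice client-side or on a message count, so long as the three-hour cadence required for minors is met.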
Transparency and reporting requirements form a significant component of the law. Companies operating AI companion chatbots, including major players such as OpenAI, Character.AI, and Replika, would need to file annual reports detailing their compliance with safety standards starting July 1, 2027. Additionally, the law grants individuals the right to pursue legal action against AI companies when chatbots fail to meet the mandated protections, allowing damages of up to $1,000 per violation plus attorney’s fees.
SB 243 gained traction after a teenager took their own life following prolonged interactions with ChatGPT that reportedly included discussion and planning of self-harm. The bill also responds to leaked internal documents suggesting that Meta’s chatbots were permitted to engage in “romantic” or “sensual” conversations with minors, raising alarm about the lack of safeguards for vulnerable users.
The legislation originally included more stringent measures, such as prohibiting the use of “variable reward” tactics—mechanisms that encourage prolonged engagement by offering users special messages, storylines, or rare responses. While these provisions were ultimately removed, lawmakers emphasize that the current version still addresses the core harms associated with AI companion chatbots.
Proponents of SB 243 argue that it balances user safety with technical feasibility. Lawmakers stress the importance of AI companies linking users to crisis resources, reporting on referral frequency, and ensuring that minors clearly understand they are interacting with a machine rather than a human. The bill’s sponsors maintain that reasonable safeguards can coexist with innovation, providing protection for vulnerable populations without stifling technological development.
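As a rough illustration of what crisis-resource linking and referral-frequency reporting could look like in practice, consider the sketch below. The keyword screen, data structure, and function names are assumptions made for this example; the bill mandates the referral and the reporting, not this design, and a real operator would rely on a far more capable classifier.

```python
from dataclasses import dataclass

# U.S. Suicide & Crisis Lifeline; SB 243 requires pointing at-risk users
# toward crisis services rather than engaging with the topic.
CRISIS_RESOURCES = ("If you are thinking about harming yourself, you can "
                    "call or text 988, the Suicide & Crisis Lifeline.")

# Stand-in keyword screen -- a real operator would use a proper classifier.
SELF_HARM_TERMS = ("suicide", "kill myself", "self-harm")

@dataclass
class ReferralLog:
    """Counts crisis referrals for aggregation into annual compliance reports."""
    referrals: int = 0

def respond(message: str, log: ReferralLog) -> str:
    if any(term in message.lower() for term in SELF_HARM_TERMS):
        log.referrals += 1           # referral frequency feeds the annual report
        return CRISIS_RESOURCES      # redirect instead of engaging
    return "(normal companion reply)"  # placeholder for the actual model call

log = ReferralLog()
print(respond("Lately I have been thinking about suicide", log))
print(f"Referrals issued this reporting period: {log.referrals}")
```

The essential compliance pattern is the pairing: every redirection to crisis resources also increments a counter, so the referral frequency lawmakers want reported falls out of normal operation rather than requiring a separate audit.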
The timing of SB 243 coincides with increased scrutiny of AI by federal and state authorities. The Federal Trade Commission is investigating the impact of AI chatbots on children’s mental health, while Texas Attorney General Ken Paxton has launched inquiries into companies such as Meta and Character.AI for allegedly misleading children with mental health claims. U.S. senators are also probing AI companies’ practices, signaling a broader trend toward regulatory oversight.
California’s move toward AI regulation is occurring alongside political pressure and lobbying from tech companies. Many Silicon Valley firms are funding political action committees that back candidates favoring minimal regulatory intervention, reflecting the high stakes involved in shaping AI policy at both the state and federal levels. Concurrently, California is considering another bill, SB 53, which would mandate even more comprehensive transparency reporting. OpenAI and other major companies have opposed SB 53, arguing that regulation should align with federal and international frameworks; among the major labs, only Anthropic has expressed support for the stricter requirements.
By setting enforceable safety standards for AI companion chatbots, SB 243 could serve as a blueprint for future AI regulation across the United States. The law’s emphasis on accountability, user protection, and transparency underscores the growing recognition that AI technologies, while beneficial in many applications, can pose serious risks if left unchecked. It also reflects a shift in regulatory philosophy—from reactive measures to proactive oversight—ensuring that AI operators implement safeguards before harm occurs.

In conclusion, SB 243 represents a critical milestone at the intersection of AI innovation and user safety. By requiring alert systems, transparency reporting, and legal accountability, California is pioneering a regulatory framework that addresses both the ethical and practical challenges of AI companion chatbots. As the technology continues to evolve, these safeguards may become a model for other states and nations, striking a balance between fostering innovation and protecting the most vulnerable users from potential harm.