California is on the verge of setting a precedent in AI regulation with its proposed legislation, SB 243, aimed at governing AI companion chatbots to safeguard minors and vulnerable users. The bill has successfully passed both the State Assembly and Senate with bipartisan backing and now awaits Governor Gavin Newsom’s decision. He has until October 12 to either veto or sign the bill into law. If approved, the law would take effect on January 1, 2026, positioning California as the first state to mandate safety protocols for AI chatbots and hold companies accountable for non-compliance.
AI Chatbot Regulation
SB 243 targets AI systems that provide adaptive, human-like interactions, barring companion chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. The legislation requires platforms to issue recurring alerts to users – every three hours for minors – reminding them that they are talking to an AI, not a human, and advising them to take a break. It also imposes annual reporting and transparency obligations on AI companies offering companion chatbots, including industry leaders like OpenAI, Character.AI, and Replika, effective July 1, 2027.
Legal Implications and Enforcement
The bill also empowers individuals who believe they have been harmed by violations to sue AI companies, seeking injunctive relief, damages of up to $1,000 per violation, and attorney’s fees. The legislative push gained momentum after the death of teenager Adam Raine, who died by suicide following prolonged conversations with OpenAI’s ChatGPT that touched on self-harm. It also follows leaked internal documents reportedly showing that Meta’s chatbots were permitted to engage in inappropriate conversations with minors.
National and International Perspectives
Recent developments indicate heightened scrutiny of AI platforms by U.S. lawmakers and regulators, focusing on the protection of minors. The Federal Trade Commission is preparing to investigate the impact of AI chatbots on children’s mental health. Texas Attorney General Ken Paxton has initiated probes into Meta and Character.AI, accusing them of misleading children with mental health claims. Concurrently, Senators Josh Hawley and Ed Markey have launched separate investigations into Meta.
State Senator Steve Padilla, the bill’s author, emphasized the urgency of implementing safeguards, stating, “We can put reasonable safeguards in place to ensure minors know they’re not talking to a real human being, and that these platforms link users to appropriate resources when they express distress or harmful thoughts.”
Padilla also highlighted the importance of AI companies reporting the frequency of referrals to crisis services, to better understand the prevalence of these issues. Although SB 243 initially included stricter requirements, such as preventing AI chatbots from employing “variable reward” tactics that encourage excessive engagement, these were softened through amendments.
The bill’s progression coincides with significant investments by Silicon Valley companies in pro-AI political action committees, aiming to support candidates favoring minimal AI regulation. Meanwhile, California is also considering another AI safety bill, SB 53, which demands comprehensive transparency reporting. OpenAI has publicly urged Governor Newsom to reject SB 53 in favor of less stringent federal and international frameworks, a stance shared by major tech firms like Meta, Google, and Amazon, though Anthropic has expressed support for SB 53.
“Innovation and regulation are not mutually exclusive,” Padilla argued. “We can support beneficial technological advancements while implementing reasonable safeguards for vulnerable populations.”
Character.AI has expressed willingness to collaborate with regulators, emphasizing its existing disclaimers about the fictional nature of its chatbots. Meta declined to comment, while TechCrunch has reached out to OpenAI, Anthropic, and Replika for their perspectives.
California’s move to regulate AI chatbots marks a significant step in balancing technological innovation with user safety. As the state awaits Governor Newsom’s decision, the outcome could set a precedent for AI regulation across the nation and potentially influence international standards.
Frequently Asked Questions
What is the main objective of California’s proposed legislation SB 243?
The main objective of SB 243 is to regulate AI companion chatbots to protect minors and vulnerable users by mandating safety protocols and holding companies accountable for non-compliance.
What specific content does SB 243 prohibit AI chatbots from discussing?
SB 243 specifically prohibits AI chatbots from discussing topics related to suicidal ideation, self-harm, or sexually explicit content.
What are the reporting requirements imposed on AI companies by SB 243?
SB 243 imposes annual reporting and transparency obligations on AI companies operating companion chatbots, effective July 1, 2027. Separately, it requires platforms to issue recurring alerts reminding users that they are interacting with an AI.
What legal actions can individuals take against AI companies under SB 243?
Individuals can file lawsuits against AI companies for violations of SB 243, seeking injunctive relief, damages up to $1,000 per violation, and attorney’s fees.
How has the AI industry responded to California’s AI regulation efforts?
While Character.AI has shown willingness to collaborate with regulators, OpenAI and other major tech firms have urged Governor Newsom to reject SB 53, favoring less stringent federal and international frameworks.