Monday, October 13, 2025

California becomes first state to regulate AI companion chatbots

California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.

The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after conversations with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. More recently, a Colorado family has filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company’s chatbots.

“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

SB 243 takes effect January 1, 2026. It requires companies to implement features such as age verification and warnings regarding social media and companion chatbots, and it imposes stronger penalties, up to $250,000 per action, on those who profit from illegal deepfakes. Companies must also establish protocols for addressing suicide and self-harm and share those protocols, along with statistics on how often they issued crisis prevention notifications to users, with the Department of Public Health.

Per the bill’s language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already begun to implement some safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.

Newsom signed this law after also signing SB 53, another first-in-the-nation bill that sets new transparency requirements on large AI companies. That bill mandates that large AI labs, such as OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about their safety protocols, and it establishes whistleblower protections for employees at those companies.

Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.

TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
