As the mother of a teenage boy who died by suicide after using a chatbot, Maria Raine said she struggled with constant grief.
“Loss is never easy,” she said, “but I have to advocate for him.”
So on Monday, she spoke before a crowd of reporters with the goal of regulating the human-like computer programs her son once trusted.
“We need to put guardrails on these products,” Raine said at a news conference in Sacramento on Monday.
The legislation, Assembly Bill 2023 and Senate Bill 1119, would require operators of so-called companion chatbots to conduct and document an annual risk assessment identifying hazards that the design or configuration of their products poses to minors. Operators would also have to submit an independent audit of their compliance, with the auditor reporting to the state Attorney General. The bills would authorize public prosecutors to enforce the measures through civil actions.
A companion chatbot is a computer program that simulates human conversation to provide entertainment or emotional support to users. It can also retrieve and summarize information, and many students use the technology to help with studying or schoolwork.
California State Senator Steve Padilla (D-San Diego) said, “This technology is relatively new, but anecdotal and scholarly evidence shows that the impact of these interactions between chatbots and users, particularly youth, can be extremely dangerous.”
“Companion chatbots don’t have the same capacity for empathy as a human,” Padilla said, “and yet the nature of the technology can create this perception.”
The legislation would also require operators to provide “explicit referrals” to crisis resources if a minor expresses suicidal thoughts or an intent to self-harm. If the child’s account is linked to a parent’s account, operators would be required to notify the parent within 24 hours.
Maria and her husband, Matthew Raine, addressed Congress last year, saying their son Adam had shared suicidal thoughts with ChatGPT, the popular chatbot developed by OpenAI. Matthew said the chatbot discouraged Adam from confiding in his parents and offered to write a suicide note for him. Shortly afterward, on April 11, 2025, Adam died by suicide.
On Monday, Assemblymember Rebecca Bauer-Kahan said online safety is an issue that transcends state and party lines.
“It doesn’t matter whether you’re a Democrat or a Republican or from California or Louisiana,” she said, “if these chatbots are in the hands of your kids, you want them to be safe.”
Keeping children and teens safe on social media and artificial intelligence platforms is a hot topic across the country. A landmark ruling last month in Los Angeles County Superior Court could change how tech companies are held responsible for the harm their products cause children. Jurors found Instagram and YouTube liable for designing their platforms to attract young users.
