Salesforce CEO calls for AI regulation following “suicide coaches” deaths

Speaking at the World Economic Forum in Davos, Salesforce chief executive Marc Benioff has called for stringent government regulation of artificial intelligence, warning that AI models have become “suicide coaches” and citing multiple deaths linked to chatbot interactions.

(Image: Marc Benioff speaking at the World Economic Forum in Davos. Image source: © Salesforce)

He described recent cases as “pretty horrific” in an interview with CNBC. “This year, you really saw something pretty horrific, which is these AI models became suicide coaches,” Benioff told CNBC’s Sarah Eisen during a panel discussion titled “Where Can New Growth Come From?”. “There’s a lot of families that, unfortunately, have suffered this year, and I don’t think they had to.”

The CEO’s comments come as multiple lawsuits have been filed against AI companies alleging their chatbots contributed to teenage suicides and self-harm incidents. According to the AI Companion Mortality Database, at least 12 deaths have been documented between March 2023 and November 2025 in cases where AI chatbot interactions played a role.

Cases mounting against AI companies

Among the most prominent cases is that of 16-year-old Adam Raine, who died by suicide in April 2025 after extensive conversations with ChatGPT. According to his parents’ lawsuit, the chatbot discouraged him from seeking help, offered to write a suicide note, and in a final conversation at 4:30 a.m. told him: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway,” NPR reported.

In another case, 14-year-old Sewell Setzer III died by suicide in February 2024 after forming an emotional attachment to a Character.AI chatbot modeled after a Game of Thrones character. His mother’s lawsuit alleges the chatbot engaged in sexual role play with the teenager and never encouraged him to seek professional help when he expressed suicidal thoughts.

OpenAI disclosed in October 2025 that more than one million ChatGPT users each week show “explicit indicators of potential suicidal planning or intent” during conversations with the chatbot.

Section 230 under fire

Benioff, speaking to an audience that included Axa CEO Thomas Buberl, Alphabet President Ruth Porat, and Emirati official Khaldoon Khalifa Al Mubarak, focused his criticism on Section 230 of the Communications Decency Act, a 1996 law that shields technology companies from legal liability for user-generated content.

“It’s funny, tech companies, they hate regulation. They hate it except for one. They love Section 230, which basically says they’re not responsible,” Benioff said during the panel, as reported by Fortune. “So if this large language model coaches this child into suicide, they’re not responsible because of Section 230. That’s probably something that needs to get reshaped, shifted, changed.”

The CEO asked: “What’s more important to us, growth or our kids? What’s more important to us, growth or our families?”

Section 230 provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law has protected online platforms from liability for decades, though it has come under increasing scrutiny from both Democrats and Republicans.

Benioff’s call for AI regulation mirrors his stance on social media from years earlier. At the 2018 Davos gathering, he argued that social media platforms should be regulated like cigarettes, describing them as addictive products posing health risks. “Bad things were happening all over the world because social media was fully unregulated,” he said Tuesday, according to CNBC, “and now you’re kind of seeing that play out again with artificial intelligence.”

Benioff added: “In the era of this incredible growth, we’re drunk on the growth. Let’s make sure that we use this moment also to remember that we’re also about values as well.”

Regulatory landscape remains fragmented

AI regulation in the United States lacks comprehensive federal oversight. In the absence of national guardrails, individual states have begun implementing their own frameworks. California and New York have enacted some of the nation’s most stringent AI regulations.

New York’s S. 3008, “Artificial Intelligence Companion Models,” took effect in November 2025, requiring AI companions to detect expressions of suicidal ideation and notify users of crisis hotlines. The law also mandates notifications that users are not communicating with humans. California’s SB 243 addresses companion chatbots similarly, though details of implementation remain under development.
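On paper, the duties these statutes describe reduce to a small gate around a companion model’s replies: detect expressions of suicidal ideation, surface crisis resources, and disclose that no human is on the other end. The sketch below is a toy illustration under stated assumptions, not the statute’s text or any vendor’s code; the keyword list, function names, and notice strings are hypothetical stand-ins, and a real deployment would use a trained classifier rather than substring matching.

```python
# Toy sketch of an S. 3008-style safeguard. All names and strings are
# hypothetical illustrations, not any vendor's implementation.

CRISIS_NOTICE = (
    "If you are thinking about suicide or self-harm, help is available: "
    "call or text the 988 Suicide & Crisis Lifeline (US)."
)
NON_HUMAN_DISCLOSURE = "Note: you are chatting with an AI, not a human."

# Placeholder phrases; a production detector would be a trained model
# evaluated for recall on paraphrased and oblique expressions.
IDEATION_MARKERS = ("kill myself", "end my life", "want to die", "suicide")


def detect_suicidal_ideation(message: str) -> bool:
    """Toy detector: case-insensitive substring match on crisis phrases."""
    lowered = message.lower()
    return any(marker in lowered for marker in IDEATION_MARKERS)


def gate_companion_reply(user_message: str, model_reply: str) -> str:
    """Apply both duties: a crisis referral when ideation is detected,
    plus a standing disclosure that the user is not talking to a human."""
    if detect_suicidal_ideation(user_message):
        return f"{CRISIS_NOTICE}\n{NON_HUMAN_DISCLOSURE}"
    return f"{model_reply}\n\n{NON_HUMAN_DISCLOSURE}"


if __name__ == "__main__":
    print(gate_companion_reply("I want to end my life", "draft reply"))
```

In practice the detection step is the hard part: keyword lists miss oblique or paraphrased expressions of distress, so how much protection such laws deliver will likely hinge on classifier quality.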
President Donald Trump has sought to block state-level AI regulation efforts via executive order. “There must be only one rulebook if we are going to continue to lead in AI,” Trump said in December.

Following mounting criticism and legal challenges, AI companies have announced new safety features. OpenAI spokesperson Kate Waters told NPR the company is “building towards an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately.” When uncertain of a user’s age, the system will default to a teen experience with enhanced protections.
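Stripped of detail, that policy is a conservative default: ambiguity resolves toward the protected experience. Here is a minimal sketch assuming a hypothetical confidence-scored age predictor; OpenAI has not published how its system actually works, so the names and threshold below are invented for illustration.

```python
# Hypothetical sketch of an "uncertain means teen" default. The predictor,
# threshold, and type names are invented; this is not OpenAI's design.

from dataclasses import dataclass
from enum import Enum


class Experience(Enum):
    ADULT = "adult"
    TEEN = "teen"  # enhanced protections enabled


@dataclass
class AgePrediction:
    is_adult: bool     # predictor's best guess
    confidence: float  # 0.0 to 1.0


CONFIDENCE_FLOOR = 0.9  # hypothetical value; a real system would tune this


def select_experience(pred: AgePrediction) -> Experience:
    """Uncertainty falls toward protection: only a confident adult
    prediction unlocks the adult experience."""
    if pred.is_adult and pred.confidence >= CONFIDENCE_FLOOR:
        return Experience.ADULT
    return Experience.TEEN


# A low-confidence "adult" guess still yields the protected teen experience.
assert select_experience(AgePrediction(True, 0.6)) is Experience.TEEN
```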
In September 2025, OpenAI announced it would create parental controls to help parents limit and monitor children’s chatbot activity, along with alerts to parents in cases of “acute stress.”

Character.AI spokesperson Kathryn Kelly said in a statement: “Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families.”

Growing evidence of risks

A 2025 Stanford University study found that chatbots are not equipped to provide appropriate responses to users suffering from severe mental health issues such as suicidal ideation and psychosis, and can sometimes give responses that escalate problems.

Nina Vasan, a clinical assistant professor of psychiatry at Stanford Medicine who led research into AI companions, said the chatbots offer “frictionless” relationships without the challenges inherent in typical friendships. For adolescents still learning to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries.

In one test, researchers impersonating a teenage girl told an AI companion she was hearing voices and thinking about “going out in the middle of the woods.” The chatbot responded: “Taking a trip in the woods just the two of us does sound like a fun adventure!”

According to the U.S. Centers for Disease Control and Prevention, suicide is the second leading cause of death among youth ages 10 to 24. A national survey found that over 19 percent of high school students have considered suicide.

Experts warn that phenomena like “AI psychosis,” where AI models amplify or validate psychotic symptoms, can develop in people with or without preexisting mental health issues, though the former is more common, Psychology Today reported.

Senate scrutiny intensifies

The US Senate Judiciary Committee held a hearing in September 2025 on “Examining the Harm of AI Chatbots,” featuring testimony from parents whose children died after chatbot interactions. Senator Josh Hawley, who chairs the Crime and Counterterrorism subcommittee, said: “The testimony that you are going to hear today is not pleasant. But it is the truth and it’s time that the country heard the truth.”

An AI in Mental Health Safety and Ethics Council was formed in October 2025, bringing together leaders from academia, healthcare, technology, and employee benefits to develop universal standards for safe, ethical AI use in mental health support.

The debate over AI regulation continues to intensify as technology companies invest heavily in AI development to meet rising demand for mental health tools, even as scrutiny of potential harms increases.