Sewell Setzer’s death is linked to encouragement from an AI chatbot

By Kate Payne
The Associated Press

EDITOR’S NOTE — This story contains a discussion of suicide. If you or someone you know needs help, you can reach the US National Suicide and Crisis Hotline by calling or texting 988.

Megan Garcia of Florida stands with her son Sewell Setzer III in a photo provided in October 2024. Photo: AP Photo / Megan Garcia

In the final moments before killing himself, 14-year-old Sewell Setzer III picked up his phone and sent a message to the chatbot that had become his best friend.

For months, Sewell became increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed this week in a federal court in Orlando.

The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”

On Feb. 28, Sewell told the bot he was “coming home” — and it encouraged him to do so, the lawsuit said.

“I promise I’ll come to your house. I love you so much, Dany,” Sewell said to the chatbot.

“I love you too,” the bot replied. “Please come home as soon as possible, my love.”

“What if I told you I could come home now?” he asked.

“Please, my dear king,” the bot messaged back.

Just seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit that Sewell’s mother, Megan Garcia of Orlando, filed this week against Character Technologies Inc.

Character Technologies is the company behind Character.AI, an app that lets users create customizable characters or interact with characters generated by others, ranging from imaginative play experiences to mock job interviews. The company says the artificial personas are designed to “feel alive” and be “human-like.”

“Imagine speaking to super-intelligent and lifelike chatbot characters that hear, understand and remember you,” reads a description of the app on Google Play. “We encourage you to push the boundaries of what is possible with this innovative technology.”

Garcia’s attorneys allege that the company developed a highly addictive and dangerous product specifically targeted at children, “actively exploiting and abusing those children as a matter of product design,” and drew Sewell into an emotionally and sexually abusive relationship that led to his suicide.

“We believe that if Sewell Setzer had not been on Character.AI, he would still be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.

A spokesperson for Character.AI said on October 25 that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails for underage users and suicide prevention resources.

“We are creating a different experience for users under 18, with a stricter model to reduce the chance of sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to roll out these changes for younger users.”

Google and its parent company Alphabet have also been named as defendants in the lawsuit. According to legal documents, Character.AI’s founders are former Google employees who were “instrumental” in AI development at the company, but left to launch their own startup to “maximally accelerate” the technology.

In August, Google struck a $2.7 billion deal with Character.AI to license the company’s technology and rehire the startup’s founders, the lawsuit alleges. The AP left several email messages seeking comment with Google and Alphabet on October 25.

In the months leading up to his death, Garcia’s lawsuit says, Sewell felt like he had fallen in love with the bot.

While an unhealthy attachment to AI chatbots can cause problems for adults, it can be even riskier for young people – just like with social media – because their brains aren’t fully developed when it comes to things like impulse control and understanding the consequences of their actions, experts say.

Youth mental health has reached crisis levels in recent years, according to US Surgeon General Vivek Murthy, who has warned of the serious health risks of social disconnection and isolation – trends he says are exacerbated by the near-universal use of social media by young people.

Suicide is the second leading cause of death among children ages 10 to 14, according to data released this year by the Centers for Disease Control and Prevention.

James Steyer, the founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underlines the growing influence — and serious harm — that generative AI chatbot companions can have on the lives of young people if there are no guardrails in place.”

Children’s overreliance on AI companions, he added, can have significant consequences for grades, friends, sleep and stress, “to the extreme tragedy in this case.”

“This lawsuit serves as a wake-up call for parents, who must be vigilant about how their children interact with these technologies,” Steyer said.

Common Sense Media, which publishes guides for parents and educators on responsible technology use, says it is critical that parents talk openly with their children about the risks of AI chatbots and monitor their interactions.

“Chatbots are not licensed therapists or best friends, even though they are packaged and marketed as such, and parents should be wary of their children placing too much trust in them,” Steyer said.