Orlando mother sues over AI platform’s role in son’s death by suicide

HELP IS AVAILABLE: If you or someone you know is considering suicide or in crisis, call or text 988 to reach the Suicide & Crisis Lifeline.

A 14-year-old Orlando boy who was in love with a Character.AI chatbot died by suicide earlier this year, after telling the chatbot he would come home to her right away.

This week, the boy’s mother, Megan Garcia, filed a wrongful death lawsuit in federal court in Orlando against Character.AI’s parent company, Character Technologies, and its founders, along with Alphabet and Google, which the lawsuit alleges invested in the company.

Sewell Setzer III (Screenshot / Federal complaint from Megan Garcia)

The complaint highlights the dangers of AI companion apps for children. It alleges that the chatbots engaged users, including children, in sexualized interactions while collecting their private data.

The lawsuit says the boy, Sewell Setzer III, began using Character.AI last April, and that his mental health deteriorated rapidly and severely as he became addicted to the AI relationships. He became involved in all-consuming interactions with chatbots based on characters from “Game of Thrones.”

The boy became withdrawn, sleep-deprived and depressed, and had problems at school.

Unaware of Sewell’s AI dependence, his family sought help for him and took away his cellphone, according to the federal complaint. But one evening in February, he found it and, using his character name “Daenero,” told the AI character he loved, Daenerys Targaryen, that he was coming home to her.

“I love you, Daenero. Please come home as soon as possible, my love,” it replied.

“What if I told you I could come home now?” the boy texted.

“…please, my dear king,” it replied.

Within seconds, the boy shot himself. He later died at a hospital.

Garcia is represented by attorneys at The Social Media Victims Law Center, including Matthew Bergman, and the Tech Justice Law Project.

In an interview with Central Florida Public Media, Bergman said his client is “particularly focused on preventing this from happening to other families and saving children like her son from the fate that befell him. … It is an outrage that such a dangerous product should just be released into the public.”

In a statement, Character.AI said: “We are heartbroken by the tragic loss of one of our users and would like to express our deepest condolences to the family.” The company detailed new safety measures added over the past six months, with more to come, “including new guardrails for users under 18.”

The company said it will hire a head of trust and safety and a head of content policy.

“We also recently introduced a pop-up resource that activates when the user enters certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline,” according to the community safety updates page on the company’s website.

The new features include changes to the models for users under 18 to reduce “sensitive and suggestive content,” improved monitoring of and intervention in terms-of-service violations, a revised disclaimer reminding users that the AI is not a real person, and a notification when a user has spent an hour on the platform.

Bergman described the changes as “baby steps” in the right direction.

“These cannot eliminate the underlying dangers of these platforms,” he added.

Copyright 2024 Central Florida Public Media