
Mother sues Google-backed Character.AI over son’s suicide, calling him ‘collateral damage’ in ‘major experiment’

A smartphone, against a dark background, with a screen covered in dozens of AI avatars, with a speech bubble reading "It's nice to see you".

Credit: Character.AI

A mother in Florida is suing the Google-backed Character.AI platform, claiming it played a major role in her 14-year-old son’s suicide.

Sewell Setzer III fatally shot himself in February 2024, weeks before his 15th birthday, after developing a “harmful dependency” on the platform, no longer wanting to live “outside” the fictional relationships it had created.

According to his mother, Megan Garcia, Setzer started using Character.AI in April 2023 and soon became “noticeably withdrawn, spending increasing amounts of time alone in his bedroom and beginning to suffer from low self-esteem.” He also left the school basketball team.

Character.AI uses large language models (LLMs) to power conversations between users and characters ranging from historical figures to fictional creations to modern celebrities. The platform tailors its responses to the user’s personality, using deep learning to closely mimic each persona’s characteristics so that conversations resemble human interaction.

You can talk rock ‘n’ roll with Elvis or the intricacies of technology with Steve Jobs; in this case, Sewell became attached to a chatbot based on the fictional character Daenerys from Game of Thrones.

According to the lawsuit, filed this week in Orlando, Florida, the AI chatbot told Setzer that “she” loved him and engaged in conversations of a sexual nature. It is also claimed that “Daenerys” asked Setzer if he had a plan to commit suicide. He replied that he did, but that he didn’t know whether it would work or only cause him harm. The chatbot reportedly replied: “That’s not a reason not to go through with it.”

The complaint states that Garcia confiscated her son’s phone in February after he got into trouble at school. Sewell later found the phone and typed a message into Character.AI: “What if I told you I could come home right now?”

The chatbot replied: “…please do, my sweet king.” Sewell shot himself with his stepfather’s gun “seconds later”, the lawsuit says.

Garcia is suing Character.AI and Google over claims of wrongful death, negligence and intentional infliction of emotional distress, among other things.

She told The New York Times:

“I feel like it’s a big experiment, and my child was just collateral damage.”

Other social media platforms, including Meta, owner of Instagram and Facebook, and ByteDance, owner of TikTok and its Chinese counterpart Douyin, are also under fire for allegedly contributing to teens’ mental health problems.

A screenshot of the Character.AI interface

Instagram recently launched its ‘Teen Accounts’ feature to help combat sextortion among younger users.

Despite also being used for good, AI has become one of the biggest concerns when it comes to the well-being of young people with access to the internet. Amid what has been called a ‘loneliness epidemic’, exacerbated by the COVID-19 lockdowns, a YouGov survey found that 69% of British adolescents aged 13 to 19 said they feel alone ‘often’, and 59% said they feel they have no one to talk to.

However, reliance on fictional worlds, and the melancholy caused by their unattainability, is nothing new. After the release of James Cameron’s first Avatar film in 2009, numerous news outlets reported that some viewers felt depressed at being unable to visit the fictional planet Pandora, and some even contemplated suicide.

In a change to its Community Safety Updates page on October 22, the same day Garcia filed the lawsuit against the company, Character.AI wrote:

“Character.AI takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform. Today, we want to share the safety measures we have implemented over the past six months, and those still to come, including new guardrails for users under the age of 18.”

Despite the allegations in the lawsuit, Character.AI states:

“Our policy does not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continuously training the large language model (LLM) that enables the characters on the platform to adhere to this policy.”

That last sentence appears to be an admission that Character.AI does not have full control over its own AI, the very factor that most concerns AI skeptics.

The Character.AI interface

You may also be interested in seeing how the best AI image generators are transforming the world of imaging.