
A 14-year-old’s suicide was caused by an AI chatbot, a lawsuit claims. Here’s how parents can help protect children from new technology

The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide, something she claims resulted from his relationship with an AI bot.

“Megan Garcia seeks to prevent C.AI from doing to another child what it did to hers,” reads the 93-page wrongful death lawsuit, which was filed this week in a US court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now we are all familiar with the dangers of unregulated platforms developed by unscrupulous tech companies – especially for children. But the harms revealed in this case are new, novel and, quite frankly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI issued a statement via X, noting: “We are heartbroken by the tragic loss of one of our users and would like to express our deepest condolences to the family. As a company we take the safety of our users very seriously and we continue to add new safety features which you can read about here: https://blog.character.ai/community-safety-updates/.”

In the lawsuit, Garcia alleges that Sewell, who took his own life in February, was drawn into an addictive, harmful technology without any protections in place, leading to an extreme personality change in the boy, who appeared to prefer the bot over his real-life connections. His mother alleges that “abuse and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article telling her story. Garcia only learned about the full extent of the bot relationship after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell getting sucked into his phone, she asked what he was doing and who he was talking to. He explained that it was “just an AI bot…not a person,” she recalled, adding, “I felt relieved, like, okay, it’s not a person, it’s like one of his little games.” Garcia didn’t fully understand a bot’s potential emotional power — and she’s far from alone.

“This is not on anyone’s radar,” said Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to create boundaries for their children’s safety.

But AI companions, Torney points out, are different from, say, the service desk chatbots you use when you’re trying to get help from a bank. “They are designed to perform tasks or respond to requests,” he explains. “Something like Character.AI is what we call a companion, and it’s designed to try to form a relationship, or simulate a relationship, with a user. And that’s a whole different use case that I think parents need to be aware of.” That’s evident in Garcia’s lawsuit, which includes flirtatious, sexual, and disturbingly realistic text exchanges between her son and the bot.

Sounding the alarm about AI companions is especially important for parents of teens, Torney says, because teens — and especially male teens — are particularly susceptible to over-reliance on technology.

Below is what parents need to know.

What are AI companions and why do children use them?

According to the new Ultimate Parent’s Guide to AI Companions and Relationships from Common Sense Media, created in collaboration with mental health professionals at the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, emulate human emotions and empathy,” and to agree with the user more easily than typical AI chatbots, the guide says.

Popular platforms include Character.AI, which lets its more than 20 million users create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others including Kindroid and Nomi.

Children are drawn to them for a variety of reasons, from non-judgmental listening and 24-hour availability to emotional support and escaping social pressures in the real world.

Who is at risk and what are the concerns?

Those most at risk, Common Sense Media warns, are teenagers – especially those experiencing “depression, anxiety, social challenges or isolation” – as well as males, young people going through major life changes, and anyone who lacks support systems in the real world.

That last point is of particular concern to Raffaele Ciriello, a senior lecturer in business information systems at the University of Sydney Business School who has investigated how “emotional” AI challenges the human essence. “Our research exposes a (de)humanization paradox: by humanizing AI agents, we may unintentionally dehumanize ourselves, leading to an ontological blur in human-AI interactions.” In other words, Ciriello writes in a recent op-ed for The Conversation, co-authored with PhD student Angelina Ying Chen, “Users can become deeply emotionally involved if they think their AI partner really understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

Because of all that, Common Sense Media highlights a list of potential risks: the companions may be used to avoid real human relationships, may pose particular problems for people with mental or behavioral issues, may intensify loneliness or isolation, can bring the potential for inappropriate sexual content, can become addictive, and tend to agree with users – a frightening reality for those experiencing “suicidality, psychosis or mania.”

How to recognize red flags

Parents should look out for the following warning signs, according to the guide:

  • Preference for AI companion interaction over real friendships

  • Spending hours alone talking to the companion

  • Emotional distress when unable to access the companion

  • Sharing very personal information or secrets

  • Developing romantic feelings for the AI companion

  • Declining grades or school participation

  • Withdrawal from social/family activities and friendships

  • Loss of interest in previous hobbies

  • Changes in sleep patterns

  • Discussing problems exclusively with the AI companion

Consider seeking professional help for your child, Common Sense Media emphasizes, if you notice your child withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their AI companion use, exhibiting major changes in behavior or mood, or expressing thoughts of self-harm.

How to keep your child safe

  • Set boundaries: Set specific times for AI companion use, and don’t allow unsupervised or unrestricted access.

  • Spend time offline: Encourage real-world friendships and activities.

  • Check in regularly: Monitor the content of the chatbot conversations as well as your child’s level of emotional attachment.

  • Talk about it: Keep communication open and non-judgmental about experiences with AI, and look for warning signs.

“When parents hear their kids say, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and absorb that information — and not think, ‘Oh, OK, you’re not talking to a person,’” says Torney. Instead, he says, it’s an opportunity to learn more, assess the situation, and stay alert. “Try to listen with compassion and empathy and not to think that just because it’s not a person it’s safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health care, please contact the 988 Suicide and Crisis Lifeline.

This story originally ran on Fortune.com.