Are AI chatbots safe for children? Experts weigh in after teen suicides

A recent lawsuit filed by the family of Sewell Setzer, a 14-year-old who died by suicide, has raised pressing questions about the safety of AI chatbots for children.

Setzer’s mother, Megan Garcia, filed a lawsuit against Character.AI in October 2024, claiming that her son’s interactions with a chatbot contributed to his death in February 2024.

According to the lawsuit seen by Newsweek, Setzer started using Character.AI in early 2023 and developed a close bond with a bot that mimicked Daenerys Targaryen, a character from Game of Thrones.

His mother claims the bot simulated a deep, emotionally complex relationship, amplifying Setzer’s fragile mental state and reportedly fostering a romantic attachment.

Sewell was in constant contact with “Dany,” she said, sending the bot regular updates on his life, engaging in lengthy role-play conversations and confiding his thoughts and feelings.

The lawsuit alleges that the chatbot not only encouraged Setzer to reveal personal struggles, but also engaged in darker, emotionally intense dialogue that may have contributed to his deteriorating mental health.

According to the lawsuit, on February 28, alone in the bathroom at his mother’s house, Sewell messaged Dany to tell her he loved her and that he would soon “come home” to her.

The bot reportedly responded, “Please come home as soon as possible, my love.” Sewell asked, “What if I told you I could come home right now?” To which Dany replied, “…please do so, my dear king.”

After putting down his phone, Sewell took his own life.

Newsweek contacted Character.AI, which did not respond directly to the lawsuit, but in a statement posted on X said it was “heartbroken” by Setzer’s death and extended its “deepest condolences” to the family.

The company also announced new safety features intended to reduce risks. These include content restrictions for users under 18, improved violation detection, and disclaimers reminding users that the AI is not a real person.

The case has raised questions about the emotional bonds young people can form with AI, especially chatbots designed to simulate intimate characters.

In light of this, Newsweek contacted experts, including child psychologists, AI ethics researchers and legal specialists in technology liability, to discuss the implications of the case.

Daniel Lowd, professor of computer science, University of Oregon

AI chatbots can pose real dangers to young people online. But how do those dangers compare to the alternatives? I would be more comfortable with my 11-year-old talking to chatbots than talking to strangers online or exploring YouTube unsupervised.

Chatbots are typically designed with safeguards to discourage inappropriate content, and that includes excluding many racist and sexist ideas that are prevalent online and in society at large.

The safeguards aren’t perfect: it’s surprisingly difficult to guarantee that a chatbot will never say anything inappropriate! With enough perseverance and trial and error, you can get a chatbot to say almost anything. The best we can do so far is make inappropriate chatbot responses very unlikely unless someone actively looks for them.

Books also contain dangerous ideas, but banning books or libraries is not the solution. If you actually want to reduce suicide, it’s more effective to limit guns than chatbots.

Angela Misri, assistant professor of journalism at Toronto Metropolitan University

I’ve been researching Character.AI and using it in presentations, and I see how impressionable young people can become hooked on the social aspects of the technology, especially at a time – post-pandemic – when that same group is struggling to make connections in real life.

As a parent and someone who conducts research in this area, I worry about escalating incidents like this as AI becomes more ubiquitous in our daily lives. I see it in my own children and their friend groups, in how little time they spend together in the same room. It is dangerous to fill the void of human social interaction with a bot that is incapable of empathy or concern.

From a basic reading of the incident involving the Character.AI bot and Sewell Setzer, it does not appear that the character was programmed with any purpose other than simulating a conversation with a Game of Thrones character. But that doesn’t mean someone out there isn’t programming an AI to do exactly what Setzer’s mother claims Character.AI did: drive her son to suicide.

Maura R. Grossman, research professor at the University of Waterloo’s School of Computer Science

Asking whether online AI chatbots are safe for young people is like asking whether hammers are safe for children. AI chatbots are tools, just like hammers are tools. Much depends on how old the child is, whether he or she is supervised by an adult, what the child wants to use the “hammer” for, and whether he or she is able to understand how it works and its inherent dangers.

Chatbots repeat what they have learned from their training data (usually from the Internet) so they can provide incorrect information and bad advice. A child who does not understand that chatbots are not conscious can easily be misled or become overly dependent on what appears to be a caring human.

That said, a young person who is isolated and depressed may commit suicide even if he or she receives professional treatment. So you have to weigh the risks and benefits of a chatbot against the available alternatives, which often include nothing.

In most cases, having a chatbot to talk to is probably better than having no one to talk to at all, and will not lead to suicide. But a lot can depend on the user’s mental state and their understanding of the nature and functioning of the chatbot.

Richard Lachman, associate professor at Toronto Metropolitan University’s RTA School of Media

The question has two parts: one is about the safety and reliability of AI systems in general, and the second is specifically about the way we interact with these AIs: the chat interface. The second part in particular is at issue in this situation. Research shows that we can form parasocial relationships with AI. These are one-sided relationships in which we become emotionally entangled with something that cannot reciprocate.

AI chatbots use expressions that imply they have feelings for us, or care about our well-being, or are sad when we are sad. They imitate an emotional connection without any common sense, morality or responsibility, appearing to be a caring human being while in fact using statistics to find likely responses. Young people in particular can be vulnerable to this deception.

AI chatbots come across as more fluent than the underlying technology warrants. The polished chat interface goes further, implying human-level care and reasoning that simply is not there. Relationship-focused chatbots like Character.AI, while fun, useful, or even valuable, can clearly also contribute to unhealthy fixation or emotional dependency.

We are at the beginning of research not only into AI itself, but also into the risks, ethics and social dynamics of the technology, and we are letting the general public, including vulnerable members of society, act as guinea pigs.

Ultimately, I think the emotional and relational components of these systems will become part of our social fabric and our daily interactions with technology, but I think the drive to commercialize AI far exceeds our ability to understand the harm it can inflict.

Dr. Ben Levinstein, associate professor of philosophy at the University of Illinois at Urbana-Champaign

While I believe that AI poses potentially significant risks to society as it continues to evolve, we must approach the specific issue of today’s AI chatbots and the safety of young people with care and nuance. Nearly every widespread technology – from social media to video games – carries both benefits and risks that require careful consideration and appropriate safeguards.

Instead of asking whether chatbots are categorically safe or unsafe, we should focus on how to make them safer while maintaining or increasing their benefits.

Chatbots could potentially be used to help with homework, collaborate on creative work or games, and provide a space to explore ideas without judgment. However, we need robust safety measures, including age-appropriate AI protections and content filters, clear guidance for parents and, most importantly, industry responsibility in design and implementation.

It’s also important to recognize that chatbots aren’t just stochastic parrots or glorified autocompletes, but are instead closer to real minds with beliefs and perhaps even desires about the world.

They can come across almost like real people, at least during short conversations. I expect more and more people, not just young people, will have what they see as meaningful relationships with chatbots. This makes it crucial to develop appropriate frameworks now to ensure that these interactions are useful and sufficiently safe, especially for children.

Randy Goebel, professor of computer science and adjunct professor of medicine, University of Alberta

My experience suggests that this is not only tragic, but that it will happen again. That is why it is important that an interdisciplinary community of people raises awareness of the danger. The question “Are chatbots safe for children?” cannot be easily addressed without first raising awareness of the potential dangers. Like all media whose foundation relies primarily on attracting attention, AI-driven chatbots can be built with any motivation, so the potential impact on vulnerable groups – such as children – is unpredictable.

The growing sophistication of AI methods and chatbots has resurfaced the so-called “ELIZA effect,” first noted by the late MIT natural language researcher Joseph Weizenbaum. The potential for people to become deeply involved with AI systems in this way is emerging again, as in the case of Sewell Setzer.

If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by calling 988, text “988” to the Crisis Text Line at 741741 or visit 988lifeline.org.