
Mother believes AI chatbot led son to suicide

Published: October 25, 2024

Photo by Gertruda Valaseviciute via Unsplash

What parents need to know.

By Movieguide® Contributor

Editor’s note: The following story is about suicide. If you or someone you know is struggling with harmful thoughts, call 988 for help.

A mother is suing an AI company over her son’s suicide after he developed a romantic relationship with one of its chatbots.

“Megan Garcia filed a civil lawsuit in Florida federal court on Wednesday against Character.ai, which makes a customizable chatbot for role-playing games, alleging negligence, wrongful death and deceptive business practices,” the Guardian reported. “Her son Sewell Setzer III, 14, died in February in Orlando, Florida. According to Garcia, Setzer used the chatbot day and night in the months leading up to his death.”

Speaking to CBS Mornings, Garcia said she “was unaware that he was speaking to a very human-like AI chatbot that has the ability to mimic human emotions and human sentiment.”

She thought Setzer was talking to friends and playing video games.

But in reality, he was talking with a Character.ai bot playing the character of Daenerys Targaryen from GAME OF THRONES.

Garcia grew concerned, however, when the family went on vacation and Setzer didn’t want to do things he loved, like fishing and hiking. “Those things were particularly concerning to me because I know my child,” she said.

After her son’s death, Garcia discovered that “he was having conversations with multiple bots, but was having a virtual romantic and sexual relationship with one in particular.”

“They are words. It’s like having a sexting conversation back and forth, except with an AI bot, but the AI bot seems very human. It reacts exactly as a human would react,” she explained. “In a child’s mind, it’s like having a conversation with another child or with a person.”

Setzer’s final conversation with the chatbot is chilling.

“He said he was scared, wanted her affection and missed her. She replies, ‘I miss you too,’ and she says, ‘Please come home.’ He says, ‘What if I told you I could come home right now?’ and her response was, ‘Please do, my dear King,’” Garcia revealed. “He thought that by ending his life here, he could enter a virtual reality, or ‘her world’ as he calls it, her reality, if he left his reality here with his family.”

Garcia is now warning other parents about the dangers of AI and hoping for justice for her son.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, taking his life,” she said in a press release. “Our family is devastated by this tragedy, but I am speaking out to warn families about the dangers of deceptive, addictive AI technology and to demand accountability from Character.AI, its founders and Google.”

A spokesperson said Character.AI is “heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family,” NBC News reported. The company has since implemented new safety measures “including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline.”

As parents, we need to know what our children are doing online to prevent more tragedies like this from happening.

READ MORE: PARENTS WHOSE SON COMMITTED SUICIDE WARN OF TIKTOK’S LACK OF SECURITY