Alarm over legal loophole as warped chatbots impersonate Jimmy Savile

Online safety experts have raised the alarm over potential loopholes in UK digital law following the discovery of chatbots imitating dead children.

It comes as The Telegraph can reveal that even more disturbing bots have been created on Character.AI, including avatars of Jimmy Savile.

The Molly Rose Foundation, set up in memory of Molly Russell, has written to Ofcom warning of a “gap in the legislation” surrounding digital chatbots, meaning they risk slipping through the cracks in the UK’s crackdown on tech giants.

Last week The Telegraph found digital avatars of Molly Russell and murdered teenager Brianna Ghey on Character.AI, a service where users create their own chatbots with custom personalities.

Ofcom said chatbots like those posing as murdered teenager Brianna Ghey were ‘cruel and disgusting’ – PA

The user-generated bots used images of Molly and Brianna, as well as biographical details about the teenagers. Molly took her own life in 2017, while Brianna was murdered last year. Character.AI has since removed the AI clones.

However, The Telegraph has discovered even more AI bots that may breach Character.AI’s rules, including avatars of Savile, the late BBC DJ and sex offender.

The Molly Rose Foundation said the bots were “highly offensive” and a “blatant affront to societal norms” and asked Ofcom to clarify that they would be considered a form of “illegal content” under the Online Safety Act.

Concerns also emerged that internet users could use apps similar to Character.AI to create chatbots that encourage suicide or self-harm.

“While Character.AI appears to have some rudimentary design protections, it is inherently predictable that other actors will attempt to design a pro-suicide chatbot,” the letter said.

An Ofcom spokesperson said the bots mimicking dead children were “cruel and disgusting” and that the regulator was treating the issue “as an urgent matter”.

Character.AI prohibits the promotion of self-harm or suicide by its users. However, there are dozens of depression- or therapy-themed bots in the app, which is available for ages 13 and up.

The US-based service has soared in popularity among teens since its launch in 2022 and is now used by more than 20 million people.

In the letter to Melanie Dawes, the head of Ofcom, Andy Burrows, chief executive of the Molly Rose Foundation, raised concerns that parts of the Online Safety Act may not apply to chatbots, and that it was unclear whether suicide-related content generated autonomously by a bot would be considered illegal.

It follows a similar review by Jonathan Hall KC, the independent reviewer of terrorism legislation, who said earlier this year that while the law referred to “bots”, these “appear to be the old-fashioned kind”, and not advanced AI chatbots.

In a consultation, Ofcom said its approach to content posted by a bot would be “not very different” from its approach to content posted by a human.

Mr Burrows also raised concerns that key provisions of the law around automatic content moderation have been delayed and may not be enforced until 2026.

The Telegraph found that bots impersonating Savile had amassed tens of thousands of chats with users before being removed by Character.AI this week.

Users also created multiple bots posing as Josef Mengele, the Nazi doctor who performed fatal experiments on children at Auschwitz. The bots, some of which appeared to romanticise the infamous Nazi, had tens of thousands of chats between them.

The findings follow the death of Sewell Setzer, a 14-year-old from Florida who took his own life after spending hours talking to avatars on Character.AI. His mother has sued the company for negligence.

Character.AI said the death was “tragic” and that it had taken steps to create a safer experience for users under 18.

An Ofcom spokesperson said: “The impersonation of deceased children is a cruel and abhorrent use of technology, and our thoughts are with the families for the immense suffering this has caused.

“The use of a platform like Character.AI for these purposes raises important questions and we are urgently investigating the issues raised by The Telegraph’s investigation.

“We are in close contact with the Molly Rose Foundation and others, and we thank them for their continued support to ensure regulations are as strict as possible.”

A spokesperson for Character.AI said: “Character.AI takes safety on our platform seriously and our goal is to provide a creative space that is engaging, immersive and safe.

“These user-created characters have been removed from the platform because they violate our terms of service.”
