
Law Enforcement Races to Stop AI Child Porn



WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear naked. A U.S. Army soldier accused of creating images of children he knew being sexually abused. A software engineer charged with generating hyper-realistic sexually explicit images of children.

Law enforcement agencies in the U.S. are cracking down on the disturbing spread of child sexual abuse images created through artificial intelligence technology — from manipulated photos of real children to graphic images of computer-generated children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states are doing everything they can to ensure that people who generate “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that it is a crime, that it will be investigated and prosecuted if the evidence supports it,” said Steven Grocki, chief of the Justice Department’s Child Exploitation and Obscenity Section, in an interview with The Associated Press. “And if you think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone calls you to account.”

The Justice Department says existing federal laws clearly apply to such content, and it recently filed what is believed to be the first federal case involving solely AI-generated images — meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska who is accused of running innocent photos of real children he knew through an AI chatbot to make the images sexually explicit.

The prosecutions come as child advocates work urgently to curb the misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who don’t actually exist.

Lawmakers, meanwhile, are passing a raft of legislation to ensure local prosecutors can file charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states this year signed laws cracking down on digitally created or altered child sexual abuse images, according to a study by the National Center for Missing & Exploited Children.

“We are playing catch-up as law enforcement on a technology that, quite frankly, is evolving much faster than we are,” said Ventura County District Attorney Erik Nasarenko.

Nasarenko pushed for legislation signed last month by Gov. Gavin Newsom that makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove the images depicted a real child.

AI-generated images of child sexual abuse could be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected if their image is altered to appear sexually explicit.

“It felt like a part of me was taken away, even though I wasn’t physically abused,” said 17-year-old Kaylin Hayman, who starred on the Disney Channel show “Just Roll with It” and helped push for the California law after falling victim to “deepfake” images.


