AI-Generated Child Abuse Material Could ‘Overwhelm’ the Internet, UK Group Warns



UK-based internet watchdog the Internet Watch Foundation (IWF) is again sounding the alarm about the rapid spread of AI-generated child sexual abuse material (CSAM). In a new report released Wednesday, the group said 20,254 AI-generated CSAM images were found on a single dark web forum in just one month, warning that a flood of such abhorrent content could “overwhelm” the internet.

As AI image generators become more advanced, the ability to create realistic replicas of human beings has grown by leaps and bounds. Midjourney, Runway, Stable Diffusion, and OpenAI’s DALL-E are just a few of the platforms capable of conjuring lifelike images.

These cloud-based platforms, which are widely accessible to the public, have implemented substantial restrictions, rules, and controls to prevent their tools from being used by nefarious actors to create abusive content. But AI enthusiasts regularly hunt for ways to circumvent these guardrails.

“It’s important that we communicate the realities of AI CSAM to a wide audience because we need to have discussions about the darker side of this amazing technology,” foundation CEO Susie Hargreaves said in the report.

Saying its “worst nightmare” had come true, the IWF said it is now tracking instances of AI-generated CSAM of real victims of sexual abuse. The UK group also highlighted images of celebrities being de-aged and manipulated to appear as abuse victims, as well as manipulated pictures of famous children.

“As if it is not enough for victims to know their abuse may be being shared in some dark corner of the internet, now they risk being confronted with new images of themselves being abused in new and horrendous ways not previously imagined,” Hargreaves said.

One major problem with the proliferation of lifelike AI-generated CSAM, the IWF says, is that it could divert law enforcement resources from detecting and removing actual abuse.

Founded in 1996, the foundation is a non-profit organization dedicated to monitoring the internet for sexual abuse content, specifically content targeting children.

In September, the IWF warned that pedophile rings are discussing and trading tips on creating illegal images of children using open-source AI models that can be downloaded and run locally on personal computers.

“Perpetrators can legally download everything they need to generate these images, then can produce as many images as they want, offline, with no opportunity for detection,” the IWF said.

The UK group called for international collaboration to fight the scourge of CSAM, proposing a multi-tiered approach, including changes to relevant laws, updating law enforcement training, and establishing regulatory oversight for AI models.

For AI developers, the IWF recommends prohibiting the use of their AI for creating child abuse material, de-indexing related models, and prioritizing the removal of child abuse material from their models.

“This is a global issue which requires countries to work together and ensure that legislation is fit for purpose,” Hargreaves said in a statement previously shared with Decrypt, noting that the IWF has been effective in limiting CSAM in its home country.

“The fact that less than 1% of criminal content is hosted in the UK points to our excellent working partnerships with UK police forces and agencies, and we will actively engage with law enforcement on this alarming new trend, too,” Hargreaves said. “We urge the UK prime minister to put this firmly on the agenda at the global AI safety summit being hosted in the UK in November.”

While the IWF says takedowns of dark web forums hosting illegal CSAM in the UK are happening, the group said removal could be more complicated when a website is hosted in another country.

There are numerous concerted efforts to combat the abuse of AI. In September, Microsoft President Brad Smith suggested using KYC policies modeled after those employed by financial institutions to help identify criminals using AI models to spread misinformation and abuse.

In July, the State of Louisiana passed a law increasing penalties for the sale and possession of AI-generated child pornography. Under the law, anyone convicted of creating, distributing, or possessing unlawful deepfake images depicting minors faces a mandatory five to 20 years in prison, a fine of up to $10,000, or both.

In August, the U.S. Department of Justice updated its Citizen’s Guide To U.S. Federal Law On Child Pornography page. In case there was any confusion, the DOJ emphasized that images of child pornography are not protected under the First Amendment and are illegal under federal law.




