NEW YORK (AP) — The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.
In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated child sexual abuse images overwhelms law enforcement investigators and vastly expands the pool of potential victims.
If it isn’t stopped, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters.
“That is just incredibly shocking,” said Dan Sexton, whose charity organization, which is focused on combating online child sexual abuse, first began fielding reports about abusive AI-generated imagery earlier this year.
The report particularly targets the European Union, where there is a debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if the images are not previously known to law enforcement.