The rapid development of artificial intelligence has brought with it the dilemma of misuse. Abuse of AI has driven a worrying increase in child sexual abuse material found online in 2025, according to a security watchdog.
The Internet Watch Foundation (IWF) collected 8,029 AI-generated images and videos of realistic child abuse material, with the number of videos representing a 260-fold increase.
The severity of the material is stark: of the 8,029 items, 3,443 videos were classified as Category A, the designation for the most serious material under UK law.
Only 43 percent of the videos were found to be non-AI-generated, underscoring the growing role of the technology in producing and disseminating abusive content.
IWF Chief Executive Kerry Smith said, “Advances in technology should never come at the expense of child safety and well-being. While AI can offer much in the positive sense, it is horrifying to consider that its power could be used to devastate a child’s life. This content is dangerous.”
According to IWF analysts, criminals on the dark web are increasingly excited about how advances in AI are facilitating the creation and manipulation of child sexual abuse material (CSAM).
There is significant interest in “agent” systems – AI capable of performing complex tasks autonomously – that could further enhance or automate their nefarious activities.
The UK government has empowered tech companies and child protection agencies to scrutinize generative AI tools and verify that safeguards are in place to prevent the creation of such disturbing content.
Last year, the government announced a ban on creating and distributing AI models designed to generate CSAM.
Smith also called for higher safety standards in technology to protect people from online abuse.
A survey published by the IWF also revealed that eight in ten UK adults want the government to introduce legislation to ensure the safety of AI systems.
