Understanding the Issue
A recent survey highlights a troubling trend: many minors are using generative AI to create inappropriate images of their peers. One in ten children reported involvement in generating non-consensual nude images, raising concern among parents and educators. The survey, conducted by Thorn, a non-profit focused on child protection, polled 1,040 minors aged 9 to 17. It revealed alarming behavior linked to the misuse of AI tools in schools, particularly “nudify” apps that create fake nude images of real people.
Key Findings
- One in ten children admitted to using AI to create inappropriate images.
- One in seven minors shared self-generated child sexual abuse material (CSAM).
- The survey was conducted online between November 3 and December 1, 2023.
- Concerns have been raised about the effectiveness of partnerships between Thorn and major tech companies in combating AI-generated content.
The Bigger Picture
These findings underscore the urgent need for proactive measures to address the risks of AI misuse among minors. The reported incidents, including students creating harmful images of teachers and classmates, illustrate the real-world consequences of this form of digital abuse. Society must engage in open discussion about the dangers of these AI technologies, set clear boundaries around acceptable behavior, and educate both children and adults about the potential harms. Addressing these issues now is crucial to prevent further escalation and to protect vulnerable individuals in the digital age.