The Challenge of AI-Generated CSAM
The rise of generative AI has created new challenges in the fight against online child sexual exploitation. AI can now produce two types of illegal child sexual abuse material (CSAM): deepfakes depicting real children and fully synthetic images of virtual children. These AI-generated materials pose significant threats, including their potential use in sextortion schemes and the risk of overwhelming law enforcement resources.
Key Points on the Issue:
- AI-generated CSAM reports to the National Center for Missing and Exploited Children (NCMEC) are increasing
- Law enforcement struggles to prioritize cases involving imminent danger to children
- The private sector is developing technological solutions to combat CSAM
- Regulators are calling for expert commissions to evaluate AI risks in child exploitation
The Safety by Design Approach
To address these challenges, companies can implement a “safety by design” framework. This proactive approach involves:
- Embedding safety considerations into product development from the start
- Providing users with tools to manage their own safety
- Enhancing transparency and accountability in community standards
By adopting safety by design, companies can help prevent the creation and spread of AI-generated CSAM before it is widely disseminated. This approach also aligns with emerging global legislation, such as the EU's Digital Services Act and proposed U.S. laws including the STOP CSAM Act and the Kids Online Safety Act.