The proliferation of explicit deepfakes of celebrities such as Taylor Swift has brought the dark reality of generative AI to the forefront. Technology designed to generate new content can be turned to producing harmful and explicit material, leaving victims exposed to sexual exploitation. The lack of laws and prosecutorial power to hold perpetrators accountable is alarming: only about a dozen states have statutes that address the issue. As lawmakers scramble to respond, researchers are developing forensic tools to identify and prosecute these crimes. A key approach is examining what AI-generated images lack, such as camera metadata, location information, and device identifiers; by focusing on these gaps, investigators can build a strong case against offenders. Lawmakers and law enforcement must work together to hold perpetrators accountable and protect victims from the devastating consequences of generative AI misused for harm.
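
As a rough illustration of that metadata gap: a JPEG saved by a real camera normally carries an EXIF block (an APP1 segment whose payload begins with the ASCII identifier "Exif"), while images emitted by many generators carry no such segment. The sketch below, a simplified illustration rather than an actual forensic tool, walks a JPEG's marker segments and flags whether an EXIF block is present at all. The function name and the hand-built sample bytes are illustrative assumptions, not part of any real investigative toolkit.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment.

    EXIF metadata lives in an APP1 segment (marker 0xFF 0xE1) whose
    payload begins with the identifier b"Exif\x00\x00". Its absence is
    one of the gaps an investigator might note in a suspect image.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):   # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                # lost sync with markers
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                       # start of scan: metadata is over
            break
        # Segment length field counts itself plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False

# Tiny hand-built byte strings (illustrative, not real photos):
# one JPEG with a minimal EXIF APP1 segment, one stripped bare.
with_exif = (b"\xff\xd8"                         # SOI
             + b"\xff\xe1" + (8).to_bytes(2, "big")
             + b"Exif\x00\x00"                   # APP1 with EXIF identifier
             + b"\xff\xd9")                      # EOI
without_exif = b"\xff\xd8\xff\xd9"               # SOI + EOI, no metadata
```

In practice investigators would inspect far more than this single flag, but the principle is the same: the absence of expected provenance data is itself a signal worth recording.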

Exposing the Dark Side of Generative AI
The social ramifications of generative AI used for horrific acts are already here. Society must come to grips with them, and lawmakers must act quickly.