The Issue at Hand
Google has implemented significant changes to its search algorithm, drastically reducing the visibility of non-consensual explicit imagery (NCEI), including deepfake nudes. This move comes in response to growing concerns about the proliferation of AI-generated explicit content and its impact on individuals and society.
Key Developments
- Google’s updated ranking system has reduced exposure to fake explicit images by more than 70% on searches seeking this content about specific individuals.
- The search engine now prioritizes news articles and non-explicit content related to deepfakes and their societal impact.
- Google is extending measures used for real unwanted explicit images to synthetic ones, including duplicate removal and filtering similar queries.
- Websites with high volumes of successful takedown requests will face demotion in search results.
Implications and Limitations
While these changes represent a significant step forward in combating NCEI, limitations remain. Google has not added warning messages for searches seeking sexual deepfakes of adults, unlike the warnings it displays for searches related to child exploitation. The company continues to balance its role as a gatekeeper of internet content against preserving access to legitimate material. Even with these measures, controlling the spread of AI-generated explicit content remains difficult, underscoring the ongoing need for both technological and policy solutions to this evolving problem.