The study examines the misuse of generative AI (GenAI) systems, offering a taxonomy of misuse tactics grounded in real-world examples. Researchers analyzed 200 media reports published between January 2023 and March 2024, identifying key patterns in how GenAI was misused during this period.
The findings highlight three main areas of concern:
1. Manipulation of human likeness and falsification of evidence are the most common tactics, often used to influence public opinion, enable scams, or generate profit.
2. Most reported cases involve easily accessible GenAI capabilities, requiring minimal technical expertise, indicating low barriers to entry for misuse.
3. New forms of misuse have emerged, blurring the lines between authenticity and deception in political outreach, self-promotion, and advocacy.
This research underscores the urgent need for greater awareness and regulatory measures to address GenAI misuse, and it emphasizes the importance of robust safeguards and ethical guidelines to prevent the exploitation of these tools. The study's insights can help policymakers, technology developers, and the public understand and mitigate the risks associated with generative AI.