Overview of Findings
Concerns about generative AI’s influence on elections ran high at the start of the year. By year-end, however, Meta reported that those fears largely failed to materialize on its platforms. The company analyzed election-related content across major votes in multiple countries and concluded that generative AI had minimal influence on these events.
Key Insights
- Meta found that AI-generated content related to elections accounted for less than 1% of all fact-checked misinformation.
- Meta’s Imagine AI image generator rejected 590,000 requests to create politically sensitive images in the lead-up to the elections.
- Coordinated networks attempting to spread disinformation gained only incremental benefits from using AI to generate content.
- Meta disrupted around 20 covert influence operations globally, noting that many of these networks lacked genuine audiences and inflated their apparent reach with fake engagement.
Significance of the Report
This report matters because it highlights the effectiveness of Meta’s existing policies in managing potential threats from generative AI during critical electoral periods. The findings suggest that while the technology poses risks, the measures in place have so far been adequate to mitigate them. Moreover, Meta’s focus on behavior rather than content allows for a more robust defense against disinformation campaigns. As the social media landscape continues to evolve, ongoing evaluation of these policies will be crucial to maintaining the integrity of future elections.