In the 2022 election, Adrian Perkins, the mayor of Shreveport, Louisiana, was targeted by a satirical TV commercial created with artificial intelligence (AI) that depicted him as a high school student being scolded by a principal. The ad was labeled as being created with “deep learning computer technology,” and Perkins believes it contributed to his loss. This incident highlights the growing concern about the use of AI-generated misinformation in political campaigns, particularly in local and state races where resources are limited. As AI technology becomes more accessible and widespread, it poses a significant threat to the integrity of democratic elections.
The use of AI in politics is a double-edged sword. On one hand, it can streamline mundane tasks and save campaigns time and resources. On the other, it can produce convincing misinformation capable of swaying voters. The lack of regulation and oversight in this area is alarming, and experts warn that AI-generated misinformation could tip the outcome of close races.
While some lawmakers have proposed legislation to regulate AI in politics, Congress has yet to take action. Meanwhile, local candidates are already facing criticism for deploying AI in misleading ways, such as using AI-generated headshots or fake news stories. The lack of familiarity with candidates and the decline of local news outlets make voters more susceptible to believing fake information.
Experts agree that regulating AI-generated political content is a significant challenge, in part because it is difficult to draw a legal line between what is true and what is not. As AI technology continues to evolve, addressing these concerns is essential to protecting the integrity of democratic elections.