The rise of generative AI in political advertising has sparked a wave of legislative action across the United States, with states rapidly enacting laws that range from outright bans to mandatory disclosure requirements for AI-generated campaign content.
Generative AI, defined as technology capable of creating realistic text, audio, image, or video content based on learned patterns, has raised concerns about its potential to mislead voters. Many states refer to AI-generated content as “synthetic media” or “deep fakes,” particularly when it involves audio or visual elements.
Several states have already implemented laws addressing AI in elections:
- Michigan, Utah, Wisconsin, Texas, Idaho, New York, Arizona, Oregon, and New Mexico have statutes in place
- New Hampshire and Massachusetts are working on similar legislation
- Florida’s law becomes effective on July 1, 2024
The most common approach is to require disclosures on AI-generated content. Utah, for example, mandates specific statements such as “This video content generated by AI” for visual media. Florida requires political ads depicting actions that did not actually occur to carry the disclosure “Created in whole or in part with the use of generative artificial intelligence (AI).”
Some states have taken more stringent measures. Texas imposes criminal penalties for publishing “deep fake” videos within 30 days of an election with intent to influence the outcome. The constitutionality of such laws is already being challenged, however, and one Texas court has ruled the statute unconstitutional.
These new regulations raise important questions about their effectiveness and potential loopholes. Will disclosures adequately prevent harm? How will the laws handle audio-only formats such as robocalls, or viewers who simply miss the disclosure? Florida’s 2024 elections will likely provide an early look at the real-world impact of these laws on political advertising.