The Federal Election Commission Clarifies AI Rules in Political Campaigns
The Federal Election Commission (FEC) recently issued an interpretive rule clarifying how existing regulations apply to the use of artificial intelligence (AI) in political campaigns. Instead of pursuing a new rulemaking process, the commissioners confirmed that AI-generated deceptive content is already covered under existing laws barring fraudulent misrepresentation. This decision comes in response to growing concerns over the use of AI to create misleading political advertisements and deepfakes ahead of the upcoming election cycle.
The commission's 5-1 vote approved a compromise crafted by Democratic Commissioners Dara Lindenbaum and Shana Broussard alongside Republican Commissioners Trey Trainor and Allen Dickerson. Rather than adopt new rules specifically addressing AI, the bipartisan majority determined that AI-generated content falls under existing laws prohibiting fraudulent misrepresentation in federal elections.
Commissioner Lindenbaum explained, “The statute is technology-neutral,” emphasizing that fraudulent misrepresentation is illegal regardless of the tools used to carry it out. The decision underscores that while AI is an evolving technology, the fundamental legal framework for election integrity remains intact.
A Divided Commission and a Broader Debate
Despite the bipartisan compromise, former FEC Chair Sean Cooksey, who left the FEC to work for the Trump administration, cast the sole dissenting vote, arguing that the commission lacks both the authority and expertise to regulate AI. He voiced concerns that issuing guidance so close to an election could create confusion among political advertisers and inadvertently chill lawful speech. Cooksey had previously opposed any AI rulemaking, stating in an August op-ed that neither the FEC nor the Federal Communications Commission had the authority to regulate AI in elections.
Public Citizen, the nonprofit watchdog that initially petitioned the FEC for clearer AI regulations, criticized the decision as insufficient. Co-President Robert Weissman called it an “anemic” response that fails to address the dangers of AI-generated deception, though he acknowledged that the ruling at least leaves room for future regulation.
Why the FEC Opted for a Technology-Neutral Approach
The FEC’s decision aligns with a broader trend in regulation: targeting harmful conduct rather than specific technologies. The commission determined that because fraudulent misrepresentation is already illegal, singling out AI was both unnecessary and beyond its jurisdiction. This approach avoids the pitfalls of trying to regulate a rapidly evolving technology while preserving the commission's ability to enforce against deceptive campaign tactics, whatever tools are used to carry them out.
State and Federal Efforts to Regulate AI in Elections
While the FEC has chosen not to pursue new AI regulations, other federal and state bodies are moving ahead with their own measures. The FCC has proposed rules requiring disclosures for AI-generated political ads on radio and television, though its jurisdiction does not extend to online platforms. Meanwhile, nineteen states have already enacted restrictions on deceptive AI-generated election content, with fourteen of those laws passed in 2024 alone. In Congress, various bills aimed at addressing AI in elections are under consideration, though none has yet become law.
The Road Ahead
As AI continues to evolve, its role in elections will remain contentious. The FEC’s decision reinforces that fraudulent misrepresentation is illegal regardless of the technology involved, but critics argue it does little to prevent AI-driven misinformation before it spreads. Moving forward, lawmakers at both the state and federal levels will need to determine how best to balance technological innovation with election integrity.
As we enter a new election cycle, how AI is used—and how it is regulated—will remain a key issue in the ongoing debate over democracy and digital disinformation.