In recent years, the digital landscape has profoundly reshaped election cycles, adding new layers of complexity to campaign strategies and voter engagement. For some, it’s hard to remember a time when political campaigns didn’t rely heavily on social media, let alone the internet. For the youngest generation of eligible voters especially, digital content has been central to digesting election information, serving as a primary means for candidates to communicate their platforms and for constituents to engage with the discourse around the political issues at the core of campaigns.
Technology has broadened access to information, breaking down barriers and encouraging civic participation. However, with increased digital engagement comes a flood of information that can be overwhelming to parse and, at times, difficult to verify. Now, AI is adding an entirely new dimension to this dynamic, presenting both extraordinary opportunities and unprecedented risks to the democratic process. When grappling with AI’s impact on elections, three core issues rise to the forefront: First Amendment considerations, data policy implications, and national security risks.
First Amendment Considerations
One of the most challenging aspects of AI in elections is balancing free speech with the need to prevent harmful disinformation. While the First Amendment protects a wide range of political expression, AI-generated content, such as deepfakes or manipulated narratives, blurs the line between protected speech and potentially dangerous disinformation. Platforms face pressure to moderate content, but that pressure raises concerns around censorship, bias, and the integrity of political advertising. As AI becomes more adept at generating realistic content, establishing guidelines that uphold free speech while addressing misinformation is critical.
Data Policy Considerations
Data is at the heart of modern campaigning, but with AI’s increasing role, the boundaries around data privacy and ownership are becoming less clear. Campaigns often rely on third-party data analysis, raising concerns about how voter data is used and whether it might extend beyond its intended purpose. This isn’t solely an AI issue, but AI exacerbates these concerns by introducing new questions around training data, transparency, and accountability.
National Security
Safeguarding elections from foreign influence has always been a priority, but AI tools have expanded the arsenal available to malicious actors. Foreign entities can weaponize AI to produce divisive content, conduct disinformation campaigns, and target specific voter demographics, posing significant risks to national security. AI-powered bots can launch large-scale, automated campaigns designed to manipulate public opinion, ultimately destabilizing trust in election outcomes.
Current Snapshot: Policy Landscape for AI in Elections
As AI continues to play a larger role in elections, policymakers and regulatory bodies are stepping up efforts to address its implications:
Federal Election Commission (FEC): The FEC, which regulates campaign advertising disclosures, has been exploring AI’s role in elections, particularly with respect to AI-driven political ads and misinformation. Its recent interpretive rule extending the ban on fraudulent campaign misrepresentation to AI-generated content is a significant step in addressing the evolving landscape of election interference. SIIA has previously advocated for applying existing laws, like the FEC’s fraudulent misrepresentation doctrine, to curb such misuse. By clarifying that AI-generated content is covered under these rules, the FEC has reinforced its commitment to maintaining election integrity.
Federal Communications Commission (FCC): The FCC has also taken steps to address the growing influence of AI in election-related communications. In July 2024, the agency proposed transparency rules that would require broadcasters who already report on political ads to disclose when AI is used in those ads, including through on-air disclosures. The proposal aims to create consistency across states with varying disclosure laws without limiting the use of AI-generated content. In August, the FCC extended its efforts to AI-generated robocalls and robotexts, proposing consumer-protection measures and standards for AI-based calls while preserving support for assistive technologies. The agency also recently issued a $6 million fine for illegal election-related robocalls in New Hampshire that used deepfake technology, marking a firm stance against AI-driven election interference.
Congressional Action: No federal AI election legislation has been enacted. In the spring, Senator Amy Klobuchar introduced three AI election bills: one to prepare election administrators for AI, a second to ban deepfakes of federal political candidates, and a third to require disclosures on AI-manipulated political ads. The Senate Rules Committee advanced all three in May, although Republicans and industry raised concerns about the risk of hindering innovation and free speech. The two bills addressing voter-facing AI-generated content subsequently failed to advance by unanimous consent in the full Senate in July.
Meanwhile, Senator Mark Warner asked companies what they are doing to combat this issue, requesting specific answers about the actions they are taking to follow up on the commitments made at the Munich Security Conference. In August, Sen. Warner shared the companies’ responses and argued that they lacked specificity on how their policies are enforced, offering little indication of detailed, sustained efforts to engage local media, civic institutions, and election officials and to equip them with resources to identify and address misuse of generative AI tools in their communities. He also argued that the ecosystem still lacks robust, standardized information-sharing mechanisms.
Senator Warner also wrote a letter to Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly calling attention to voter intimidation plots that suppress turnout and sow mistrust among voters. He urged CISA to increase its resources and expand its collaborative work to track these plots. The Senator also called on CISA to facilitate communication between election offices and social media platforms, an effort the agency has moved away from due to politicization.
State Initiatives: Eighteen states have enacted laws to curb the use of deepfakes in elections, reflecting a strong trend among states toward regulating AI-driven deception in political messaging. Five states had passed such legislation prior to 2024; the past year saw an additional thirteen enact laws. Notably, Governor Newsom signed three new bills in California this year: AB 2655, which requires large online platforms to label or remove misleading, digitally altered election content and provide reporting mechanisms; AB 2839, which expands restrictions on distributing deceptive AI content within a broader timeframe and enables officials to seek injunctions for non-compliance; and AB 2355, which mandates clear disclosures on electoral advertisements that use AI-generated or manipulated content. Together, these actions signal growing state-level attention to transparency in election-related uses of AI.
Ongoing Industry Efforts: Every Year Is an Election Year
For technology companies operating across the globe, every year is an election year: they have been on the front lines of adapting to the digital elections landscape worldwide. The tech industry is playing a critical role in safeguarding elections by increasing transparency in political advertising and helping voters better understand the sources behind political messages. Many companies have pledged to address AI-driven election risks through initiatives such as the AI Elections Accord, an agreement that establishes guidelines for managing AI’s potential to mislead voters and outlines industry standards for transparency. Such efforts demonstrate that the private sector is committed to protecting the democratic process while responsibly advancing technology.
Looking Ahead: Collaboration and AI Literacy Are Key
The challenge AI presents to our democratic system is complex and multifaceted. Protecting elections will require ongoing collaboration between government and the technology industry, along with investment in initiatives that promote digital literacy so voters can navigate the digital landscape and recognize deceptive content.
The Software & Information Industry Association (SIIA) is committed to advancing policy solutions that protect the integrity of the democratic process while fostering innovation. Our focus is on promoting AI’s responsible use, ensuring that it strengthens democracy rather than undermines it. We will continue to advocate for policies and industry practices that strike the right balance, helping to shape a future where technology enhances, rather than hinders, the democratic process.