
Elections in the AI Era: Snapshot of Current Efforts from Government and Industry

In recent years, the digital landscape has profoundly reshaped election cycles, adding new layers of complexity to campaign strategies and voter engagement. For some, it’s hard to remember a time when political campaigns didn’t rely heavily on social media, let alone the internet. For the youngest eligible voters especially, digital content has been central to how they digest election information, serving as a primary channel for candidates to communicate their platforms and for constituents to engage with the issues at the core of campaigns.

Technology has broadened access to information, breaking down barriers and encouraging civic participation. However, with increased digital engagement comes a flood of information that can be overwhelming to parse through and, at times, difficult to verify. Now, AI is adding an entirely new dimension to this dynamic, presenting both incredible opportunities and unprecedented risks to the democratic process. When grappling with AI’s impact on elections, three core issues rise to the forefront: First Amendment considerations, data policy implications, and national security risks. 

First Amendment Considerations

A challenging aspect of AI in elections is balancing free speech with the need to prevent harmful disinformation. While the First Amendment protects free speech, AI-generated content, such as deepfakes or manipulated narratives, blurs the line between protected expression and potentially dangerous disinformation. Platforms face pressure to moderate content, but this raises concerns around censorship, bias, and the integrity of political advertising. As AI becomes more adept at generating realistic content, establishing guidelines that uphold free speech while addressing misinformation is critical.

Data Policy Considerations

Data is at the heart of modern campaigning, but with AI’s increasing role, the boundaries around data privacy and ownership are becoming less clear. Campaigns often rely on third-party data analysis, raising concerns about how voter data is used and whether it might extend beyond its intended purpose. This isn’t solely an AI issue, but AI exacerbates these concerns by introducing new questions around training data, transparency, and accountability.

National Security

Safeguarding elections from foreign influence has always been a priority, but AI tools have expanded the arsenal available to malicious actors. Foreign entities can weaponize AI to produce divisive content, conduct disinformation campaigns, and target specific voter demographics, which poses significant risks to national security. AI-powered bots can launch large-scale, automated campaigns designed to manipulate public opinion, which can ultimately destabilize trust in election outcomes.

Current Snapshot: Policy Landscape for AI in Elections

As AI continues to play a larger role in elections, policymakers and regulatory bodies are stepping up efforts to address its implications:

Federal Election Commission (FEC): The FEC has been exploring AI’s role in elections, particularly with regard to AI-driven political ads and misinformation. The agency regulates campaign advertising disclosures and recently clarified rules around AI-generated content. Its recent interpretive rule extending the ban on fraudulent campaign misrepresentation to AI-generated content is a significant step in addressing the evolving landscape of election interference. SIIA has previously advocated for applying existing laws, like the FEC’s fraudulent misrepresentation doctrine, to curb such misuse. By clarifying that AI-generated content is covered under these rules, the FEC has reinforced its commitment to maintaining election integrity.

Federal Communications Commission (FCC): The FCC has recently taken steps to address the growing influence of AI in election-related communications. In July 2024, the agency proposed new rules that would require broadcasters, who already report on political ads, to disclose when AI is used in those ads, including through on-air disclosures. The proposal aims to create consistency across states with varying disclosure laws without limiting the use of AI-generated content. In August, the FCC extended its efforts to AI-generated robocalls and robotexts, proposing consumer protection measures and defining standards for AI-based calls while supporting assistive technologies. Recently, the FCC issued a $6 million fine for illegal election-related robocalls that used deepfake technology in New Hampshire, marking a firm stance against AI-driven election interference.

Congressional Action: No federal legislation on AI in elections has been enacted. In the spring, Senator Amy Klobuchar introduced three AI election bills. One would prepare election administrators for AI, the second would ban deepfakes of federal political candidates, and the third would require disclosures on AI-manipulated political ads. The Senate Rules Committee advanced all three in May, although Republicans and industry raised concerns about the risk of hindering innovation and free speech. The two bills addressing voter-facing AI-generated content then failed to pass the full Senate by unanimous consent in July.

Meanwhile, Senator Warner requested responses from companies about what they are doing to combat AI-driven election threats, asking for specific answers about the actions they have taken to follow up on the commitments made at the Munich Security Conference. In August, Sen. Warner shared the companies’ responses and argued that they lacked specificity on how the companies enforce their policies. He noted that the responses offered little indication of detailed and sustained efforts to engage local media, civic institutions, and election officials, or to equip them with resources to identify and address misuse of generative AI tools in their communities. He also argued that the ecosystem still lacks robust and standardized information-sharing mechanisms.

Senator Warner also wrote a letter to Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly calling attention to concerns about voter intimidation plots designed to suppress turnout and sow mistrust among voters. He urged CISA to take robust action by increasing its resources and expanding its collaborative efforts to track these plots. The Senator also called on CISA to facilitate communication between election offices and social media platforms, an effort the agency has moved away from due to politicization.

State Initiatives: Eighteen states have recently enacted laws to curb the use of deepfakes in elections, reflecting a strong trend among states toward regulating AI-driven deception in political messaging. While five states had passed legislation prior to 2024, the past year saw an additional 13 states enact laws. Notably, Governor Newsom signed three new bills in California this year: AB 2655 requiring large online platforms to label or remove misleading digitally altered content related to elections and provide reporting mechanisms; AB 2839 expanding restrictions against distributing deceptive AI content within a broader timeframe, also enabling officials to seek injunctions for non-compliance; and AB 2355 mandating clear disclosures on electoral advertisements using AI-generated or manipulated content. Recent actions across states signal a mounting trend towards addressing transparency within election-related AI use.

Ongoing Industry Efforts: Every Year is an Election Year

For technology companies operating around the world, every year is an election year; they have been on the front lines of adapting to the digital elections landscape across the globe. The tech industry is playing a critical role in safeguarding elections by increasing transparency in political advertising and helping voters better understand the sources behind political messages. Many companies have pledged to address AI-driven election risks through initiatives such as the AI Elections Accord. This agreement establishes guidelines for companies to manage AI’s potential to mislead voters and outlines industry standards for transparency. Such efforts demonstrate that the private sector is committed to protecting the democratic process while responsibly advancing technology.

Looking Ahead: Collaboration and AI Literacy Are Key 

The challenge AI presents to our democratic system is complex and multifaceted. Protecting elections will require an ongoing collaboration between government and the technology industry, along with investment in initiatives that promote digital literacy to help voters navigate the digital landscape and recognize deceptive content.

The Software & Information Industry Association (SIIA) is committed to advancing policy solutions that protect the integrity of the democratic process while fostering innovation. Our focus is on promoting AI’s responsible use, ensuring that it strengthens democracy rather than undermines it. We will continue to advocate for policies and industry practices that strike the right balance, helping to shape a future where technology enhances, rather than hinders, the democratic process.

Supreme Court

SIIA Files Amicus Brief Supporting Access to Educational Technology

The Software & Information Industry Association (SIIA) has filed an amicus brief to support the continuation of the Schools and Libraries Universal Service Support program, or E-Rate, which helps provide affordable telecommunications services for schools and libraries. In its brief, SIIA stresses the E-Rate program’s critical role in bridging the digital divide, especially for rural and economically disadvantaged areas, by enabling widespread access to essential digital tools and learning resources. SIIA argues that the E-Rate program’s success demonstrates its indispensable value to schools and libraries, and warns that invalidating the current structure of the program would harm millions of students, educators, and library patrons across the nation who rely on its support.

In the News: The American Privacy Rights Act’s Hidden AI Ban

Authors: Paul Lekas and Anton van Seventer

Source: Techdirt

The release of a bipartisan draft of the American Privacy Rights Act (APRA) reinvigorated the effort to pass a federal consumer privacy law, only to sputter and stall amid concerns raised from across the political spectrum. All that is gone, however, is not forgotten: it is only a matter of time before Congress returns its institutional gaze to consumer privacy. When it does, Congress should pay careful attention to the implications of the APRA’s policy choices on AI development.

The APRA proposed to regulate AI development and use in two key ways. First, it required impact assessments and audits on algorithms used to make “consequential decisions” in areas such as housing, employment, healthcare, insurance, and credit, and provided consumers with rights to opt-out of the use of such algorithms. House drafters subsequently struck these provisions. Second, perhaps more importantly – and the focus of this article – the APRA also prohibited the use of personal data to train multipurpose AI models. This prohibition is not explicit in the APRA text. Rather, it is a direct implication of the “data minimization” principle that serves as the bedrock of the entire bill.

Data Minimization as a Framework for Consumer Privacy

Data minimization is the principle that data collection should be limited to only what is required to fulfill a specific purpose, and has both procedural and substantive components. Procedural data minimization, which is a hallmark of both European Union and United States privacy law, focuses on disclosure and consumer consent. Virginia’s Consumer Data Protection Act, for example, requires data collected and processed to be “adequate, relevant, and reasonably necessary” for its purposes as disclosed to the consumer. Privacy statutes modeled on procedural data minimization might make it difficult to process certain kinds of personal information, but ultimately with sufficient evidence of disclosure, they tend to remain agnostic about the data’s ultimate use.

Substantive data minimization goes further by limiting the ability of controllers to use consumer data for purposes beyond those expressly permitted under the law. Maryland’s Online Data Privacy Act, enacted earlier this year, is an example of this. The Maryland law permits covered businesses to collect, process or share sensitive data when it is “reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer.” Although Maryland permits consumers to consent to additional uses, practices that are by default legal under Virginia’s and similar statutes — such as a local boat builder using data on its current customers’ employment or hobbies to predict who else in the area is likely to be interested in its business — would generally not be permissible in Maryland.

The APRA adopts a substantive data minimization approach, but it goes further than Maryland. The APRA mandates that covered entities shall not collect or process covered data “beyond what is necessary, proportionate, and limited to provide or maintain a specific product or service requested by the individual to whom the data pertains,” or alternatively “for a purpose other than those expressly permitted.” The latter category would then permit data to be used only for purposes explicitly authorized in the legislation — described as “permitted purposes” — but does not permit consumers to consent to additional uses, or even to several such “permitted purposes” at the same time.

The APRA proposes what is essentially a white list approach to data collection and processing. It does not permit personal data to be used for a range of socially beneficial purposes, such as detecting and preventing identity theft, fraud, and harassment, protections that are essential to a functioning economy. And because the development of AI models is not among the permitted purposes, no personal data could be used to train AI models, even if consumers were to consent and even if the information was never disclosed. In contrast, current U.S. laws permit collection and processing of personal data subject to a series of risk-based regulations.

The substantive data minimization approach reflected in the APRA represents a potential sea change in norms for consumer privacy law in the United States. Each of the 19 state consumer privacy laws now in effect has by and large adopted a procedural data minimization approach in which data collection and processing is presumptively permissible. They have generally avoided substantive minimization restrictions. Even Maryland, the most stringent of these, has stopped well short of the APRA’s proposal to restrict data collection and processing to only those uses specified in the bill itself.

The GDPR’s Minimization Approach

The APRA’s approach to data minimization has more in common with the EU General Data Protection Regulation (GDPR) than with U.S. state privacy laws. The GDPR follows a substantive data minimization model, allowing collection only for a set of “specified, explicit, and legitimate” purposes. Unlike the APRA, however, a data controller may use data if a consumer provides affirmative express consent. As such, compliance practitioners typically advise companies operating in Europe that intend to “reuse” data for multiple purposes, such as to train multimodal AI models, to simply obtain a consumer’s consent to use any data sets that would undergird future technological development of these models.1

Even with permission to use data pursuant to consumer consent, the GDPR framework has been widely criticized for slowing innovation that relies on data. Some have attributed the slow pace of European AI development, compared to the United States and China, to the GDPR’s restrictions on data use. Notably, enforcement actions by EU regulators, as well as general uncertainty over the legality of training multimodal AI under the GDPR, have already forced even large companies operating in the EU to stop offering their consumer AI applications within the jurisdiction altogether.

How the APRA Would Cut Off AI Development

The APRA, if enacted in its current form, would have a starker impact on AI development than even the GDPR. This is because the APRA would not permit any “reuse” of data, nor permit the use of data for any purpose outside the bill’s white list, even in cases where a consumer affirmatively consents.

That policy choice moves the APRA beyond the GDPR’s already restrictive framework into a new kind of exclusively substantive privacy regulation that would hamstring AI development. Multifaceted requests by end users form the foundation of generative AI; flexibility in consumer applications is these models’ purpose and promise. If data collected and processed for one purpose may never be reused for another purpose, regardless of consumer consent or even clear criteria, training and offering multipurpose generative AI applications is rendered facially illegal. The AI developer that could comply with the GDPR by obtaining affirmative consent to enable the reuse of data for multiple productive applications could not do so under the APRA.

Training entire AI models to serve only one purpose would have negative effects on both safety and reliability. Responsible AI practices include a multitude of safeguards that build off each other and their underlying data set to optimize machine learning applications for accuracy, consumer experience, and even data minimization itself. These improvements would not be feasible if every model used for a new purpose were forced to “start from scratch.” For example, filtering for inaccurate data and efforts to avoid duplicative datasets, both of which depend on well-developed training data, would be rendered ineffective. Consumers would also need to reset preferences, parameters, and data output safeguards for each model, leading to user fatigue.

Moreover, the APRA approach would prevent developers from building AI tools designed to enhance privacy. For example, the creation of synthetic data based on well-developed datasets, which can then be substituted for consumers’ personal data (a privacy-protective goal), is impossible in the absence of well-developed underlying data. Paradoxically, consumers’ personal data would instead need to be duplicated to serve each model and each purpose.

The sole provision in the APRA that would generally permit data to be used in technological development is a specific permitted purpose that allows covered entities to “develop or enhance a product or service.” This subsection, however, applies only to de-identified data. Filtering all personal data out of AI training data sets presents an impossible challenge at scale. Models are not capable of distinguishing whether, for example, a word is a name, or what data may be linked to it. Filters that attempt to weed out all personal data from a training data set inevitably also remove large swaths of non-personal data, producing what are known as “false positives.” High false positive rates are especially detrimental to training data sets because they mean that large amounts of valuable, non-personal training data are removed, leading to unpredictable and potentially biased results.
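
To make the false-positive problem concrete, the following is a minimal, purely hypothetical sketch of a naive name-based filter. The name list, sample sentences, and function are invented for illustration; they are not drawn from the APRA, any real compliance tool, or any actual training pipeline.

```python
import re

# Hypothetical illustration only: a naive "personal data" filter that drops any
# training example containing a token found in a small list of first names.
NAME_LIST = {"Florence", "Amber", "Jordan"}  # invented for this sketch


def looks_personal(text: str) -> bool:
    """Flag a document as 'personal' if any word matches the name list."""
    return any(token in NAME_LIST for token in re.findall(r"[A-Za-z]+", text))


corpus = [
    "Jordan renewed her library card last week.",        # personal data (true positive)
    "Florence is known for Renaissance architecture.",   # place name (false positive)
    "Amber is fossilized tree resin.",                    # common noun (false positive)
    "The bill was reported out of committee in May.",    # retained
]

filtered = [doc for doc in corpus if not looks_personal(doc)]
print(filtered)
# Only the last sentence survives: two documents containing no personal data
# at all are removed, shrinking and skewing the remaining training set.
```

A production-grade filter would be far more sophisticated than this toy example, but the underlying trade-off is the same one described above: the more aggressively a filter tries to catch personal data, the more harmless, valuable training data it discards.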

Even if this were feasible, filtering all personal data out of training data would itself lower the quality of the data set, further biasing outputs. Furthermore, many AI models include anti-bias output safeguards that would be diminished in the absence of the data they use to control for bias. Thus, a lack of relevant training data can bias outputs, yet so too can an inherently biased model whose output safeguards are rendered ineffective because they lack the personal information necessary to accomplish their task. Unfortunately, both of these harms are almost certain to materialize under a regime that wholly excludes personal information from training data.

Where to Go From Here

As the APRA falters and Congress looks ahead to a likely redraft of federal privacy legislation, it is critical to avoid mothballing domestic AI development with a poorly scoped overhaul of U.S. privacy norms. For several years preceding the APRA’s introduction, privacy advocates have advanced a narrative that the U.S. experiment with “notice and choice,” or notifying consumers and presenting an opportunity to opt out of data collection, has failed to protect consumer data. Improving this framework in a way that gives consumers greater control over their data is possible, and even desirable, via federal legislation. Yet a framework built around permitting only predetermined uses of data would have unintended, unforeseen, and potentially disastrous consequences both for domestic technological development and for U.S. competitiveness on the world stage.

1 The GDPR does not generally permit data collected for one purpose to be used for others. Although the law includes a series of criteria for doing so, these criteria are vague. They include 1) a link between the new and original purpose, 2) the context of collection, “in particular regarding the relationship between data subjects and the controller,” 3) the nature and sensitivity of the personal data, 4) the possible consequences of the new processing for data subjects, and 5) appropriate technical safeguards. The GDPR also specifically articulates that these criteria may not include contextual considerations, rendering compliance uncertain in the majority of cases.

Paul Lekas is Senior Vice President and Head of Global Public Policy and Government Affairs at the Software & Information Industry Association (SIIA). Anton van Seventer is Counsel for Privacy and Data Policy at SIIA.


SIIA Details AI Priorities for Remainder of 118th Congress

In this letter, the Software & Information Industry Association (SIIA) urges Congress to prioritize and pass essential AI legislation before the 118th Congress concludes. SIIA commends bipartisan efforts in both the Senate and House to shape U.S. AI policy and emphasizes the importance of legislation like the Future of AI Innovation Act and the AI Advancement and Reliability Act. These bills aim to establish a coordinated federal body for AI safety, thereby protecting national security and maintaining U.S. leadership in AI innovation. SIIA also highlights the importance of additional legislation to support AI research, workforce development, and standards, and to address challenges posed by AI in sensitive areas like elections and deepfakes.

SIIA advises Congress against passing the ENFORCE Act, which would impose export controls on AI models, warning that such a measure could hinder U.S. competitiveness. Instead, it advocates for sufficient funding to support initiatives like the U.S. AI Safety Institute and the National AI Research Resource. These resources, SIIA argues, are critical for the U.S. to sustain its global AI leadership, ensuring robust infrastructure for testing, accountability, and collaboration. SIIA offers its continued support to Congress in advancing a balanced approach to AI legislation.


SIIA Applauds the Release of Education Toolkit for AI from the U.S. Department of Education

The following statement can be attributed to Sara Kloek, Vice President, Education and Children’s Policy, the Software & Information Industry Association (SIIA).

We’re pleased to see today’s release of “Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration” from the U.S. Department of Education. This document provides actionable guidance that can be tailored to individual schools, districts, and states.

SIIA’s Principles for the Future of AI in Education were cited on page 52 as an example of a document that can help leaders “advocate for responsible AI development and deployment as well as responsible use.” We agree and are hopeful these principles can continue to be a resource.

Many of the recommendations, such as the use of impact assessments, AI testing, and other suggestions outlined for education leaders to manage risks, will need additional insight from the education community. We look forward to working with our colleagues to address how best to implement these processes in an education setting.