SIIA Letter to FTC on Rytr Order

The Software & Information Industry Association (SIIA) urges the FTC to withdraw the proposed consent order against Rytr LLC, citing concerns that the action extends beyond the FTC’s Section 5 authority. SIIA argues that the move could set a troubling precedent by imposing means and instrumentalities liability on AI tool providers despite no evidence of actual harm. SIIA notes that the FTC has traditionally applied this form of liability to products inherently tied to deceptive practices, whereas Rytr’s generative AI tool has both lawful and potentially unlawful uses.

SIIA also warns of the chilling effect this enforcement could have on the AI industry, potentially stifling innovation by penalizing providers of multi-use tools. The letter advocates a more targeted approach that focuses on harmful applications of AI rather than on general-purpose tools, and urges the FTC to concentrate on clear-cut cases of fraud and deception, as seen in the other Operation AI Comply actions, to maintain a balanced regulatory environment that supports AI’s responsible development.

Did the FTC just declare war on generative AI?

Policymakers in the United States and abroad are grappling with how to advance innovation in inherently multi-use generative AI technologies while building guardrails to mitigate the risk of misuse and malicious activity. Despite differences of opinion among policymakers, there is near consensus that the right way to proceed with regulation is to focus on high-risk applications of AI and to advance measures around transparency, testing, and evaluation to mitigate the risks associated with lower-risk applications. Reflecting this, legislative approaches to liability have focused squarely on those who use AI in ways that create harm, or those who develop AI tools intended to do so. No jurisdiction has gone so far as to effectively ban generative AI simply because it could be used to generate speech that might deceive a third party.

Until now, the FTC’s attention to AI as an emerging area has led to enforcement based on fact patterns typical of Section 5 – the FTC Act authority that governs unfair or deceptive acts or practices and is used to protect consumers from fraud, schemes, and lax business practices. This is consistent with the April 2023 joint statement of FTC Chair Lina Khan and the heads of three other federal agencies, which affirmed that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”

That may be changing. 

In September, the FTC announced a series of actions as part of the Operation AI Comply law enforcement sweep. Four of these involve traditional deceptive schemes that fall well within the scope of the FTC’s Section 5 authority, such as a “robot lawyer” that failed to deliver on lofty claims, and business opportunity schemes that made false promises about how AI could help consumers get rich. The Commission approved each of these on a unanimous 5-0 vote.

The fifth action in this package, however, breaks new ground. That action involved Rytr, a company that offers an AI tool to generate written content in dozens of “use cases” – things like “Email” and “Blogs” and “Testimonial & Review.” The FTC claims that by offering the “Testimonial & Review” use case (now discontinued, in response to the FTC action), Rytr “provided the means and instrumentalities to its users and subscribers to generate written content for consumer reviews that was false and deceptive” and “engaged in an unfair business practice by offering a service that was intended to quickly generate unlimited content for consumer reviews and created false and deceptive written content for consumer reviews.” (Analysis of Proposed Consent Order to Aid Public Comment, at 80566; see also In the Matter of Rytr LLC, Complaint, ¶¶ 15-17.)  

The Commission approved the Rytr action on a 3-2 vote with strong dissents from Commissioners Melissa Holyoak and Andrew Ferguson.  The concerns raised in these dissents go to the very heart of the FTC’s case against Rytr.

For starters, as Commissioner Holyoak explains, “the complaint does not allege that users actually posted any draft reviews. Since the Commission has no evidence that a single draft review was posted, the complaint centers on alleging speculative harms that may have come from subscribers with access to unlimited output from across Rytr’s use cases, which included draft reviews.” (Holyoak at 2.) Speculative harms of this sort do not satisfy the Section 5 requirement that “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” (Id.)

This is only the tip of the iceberg. More troubling, the Rytr action represents a departure from the FTC’s prior AI-related enforcement. The FTC’s earlier actions involving AI concerned either the sorts of schemes and deceptions that have been the agency’s bread and butter or misrepresentations made to consumers (such as about the protection and use of personal data); the Rytr case involves neither. Instead, it represents an effort to extend what is known as “means and instrumentalities” liability to actors who provide tools that could be used to the detriment of consumers.

The FTC has traditionally applied means and instrumentalities liability narrowly in two situations. As explained by Commissioner Ferguson, the first is when a product or service “is inherently deceptive” or “has no purpose apart from facilitating” a violation of Section 5. (Ferguson at 3.) This theory has been used “to pursue makers of push cards and punch boards custom-made for retailers to use in illegal marketing schemes” as well as “suppliers of mislabeled art.” (Id. at 3-4.) The second situation involves “suppliers of misleading marketing materials that someone down the supply chain uses to deceive consumers,” such as in a pyramid scheme. (Id.)

Means and instrumentalities is a form of direct liability, which requires active participation and knowledge of wrongfulness, as distinct from secondary theories of liability, which are not available to the FTC for Section 5 claims. (See, e.g., FTC, Trade Regulation Rule on Impersonation of Government or Businesses, Supplemental Notice of Proposed Rulemaking, at 15077, 15082 n.94 (Mar. 1, 2024).) The FTC has recently sought to expand the means and instrumentalities doctrine to reach acts or practices that are not inherently deceptive or misleading but have the potential to enable Section 5 violations by others.

This arose in connection with the FTC’s Impersonation Rule, published in March. The final Rule did not adopt a proposal to extend liability to those who provide the means and instrumentalities used to impersonate government, businesses, and their officials or agents; instead, the FTC initiated a supplemental rulemaking to consider expanding the Impersonation Rule in this way. In endorsing means and instrumentalities liability for impersonation, FTC Chair Khan stated that it would enable liability, for example, for “a developer who knew or should have known that their AI software tool designed to generate deepfakes of IRS officials would be used by scammers to deceive people about whether they paid their taxes.”

What’s notable about Chair Khan’s hypothetical is that it involves an AI tool designed to enable deception. That is consistent with how the FTC has traditionally invoked means and instrumentalities, as Commissioner Ferguson explains in his dissent. But it is at odds with the thrust of the Rytr action, which appears to be the first time the Commission has invoked means and instrumentalities to pursue a product or service that is not “necessarily deceptive like mislabeled art, or useful only in facilitating someone else’s section 5 violation like lottery punch boards.” (Ferguson at 5.) Indeed, the Rytr tool “has both lawful and unlawful potential uses. A consumer could use it to draft an honest and accurate review. Or a business could use it to write a false review.” (Id.)

While the Commission chose to pursue only a sliver of the capabilities of Rytr’s generative AI tool, it’s not clear what ultimately separates the one problematic use case – “Testimonial & Review” – from the others. One could just as easily use a function for generating “Email” to prepare a fictitious review. 

And what distinguishes Rytr’s situation from that of the many generative AI tools available to the public that offer users unstructured prompts? Take ChatGPT, for example. In preparing this post, I asked ChatGPT to generate five fictional customer reviews for a seller of blue jeans. In virtually no time, ChatGPT delivered. “I’ve tried a lot of different brands, but these blue jeans are hands down the best,” the first review started. “The fit is perfect, especially around the waist and thighs, which is usually a problem area for me. The material feels durable yet soft, and they haven’t faded even after multiple washes. Shipping was fast too! I’m definitely getting another pair.” (The other fictional reviews addressed additional facets of the jean-buying experience, including color, customer service, durability, affordability, shipping, and so on, none of which I prompted.) It is hard to see how this is any different from the concerns raised by the FTC in the Rytr action.

But perhaps this action, involving an under-the-radar company, is meant as a test case to explore how far the FTC can extend means and instrumentalities liability without new congressional authority. Commissioner Ferguson expressed concern with this possibility, calling the Rytr action “a dramatic extension” of the doctrine that treats the “sale of a product with lawful and unlawful potential uses as a categorical Section 5 violation because someone could use it to write a statement that could violate Section 5.” (Ferguson at 5.) The same could be said, he continues, “of an almost unlimited number of products and services: pencils, paper, printers, computers, smartphones, word processors, typewriters, posterboard, televisions, billboards, online advertising space, professional printing services, etc. On the Commission’s theory, the makers and suppliers of these products and services are furnishing the means or instrumentalities to deceive consumers merely because someone might put them to unlawful use.” (Id.)

Commissioner Holyoak, too, recognizes the harmful precedent that this action could set. As she writes: “Today’s complaint suggests to all cutting-edge technology developers that an otherwise neutral product used inappropriately can lead to liability—even where, like here, the developer neither deceived nor caused injury to a consumer.” (Holyoak at 5.)

Commissioner Ferguson is correct that “Congress has not given [the FTC] the power to regulate AI” distinct from its authority to “enforc[e] the prohibition against unfair or deceptive acts or practices.” It is beyond dispute that this authority permits the FTC to pursue wrongdoing in connection with AI products and services, as the April 2023 joint statement makes clear. The question is what wrongdoing falls within the scope of the FTC’s authority. The FTC should be going after schemes, deception, and misrepresentations of the sort represented in the other Operation AI Comply cases. The two dissenting commissioners wholeheartedly endorse actions of this type. “But,” again in Ferguson’s words, the FTC “should not bend the law to get at AI. And we certainly should not chill innovation by threatening to hold AI companies liable for whatever illegal use some clever fraudster might find for their technology.” (Ferguson at 10.)

SIIA Submits Letter Raising Concerns Over UN Cybercrime Convention

In a recent letter to key U.S. officials, the Software & Information Industry Association (SIIA) and the Computer & Communications Industry Association (CCIA) express serious concerns about the proposed United Nations Convention Against Cybercrime. While supporting its objective of addressing cybercrime, the organizations warn that the draft Convention could hinder cybersecurity and AI safety, limit essential research, and pose significant risks to U.S. technology and data. Specifically, they argue that Articles 7-10 lack provisions to protect good-faith research, potentially weakening global cybersecurity resiliency by discouraging vulnerability testing. They also highlight that the Convention may empower foreign governments to justify data expropriation, compromising U.S. leadership in technology innovation.

SIIA and CCIA further caution that the Convention’s expansive definitions and lack of safeguards could legitimize censorship and increase privacy risks. By mandating broad data-sharing and surveillance measures, the Convention opens the door for abuses by autocratic regimes, posing a threat to free expression and the open internet. The letter urges the U.S. to reconsider its stance on this Convention and to advocate for an alternative approach that aligns more closely with existing frameworks, such as the OECD Declaration and Budapest Convention, to effectively combat cybercrime while preserving human rights and innovation.

Elections in the AI Era: Snapshot of Current Efforts from Government and Industry

In recent years, the digital landscape has profoundly reshaped election cycles, adding new layers of complexity to campaign strategies and voter engagement. For some, it is hard to remember a time when political campaigns did not rely heavily on social media, let alone the internet. Especially for the youngest generation of eligible voters, digital content has become central to how people digest election information, serving as a primary means for candidates to communicate their platforms and for constituents to engage with the issues at the core of campaigns.

Technology has broadened access to information, breaking down barriers and encouraging civic participation. However, with increased digital engagement comes a flood of information that can be overwhelming to parse through and, at times, difficult to verify. Now, AI is adding an entirely new dimension to this dynamic, presenting both incredible opportunities and unprecedented risks to the democratic process. When grappling with AI’s impact on elections, three core issues rise to the forefront: First Amendment considerations, data policy implications, and national security risks. 

First Amendment Considerations

A challenging aspect of AI in elections is balancing free speech with the need to prevent harmful disinformation. While the First Amendment protects free speech, AI-generated content, such as deepfakes or manipulated narratives, blurs the line between protected expression and potentially dangerous disinformation. Platforms face pressure to moderate content, but this raises concerns around censorship, bias, and the integrity of political advertising. As AI becomes more adept at generating realistic content, establishing guidelines that uphold free speech while addressing misinformation is critical.

Data Policy Considerations

Data is at the heart of modern campaigning, but with AI’s increasing role, the boundaries around data privacy and ownership are becoming less clear. Campaigns often rely on third-party data analysis, raising concerns about how voter data is used and whether it might extend beyond its intended purpose. This isn’t solely an AI issue, but AI exacerbates these concerns by introducing new questions around training data, transparency, and accountability.

National Security

Safeguarding elections from foreign influence has always been a priority, but AI tools have expanded the arsenal available to malicious actors. Foreign entities can weaponize AI to produce divisive content, conduct disinformation campaigns, and target specific voter demographics, which poses significant risks to national security. AI-powered bots can launch large-scale, automated campaigns designed to manipulate public opinion, which can ultimately destabilize trust in election outcomes.

Current Snapshot: Policy Landscape for AI in Elections

As AI continues to play a larger role in elections, policymakers and regulatory bodies are stepping up efforts to address its implications:

Federal Election Commission (FEC): The FEC has focused on AI’s role in elections, particularly with regard to AI-driven political ads and misinformation. The FEC regulates campaign advertising disclosures and recently clarified rules around AI-generated content. Its recent interpretive rule extending the ban on fraudulent campaign misrepresentation to AI-generated content is a significant step in addressing the evolving landscape of election interference. SIIA has previously advocated for applying existing laws, like the FEC’s fraudulent misrepresentation doctrine, to curb such misuse. By clarifying that AI-generated content is covered under these rules, the FEC has reinforced its commitment to maintaining election integrity.

Federal Communications Commission (FCC): The FCC has recently taken steps to address the growing influence of AI in election-related communications. In July 2024, the agency proposed new transparency requirements that would require broadcasters who already report on political ads to disclose when AI is used in these ads, including on-air disclosures. The proposal aims to create consistency across states with varying disclosure laws, without limiting the use of AI-generated content. In August, the FCC extended its efforts to AI-generated robocalls and robotexts, proposing measures for consumer protection and defining standards for AI-based calls while supporting assistive technologies. Recently, the FCC issued a $6 million fine for illegal election-related robocalls using deepfake technology in New Hampshire, marking a firm stance against AI-driven election interference.

Congressional Action: No federal legislation has been enacted. In the spring, Senator Amy Klobuchar introduced three AI election bills for markup: one would prepare election administrators for AI, the second would ban deepfakes of federal political candidates, and the third would require disclosures on AI-manipulated political ads. The Senate Rules Committee advanced all three in May, although Republicans and industry raised concerns about the risk of hindering innovation and free speech. However, the two bills addressing voter-facing AI-generated content failed to clear a unanimous consent vote in the full Senate in July.

Meanwhile, Senator Mark Warner requested responses from companies about what they are doing to combat AI-driven election misinformation, asking for specific answers about the actions they are taking to follow up on the commitments made at the Munich Security Conference. In August, Senator Warner shared the companies’ responses and argued that they lacked specificity about how the companies enforce their policies, offering little indication of detailed and sustained efforts to engage local media, civic institutions, and election officials and to equip them with resources to identify and address misuse of generative AI tools in their communities. He also argued that the ecosystem still lacks robust and standardized information-sharing mechanisms.

Senator Warner also wrote a letter to Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly calling attention to concerns about voter intimidation plots that suppress turnout and sow mistrust among voters. He urged CISA to take robust action, increase its resources, and expand its collaborative efforts to track such plots. The Senator also called on CISA to facilitate communication between election offices and social media platforms, an effort the agency has moved away from due to politicization.

State Initiatives: Eighteen states have recently enacted laws to curb the use of deepfakes in elections, reflecting a strong trend among states toward regulating AI-driven deception in political messaging. While five states had passed legislation prior to 2024, the past year saw an additional 13 states enact laws. Notably, Governor Newsom signed three new bills in California this year: AB 2655 requiring large online platforms to label or remove misleading digitally altered content related to elections and provide reporting mechanisms; AB 2839 expanding restrictions against distributing deceptive AI content within a broader timeframe, also enabling officials to seek injunctions for non-compliance; and AB 2355 mandating clear disclosures on electoral advertisements using AI-generated or manipulated content. Recent actions across states signal a mounting trend towards addressing transparency within election-related AI use.

Ongoing Industry Efforts: Every Year is an Election Year

For technology companies operating around the world, every year is an election year, and they have been on the front lines of adapting to the digital elections landscape globally. The tech industry is playing a critical role in safeguarding elections by increasing transparency in political advertising and helping voters better understand the sources behind political messages. Many companies have pledged to address AI-driven election risks through initiatives such as the AI Elections Accord, which establishes guidelines for companies to manage AI’s potential to mislead voters and outlines industry standards for transparency. Such efforts demonstrate that the private sector is committed to protecting the democratic process while responsibly advancing technology.

Looking Ahead: Collaboration and AI Literacy Are Key 

The challenge AI presents to our democratic system is complex and multifaceted. Protecting elections will require an ongoing collaboration between government and the technology industry, along with investment in initiatives that promote digital literacy to help voters navigate the digital landscape and recognize deceptive content.

The Software & Information Industry Association (SIIA) is committed to advancing policy solutions that protect the integrity of the democratic process while fostering innovation. Our focus is on promoting AI’s responsible use, ensuring that it strengthens democracy rather than undermines it. We will continue to advocate for policies and industry practices that strike the right balance, helping to shape a future where technology enhances, rather than hinders, the democratic process.

Supreme Court

SIIA Files Amicus Brief Supporting Access to Educational Technology

The Software & Information Industry Association (SIIA) has filed an amicus brief to support the continuation of the Schools and Libraries Universal Service Support program, or E-Rate, which helps provide affordable telecommunications services for schools and libraries. In its brief, SIIA stresses the E-Rate program’s critical role in bridging the digital divide, especially for rural and economically disadvantaged areas, by enabling widespread access to essential digital tools and learning resources. SIIA argues that the E-Rate program’s success demonstrates its indispensable value to schools and libraries, and warns that invalidating the current structure of the program would harm millions of students, educators, and library patrons across the nation who rely on its support.