
SIIA Submits Comments on DOE’s FASST Initiative

The Software & Information Industry Association (SIIA) has submitted comments to the Department of Energy (DOE) on the transformative potential of the Frontiers in AI for Science, Security, and Technology (FASST) initiative. SIIA emphasizes FASST’s role in advancing U.S. economic and national security by leveraging AI and data resources, especially through DOE’s 17 national laboratories. SIIA also supports the Department of Energy AI Act of 2024, which would formalize this initiative, and calls for robust industry-DOE collaboration to maximize the program’s impact on scientific discovery and application.

In its comments, SIIA highlights key areas for DOE’s focus, including data interoperability, enhanced privacy measures, and a comprehensive data ecosystem with multi-cloud support. The association also urges DOE to prioritize AI workforce development and to foster a public-private partnership for governance. By incorporating best practices in data management, computational infrastructure, and AI governance, SIIA believes FASST can set a new standard for government-led AI initiatives and maintain U.S. leadership in AI innovation.


SIIA Congratulates Donald Trump on Re-Election

The following statement can be attributed to Chris Mohr, President of the Software & Information Industry Association.

SIIA congratulates Donald Trump on winning re-election as President, and the Republican Party on securing a majority in the Senate. We look forward to working with the Trump Administration and the new Senate leadership to develop fair and bipartisan tech policies that protect the business of information, ensure the efficiency of our market economy, and advance U.S. technological leadership and national security.


SIIA Letter to FTC on Rytr Order

The Software & Information Industry Association (SIIA) urges the FTC to withdraw the proposed consent order against Rytr LLC, citing concerns that the action extends beyond the FTC’s Section 5 authority. SIIA argues that this move could create a troubling precedent by imposing means and instrumentalities liability on AI tool providers, despite no evidence of actual harm. SIIA highlights the FTC’s traditional application of this liability to products inherently tied to deceptive practices, unlike Rytr’s generative AI tool, which has both lawful and potentially unlawful uses.

SIIA also warns of the chilling effect this enforcement could have on the AI industry, potentially stifling innovation by penalizing providers of multi-use tools. SIIA advocates for a more targeted approach, focusing on harmful applications of AI rather than general-purpose tools, and urges the FTC to concentrate on clear-cut cases of fraud and deception, as seen in the other Operation AI Comply actions, to maintain a balanced regulatory environment that supports AI’s responsible development.


Did the FTC just declare war on generative AI?

Policymakers in the United States and abroad are grappling with how to advance innovation in inherently multi-use generative AI technologies while building guardrails to mitigate the risk of misuse and malicious activity. Despite differences of opinion among policymakers, there is near consensus that the right way to proceed is to focus regulation on high-risk applications of AI and to advance measures around transparency, testing, and evaluation to address the risks of lower-risk applications. Reflecting this, legislative approaches to liability have focused squarely on those who use AI in ways that create harm, or on those who develop AI tools intended to do so. No jurisdiction has gone so far as to effectively ban generative AI because it could be used to generate speech that might deceive a third party.

Until now, the FTC’s attention to AI as an emerging area has led to enforcement based on fact patterns typical of Section 5 – the FTC Act authority that governs unfair or deceptive acts or practices and is used to protect consumers from fraud, schemes, and lax business practices. This is consistent with the April 2023 joint statement of FTC Chair Lina Khan and the heads of three other federal agencies, which affirmed that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”

That may be changing. 

In September, the FTC announced a series of actions as part of the Operation AI Comply law enforcement sweep. Four of these involve traditional deceptive schemes that fall well within the scope of the FTC’s Section 5 authority, such as a “robot lawyer” that failed to deliver on lofty claims, and business opportunity schemes that made false promises about how AI could help consumers get rich. The Commission approved each of these on a unanimous 5-0 vote.

The fifth action in this package, however, breaks new ground. That action involved Rytr, a company that offers an AI tool to generate written content in dozens of “use cases” – things like “Email” and “Blogs” and “Testimonial & Review.” The FTC claims that by offering the “Testimonial & Review” use case (now discontinued, in response to the FTC action), Rytr “provided the means and instrumentalities to its users and subscribers to generate written content for consumer reviews that was false and deceptive” and “engaged in an unfair business practice by offering a service that was intended to quickly generate unlimited content for consumer reviews and created false and deceptive written content for consumer reviews.” (Analysis of Proposed Consent Order to Aid Public Comment, at 80566; see also In the Matter of Rytr LLC, Complaint, ¶¶ 15-17.)  

The Commission approved the Rytr action on a 3-2 vote with strong dissents from Commissioners Melissa Holyoak and Andrew Ferguson. The concerns raised in these dissents go to the very heart of the FTC’s case against Rytr.

For starters, as Commissioner Holyoak explains, “the complaint does not allege that users actually posted any draft reviews. Since the Commission has no evidence that a single draft review was posted, the complaint centers on alleging speculative harms that may have come from subscribers with access to unlimited output from across Rytr’s use cases, which included draft reviews.” (Holyoak at 2.) Speculative harms of this sort do not satisfy the Section 5 requirement that “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” (Id.)

This is only the tip of the iceberg. More troubling, the Rytr action represents a departure from the FTC’s prior AI-related enforcement. The FTC’s earlier actions involving AI concerned either the sorts of schemes and deceptions that have been the FTC’s bread and butter or misrepresentations made to consumers (such as about the protection and use of personal data). The Rytr case involves neither. Instead, it represents an effort to extend what is known as “means and instrumentalities” liability to actors who provide tools that could be used to the detriment of consumers.

The FTC has traditionally applied means and instrumentalities liability narrowly in two situations. As explained by Commissioner Ferguson, the first is when a product or service “is inherently deceptive” or “has no purpose apart from facilitating” a violation of Section 5. (Ferguson at 3.) This theory has been used “to pursue makers of push cards and punch boards custom-made for retailers to use in illegal marketing schemes” as well as “suppliers of mislabeled art.” (Id. at 3-4.) The second situation involves “suppliers of misleading marketing materials that someone down the supply chain uses to deceive consumers,” such as in a pyramid scheme. (Id.)

Means and instrumentalities is a form of direct liability, which requires active participation and knowledge of wrongfulness, as distinct from secondary theories of liability, which are not available to the FTC for Section 5 claims. (See, e.g., FTC, Trade Regulation Rule on Impersonation of Government or Businesses, Supplemental Notice of Proposed Rulemaking, at 15077, 15082 n.94 (Mar. 1, 2024).) The FTC has recently sought to expand the means and instrumentalities doctrine to reach acts or practices not inherently deceptive or misleading that have the potential to enable Section 5 violations by others. 

This arose in connection with the FTC’s Impersonation Rule, published in March. The final Rule rejected a proposal to extend liability to means and instrumentalities used to impersonate government, businesses, and their officials or agents, though the FTC initiated a supplemental rulemaking to consider expanding the Impersonation Rule in this way. In endorsing means and instrumentalities liability for impersonation, FTC Chair Khan stated that it would enable liability, for example, for “a developer who knew or should have known that their AI software tool designed to generate deepfakes of IRS officials would be used by scammers to deceive people about whether they paid their taxes.”

What’s notable about Chair Khan’s hypothetical is that it involves an AI tool designed to enable deception. That is consistent with how the FTC has traditionally invoked means and instrumentalities, as Commissioner Ferguson explains in his dissent. But it is at odds with the thrust of the Rytr action, which appears to be the first time the Commission has invoked means and instrumentalities to pursue a product or service that is not “necessarily deceptive like mislabeled art, or useful only in facilitating someone else’s section 5 violation like lottery punch boards.” (Ferguson at 5.) Indeed, the Rytr tool “has both lawful and unlawful potential uses. A consumer could use it to draft an honest and accurate review. Or a business could use it to write a false review.” (Id.)

While the Commission chose to pursue only a sliver of the capabilities of Rytr’s generative AI tool, it’s not clear what ultimately separates the one problematic use case – “Testimonial & Review” – from the others. One could just as easily use a function for generating “Email” to prepare a fictitious review. 

And what makes Rytr’s situation different from that of the many generative AI tools available to the public that offer users unstructured prompts? Take ChatGPT, for example. In preparing this post, I asked ChatGPT to generate five fictional customer reviews for a seller of blue jeans. In virtually no time, ChatGPT delivered. “I’ve tried a lot of different brands, but these blue jeans are hands down the best,” the first review started. “The fit is perfect, especially around the waist and thighs, which is usually a problem area for me. The material feels durable yet soft, and they haven’t faded even after multiple washes. Shipping was fast too! I’m definitely getting another pair.” (The other fictional reviews addressed additional facets of the jean-buying experience, including color, customer service, durability, affordability, shipping, and so on, none of which I prompted.) It is hard to see how this is any different from the concerns raised by the FTC in the Rytr action.
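For developers, reproducing that experiment programmatically takes only a few lines. The sketch below is a minimal, hypothetical example using the OpenAI Python client; the model name and prompt wording are my own assumptions, not a record of the session described above.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Illustrative prompt mirroring the experiment described in this post.
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any general-purpose chat model would do
    messages=[
        {
            "role": "user",
            "content": "Generate five fictional customer reviews for a seller of blue jeans.",
        }
    ],
)
print(response.choices[0].message.content)

The same handful of lines would work, with trivial changes, against virtually any publicly available text-generation API.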

But perhaps this action, involving an under-the-radar company, is meant as a test case to explore how far the FTC can extend means and instrumentalities liability without new congressional authority. Commissioner Ferguson expressed concern with this possibility, calling the Rytr action “a dramatic extension” of the doctrine that treats the “sale of a product with lawful and unlawful potential uses as a categorical Section 5 violation because someone could use it to write a statement that could violate Section 5.” (Ferguson at 5.) The same could be said, he continues, “of an almost unlimited number of products and services: pencils, paper, printers, computers, smartphones, word processors, typewriters, posterboard, televisions, billboards, online advertising space, professional printing services, etc. On the Commission’s theory, the makers and suppliers of these products and services are furnishing the means or instrumentalities to deceive consumers merely because someone might put them to unlawful use.” (Id.)

Commissioner Holyoak, too, recognizes the harmful precedent that this action could set. As she writes: “Today’s complaint suggests to all cutting-edge technology developers that an otherwise neutral product used inappropriately can lead to liability—even where, like here, the developer neither deceived nor caused injury to a consumer.” (Holyoak at 5.)

Commissioner Ferguson is correct that “Congress has not given [the FTC] the power to regulate AI” distinct from its authority to “enforc[e] the prohibition against unfair or deceptive acts or practices.” It is beyond dispute that this authority permits the FTC to investigate wrongdoing in connection with AI products and services, as the April 2023 joint statement makes clear. The question comes down to what counts as wrongdoing within the scope of the FTC’s authority. The FTC should be going after the schemes, deception, and misrepresentations represented in the other Operation AI Comply cases, and the two dissenting commissioners wholeheartedly endorse actions of that type. “But,” again in Ferguson’s words, the FTC “should not bend the law to get at AI. And we certainly should not chill innovation by threatening to hold AI companies liable for whatever illegal use some clever fraudster might find for their technology.” (Ferguson at 10.)


SIIA Submits Letter Raising Concerns Over UN Cybercrime Convention

In a recent letter to key U.S. officials, the Software & Information Industry Association (SIIA) and the Computer & Communications Industry Association (CCIA) express serious concerns about the proposed United Nations Convention Against Cybercrime. While supporting its objective of addressing cybercrime, the organizations warn that the draft Convention could hinder cybersecurity and AI safety, limit essential research, and pose significant risks to U.S. technology and data. Specifically, they argue that Articles 7-10 lack provisions to protect good-faith research, potentially weakening global cybersecurity resiliency by discouraging vulnerability testing. They also highlight that the Convention may empower foreign governments to justify data expropriation, compromising U.S. leadership in technology innovation.

SIIA and CCIA further caution that the Convention’s expansive definitions and lack of safeguards could legitimize censorship and increase privacy risks. By mandating broad data-sharing and surveillance measures, the Convention opens the door to abuses by autocratic regimes, posing a threat to free expression and the open internet. The letter urges the U.S. to reconsider its stance on the Convention and to advocate for an alternative approach that aligns more closely with existing frameworks, such as the OECD Declaration and the Budapest Convention, to effectively combat cybercrime while preserving human rights and innovation.