
A Second Trump Administration Should Rein in FTC Overreach

In 2025, President-elect Donald Trump will have the opportunity to set a pro-consumer, pro-innovation, and pro-growth agenda at the outset of his second administration. As the president-elect prepares to shape key federal agencies, a top priority of the new administration should be to examine recent actions taken by the Federal Trade Commission (FTC) that overstep the agency’s authority and to return the FTC to its original mandate.

Since its creation over 100 years ago, the FTC has worked to protect consumers from unfair business practices without unduly burdening companies or economic growth. But in recent years, top political appointees at the FTC have taken actions that exceed the agency’s authority and, ironically, undercut its stated goal. Take, for example, the FTC’s enforcement action against Rytr, a company developing new applications of generative AI technology, based on the agency’s assessment that one of Rytr’s products could be used by bad actors. Not only does this action extend well beyond the FTC’s authority to combat “unfair or deceptive practices,” but it also sets a troubling precedent by punishing American innovators without evidence of actual harm or bad intent.

This overreach ultimately threatens to discourage American companies from developing new, innovative technologies that deliver new products for consumers, improved public services, and economic growth. And by targeting small startups like Rytr, the agency risks creating a chilling effect on innovation that disincentivizes investment in research and development, which would only further entrench large industry players.

Unfortunately, this overstep is not an isolated event. In other instances, the FTC has broken long-standing agency norms designed to prevent hasty – and harmful – mistakes. Earlier this year, the agency released a report examining the privacy practices of several social media and video streaming companies. While the issue merits examination, a closer look at the report reveals serious problems with the agency’s findings, and both the report and the recommendations it contains risk undermining policy debates surrounding consumer privacy.

For one, the FTC’s report relies on data from 2019 and 2020. The tech industry moves quickly, and five-year-old information about the data collection practices of a select few companies does not reflect the privacy and security standards many leading American companies have since put in place to better protect consumer data. In fact, many social media platforms and streaming services have implemented practices that address the concerns raised by the FTC in its report, such as bans on targeted advertising to minors, enhanced user consent mechanisms, and greater transparency in data collection policies.

The FTC’s report uses these outdated findings to make far-reaching recommendations that go well beyond existing law. This kind of advocacy is especially concerning given the threats that the report’s proposed policy changes pose to consumers and the digital economy. For example, the FTC endorsed controversial proposals found in legislation like the Kids Online Safety Act (KOSA) that would expand the agency’s authority to moderate content online. In Congress, this idea has raised serious First Amendment concerns, and lawmakers on both sides of the aisle have taken issue with giving political appointees control over what online content children should and should not be allowed to see. Other ideas endorsed in the report, including far-reaching proposed restrictions on advertising practices, have raised concerns among policymakers who warn they risk hurting small businesses by taking away key tools that small business owners use to find and reach new customers.

Of course, the FTC has an important role to play in protecting American consumers. However, government overreach threatens to hurt consumers, small businesses, and U.S. economic interests. The next presidential administration can and should seize the opportunity to curtail harmful agency actions and return the FTC to its original mandate.


In the News: DOJ asks judge to break up Google

In the article “DOJ asks judge to break up Google,” Politico today quoted SIIA’s Chris Mohr on the DOJ’s proposed remedies in its litigation with Google:

“The extreme remedies recommended by the Department of Justice benefit no one other than Google’s business competitors while harming both the consumers who really like Google services and the developers who benefit from the current ecosystem.”


Inaugural Convening of International Network of AI Safety Institutes Kicks Off In San Francisco

On November 20, 2024, the International Network of AI Safety Institutes convened for the first time in San Francisco. The gathering over the coming days underscores the importance of global collaboration and sustained funding to address the challenges posed by rapidly advancing AI technologies. With governments and industry stepping up to fund research and enhance cooperation, the initiative marks a critical step toward safeguarding AI’s development and deployment. The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. The stated goal of the convening, according to the U.S. Department of Commerce, is to kick-start the Network’s technical collaboration ahead of the AI Action Summit in Paris in February 2025.

New Initiatives Announced

During the launch, it was announced that the U.S., South Korea, and Australia joined forces with philanthropic organizations to commit over $11 million to advance AI safety research. The funding will focus on critical areas such as detecting synthetic content and mitigating its associated risks. The largest annual contribution comes from the U.S. Agency for International Development (USAID), which has allocated $3.8 million to support AI safety initiatives in partner countries abroad. South Korea has pledged $7.2 million over the next four years for research and development aimed at addressing synthetic content risks. Additional support is coming from Australia’s national science agency, the John S. and James L. Knight Foundation, the Omidyar Network, and the AI Safety Fund—an independent grant-making organization supported by leading AI companies, including SIIA member company Google.

The U.S. AI Safety Institute (AISI) also announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which will be chaired by the U.S. AISI and will bring together partners from across the U.S. Government to identify, measure, and manage the emerging national security and public safety implications of AI. According to a release from NIST, the Taskforce will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities. The Taskforce will initially include representatives from the following federal agencies:

  • the Department of Defense, including the Chief Digital and Artificial Intelligence Office (CDAO) and the National Security Agency;
  • the Department of Energy and ten of its National Laboratories;
  • the Department of Homeland Security, including the Cybersecurity and Infrastructure Security Agency (CISA); and
  • the National Institutes of Health (NIH) at the Department of Health and Human Services.

This effort is a catalyst for the whole-of-government approach to AI safety directed by the recent National Security Memorandum on AI, and a starting point for further work that can be built upon over time to expand membership across the federal government.

Why This Matters: U.S. AISI’s Future Remains Unclear 

The International Network of AI Safety Institutes aims to align participating countries on best practices for testing and evaluating AI models. This week’s summit marks the network’s first official gathering, with the U.S. AI Safety Institute serving as its inaugural chair. However, the future of the U.S. AI Safety Institute, established within the National Institute of Standards and Technology (NIST) through President Biden’s 2023 AI Executive Order, remains uncertain. President-elect Donald Trump has pledged to repeal the executive order, putting the institute’s long-term prospects in jeopardy.

Despite widespread support, legislative efforts to formally establish the U.S. AI Safety Institute have yet to succeed in Congress. As the international community begins to align on shared AI safety priorities, the uncertain future of the U.S. AISI could present challenges to sustained leadership in this space.

SIIA and industry at large have advocated for authorizing a body in the federal government to lead a coordinated approach to AI safety and security issues, with a focus on frontier models. Relevant legislation includes the Future of AI Innovation Act in the Senate and the AI Advancement and Reliability Act in the House, which would allow the U.S. government to continue engaging in AI safety, thereby protecting national security and maintaining U.S. leadership in AI innovation. We believe authorizing this body is essential to avoid ceding global leadership to foreign jurisdictions, maintain strong relationships with U.S. firms, and preempt state legislation that may lead to onerous regulations and liability risk in the absence of federal regulation.

 

SIIA, as a proud member of the AI Safety Institute Consortium (AISIC), looks forward to continuing engagement with the U.S. AISI.


SIIA Statement Regarding DOJ Remedy Recommendation for U.S. v. Google

The following statement from the Software and Information Industry Association (SIIA), in response to yesterday’s Bloomberg report that the Department of Justice (DOJ) plans to ask for radical remedies in U.S. v. Google, can be attributed to SIIA President Chris Mohr:

“This entire case has been premised on a disturbing and unsupported premise: a company that has achieved success through superior innovation can nonetheless be held liable under the antitrust laws for reasons beyond consumer welfare. As a result, the extreme remedies recommended by the Department of Justice benefit no one other than Google’s business competitors while harming both the consumers who really like Google services and the developers who benefit from the current ecosystem. By abandoning the consumer welfare standard, the DOJ has entered the business of picking winners and losers – to the detriment of American consumers. We hope that clearer minds prevail.”

SIIA Files Amicus Brief Regarding Class Certification Standards in Consumer Fraud Cases

The Software & Information Industry Association (SIIA) recently joined in filing an amicus brief that addresses critical issues in class-action litigation, particularly the Ninth Circuit’s adoption of a “common-course-of-conduct” test for consumer-fraud cases. This test, which is typically used in securities-fraud cases, has expanded the circumstances under which consumer-fraud class actions can be certified. The brief argues that this approach undermines Rule 23’s predominance requirement, which serves as a vital safeguard against abusive class actions. The amicus brief contends that by focusing solely on the defendant’s conduct, the Ninth Circuit overlooks the individual questions of materiality and reliance that are often central to consumer-fraud claims.

The brief emphasizes that, when the predominance requirement is applied with rigor, consumer-fraud claims rarely satisfy the criteria for class certification due to the individualized nature of key issues like materiality and reliance. It also highlights that the Ninth Circuit’s misapplication of securities-fraud precedent results in an oversimplified class-certification analysis, making it too easy for consumer-fraud classes to be certified. SIIA’s amicus participation aims to clarify these issues and ensure that class actions are not improperly certified, preserving the integrity of the legal framework for consumer protection.