
SIIA Statement on Court Decision in Google Search Case

The following statement can be attributed to Chris Mohr, President, Software & Information Industry Association:

Today’s federal district court decision against Google, if upheld, sends a message that if you build a better mousetrap, the DOJ will beat a path to your door. The lodestar of U.S. antitrust law is the consumer welfare standard, but it is hard to see how Judge Mehta’s decision in any way benefits consumers. Survey after survey shows that consumers trust Google Search as a reliable resource for information about the world. Consumers have many options from which to choose. As our own research has shown, consumers increasingly scour the internet through a variety of sources beyond traditional search engines, and the market for online search has never been more competitive. The court itself conceded both that Google had earned its users’ trust and that its continuing innovation was instrumental in drawing in new users. As the great jurist Learned Hand once wrote, “the successful competitor, having been urged to compete, must not be turned upon when he wins.”

We are deeply concerned about the rationale behind this decision. Google has announced that it intends to appeal, and we are confident that this appeal will be successful.


Exploring AI’s Impact on Legal Protections: Chris Mohr Represents SIIA at USPTO Roundtable

Chris Mohr, President of the Software and Information Industry Association (SIIA), recently participated in a USPTO roundtable discussion on the intersection of AI technology and legal protections for individuals’ reputations and identities, including name, image, voice, and likeness (NIL). This discussion explored whether existing legal frameworks are sufficient to address the challenges posed by AI, especially in the context of NIL and other identity indicators.

Mohr highlighted the importance of AI accountability and governance, emphasizing the need for technology that produces accurate, unbiased, and reliable information. He also stressed the significance of respecting intellectual property (IP) laws and cautioned against premature changes to copyright laws, particularly in relation to the ongoing litigation concerning the use of copyrighted works in training generative AI models.

The roundtable offered an opportunity to examine how AI intersects with IP and privacy concerns, particularly the implications of AI-generated digital replicas. Mohr discussed the potential harms that can arise from such uses, including economic and privacy injuries, and called for a nuanced, case-by-case approach to any regulatory changes. He concluded by emphasizing the need for careful consideration of existing laws and the potential gaps that AI technologies might introduce, advocating for thoughtful, targeted solutions.

Video Here


Recap on Implementation of the AI Executive Order at 270 Days

July 26, 2024, marked nine months since the Biden-Harris Administration took a step toward positioning the United States at the forefront of artificial intelligence (AI) innovation and governance by enacting Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order set in motion a comprehensive strategy to use the tools of the federal government to harness the transformative power of AI while also addressing the challenges it poses.

One catalyst for the Executive Order was the set of voluntary commitments signed by 15 leading U.S. AI companies last year. That initiative has now been further strengthened by Apple’s recent endorsement, the Administration announced. Apple is a member of the Software and Information Industry Association (SIIA), along with three other companies that had previously signed on to the commitments: Amazon, Google, and Meta.

The Administration also reported that federal agencies completed all of the 270-day actions required by the Executive Order on schedule. The following is a snapshot of some of the AI action items completed by the federal government in recent months.

Guidance on AI Risk Mitigation

The Executive Order directed agencies to undertake measures to address the safety and security risks associated with AI. All of these were completed at the 270-day mark.

The National Institute of Standards and Technology (NIST) finalized a series of guidance documents that build upon the AI Risk Management Framework. These include a framework for managing generative AI risks, on which SIIA provided feedback during the drafting phase.

The Artificial Intelligence Safety Institute, established within NIST earlier this year, issued draft guidance on preventing the misuse of dual-use foundation models. A public comment period is open, and SIIA intends to submit comments.

The Department of Energy (DOE) and the National Science Foundation (NSF) have also expanded AI testbeds, reinforcing the U.S. infrastructure to evaluate and enhance AI model safety and security. These initiatives are crucial in preventing potential threats to critical infrastructure and national security.

The Administration also issued a call to action from the Gender Policy Council and Office of Science and Technology Policy directed at technology companies to combat image-based sexual abuse, including synthetic content generated by AI. This is an issue that SIIA has been actively engaged on, both with the Administration and separately in Congress.

Initiatives to Build a Talent Pipeline for the Future

Industry and government are able to collaborate more effectively to solve the problems of tomorrow when we invest in expertise and resources across both sectors. Recognizing the need for specialized skills and increased demand for AI expertise, the Executive Order launched a government-wide AI Talent Surge, bringing hundreds of AI professionals into federal service. This initiative, bolstered by the Presidential Innovation Fellows AI cohort and the DHS AI Corps, ensures that the government is well-equipped to tackle AI-related challenges. The recent announcement of nearly $100 million in funding from across the technology ecosystem to support public interest technology further underscores the administration’s efforts to build a sustainable talent pipeline. This investment not only enhances the government’s capacity but also supports a robust and diversified technology ecosystem—a goal shared by SIIA.

Fostering Responsible AI Innovation

The Executive Order has spurred initiatives to promote responsible AI innovation. The award of resources through the National AI Research Resource (NAIRR) pilot, led by the National Science Foundation, is shaping up to be a pivotal step in that direction. Over 80 research teams now have access to computational and other AI resources. This effort enables the research community to address pressing issues such as deepfake detection and next-generation medical diagnostics, highlighting both the transformative potential of AI and the importance of involving the U.S. research community.

The Department of Education also released a guide on AI in education, fulfilling the Executive Order’s directive to develop resources that support the safe, responsible, and nondiscriminatory use of AI in education. SIIA is proud to have played a role in the development of the guide and appreciates being included as a key stakeholder by the Department. The core message of the document is “Shared Responsibility for Building Trust,” and it highlights SIIA’s Principles for the Future of AI in Education, released last October, as a key resource for developing AI-enabled ed tech products.

These initiatives are designed to ensure that AI advancements are collaborative efforts and include various sectors and stakeholders in conversations around responsible AI.

Enhancing U.S. Leadership Globally

The Executive Order has also prompted actions designed to enhance global standards for AI governance. NIST’s strategy for U.S. engagement in global AI standards, the State Department’s guidance on managing AI-related human rights risks, and the launch of a global network of AI Safety Institutes are some of the key public steps taken pursuant to the EO. Additionally, the U.S.-led Political Declaration on the Responsible Military Use of AI and Autonomy, which outlines norms for the responsible development, deployment, and use of military AI capabilities, has now been endorsed by fifty-five nations. These U.S.-led initiatives demonstrate a commitment to promoting ethical norms and safe practices in AI worldwide, given the vast national security and global defense ecosystem.

Conclusion

Overall, SIIA applauds the Administration’s comprehensive approach to advancing the responsible development and use of AI by the government, in the United States, and through international engagement. The Executive Order, supported by the voluntary commitments from leading companies and the recent addition of Apple, reflects a strong, collective commitment to advancing AI in a manner that prioritizes safety, equity, and innovation. As we look to the future, SIIA remains dedicated to supporting policies and initiatives that foster a thriving, secure, and responsible AI ecosystem domestically and globally. We encourage ongoing collaboration among government, industry, academia, and civil society to ensure that the United States continues to lead the world in harnessing the promise of AI while fostering thoughtful approaches to managing its risks.


FTC Substantive Antitrust Rulemaking after Loper Bright

The Supreme Court’s decision in Loper Bright Enterprises v. Raimondo, which overturned Chevron, raises fresh questions about the scope, if any, of the Federal Trade Commission’s (FTC) antitrust rulemaking authority.

Background

Section 5 of the FTC Act prohibits “unfair methods of competition” (UMC).  It is uncontested that the FTC, based on Section 5, can bring cases to stop individual companies from engaging in UMC.  This is sometimes referred to as case-by-case adjudication.  The more pertinent question, however, is whether the FTC Act, specifically Section 6(g) together with Section 5, provides the FTC the authority to enforce the UMC prohibition by issuing administrative rules, also called substantive antitrust rulemaking.

The FTC’s current majority has taken the position that Section 6(g) does, indeed, authorize it to engage in substantive antitrust rulemaking. This position is clear in its 2022 Policy Statement on the scope of UMC under Section 5 and in a simultaneously issued statement by Chair Khan. The recently promulgated Rule banning the use of noncompete agreements nationwide likewise rests on this claimed authority.

Even under the law as it existed before the Supreme Court’s recent decision in Loper Bright, however, serious questions had been raised about that position. According to former FTC Commissioner Noah Phillips, these questions sound both in constitutional separation-of-powers concerns and in basic canons of statutory interpretation. Moreover, in a 2020 law review article, the FTC’s current Chair herself seemed to concede that UMC rulemaking was, at least in part, dependent on what was then known as Chevron deference: a Supreme Court doctrine holding that where a statutory term was vague or ambiguous, federal courts should generally defer to an agency’s interpretation, provided the interpretation was reasonable.

Loper Bright and the FTC

But Chevron was overruled last month.  And in Loper Bright, the Court held that “[t]he Administrative Procedure Act requires courts to exercise their independent judgment in deciding whether an agency has acted within its statutory authority, and courts may not defer to an agency interpretation of the law simply because a statute is ambiguous.”  That said, the Court also explicitly refused to question prior cases that had relied on the Chevron framework.

So, without Chevron, but with cases decided under its framework still in place, where does that leave FTC substantive antitrust rulemaking, as well as rulemaking by other federal administrative agencies? There is little doubt that it will take some time for the full effects of Loper Bright to be felt, and even then it is not clear how much the decision will change the way courts approach agencies’ interpretations of vague statutory terms. Some think it will change little; others believe the changes could be quite profound.

Below are links to a few of the comments on this important and complicated administrative law issue.

Daniel J. Gilman: After Loper Bright, FTC Awaits Its Turn At-Bat, Truth on the Market, July 2, 2024. Available at https://truthonthemarket.com/2024/07/02/after-loper-ftc-awaits-its-turn-at-bat/  


SIIA Applauds Senate Commerce Committee for Critical Votes on AI Legislation

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Public Policy & Government Affairs, Software & Information Industry Association.

The Software & Information Industry Association (SIIA) applauds the Senate Commerce Committee’s dedication to keeping the United States at the forefront of responsible AI innovation, reflected in the Committee advancing critical AI legislation earlier today. These bills will support the needs of U.S. businesses and innovators; help to improve public trust in AI through safety, security, and standards; and further U.S. leadership in shaping global AI policy.

Specifically, SIIA endorses the following bills reported favorably by the Committee:

  • S. 4178, the Future of AI Innovation Act, will formally establish the U.S. Artificial Intelligence Safety Institute – a necessary step towards maintaining U.S. leadership in developing a framework around AI safety and security. (More here.)
  • S. 2714, the CREATE AI Act, will authorize the NAIRR to supercharge U.S. innovation and AI competition by making data and compute resources available for cutting-edge applications. (More here.)
  • S. 4394, the National Science Foundation Artificial Intelligence Act, will incentivize public-private partnerships to empower students to benefit from cutting-edge AI technologies. (More here.)
  • S. 4769, the Validation and Evaluation for Trustworthy Artificial Intelligence Act, will enhance the development of standards for responsible AI. (More here.)
  • S. 3162, the TEST AI Act, will authorize the Department of Energy and NIST to create testbeds to further the development and testing of trustworthy AI systems.
  • S. 3849, the Promoting United States Leadership in Standards Act, will provide support to strengthen the engagement of U.S. companies in the development of international technical standards.
  • S. 4596, the Artificial Intelligence Public Awareness and Education Campaign Act, will promote public engagement and education around AI. (More here.)

As an organization with a broad coalition of industry voices, SIIA is committed to collaborating with lawmakers, industry stakeholders, and the public to ensure that AI legislation reflects the best interests of the U.S. innovation ecosystem across all sectors.

We encourage the Senate to take up these bills as soon as possible. We look forward to continuing our engagement with Congress as these bills progress and remain dedicated to advocating for AI policies that promote innovation, strengthen U.S. leadership, enhance government adoption, and foster meaningful public-private collaboration.