
Administration Must Reverse Course on Digital Trade – Our National Security and Economic Interests Demand It

The Office of the United States Trade Representative (USTR) recently stunned virtually everyone by pulling out of digital trade negotiations at the World Trade Organization (WTO). The decision was preceded by disagreements within the Executive Branch and left otherwise administration-friendly members of Congress apoplectic.

The decision is all the more baffling given that digital services exports accounted for more than 75 percent, or $594 billion, of all U.S. services exports in 2021 and supported more than 8 million jobs. Given the harm the decision will do to U.S. national security and economic interests, the only plausible interpretation is that it is an expression of misguided ideology.

Officially, USTR explained the abrupt change in posture as necessary to provide “policy space” for domestic debates about digital regulation, including competition law, to unfold. But even a cursory look at subsequent statements from supportive members of Congress leaves no doubt that the real reason was a desire to hobble “Big Tech.” Left unsaid – or unthought – is that the country that would stand to benefit the most from this is China. Look no further than the December 20, 2023 announcement that the WTO is close to formalizing digital trade rules that will support discriminatory data regimes globally and further cede U.S. leadership in this area.

Coming on the heels of the recently issued Executive Order on Artificial Intelligence, which again stressed the importance of U.S. leadership in global technology, pursuing a parallel policy seeking to undermine the very companies that the administration will be relying on to execute its AI strategy is mind-boggling. 

Supporters of USTR’s unilateral move seem oddly obsessed with weakening digital trade rules in order to allow Congress maximum flexibility to shape domestic legislation and regulations as it sees fit. But that argument misunderstands foundational international trade rules and their interplay with domestic U.S. law. First, the rules on non-discrimination prohibit members of the WTO from discriminating against products or services originating in other WTO member countries. They have nothing to do with domestic legislation in the United States aimed at U.S. companies. Second, when writing legislation implementing trade agreements, Congress always includes language stipulating that, if there is a conflict between trade agreement provisions and U.S. law, U.S. law prevails.

All of the substantial pieces of U.S. legislation recently contemplated in the antitrust space, none of which seem likely to pass anyway, were directed at U.S.-owned companies. So, saying that there is a need for “space” for Congress to act is a red herring. Rather, the more likely reason animating efforts to weaken digital trade rules is an attempt to clear the way for foreign governments to discriminate, at will, against a handful of U.S. companies, as the Digital Markets Act, Digital Services Act, and Data Act in Europe do. But that is not an acceptable role for Congress or the administration to play.   

The decision to walk away from previous U.S. positions in WTO e-commerce talks, of course, also follows the curious joint March 22, 2023, letter from the Department of Justice’s Antitrust Division and the Federal Trade Commission (FTC), where Assistant Attorney General Kanter and FTC Chair Khan reportedly sought to enlist the head of USTR, Katherine Tai, in their efforts to undermine a long-standing bipartisan agreement on the U.S. government’s approach to international rules on competition and digital trade.

While USTR consults with other federal government agencies, including the DOJ, on questions of trade policy through the Trade Policy Review Group, neither the FTC nor the DOJ has a formal role in the development of U.S. trade policy. And for good reason: their jurisdictions are focused entirely on the enforcement of domestic law. “The mission of the Antitrust Division is to promote competition by enforcing the antitrust laws…” And “[t]he FTC’s mission is protecting the public from deceptive or unfair business practices and from unfair methods of competition through law enforcement….”

Simply put, the Antitrust Division and the FTC have no more relevant expertise or experience related to U.S. foreign trade policy than USTR does when it comes to the enforcement of antitrust law. Curiously for a non-classified letter, the exact contents of Mr. Kanter and Ms. Khan’s missive have also been surprisingly hard to come by, which only adds to the impression that their efforts to meddle in U.S. trade policy were, at best, highly dubious.

The decision to abandon digital trade talks at the WTO was deeply misguided. Congress and the administration have plenty of room to explore ways to regulate in response to challenges posed by the digital transformation without undermining U.S. economic and national security interests. That the decision also left some of our closest allies in the lurch and emboldened China only compounds the seriousness of the mistake. It would behoove USTR Tai to reverse course posthaste. Absent that, President Biden must do it for her.


What Is Rights- and Safety-Impacting AI?

Written by: Suher Adi

The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed on October 30, 2023, was a welcome approach to AI policy in the United States. In line with the Executive Order, the White House Office of Management and Budget (OMB) released a request for comments on its Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Memorandum. The memorandum discussed a variety of topics regarding the development, use, and procurement of AI technologies for the federal government. One of the major takeaways, which builds on the Executive Order, was the classification of AI risks.

The Draft AI Memo outlines two categories, “safety-impacting AI” and “rights-impacting AI,” that would require those AI systems to meet a set of “minimum practices” as outlined in the Memo. The Memo defines “safety-impacting AI” as “AI that has the potential to meaningfully impact the safety of human life, climate and the environment, critical infrastructure, and classified strategic resources.” The Draft AI Memo then outlines qualifications for “safety-impacting” AI, which include the transportation or management of industrial waste and “the design, construction, or testing of industrial equipment, systems, or structures that, if they failed, would pose a meaningful risk to safety.” These presumed “safety-impacting AI” purposes are very broad in scope, triggering the additional minimum requirements needed for the implementation of the specific AI technology. Triggering these additional requirements would likely disincentivize the government from adopting new technologies that may benefit the people it serves.

Additionally, the Memo defines “rights-impacting AI” as “AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect” on a variety of rights and privileges. Section 5.b.ii, which outlines how to determine whether an AI system is “rights-impacting” or “safety-impacting,” provides a list of types of impacts that would determine this classification. For “rights-impacting AI,” this list includes activities ranging from detecting emotions or thoughts to loan allocation and decision-making. As it reads currently in the draft Memo, the qualification for “rights-impacting AI” is very broad and would classify virtually all AI systems as high risk. By applying such a high-risk classification to use cases that are not genuinely rights-impacting, the Memo would make it difficult or impossible for federal agencies to adopt AI tools as intended by the Executive Order.

Risk classification models are important to the implementation of AI, and the Memo’s model is a welcome step, mirroring that of the EU AI Act. However, the EU AI Act uses a tiered approach with a narrower definition of, and narrower use cases for, the most “high risk” applications. While some use cases for rights-impacting AI align with categories in which high-risk applications would likely be found, not all use cases within a given category pose the same risk.

Because of this, we have suggested that OMB adopt more of a risk-based approach to promote certainty and avoid inadvertently capturing a variety of low-risk activities. There has been a lot of work done to advance risk assessment methodology, and leveraging that work – especially the NIST AI Risk Management Framework – should be the foundation of U.S. AI policy and guidance.

In addition, to better approximate a risk-based approach, SIIA has recommended refining the definitions of safety-impacting and rights-impacting AI. Limiting safety-impacting AI systems to those that pose a risk of loss of human life or serious physical injury ensures the scope is appropriate to uses that can directly lead to harming the safety of others. Additionally, SIIA has recommended narrowing rights-impacting AI systems to exclude circumstances where an AI system is incidental or ancillary to making the consequential decision. Rights-impacting AI should be scoped to decisions that are already regulated by existing laws, such as non-discrimination or consumer protection, to ensure clarity around the type of activity regulated in this circumstance.

Using AI to make decisions about government services and benefits is a clear example of what should be considered “rights-impacting AI” and require a greater degree of scrutiny and oversight. With programs like SNAP benefits for low-income households, if the government were to use AI to help determine who is removed from the program or who meets the requirements for access to it, that use of AI should face higher scrutiny. It is the potential exclusion from services and benefits that makes this use case rights-impacting and should trigger the additional minimum requirements discussed in the OMB Memo.


SIIA’s 2024 Policy Priorities: Safeguarding Children, Fostering Responsible AI, and Shaping the Future of Digital Governance

The Software & Information Industry Association (SIIA) outlines its 2024 Policy Priorities in a comprehensive document spanning various key areas. SIIA places a strong emphasis on protecting children’s online safety and privacy, particularly in response to concerning bills at both federal and state levels. The “Keep Kids Safe and Connected” campaign aims to ensure internet safety for children while advocating against potential privacy vulnerabilities and negative impacts on marginalized communities arising from legislative proposals like KOSA.

In the realm of education, SIIA acknowledges the vast potential of artificial intelligence (AI) and emphasizes the need for a robust framework that balances innovation with risk mitigation. The organization collaborates with the Ed tech industry, having established the “Principles for the Future of AI in Education” in 2023. SIIA is committed to working with government leaders to promote responsible AI adoption in educational environments.

Our 2024 Policy Priorities document addresses broader issues, such as the call for responsible AI regulation, modernization of data policy, and the importance of comprehensive federal privacy legislation to create uniform standards across states. Additionally, SIIA advocates for a healthy intellectual property system, supporting patent quality, copyright protection, and defending against non-practicing entities.

Other policy priorities include advancing digital trade, promoting international coordination on digital policies, and engaging with the European Union on various regulatory frameworks. SIIA underscores the significance of cybersecurity measures, digital identity technologies, and policies to counter synthetic media and uphold democratic values in the online environment. We remain committed to collaboration across sectors to address critical challenges and promote innovative solutions.


Request for Revision of FTC Rulemaking Process – Negative Option Rule, Project No. P064202

SIIA and various organizations express their concerns and objections to the Federal Trade Commission’s (FTC) proposed amendments to the Negative Option Rule. We argue that the FTC’s procedures do not comply with the required public participation and dispute resolution standards outlined in Section 18 of the FTC Act.

The letter outlines several procedural shortcomings in the FTC’s approach, including the absence of a rebuttal comment period, limitations on the submission of disputed issues of material fact, and the Commission’s determination that there are no such disputes. The organizations assert that these shortcomings contradict the statutory mandate and undermine public participation.

We call for the withdrawal of the Final Hearing notice and urge the FTC to issue an initial hearing notice consistent with previous rulemakings and federal law. The organizations argue that the FTC’s process has limited public input, eliminated opportunities for cross-examination and rebuttal evidence, and failed to adhere to the intent of the Magnuson-Moss Act and the FTC Improvements Act of 1980.

Ultimately, we request that the FTC address these concerns, withdraw the Informal Hearing notice, and provide a fair and transparent rulemaking process that allows for meaningful public participation.


SIIA’s Opposition to New Jersey Bill A.5750/S.4215 on Children’s Online Protection

The Software & Information Industry Association (SIIA) opposes New Jersey bill A.5750/S.4215, designed to protect children online. SIIA argues that the bill faces constitutional challenges due to its age verification and parental consent requirements, and that these measures are impractical, potentially restricting access for underprivileged youth and hindering positive online development. Additionally, SIIA criticizes the inclusion of a private right of action, suggesting it may lead to disproportionate legal costs. While supporting the goal of safeguarding minors, SIIA urges reconsideration of the bill’s provisions to ensure constitutional compliance and practical feasibility.