
ICYMI: POLITICO Pro’s Morning Tech 5/22

As seen in POLITICO Pro’s Morning Tech 5/22

SECTION 230 IN THE SPOTLIGHT

A House Energy and Commerce subcommittee is set to review draft legislation from Chair Cathy McMorris Rodgers and Ranking Member Frank Pallone that aims to sunset Section 230 of the Communications Decency Act, the law that shields internet platforms from liability for most user-generated content. The proposed bill gives tech companies 18 months to negotiate reforms with lawmakers; after that, they would lose this crucial liability protection.

  • Tech Industry Response: Major tech industry groups, including TechNet and the Software and Information Industry Association, strongly oppose changes to Section 230, claiming it protects innovation and competition, especially for startups and small businesses.

AI EXPORT CONTROL BILL GETS ITS MARKUP

The House Foreign Affairs Committee will also meet to discuss the ENFORCE Act, which could impose extensive export controls on AI systems deemed critical to national security. This includes requiring licenses for such systems. The markup was delayed to accommodate last-minute negotiations with the Biden administration.

Tech Industry Concerns:

  • Industry Opposition: The Software and Information Industry Association has expressed serious concerns, arguing that a licensing regime could impair U.S. competitiveness and give an advantage to foreign technologies that aren’t bound by the same standards. SIIA suggests leveraging the administration’s AI executive order and funding the U.S. AI Safety Institute instead.

SIIA Applauds Secretary Raimondo’s Strategic Vision for the AI Safety Institute

The following statement can be attributed to Paul Lekas, Senior Vice President, Software & Information Industry Association.

We applaud Secretary Raimondo’s ambitious strategic vision for the U.S. AI Safety Institute (U.S. AISI). The U.S. AISI, housed within the National Institute of Standards and Technology (NIST), will serve as the cornerstone for advancing the science of AI safety and responsible AI innovation in the United States. This is essential to enable the United States to continue leading in AI innovation – and to do so in a way that reflects the interests of civil society, academia, industry, and the public sector. Recognizing the global dimension of AI, we are also encouraged by the launch of the International Network of AI Safety Institutes. The Network expands on crucial collaborations among the United States, United Kingdom, Japan, Canada, Singapore, and the European Union, and provides the framework necessary to promote cross-border collaboration and align AI policy based on science, safety, and democratic values.

The U.S. AISI will be critical for the United States’ ability to shape global policy around AI. We encourage Congress to support the strategic vision by providing the Department of Commerce with the authorities and appropriations needed for the U.S. AISI to lead domestically and internationally, such as by passing the Future of AI Innovation Act.

As a proud member of the U.S. AISI Consortium, SIIA looks forward to working closely with the Commerce Department to support NIST’s valuable and long-standing efforts to promote AI safety while fostering U.S. innovation.


SIIA Statement on the Seoul Declaration and Frontier AI Safety Commitments

The following statement can be attributed to Paul Lekas, Senior Vice President, Software & Information Industry Association.

We applaud the meaningful global engagement coming out of the AI Seoul Summit 2024, building on progress from the inaugural 2023 UK AI Safety Summit. It is encouraging to see world leaders collaborating to prioritize safe, innovative and inclusive AI as seen through the joint Seoul Declaration signed by 10 countries and the EU. The declaration’s emphasis on aligning risk-based approaches is particularly important when striving to bridge gaps amongst countries, and we appreciate the emphasis on multi-stakeholder collaboration within ongoing and future AI safety discussions.

Industry is actively working to ensure risk mitigation is at the forefront of developing and deploying AI models, especially in the context of high-risk use cases. The Frontier AI Safety Commitments set the groundwork for implementing best practices in responsible frontier AI safety. We are proud that three of our member companies – Amazon, Google and Meta – are among those that have agreed to the Commitments. We are also encouraged that two leading AI companies from China and the UAE have joined the Commitments. Industry has an integral role in advancing safe and trustworthy AI, and we look forward to continuing to promote efforts to align AI safety, governance and transparency across borders.


Colorado Enacts Expansive AI Consumer Protection Bill, But More Work Is Needed

With Congress taking a careful approach to broad-scale AI regulation, states have become increasingly emboldened to pursue their own paths to fill what are perceived as gaps. Colorado’s SB 205, signed into law by Governor Jared Polis on May 17, is the first such measure to cross the finish line, potentially setting the stage for others.


The aim of SB 205 is to protect consumers from algorithmic bias in high-risk uses of AI decision making, and the law imposes requirements on both developers and deployers of AI systems to that end. Although this landmark law represents the United States’ first broad regulation of private-sector development and use of AI tools, significant concerns have been raised: whether the law will have a disproportionate impact on small and medium-sized businesses, whether it was passed without meaningful input from experts and stakeholders, whether its requirements will be workable, and whether Colorado has the infrastructure needed to provide the oversight the law requires. Governor Polis has reportedly acknowledged concerns about the law. Because the law will not go into effect until February 1, 2026, there is time to work closely with Colorado legislators and regulators to improve it and address these and other issues.

During the nearly two-year time frame before full implementation, Colorado plans to establish a task force that will analyze the impact of the law and make recommendations before it is solidified. SIIA plans to stay engaged as this process evolves.

The following are key highlights of SB 205’s requirements in its current form:

  • Developers and deployers are both required to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Both must report incidents of algorithmic discrimination to the Attorney General upon discovery.


  • Developers must provide disclosures and documentation to deployers regarding intended use, known or foreseeable risks, a summary of data used to train the system, possible biases, risk mitigation measures, and performance, among other items. Developers must also maintain a publicly disclosed list of their high-risk systems.


  • Deployers must maintain a risk management policy governing high-risk AI use, specifying the processes and personnel used to identify and mitigate algorithmic discrimination. Additionally, impact assessments must be conducted annually and upon each substantial modification, and must be retained for three years. Deployers must also provide consumers subject to a high-risk system with a statement disclosing information about the high-risk AI system in use, and must inform consumers of their rights under the Act and the Colorado Privacy Act.


  • Consumers must be informed when a high-risk AI system is used to make consequential decisions about them, and must also be informed of their right to opt out of profiling. Consumers are further granted a right to an explanation when an adverse consequential decision results from the use of a high-risk AI system, along with rights to correct and appeal.


  • The bill does not contain a private right of action and is enforceable only by the Attorney General’s office. It originally contained a section regulating general-purpose artificial intelligence models; that section was removed to narrow the bill’s focus and address concerns raised by stakeholders. The law also provides exceptions for interoperability with other relevant laws and regulations with respect to impact assessments, exceptions for small businesses, and an affirmative defense for developers and deployers who discover and cure a violation through internal testing or red-teaming and otherwise comply with the NIST AI Risk Management Framework or another nationally or internationally recognized risk management framework.

SIIA Urges Congress to Uphold Section 230

The following statement can be attributed to Chris Mohr, President, Software & Information Industry Association.

For years, lawmakers have debated the future of Section 230 of the Communications Decency Act. Now, draft legislation authored by Chair McMorris Rodgers and Ranking Member Pallone would sunset Section 230 by the end of 2025. This law protects free speech on the Internet, and eliminating it would inflict harm on millions of people.

While Section 230 has been blamed for many of the Internet’s perceived ills, there is virtually no agreement about what those problems are. Democrats generally believe that online platforms remove too little material, while Republicans tend to believe they remove too much. And what no one seems willing to acknowledge is that Section 230 is about much more than just “Big Tech.” It enables online platforms of all sizes, including sites like Wikipedia and digital news providers, to host and display third-party content while also making necessary editorial decisions. Sunsetting this shield would remove it for everyone.

Arbitrary deadlines will not produce sensible legislation, and here they create great uncertainty for a substantial part of the economy. Instead, we encourage lawmakers to uphold Section 230 and to consider the risk of significant unintended consequences the proposal would create, particularly for small- and medium-sized companies.