
Balancing Web Accessibility: SIIA’s Perspective on ADA Title II and WCAG Compliance

In a letter to the Department of Justice, SIIA expresses support for enhanced web accessibility under Title II of the Americans with Disabilities Act (ADA) and the Web Content Accessibility Guidelines (WCAG). However, SIIA raises several concerns:

  1. Disproportionate Impact on Smaller Entities: SIIA suggests a longer phase-in period for smaller public entities to comply.
  2. Embracing Innovation: SIIA emphasizes the potential of innovative technologies in improving accessibility.
  3. Conforming Alternate Versions: We stress the importance of maintaining the ability to provide alternative versions of materials.
  4. Remediation for Non-Conformance: SIIA recommends allowing remediation for isolated and temporary non-conformance.
  5. Tailored Compliance: We suggest considering factors like annual budget when setting compliance requirements.
  6. Live-Audio Content Captioning: SIIA agrees with deferred compliance for live-audio captioning.
  7. Third-Party Web Content: SIIA acknowledges third-party efforts to meet accessibility standards.
  8. Course Content Accessibility: We propose a later compliance date for remediating existing content.
  9. Measuring Compliance: SIIA mentions the use of the Voluntary Product Accessibility Template (VPAT) and emphasizes the need for clear policies.

SIIA supports the goal of accessibility but raises concerns about the potential impact of the proposed rules on various entities, especially smaller ones. We advocate for flexibility, innovation, and consideration of practical challenges in implementing web accessibility standards.


Addressing the Risks and Harnessing the Benefits of Artificial Intelligence: New Models in the Senate, New Actions by State Governors

By: Sharan Sawlani

In September alone, both Congress and state governments have been active in regulating artificial intelligence. 

In the U.S. Senate, beyond the large number of bills that address AI risks and innovation in a more targeted manner, four broader frameworks for AI regulation have come forth in recent months; we summarize them here.

Earlier in the year, Senate Majority Leader Schumer released his SAFE Innovation Framework, an agenda for AI regulation. The framework's central policy objectives include: 

  • Security – Safeguarding our economic and national security with AI by studying its impact at home and abroad. 
  • Accountability – Ensuring that all AI policy addresses concerns around misinformation, bias, and intellectual property infringement. 
  • Foundations – Requiring that AI contributes to some form of social benefit, and aligns with American values. 
  • Explainability – Avoiding the common “AI black box” problem by giving both the federal government and the public the information they need about individual AI systems. 

With Congress back from its summer recess, we have already seen three broad-scale proposals for AI regulation that will likely be considered as part of the process Senator Schumer is leading.

First, Senators Blumenthal and Hawley on September 8th announced a bipartisan framework on AI legislation. The Bipartisan Framework for US AI Act lays out specific principles for upcoming legislative efforts, including the establishment of an independent oversight body, ensuring legal accountability for harms, defending national security, promoting transparency, and protecting consumers and kids.

Second, Senators Wyden and Booker, together with Representative Clarke, introduced a bill to regulate the use of AI in critical decisions such as housing, employment, and education. Aligning with Schumer’s principles of foundations and explainability, the Algorithmic Accountability Act requires assessments of critical algorithms and establishes new transparency guidelines governing when and how AI use is permitted.

Finally, Senators Thune and Klobuchar are working on a bill that aims to take a lighter-touch approach than Schumer’s “heavy-handed” approach to AI regulation. While the bill has not yet been introduced, its outlined goals include requiring companies to assess the impact of artificial intelligence systems and self-certify the safety of systems seen as particularly risky.

At the state level, governors have also been active in the AI regulation space. So far in September, three states have issued executive orders related specifically to generative AI: 

First, in California, Governor Newsom’s executive order aims to ensure the ethical and responsible use of generative AI and to help the state remain the world’s leader in generative AI innovation. Provisions in the executive order include:

  • Directives for risk-analysis reports for the use of AI in California’s critical infrastructure
  • A procurement blueprint for the state government
  • Researching and reporting the beneficial uses of generative AI in California
  • Developing a deployment and analysis framework
  • Training state employees
  • Partnering with relevant stakeholders on AI, hosting a joint summit in 2024
  • Evaluating the impact of AI on a consistent basis 

On September 20th, Governor Shapiro of Pennsylvania also signed an executive order related to generative AI. The executive order aims to: 

  • Establish responsible standards and a governance framework for generative AI use by state agencies
  • Outline values and principles by which state employees will utilize generative AI
  • Engage Pennsylvania’s AI sector to understand potential impacts and opportunities of generative AI on workforce and digital services
  • Create a Generative AI Governing Board to guide Commonwealth policy, use, and deployment

Finally, in Virginia, Governor Youngkin’s executive directive on AI calls for a focused inquiry into four key areas to ensure the proper use of AI by state government:

  • A review of the legal requirements, such as privacy or intellectual property, under Virginia law for the use of AI technologies
  • Identifying the policy standards necessary for State agencies to effectively utilize AI
  • Implementing the appropriate IT safeguards such as cybersecurity and firewalls, that are needed to alleviate security and privacy risks
  • Ensuring students are trained to compete using generative AI technology, while also protecting against misuse in the classroom

SIIA Response to Ranking Member Cassidy RFI on AI

In response to a request for information (RFI) from Ranking Member Bill Cassidy, the Software & Information Industry Association (SIIA) outlines its perspective on AI in education, health care, and labor:
  1. AI in Education: SIIA believes AI can enhance education by supporting teachers and personalizing learning.
  2. Government Roles: SIIA advocates for an active federal role in AI regulation to avoid a patchwork of state laws.
  3. Best Practices and Standards: SIIA highlights the importance of AI risk management frameworks and voluntary codes of conduct.
  4. AI in the Classroom: SIIA showcases AI’s potential for efficient assessment and reduced teacher workload.
  5. AI for Safety: SIIA mentions an AI solution for identifying self-harm behaviors to enhance school safety.
  6. Impact on Curriculum: AI influences what students should learn, emphasizing the need for digital and media literacy.
  7. Assessment and Teacher Interaction: AI improves assessment and provides more time for teachers.
  8. Trust and Responsible Development: Transparency and principles for responsible AI are crucial.
  9. Global Regulation: SIIA underscores the U.S. role in shaping global AI regulation.
  10. Anti-Discrimination Laws: Existing laws provide a foundation for AI accountability.
 

SIIA’s Feedback on Proposed Rulemaking for Hart-Scott-Rodino Coverage: An Analysis of 16 CFR Parts 801-803

In response to the Federal Trade Commission’s (FTC) Notice of Proposed Rulemaking on 16 CFR Parts 801 and 803—Premerger Notification; Reporting and Waiting Period Requirements (NPRM), the Software & Information Industry Association (SIIA) has provided comments. SIIA’s comments revolve around several key points:

  1. The Role of the HSR Act and the Premerger Notification Program: The comments explain the historical context of the Hart-Scott-Rodino Antitrust Improvements Act (HSR Act) and its role in requiring companies to notify enforcement agencies before certain mergers. The purpose of this notification is to allow the agencies to review and potentially block mergers that could harm competition.
  2. The FTC’s Proposed Changes: SIIA discusses the FTC’s proposed changes to the premerger notification rules, which aim to expand the scope of documents required for submission, including early drafts, narrative explanations of business operations, and additional information about labor markets, defense contracts, and messaging systems. The comments highlight that these changes would substantially increase the burden on filers, particularly on small- and medium-sized businesses.
  3. Comments on the Proposed Changes: SIIA argues that the proposed changes could deter innovation, hinder economic growth, and disproportionately affect small companies and startups. The comments point out that the FTC’s estimate of the additional burden on filers is significantly lower than what practitioners in the field believe the actual impact would be. Moreover, the comments question the feasibility of the agencies reviewing the vast amount of additional information during the initial waiting period and highlight concerns about the chilling effect on merger activity.

SIIA notes that the FTC’s proposed changes draw inspiration from practices in the European Union and the United Kingdom but highlights the significant differences in the number of merger notifications received by these jurisdictions compared to the U.S. The comments suggest that these international practices may not be directly applicable to the U.S. context.

SIIA concludes its comments by suggesting that, unless there is a strong justification (which has not been provided), the proposed changes may be inconsistent with the legislative intent behind the HSR Act and the purpose of the Paperwork Reduction Act. Therefore, the association recommends that the Agencies rescind the NPRM and start over.


SIIA Urges Caution When Regulating Data Brokers; Response to Remarks from FTC

The following statement can be attributed to Chris Mohr, President, Software & Information Industry Association, on remarks presented by Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, at the recent 2023 Consumer Data Industry Association Law & Industry Conference on data brokers.

As the leading trade association for the information industry, SIIA supports privacy regulations applying to many of Director Levine’s concerns around sensitive data and its uses, as well as safeguarding First Amendment-protected free speech. In fact, we have repeatedly advocated for a strong, preemptive federal privacy law that provides clear and consistent protections for consumers and predictable requirements for businesses.

However, although abuses certainly occur, data brokers’ products and services largely enable commerce, prevent crime, and provide the building blocks of ideas: the invisible backbone of functioning markets and a functioning democracy. Far from being malicious, the end uses of this data often provide valuable services consumers have come to take for granted. The public domain in which “data brokers” operate, which enables law enforcement investigations, investigative journalism, and fraud prevention, is a direct result of constitutional design and protected free expression.


We encourage the FTC not to conflate constitutionally protected publishing activity with real privacy or other harms the Bureau is empowered to prevent. We also caution against well-intentioned data broker legislation that is overbroad in its application, as the First Amendment creates challenges in grouping all “data brokers” into one category. As written, bills such as the Fourth Amendment Is Not For Sale Act would go well beyond preventing law enforcement from accessing Fourth Amendment-protected data, and would needlessly stymie investigations and preventative law enforcement efforts that keep Americans safe.