By: Sharan Sawlani

In September alone, both Congress and state governments have been active in regulating artificial intelligence. 

In the U.S. Senate, beyond the many bills that address AI risks and innovation in a more targeted manner, four broader frameworks for AI regulation have emerged in recent months; we summarize them here.

Earlier this year, Senate Majority Leader Schumer released his agenda for AI regulation, the SAFE Innovation Framework. The framework’s central policy objectives include: 

  • Security – Safeguarding our economic and national security with AI by studying its impact at home and abroad. 
  • Accountability – Ensuring that all AI policy addresses concerns around misinformation, bias, and intellectual property infringement. 
  • Foundations – Requiring that AI contribute to some form of social benefit and align with American values. 
  • Explainability – Avoiding the common “AI black box” problem by giving both the federal government and the public the information they need about individual AI systems. 

Since Congress returned from its August recess, we have already seen three broad-scale proposals for AI regulation that will likely be considered as part of the process Senator Schumer is leading.

First, on September 8th, Senators Blumenthal and Hawley announced a bipartisan framework for AI legislation. The Bipartisan Framework for US AI Act lays out specific principles for upcoming legislative efforts, including establishing an independent oversight body, ensuring legal accountability for harms, defending national security, promoting transparency, and protecting consumers and kids.

Second, Senators Wyden and Booker and Representative Clarke introduced a bill to regulate the use of AI in critical decisions in areas such as housing, employment, and education. Aligning with Schumer’s principles of foundations and explainability, the Algorithmic Accountability Act requires assessments of critical algorithms and establishes new transparency guidelines for when and how AI use is permitted.

Finally, Senators Thune and Klobuchar are working on a bill that aims to take a lighter touch than Schumer’s “heavy-handed” approach to AI regulation. While the bill has not yet been introduced, its outlined goals include requiring companies to assess the impact of artificial intelligence systems and to self-certify the safety of systems deemed particularly risky.

At the state level, governors have also been active in the AI regulation space. So far in September, three states have issued executive orders specifically addressing generative AI: 

First, in California, Governor Newsom’s executive order aims to ensure the ethical and responsible use of generative AI and to help the state remain the world’s leader in generative AI innovation. Provisions in the executive order include:

  • Directives for risk-analysis reports on the use of AI in California’s critical infrastructure
  • A procurement blueprint for the state government
  • Researching and reporting on the beneficial uses of generative AI in California
  • Developing a deployment and analysis framework
  • Training state employees
  • Partnering with relevant stakeholders on AI, including hosting a joint summit in 2024
  • Evaluating the impact of AI on a consistent basis 

On September 20th, Governor Shapiro of Pennsylvania also signed an executive order related to generative AI. The order aims to: 

  • Establish responsible standards and a governance framework for generative AI use by state agencies
  • Outline values and principles by which state employees will utilize generative AI
  • Engage Pennsylvania’s AI sector to understand generative AI’s potential impacts on, and opportunities for, the workforce and digital services
  • Create a Generative AI Governing Board to guide Commonwealth policy, use, and deployment

Finally, in Virginia, Governor Youngkin’s executive directive on AI launched a focused inquiry into four key areas to ensure the proper use of AI by state government:

  • A review of the legal requirements under Virginia law, such as privacy and intellectual property protections, for the use of AI technologies
  • Identifying the policy standards necessary for state agencies to effectively utilize AI
  • Implementing appropriate IT safeguards, such as cybersecurity controls and firewalls, needed to mitigate security and privacy risks
  • Ensuring students are trained to compete using generative AI technology, while also protecting against misuse in the classroom