The Software & Information Industry Association (SIIA) has consistently advocated for U.S. government involvement in fostering the responsible implementation of artificial intelligence (AI).

To mitigate risks during the pre-deployment phase, SIIA recommends that the government establish guidelines and tailored requirements for AI systems that pose significant safety and rights concerns. SIIA also emphasizes the importance of advancing AI innovation, enhancing government adoption, and fostering strong public-private collaboration to address the multifaceted challenges and opportunities presented by AI.

By taking the following steps, SIIA believes we can enhance oversight and regulation of AI:

1. Establish Foundational Principles: AI policy should take a risk-based approach, ensuring that advances in technology align with principles of responsibility, ethics, and trustworthiness. Responsible AI governance should go hand in hand with innovation: responsible practices are essential to the development and deployment of AI, not an obstacle to it. Achieving this demands a novel model of public-private collaboration to promote sound governance and ensure that AI benefits society as a whole.

2. Advance a Framework for High-Risk AI System Oversight: A federal law should establish a foundational framework for overseeing high-risk AI systems, with a precise definition of what constitutes "high risk." The regulation should emphasize effective oversight rather than strict enforcement. To that end, it should draw on the expertise of the National Institute of Standards and Technology (NIST) and the NIST Artificial Intelligence Risk Management Framework (AI RMF).

3. Advance a Sector-Based Approach for High-Risk AI System Oversight: To enhance governance and accountability, Congress should assign formal oversight responsibility for high-risk systems to key agencies, which would define the criteria for identifying high-risk systems within their respective domains. An interagency coordination mechanism should be established to oversee AI implementation and regulation. Rather than creating a new regulatory agency for digital matters, existing structures should be leveraged. Finally, policymakers should explore tailored, use-based regulations for general-purpose AI systems to ensure responsible and effective deployment.

4. Appropriate Funds for AI Oversight and Government-Led AI Innovation: Allocate adequate funding to the oversight agencies responsible for regulating AI so they can effectively monitor and govern AI advancements. Simultaneously, commit resources to fundamental AI research and the development of practical AI applications, fostering innovation and responsible implementation across sectors.

5. Promote Domestic and International Regulatory Alignment: Federal law should preempt state law in governing AI oversight, avoiding a fragmented patchwork of state requirements. In parallel, the United States should pursue ongoing international cooperation and coordination in the development of AI standards and regulations.

6. Reinforce a Tech-Neutral Approach to Intellectual Property: Existing copyright and patent laws already provide robust, technology-neutral frameworks for addressing the challenges posed by generative AI.