The following statement can be attributed to Paul Lekas, Senior Vice President, Global Public Policy, Software & Information Industry Association.
The Software & Information Industry Association is pleased with much of what the Executive Order on AI released today. The EO goes a long way toward demonstrating global leadership on what remains an evolving technology, in a way that will help provide both a model of AI regulation for other democratic countries and a guide for Congress in bridging legislative gaps. It builds on the voluntary commitments agreed to earlier this year by 15 leading AI companies and advances measures that we hope will make the US government a leader in responsible AI.
We are encouraged to see:
- A call for comprehensive federal privacy legislation, which is a critical foundation and predicate for any broad AI legislation;
- Measures to advance the research, development, and adoption of privacy-preserving technologies that make use of AI technology and address critical data risks;
- Promotion of the responsible use of AI in education in ways that support the needs of educators and students, in line with SIIA’s Principles for the Future of AI in Education – a framework intended to guide the ed tech industry’s implementation of artificial intelligence in a purpose-driven, transparent, and equitable manner that enables critical tools for personalized and enhanced learning experiences and improved assistive technologies;
- Support for further action on content provenance and authenticity, and on using AI tools to advance cybersecurity and respond to AI-driven vulnerabilities; and
- Launch of a pilot program to democratize AI research and resources as Congress considers legislation to authorize and implement the National AI Research Resource.
Despite our top-line support for the White House’s action today on AI, we have concerns about some of the directions taken in the executive order.
- While we are pleased the foundation model review process is focused on high-risk use cases – those that involve national security, national economic security, and national public health and safety – we are concerned that the EO imposes requirements on the private sector that are not well calibrated to those risks and will impede innovation that is critical to realizing the potential of AI to address societal challenges.
- We support efforts to advance safe, secure, and trustworthy AI systems in classrooms, along with increased NIST involvement in standards development, yet we have concerns that a DHS-led process to evaluate AI in schools – which are considered part of “critical infrastructure” – could impede the safe and responsible use of AI in the classroom. We would recommend, at a minimum, that the Department of Education be involved in the process.
- While we support the measures to democratize research and access to AI resources and reform immigration policy, we believe the directive to the FTC to focus on competition in the AI markets will ultimately undermine the administration’s objectives to maintain U.S. technological leadership.