July 26, 2024, marked nine months since the Biden-Harris Administration moved to position the United States at the forefront of artificial intelligence (AI) innovation and governance by issuing Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order set in motion a comprehensive strategy to use the tools of the federal government to harness the transformative power of AI while addressing the challenges it poses.
One catalyst for the Executive Order was the set of voluntary commitments signed by 15 leading U.S. AI companies last year. The Administration announced that this initiative has now been further strengthened by Apple’s endorsement. Apple is a member of the Software and Information Industry Association (SIIA), along with three other companies that previously signed the commitments: Amazon, Google and Meta.
The Administration also reported that federal agencies completed all of the 270-day actions required by the Executive Order on schedule. The following is a snapshot of some of the AI action items the federal government has completed in recent months.
Guidance on AI Risk Mitigation
The Executive Order directed agencies to undertake measures to address the safety and security risks associated with AI. All of these actions were completed by the 270-day mark.
The National Institute of Standards and Technology (NIST) finalized a series of guidance documents that build upon its AI Risk Management Framework. These include a framework for managing generative AI risks, on which SIIA provided feedback during the drafting phase.
The Artificial Intelligence Safety Institute, established within NIST earlier this year, issued draft guidance on preventing the misuse of dual-use foundation models. A public comment period is open, and SIIA intends to submit comments.
The Department of Energy (DOE) and the National Science Foundation (NSF) have also expanded AI testbeds, reinforcing the U.S. infrastructure to evaluate and enhance AI model safety and security. These initiatives are crucial in preventing potential threats to critical infrastructure and national security.
The Administration also issued a call to action, led by the Gender Policy Council and the Office of Science and Technology Policy, urging technology companies to combat image-based sexual abuse, including synthetic content generated by AI. This is an issue on which SIIA has been actively engaged, both with the Administration and separately with Congress.
Initiatives to Build a Talent Pipeline for the Future
Industry and government are able to collaborate more effectively to solve the problems of tomorrow when we invest in expertise and resources across both sectors. Recognizing the need for specialized skills and increased demand for AI expertise, the Executive Order launched a government-wide AI Talent Surge, bringing hundreds of AI professionals into federal service. This initiative, bolstered by the Presidential Innovation Fellows AI cohort and the DHS AI Corps, ensures that the government is well-equipped to tackle AI-related challenges. The recent announcement of nearly $100 million in funding from across the technology ecosystem to support public interest technology further underscores the administration’s efforts to build a sustainable talent pipeline. This investment not only enhances the government’s capacity but also supports a robust and diversified technology ecosystem—a goal shared by SIIA.
Fostering Responsible AI Innovation
The Executive Order has spurred initiatives to promote responsible AI innovation. The award of resources through the National AI Research Resource (NAIRR) pilot, led by the National Science Foundation, is shaping up to be a pivotal step in that direction. Over 80 research teams now have access to computational and other AI resources, enabling the research community to address pressing issues such as deepfake detection and next-generation medical diagnostics. The effort highlights both the transformative potential of AI and the importance of involving the U.S. research community.
The Department of Education also released a guide on AI in education, fulfilling the Executive Order’s directive that the Department develop resources to support the safe, responsible, and nondiscriminatory use of AI in education. SIIA is proud to have played a role in the development of the guide and appreciates being included as a key stakeholder by the Department of Education. The core message of the document is “Shared Responsibility for Building Trust,” and it highlights SIIA’s Principles for the Future of AI in Education, released last October, as a key resource for developing AI-enabled ed tech products.
These initiatives are designed to ensure that AI advancement is a collaborative effort, bringing a range of sectors and stakeholders into conversations around responsible AI.
Enhancing U.S. Leadership Globally
The Executive Order has also prompted actions designed to enhance global standards for AI governance. NIST’s strategy for U.S. engagement in global AI standards, the State Department’s guidance on managing AI-related human rights risks, and the launch of a global network of AI Safety Institutes are some of the key public steps taken pursuant to the EO. Additionally, the U.S.-led Political Declaration on the Responsible Military Use of AI and Autonomy, which outlines norms for the responsible development, deployment and use of military AI capabilities, has now been endorsed by fifty-five nations. These U.S.-led initiatives are intended to demonstrate a commitment to promoting ethical norms and safe practices in AI worldwide, including across the national security and defense ecosystem.
Conclusion
Overall, SIIA applauds the Administration’s comprehensive approach to advancing the responsible development and use of AI by the government, across the United States, and through international engagement. The Executive Order, supported by the voluntary commitments from leading companies and the recent addition of Apple, reflects a strong, collective commitment to advancing AI in a manner that prioritizes safety, equity, and innovation. As we look to the future, SIIA remains dedicated to supporting policies and initiatives that foster a thriving, secure, and responsible AI ecosystem domestically and globally. We encourage ongoing collaboration among government, industry, academia and civil society to ensure that the United States continues to lead the world in harnessing the promise of AI while fostering thoughtful approaches to managing its risks.