Written by: Suher Adi

The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed on October 30, 2023, was a welcome approach to AI policy in the United States. In line with the Executive Order, the White House Office of Management and Budget (OMB) released a request for comments on its Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Memorandum. The memorandum discussed a variety of topics regarding the development, use, and procurement of AI technologies for the federal government. One of the major takeaways, which builds on the Executive Order, was the classification of AI risks.

The Draft AI Memo outlines two categories, “safety-impacting AI” and “rights-impacting AI,” and requires AI systems in either category to meet a set of “minimum practices” outlined in the Memo. The Memo defines “safety-impacting AI” as “AI that has the potential to meaningfully impact the safety of human life, climate and the environment, critical infrastructure, and classified strategic resources.” The Draft AI Memo then lists purposes that are presumed “safety-impacting,” including the transportation or management of industrial waste and “the design, construction, or testing of industrial equipment, systems, or structures that, if they failed, would pose a meaningful risk to safety.” These presumed “safety-impacting AI” purposes are very broad in scope, triggering the additional minimum practices required before a specific AI technology can be implemented. Triggering these additional requirements would likely disincentivize the government from adopting new technologies that may benefit the people it serves.

Additionally, the Memo defines “rights-impacting AI” as “AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect” on a variety of rights and privileges. Section 5.b.ii, which outlines how to determine whether an AI system is “rights-impacting” or “safety-impacting,” lists the types of impacts that trigger this classification. For “rights-impacting AI,” the list covers activities ranging from detecting emotions or thought to loan allocation and decision-making. As currently drafted, the qualification for “rights-impacting AI” is very broad and would classify virtually all AI systems as high risk. By sweeping in use cases that are not genuinely rights-impacting, this classification will make it difficult or impossible for federal agencies to adopt AI tools as intended by the Executive Order.

Risk classification models are important to the implementation of AI, and their inclusion is a welcome step, mirroring the EU AI Act. However, the EU AI Act uses a tiered approach, with a narrower definition of, and narrower set of use cases for, the highest-risk applications. While some rights-impacting AI use cases align with categories in which high-risk applications would likely be found, not all use cases within those categories pose the same risk.

Because of this, we have suggested that OMB adopt more of a risk-based approach to promote certainty and avoid inadvertently capturing a variety of low-risk activities. A great deal of work has been done to advance risk assessment methodology, and leveraging that work, especially the NIST AI Risk Management Framework, should be the foundation of U.S. AI policy and guidance.

In addition, to better approximate a risk-based approach, SIIA has recommended refining the definitions of safety-impacting and rights-impacting AI. Limiting safety-impacting AI systems to those that pose a risk of loss of human life or serious physical injury ensures the scope is appropriate to uses that can directly harm the safety of others. Additionally, SIIA has recommended narrowing rights-impacting AI systems to exclude circumstances where an AI system is incidental or ancillary to making the consequential decision. Rights-impacting AI should be scoped to decisions that are already regulated by existing laws, such as non-discrimination or consumer protection laws, to ensure clarity around the type of activity regulated in this circumstance.

Using AI to make decisions about government services and benefits is a clear example of what should be considered “rights-impacting AI” and should require a greater degree of scrutiny and oversight. Consider programs like SNAP benefits for low-income households: if the government were to use AI to help determine who is removed from the program or who meets the requirements for access to it, that use of AI should receive heightened scrutiny. It is the potential exclusion from services and benefits that makes this use case rights-impacting, and it should therefore trigger the additional minimum practices discussed in the OMB Memo.