By Bethany Abbate

With Congress taking a careful approach to broad-scale AI regulation, states have become increasingly emboldened to pursue their own paths to fill perceived gaps. Colorado's SB 205, signed into law by Governor Jared Polis on May 17, 2024, is the first such law to cross the finish line and may set the stage for others.


SB 205 aims to protect consumers from algorithmic discrimination in high-risk uses of AI decision making, and it imposes requirements on both developers and deployers of AI systems to that end. Although this landmark law represents the United States' first broad regulation of the development and use of AI tools by the private sector, significant concerns have been raised: whether the law will have a disproportionate impact on small and medium-sized businesses, whether it was passed without meaningful input from experts and stakeholders, whether its requirements will be workable, and whether Colorado has the infrastructure needed to provide the oversight the law requires. Governor Polis has reportedly acknowledged these concerns. Because the law will not go into effect until February 1, 2026, there is time to work closely with Colorado legislators and regulators to improve the law and address these and other issues.

During the nearly two-year window before full implementation, Colorado plans to establish a task force that will analyze the law's impact and make recommendations before its requirements are finalized. SIIA plans to stay engaged as this process evolves.

The following are key highlights of SB 205's requirements in its current form:

  • Developers and deployers are both required to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, and both must report incidents of algorithmic discrimination to the Attorney General when discovered.


  • Developers must provide deployers with disclosures and documentation regarding intended use, known or foreseeable risks, a summary of the data used to train the system, possible biases, risk mitigation measures, and performance, among other items. Developers must also maintain a publicly available list of their high-risk systems.


  • Deployers must maintain a risk management policy governing high-risk AI use that specifies the processes and personnel used to identify and mitigate algorithmic discrimination. Impact assessments must be conducted annually and upon each substantial modification, and must be retained for three years. Deployers must also provide consumers subject to a high-risk system with a statement disclosing information about that system, and must inform consumers of their rights under the Act and the Colorado Privacy Act.


  • In addition to being informed when a high-risk AI system is used to make a consequential decision, the consumer must be informed of the right to opt out of profiling. Consumers are also granted a right to an explanation when an adverse consequential decision results from the use of a high-risk AI system, along with the rights to correct inaccurate data and to appeal the decision.


  • The bill does not contain a private right of action and is enforceable only by the Attorney General's office. It originally contained a section regulating general-purpose artificial intelligence models, but that section was removed to narrow the bill's focus and address concerns raised by stakeholders. The bill also provides for interoperability with other relevant laws and regulations with respect to impact assessments, includes exceptions for small businesses, and offers an affirmative defense for developers and deployers who discover and cure a violation through internal testing or red-teaming and otherwise comply with the NIST AI Risk Management Framework or another nationally or internationally recognized risk management framework.