On May 17, 2024, Colorado Governor Jared Polis signed SB24-205 into law, introducing comprehensive consumer protections for artificial intelligence (AI) systems. SB205 imposes rigorous requirements on developers and deployers of high-risk AI systems to safeguard consumers from algorithmic discrimination. This article provides a detailed overview of the key elements of the law, its implications, and the responsibilities it places on stakeholders.
SB205 introduces specific terms to ensure a clear understanding and application of the law. Below are key definitions central to the legislation.
Algorithmic discrimination under SB205 is any condition in which an AI system results in unlawful differential treatment on the basis of protected characteristics such as age, race, or disability. Importantly, the definition excludes high-risk AI systems used solely for self-testing or to increase diversity and address historical discrimination.
An AI system is any machine-based system that generates outputs like content, decisions, or recommendations from received inputs. High-risk AI systems, which are subject to stringent regulations, are those that make or significantly influence consequential decisions.
Additionally, the following systems are not considered high-risk unless they are used to make or significantly influence consequential decisions:
Consequential decisions are those that materially affect consumers in areas such as:
These categories align with high-risk AI applications in other jurisdictions.
From February 1, 2026, developers must exercise reasonable care to protect consumers from algorithmic discrimination, including by providing extensive documentation to deployers covering:
Developers must publicly share summaries of the high-risk AI systems they offer, including how they manage risks and foreseeable discrimination.
Developers must disclose to the Attorney General and other stakeholders any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of discovery.
Deployers of high-risk AI systems are required to follow specific protocols to ensure the responsible use of AI. These obligations are focused on managing risks and maintaining transparency to protect consumers from potential harms.
Deployers must implement a risk management policy that incorporates principles, processes, and personnel for identifying and mitigating discrimination risks. This policy should consider recognized frameworks like the AI Risk Management Framework (AI RMF) by NIST or ISO/IEC 42001.
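SB205 does not prescribe any particular format for a risk management policy. As an illustrative sketch only, a deployer might track the policy's core elements (framework, personnel, identified risks, mitigations) in a simple structure like the following; the class, field names, and completeness rule are all hypothetical, not requirements of the statute.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way a deployer might record the elements of a
# risk management policy. Names and the completeness rule below are
# illustrative; SB205 does not prescribe a data format.

@dataclass
class RiskManagementPolicy:
    system_name: str
    framework: str  # e.g. "NIST AI RMF" or "ISO/IEC 42001"
    responsible_personnel: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # Illustrative check: named personnel, plus at least one
        # mitigation per identified risk.
        return bool(self.responsible_personnel) and (
            len(self.mitigations) >= len(self.identified_risks)
        )

policy = RiskManagementPolicy(
    system_name="resume-screening-v2",
    framework="NIST AI RMF",
    responsible_personnel=["AI governance lead"],
    identified_risks=["disparate impact on protected classes"],
    mitigations=["periodic bias audit of screening outcomes"],
)
print(policy.is_complete())  # True
```

The point of such a structure is simply to make gaps visible: a policy with unmitigated risks or no named owner fails the check and flags follow-up work.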
Deployers must complete impact assessments annually and within 90 days of any significant modification to the AI system. These assessments must cover:
Deployers must notify consumers when high-risk AI systems are used for consequential decisions, providing information about the system’s purpose, data sources, and consumers' rights to correct data and appeal decisions.
Deployers must conduct annual reviews to ensure that AI systems do not cause algorithmic discrimination.
Under SB205, the Attorney General has exclusive enforcement authority. Violations are treated as unfair trade practices and are subject to enforcement actions. Developers and deployers have an affirmative defense if they discover and cure violations through feedback, testing, or internal reviews, and comply with recognized risk management frameworks.
While specific penalties are not outlined, non-compliance constitutes an unfair trade practice, which can lead to significant legal and financial repercussions.
SB205 applies to developers and deployers of high-risk AI systems operating in Colorado. However, there are specific exemptions:
Federal Agency Compliance
Research and Federal Contracts
Specific Entities
These exemptions ensure that SB205 does not overlap with existing federal regulations or impede specific critical research and operations.
The provisions of SB205 will become effective on February 1, 2026, giving organizations less than two years to align their practices with the new requirements.
Organizations must immediately review their AI systems, implement necessary risk management policies, conduct impact assessments, and establish transparency measures to ensure compliance by the deadline.
With the AI regulatory ecosystem evolving rapidly, compliance cannot happen overnight, particularly when multiple frameworks and jurisdictional differences must be navigated.
Schedule a demo with our experts to discover how Holistic AI can help you prioritize your AI Governance.
Colorado's SB205 sets a significant precedent in regulating AI systems to protect consumers from algorithmic discrimination. By imposing rigorous documentation, risk management, and transparency requirements on AI developers and deployers, the law aims to foster a fair and accountable AI ecosystem. As AI continues to integrate into various sectors, SB205's provisions are intended to ensure that technological advancement does not come at the cost of consumer rights and equity.
Organizations have until February 1, 2026, to comply with SB205, necessitating immediate steps to align their AI systems with the law's requirements. Holistic AI solutions can assist in navigating these regulatory landscapes and ensuring robust AI governance.