Colorado SB24-205

Take action now to align your AI systems with SB205 and safeguard consumer rights and equity.

On May 17, 2024, Colorado Governor Jared Polis signed SB24-205 into law, introducing comprehensive consumer protections for artificial intelligence (AI) systems. SB205 imposes rigorous requirements on developers and deployers of high-risk AI systems to safeguard consumers from algorithmic discrimination. This article provides a detailed overview of the key elements of the law, its implications, and the responsibilities it places on stakeholders.


SB24-205 - Key Definitions

SB205 introduces specific terms to ensure a clear understanding and application of the law. Below are key definitions central to the legislation.

Algorithmic Discrimination

Algorithmic discrimination under SB205 is defined as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors individuals based on protected characteristics such as age, race, or disability. Importantly, this does not include high-risk AI systems used solely for self-testing or to increase diversity and address historical discrimination.

Artificial Intelligence System

An AI system is any machine-based system that generates outputs like content, decisions, or recommendations from received inputs. High-risk AI systems, which are subject to stringent regulations, are those that make or significantly influence consequential decisions.

Additionally, the following systems are not considered high-risk unless they are used to make, or significantly influence, consequential decisions:

  • Anti-fraud systems that do not use facial recognition
  • Anti-malware, anti-virus tools, and firewalls
  • AI-enabled video games
  • Calculators
  • Cybersecurity tools
  • Databases and data storage tools
  • Internet domain registration tools and Internet website loading tools
  • Networking
  • Spam and robocall filters
  • Spell checkers
  • Spreadsheets
  • Web caching and web hosting

Consequential Decision

Consequential decisions are those that materially affect consumers in areas such as:

  • Education
  • Employment
  • Financial services
  • Government services
  • Healthcare
  • Housing
  • Insurance
  • Legal services

These categories align with high-risk AI applications in other jurisdictions.

Developer Obligations


Beginning February 1, 2026, developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, which includes providing extensive documentation to deployers, such as:

  • General Statements: Describing foreseeable uses and risks.
  • Data Summaries: Detailing training data and its limitations.
  • Purpose and Outputs: Explaining the AI system's intended use and outputs.
  • Risk Mitigation: Outlining measures taken to evaluate and mitigate discrimination risks.

Public Disclosure

Developers must publicly share summaries of the high-risk AI systems they offer, including how they manage risks and foreseeable discrimination.

Reporting to the Attorney General

Developers must disclose any known or reasonably foreseeable risks of algorithmic discrimination to the Attorney General and to known deployers of the system within 90 days of discovery.

Deployer Obligations


Deployers of high-risk AI systems are required to follow specific protocols to ensure the responsible use of AI. These obligations are focused on managing risks and maintaining transparency to protect consumers from potential harms.

Risk Management Policies

Deployers must implement a risk management policy that incorporates principles, processes, and personnel for identifying and mitigating discrimination risks. This policy should consider recognized frameworks like the AI Risk Management Framework (AI RMF) by NIST or ISO/IEC 42001.

Impact Assessments

Deployers must complete impact assessments annually, as well as within 90 days of any significant modification to the AI system. These assessments must cover:

  • Purpose and intended use
  • Foreseeable risks and mitigation steps
  • Data categories and outputs
  • Performance metrics and transparency measures
  • Post-deployment monitoring procedures

Consumer Notification

Deployers must notify consumers when high-risk AI systems are used for consequential decisions, providing information about the system’s purpose, data sources, and consumers' rights to correct data and appeal decisions.

Annual Reviews

Deployers must conduct annual reviews to ensure that AI systems do not cause algorithmic discrimination.

Enforcement and Penalties under SB-205


Under SB205, the Attorney General has exclusive enforcement authority. Violations are considered unfair trade practices and are subject to enforcement actions. Developers and deployers have an affirmative defense if they discover and cure violations through feedback, testing, or internal reviews and comply with recognized risk management frameworks.

While specific penalties are not outlined, non-compliance constitutes an unfair trade practice, which can lead to significant legal and financial repercussions.

Exemptions to Colorado's SB205

SB205 applies to developers and deployers of high-risk AI systems operating in Colorado. However, there are specific exemptions:

Federal Agency Compliance

  • Systems approved by federal agencies like the FDA or FAA.
  • Systems that comply with federal standards, such as those from the federal Office of the National Coordinator for Health Information Technology, if these standards are equivalent to or stricter than SB205.

Research and Federal Contracts

  • Systems used for research supporting federal approval or certification.
  • Systems used under contracts with the US Department of Commerce, Department of Defense, or NASA, unless they make or significantly influence decisions about employment or housing.
  • Systems acquired by or for federal government agencies unless used for employment or housing decisions.

Specific Entities

  • Healthcare Entities: HIPAA-covered entities providing AI-generated healthcare recommendations that require action by a healthcare provider; such recommendations are not considered high-risk.
  • Insurers and Fraternal Benefit Societies: Entities in the insurance sector.
  • Financial Institutions: Banks, out-of-state banks, credit unions (chartered by Colorado or federal), and their affiliates.

These exemptions ensure that SB205 does not overlap with existing federal regulations or impede specific critical research and operations.

Implementation Timeline

The provisions of SB205 will become effective on February 1, 2026, giving organizations less than two years to align their practices with the new requirements.

Preparatory Steps

Organizations must immediately review their AI systems, implement necessary risk management policies, conduct impact assessments, and establish transparency measures to ensure compliance by the deadline.

Get compliant with Holistic AI

With the AI regulatory ecosystem rapidly evolving, compliance cannot happen overnight, particularly when multiple frameworks and jurisdictional differences must be navigated.

Schedule a demo with our experts to discover how Holistic AI can help you prioritize your AI Governance.

Conclusion

Colorado's SB205 sets a significant precedent in regulating AI systems to protect consumers from algorithmic discrimination. By imposing rigorous documentation, risk management, and transparency requirements on AI developers and deployers, the law aims to foster a fair and accountable AI ecosystem. As AI continues integrating into various sectors, SB205's provisions ensure that technological advancements do not come at the cost of consumer rights and equity.

Organizations have until February 1, 2026, to comply with SB205, necessitating immediate steps to align their AI systems with the law's requirements. Holistic AI solutions can assist in navigating these regulatory landscapes and ensuring robust AI governance.

Schedule a Call to learn how Holistic AI can help with Colorado SB205
