What Is the Colorado AI Act?
The Colorado AI Act (SB 24-205) is the first comprehensive state-level AI regulation in the United States. Signed into law by Governor Jared Polis on May 17, 2024, the Act creates a duty of care for developers and deployers of high-risk AI systems to protect consumers from algorithmic discrimination in consequential decisions.
Originally set to take effect on February 1, 2026, the Act had its enforcement date delayed to June 30, 2026 by SB 25B-004, giving organizations additional time to prepare. The Act applies to any organization that develops or deploys high-risk AI systems affecting Colorado consumers, regardless of where the organization is headquartered.
The Colorado AI Act is significant because it moves beyond sector-specific AI regulation (like NYC Local Law 144 for hiring) to create a cross-sector framework covering employment, lending, housing, healthcare, insurance, education, and legal services. It establishes a model that other states are expected to follow, making early compliance strategically important.
Areebi provides the technical infrastructure organizations need to satisfy their duties under the Colorado AI Act, including policy enforcement, data protection, audit trails, and compliance monitoring.
Scope: High-Risk AI Systems and Consequential Decisions
The Colorado AI Act applies specifically to high-risk AI systems - defined as any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. The Act defines seven categories of consequential decisions:
- Employment: Hiring, termination, promotion, compensation, and other material employment decisions
- Education: Admissions, financial aid, credentialing, and disciplinary decisions
- Financial and lending services: Credit, insurance, and loan decisions
- Essential government services: Access to government benefits and services
- Healthcare: Diagnosis, treatment recommendations, and coverage decisions
- Housing: Rental, sale, and mortgage decisions
- Legal services: Access to legal representation and case outcomes
The law targets algorithmic discrimination - any condition in which the use of an AI system results in unlawful differential treatment or impact on individuals based on their actual or perceived membership in a protected class. Protected classes include age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, and veteran status.
It is critical to note that the Act applies to AI systems that are a "substantial factor" in decisions, not only those that make decisions autonomously. This means AI tools used to inform, recommend, or support human decision-makers in consequential contexts are within scope.
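Ongoing testing for discriminatory impact is central to both developer and deployer duties. One widely used screening heuristic (borrowed from employment law, not prescribed by the Colorado AI Act itself) is the four-fifths rule: compare each group's favorable-outcome rate to that of the highest-rate group, and flag any group whose ratio falls below 0.8. A minimal sketch, assuming outcome data already grouped by protected-class label:

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """Compute each group's favorable-outcome rate and its impact ratio
    (the group's rate divided by the highest group's rate).

    outcomes: iterable of (group_label, favorable_bool) pairs.
    Returns {group: (rate, impact_ratio)}.
    """
    favorable = defaultdict(int)
    total = defaultdict(int)
    for group, was_favorable in outcomes:
        total[group] += 1
        if was_favorable:
            favorable[group] += 1
    rates = {g: favorable[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

def flag_groups(outcomes, threshold=0.8):
    """Return groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, (_, ratio) in adverse_impact_ratios(outcomes).items()
            if ratio < threshold]
```

A flagged group is a signal for deeper statistical review, not a legal conclusion; the Act's reasonable-care standard contemplates testing appropriate to the system and organization, not any single metric.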
Developer Duties Under the Colorado AI Act
Developers - entities that create, code, or substantially modify AI systems - bear specific obligations under the Act:
- Risk documentation: Provide deployers with documentation describing the high-risk AI system's known limitations, intended uses, and known risks of algorithmic discrimination
- Training data disclosure: Share information about the types of data used to train the system, including data sources, quality measures, and known biases
- Testing and evaluation: Conduct testing to identify and mitigate risks of algorithmic discrimination before making systems available to deployers
- Public disclosure: Make available on their website a statement summarizing the types of high-risk AI systems they develop and how they manage risks of algorithmic discrimination
- Reporting known issues: Notify the Colorado Attorney General and known deployers of any discovered algorithmic discrimination within 90 days
For organizations building AI solutions, Areebi's audit capabilities and compliance dashboards provide the documentation and evidence trail that developers need to satisfy these disclosure and reporting requirements.
Deployer Duties Under the Colorado AI Act
Deployers - entities that use high-risk AI systems to make, or as a substantial factor in making, consequential decisions - have their own set of obligations:
- Risk management policy: Implement a risk management policy and program governing the use of high-risk AI systems
- Impact assessments: Conduct annual impact assessments for each high-risk AI system, documenting its purpose, intended benefits, risks of algorithmic discrimination, data inputs, performance metrics, and oversight measures
- Consumer notification: Notify consumers that an AI system is being used to make a consequential decision about them, with a description of the system's purpose
- Opportunity to correct: Provide consumers the opportunity to correct inaccurate personal data that the AI system processes
- Appeal process: Allow consumers to appeal a consequential decision, with the opportunity for human review
- Public statement: Publish on their website a summary of high-risk AI systems in use and how the organization manages discrimination risks
- AG notification: Notify the Colorado Attorney General within 90 days of discovering any algorithmic discrimination
Areebi helps deployers implement these requirements through policy enforcement that ensures AI systems are used within approved parameters, DLP controls that protect consumer data, and monitoring dashboards that support ongoing impact assessment processes.
Affirmative Defense and Safe Harbors
The Colorado AI Act provides important safe harbors and affirmative defenses for organizations that demonstrate good-faith compliance efforts:
- NIST AI RMF alignment: Organizations that comply with the latest version of the NIST AI Risk Management Framework or any framework that the Attorney General designates as substantially equivalent may assert an affirmative defense
- Reasonable care standard: The duty of care is satisfied when developers and deployers use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination, considering the size and complexity of the organization
- Small business exemptions: Reduced obligations for small businesses with fewer than 50 employees, though they must still provide consumer notices and appeal rights
This creates a clear incentive for organizations to adopt recognized AI governance frameworks. Implementing the NIST AI RMF through Areebi's platform not only provides comprehensive AI risk management but also establishes the affirmative defense that the Colorado AI Act explicitly recognizes.
Take the Areebi AI Governance Assessment to evaluate your organization's readiness for Colorado AI Act compliance and identify gaps requiring remediation.
Enforcement and Penalties
The Colorado AI Act is enforced exclusively by the Colorado Attorney General - there is no private right of action. Key enforcement provisions include:
- Enforcement authority: The Attorney General may bring actions under the Colorado Consumer Protection Act (CCPA) for violations
- Civil penalties: Violations are subject to civil penalties under the CCPA, which can include injunctive relief, restitution, and penalties of up to $20,000 per violation
- Cure period: Organizations that discover and cure violations within 90 days may avoid enforcement action, provided they demonstrate remediation
- Good faith considerations: The Attorney General is directed to consider an organization's good-faith compliance efforts, including adoption of recognized frameworks like NIST AI RMF
The delayed enforcement date of June 30, 2026 provides organizations with additional preparation time, but the scope and specificity of the Act's requirements mean that compliance programs should be initiated well in advance. Organizations deploying AI in any of the seven consequential decision categories should begin implementation now.
Explore how Areebi supports compliance at our Trust Center, or request a demo to see the platform in action.
Preparing for Colorado AI Act Compliance
Organizations should take a systematic approach to Colorado AI Act compliance:
- AI system inventory: Identify all AI systems that make or substantially factor into consequential decisions affecting Colorado consumers. Include third-party AI tools and shadow AI applications.
- Risk classification: Classify each AI system by consequential decision category and assess its potential for algorithmic discrimination
- Implement NIST AI RMF: Adopt the NIST AI RMF to establish the affirmative defense and build a comprehensive risk management program
- Deploy monitoring controls: Implement Areebi's compliance dashboards and audit trails to support ongoing impact assessments and discrimination monitoring
- Establish consumer rights processes: Build mechanisms for consumer notification, data correction, and decision appeal with human review
- Public disclosure: Prepare website disclosures summarizing high-risk AI systems and discrimination risk management practices
- AG notification procedures: Establish internal procedures for timely notification of discovered algorithmic discrimination to the Attorney General and affected consumers
The Colorado AI Act's requirements are significant, but organizations that build comprehensive AI governance programs using platforms like Areebi will be well-positioned for compliance - and for similar laws expected in other states. Visit our pricing page to find the right plan for your organization.