Colorado’s AI Law Is Here: What Employers Need to Know About the Colorado Artificial Intelligence Act (CAIA)
Published: Apr 7, 2026
Colorado has enacted the nation's first comprehensive AI statute regulating "high‑risk" systems used in making employment decisions. Effective February 1, 2026, the law specifically targets "algorithmic discrimination"—unlawful differential treatment or impact disfavoring individuals based on classifications protected by Colorado or federal law—and is enforced exclusively by the Colorado Attorney General, with no private right of action.
What the CAIA Covers
Central to the statute is the concept of "high-risk AI systems," defined as systems that make, or are a substantial factor in making, consequential decisions with material legal or similarly significant effects. Employment decisions are expressly included. The "substantial factor" standard extends to AI‑generated outputs used as a basis for such decisions. The statute excludes from the "high-risk" category systems intended to perform narrow procedural tasks or detect deviations in decision-making without replacing human assessment, as well as common technologies (e.g., anti-malware, calculators, spreadsheets, spam filters) unless they themselves make or substantially factor into a consequential decision.
CAIA regulates "developers" that create or intentionally and substantially modify AI systems and "deployers" that use high‑risk AI systems. Many employers will fall into the deployer category, though some may also qualify as developers if they substantially modify vendor AI tools. Notably, ongoing model learning that was predetermined and documented in initial impact assessments does not constitute an "intentional and substantial modification."
Protection under CAIA extends to "consumers," defined as Colorado residents, capturing both job applicants and employees who reside in the state. The law provides limited relief for smaller organizations: deployers with fewer than 50 full-time equivalent employees are exempt from the risk management policy, impact assessment, and public website statement requirements, but only if they do not use their own data to train the system, use the system solely for developer-disclosed purposes, and make available to consumers any developer-completed impact assessment with substantially similar content. Even when this exemption applies, duties of reasonable care, pre-decision notices, adverse-action explanations and appeals, and Attorney General notification remain in full force.
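Because the small-deployer exemption requires every condition to hold simultaneously, it can help to think of it as a conjunction of facts. The sketch below (Python, with hypothetical field and function names; illustrative only, not legal advice) models that logic:

```python
from dataclasses import dataclass


@dataclass
class DeployerProfile:
    """Hypothetical record of facts bearing on the small-deployer exemption."""
    fte_count: int                        # full-time equivalent employees
    trains_on_own_data: bool              # deployer uses its own data to train the system
    developer_disclosed_use_only: bool    # system used solely for developer-disclosed purposes
    developer_assessment_available: bool  # developer impact assessment made available to consumers


def qualifies_for_small_deployer_exemption(p: DeployerProfile) -> bool:
    """All conditions must hold at once; losing any one (e.g., reaching
    50 FTEs or training on the deployer's own data) ends the exemption."""
    return (
        p.fte_count < 50
        and not p.trains_on_own_data
        and p.developer_disclosed_use_only
        and p.developer_assessment_available
    )
```

Even where this check would come out true, the duties of reasonable care, pre-decision notices, adverse-action explanations and appeals, and Attorney General notification continue to apply.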
Key Compliance Obligations
Under the CAIA, beginning February 1, 2026, developers and deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. In Attorney General enforcement actions, a rebuttable presumption of reasonable care applies if the developer or deployer satisfies the statutory requirements and any rules the Attorney General adopts. Deployers must:
- implement an iterative, lifecycle risk management policy and program, scaled to size and complexity and aligned with the NIST AI RMF, ISO/IEC 42001, or another framework designated by the Attorney General;
- complete impact assessments before deployment, at least annually, and within 90 days after any intentional and substantial modification, covering purpose, discrimination risks, data inputs/outputs, performance metrics, transparency measures, and post-deployment monitoring, and retain assessments and related records for at least three years after final deployment;
- separately review each deployed high-risk system at least annually to confirm it is not causing algorithmic discrimination;
- before a consequential decision, notify the consumer that AI is in use, provide a plain-language description of the system's purpose and the decision's nature, deployer contact information, and instructions for accessing the deployer's public statement, along with Colorado Privacy Act profiling opt-out information where applicable; if the decision is adverse, disclose the principal reasons (including the AI system's degree and manner of contribution and the types and sources of data processed), offer an opportunity to correct personal data, and provide an appeal with human review where technically feasible and in the consumer's best interest; all notices must be in plain language, in all languages ordinarily used for consumer communications, and in accessible formats;
- publish on the deployer's website a statement summarizing the types of deployed high-risk systems, how algorithmic discrimination risks are managed for each, and, in detail, the nature, source, and extent of information collected and used; and
- notify the Attorney General within 90 days of discovering algorithmic discrimination.
Developers, in turn, have their own compliance obligations under the CAIA. They are required to:
- provide deployers with documentation covering foreseeable and harmful uses, training data summaries, limitations and discrimination risks, purpose and intended benefits, evaluation and mitigation methods, data governance measures, intended outputs, and guidance on use, non-use, and human monitoring, and make available, to the extent feasible, artifacts (e.g., model cards, dataset cards) necessary for deployers to complete impact assessments;
- publish and update (within 90 days of any substantial modification) a public statement summarizing the types of high-risk systems offered and how algorithmic discrimination risks are managed; and
- notify the Attorney General and known deployers within 90 days upon discovering that their system has caused or is likely to cause algorithmic discrimination.
Separately from the high-risk framework, the CAIA imposes a general AI interaction disclosure requirement. Employers using AI systems to interact with employees (e.g., HR chatbots) must disclose the AI interaction unless it would be obvious to a reasonable person.
Enforcement, Liability, and Defenses
Violations of the CAIA constitute unfair or deceptive trade practices and are enforced exclusively by the Attorney General. Notably, however, the statute under which a violation is deemed a deceptive trade practice, the Colorado Consumer Protection Act (CCPA), generally does provide a private right of action, so the potential interplay between the two statutes warrants monitoring.
The AG may require disclosure of deployers' risk policies, impact assessments, and records—as well as developers' documentation—to assess compliance; such materials are exempt from the Colorado Open Records Act, may be designated as proprietary or trade secrets, and their disclosure does not waive attorney–client privilege or work-product protection. An affirmative defense is available if the organization discovers and cures a violation through feedback it encourages, adversarial testing or red teaming, or internal review, and is otherwise in conformity with the NIST AI RMF and ISO/IEC 42001, another substantially equivalent framework, or a framework designated by the Attorney General. The party asserting the defense bears the burden of proof.
Sectoral exemptions apply to certain federally approved or standardized systems, specified federal-contract work (except for employment or housing decisions), HIPAA-covered entities providing eligible non-high-risk health-care recommendations, insurers subject to Colorado insurance AI rules (§ 10-3-1104.9), and prudentially regulated banks and credit unions under substantially equivalent or more stringent supervisory regimes that require anti-discrimination auditing and mitigation. Parties invoking exemptions bear the burden of demonstration.
Practical Recommendations
Organizations should begin by inventorying and classifying their AI systems: map all AI tools that touch consequential decisions, determine whether each qualifies as "high‑risk" (paying particular attention to the substantial-factor standard and statutory exclusions), identify the organization's role as deployer or developer, and document any exclusions along with the supporting rationale. Building on this foundation, organizations should stand up AI risk governance by adopting a lifecycle risk-management policy aligned with the NIST AI RMF or ISO/IEC 42001, designating accountable personnel, and embedding monitoring, testing, and vendor diligence into their operations. Small employers should confirm that all conditions for the limited exemption are continuously met and plan for rapid compliance scaling if conditions change.
In addition, organizations should prepare standardized impact assessment templates, pre-decision consumer notices (with multilingual and accessibility accommodations), adverse-action explanations including data-source disclosures, correction mechanisms, appeals with human review, and public website statements that include the required detail on information collected and used. In parallel, organizations should ready their incident response capabilities by establishing a 90-day clock and protocols for investigating and reporting discovered algorithmic discrimination to the Attorney General. Finally, organizations should monitor Attorney General rulemaking—which may address documentation, notices, risk policies, impact assessments, presumptions, and affirmative-defense recognition—and the task force's recommendations, while updating vendor contracts to secure required developer documentation (including model cards and impact assessment artifacts) and allocate risk appropriately.