Lineage Entries
OECD Recommendation of the Council on Artificial Intelligence by Dr. Christine Custis
National Artificial Intelligence Initiative Act of 2020 (NAIIA) by Dr. Nathan C. Walker
OECD Updated Recommendation by Dr. Margaret Mitchell
Colorado AI Act by Dr. Rumman Chowdhury
OECD Recommendation of the Council on Artificial Intelligence

Dr. Christine Custis
Research Associate and Program Manager for the Science, Technology, and Social Values Lab
Dr. Christine Custis is a Research Associate and Program Manager for the Science, Technology, and Social Values Lab. A computer scientist and organizational strategist whose work has spanned industry, civil society, and academia, Dr. Custis has more than two decades of experience in the development and governance of emerging science and technology. At IAS, she collaborates with Alondra Nelson on multidisciplinary research and policy initiatives examining the social implications of AI, genomics, and quantum science.
She previously served as Director of Programs and Research at the Partnership on AI (PAI), a nonprofit, multisector coalition of organizations committed to the responsible use of artificial intelligence. At PAI, she oversaw the research, analysis, and development of a range of AI-related resources, from policy recommendations and publications to tools, while leading workstreams on labor and political economy; transparency and accountability; AI safety; inclusive research and design; and the public understanding of AI. An expert in both domestic and international policy, Christine served as a member of the Organisation for Economic Co-operation and Development (OECD) expert group on trustworthy AI and previously worked at the MITRE Corporation and IBM. She has advised and collaborated with a range of organizations, including the Global Democracy Coalition, the US National Institute of Standards and Technology, the US National AI Research Resource Task Force, the New America Open Technology Institute, the UC Berkeley Center for Long-Term Cybersecurity, and many others. She holds an M.S. degree in computer science from Howard University and received her Ph.D. from Morgan State University.
Definition Text
AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
This is a Lineage Definition
Source text: Recommendation of the Council on Artificial Intelligence
Place of origin: International
Enforcing entity: N/A
Date of introduction: March 15–16, 2019
Date enacted or adopted: May 22, 2019
Current status: Updated November 8, 2023
Motivation
The OECD (Organisation for Economic Co-operation and Development) describes itself on its website as a forum and knowledge hub for data, analysis, and best practices in public policy, working with over 100 countries across the world. It plays a critical role in helping to “coordinate responses to the use of AI based on international, multidisciplinary and multi-stakeholder cooperation to ensure that the development and use of AI benefits people and the planet.” At the May 2019 OECD Meeting of the Council at Ministerial Level, those developing the recommendation for the definition of AI system recognized the important work being carried out on AI and noted, as stated in the guidance, that “given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context.” Standards development, including accepted definitions, is part of this stable policy environment deemed important by the OECD.
This marked a concerted effort by the organization to set a standard meaningful to the work being done in privacy and data flow, cryptography policy, public sector information use, and the “global implications that are transforming societies, economic sectors and the world of work” which is relevant to civil society, academia, corporations, governments, and communities.
Approach
The definition includes the phrase “human-defined objectives.” However, in the emerging age of AI agents, objectives may not always be human defined. For instance, Anthropic has adopted a practice with only selective human involvement: its published Constitutional AI concept, in which AI model development is governed by other AI models, with humans contributing indirectly in the form of human-generated rules or principles. It can be conjectured that such rules or principles sit a level of abstraction (or more) above objectives. The updated definition (2023) seems to capture this with its explicit/implicit terminology.
The definition also includes the term “machine-based.” In the simplest case, the insertion of humans-in-the-loop to complete the system might make the term “machine-based” too limiting. We may also need to think of the AI system as more than its operational component and extend the reach of the definition. Further down in this OECD guidance is a definition of the AI system lifecycle, which confirms the limitations placed on the AI system definition. The absence of data labelers and mineral extractors, roles filled by humans, highlights the need for humanity as a member of the AI system composite. ISO/IEC 22989, published in 2022, uses the term “engineered system,” which might offer a broader encapsulation of both human and machine components.
I personally like the absence of the terms “model,” “algorithm,” “decisionmaking,” and “simulation” (as in simulation of human intelligence). For this and other reasons, the definition could be meaningful in the handling of discourse beyond what the typical AI actor may consider to be AI and could serve as a powerful tool for reminding us of the policy innovations available through the application of existing frameworks and laws. In an article for the National Academy of Engineering, Alondra Nelson writes about how “The perception that AI governance inherently lags behind technological development overlooks an immediate solution: the application of existing laws, rules, regulations, and standards.” Broad and inclusive definitions encompassing the past and present of technology can help us to see the similarities and make the necessary connections.
An AI system can be considered a tool useful in automated decisionmaking. The Canadian Government defines an “automated decision system” as “Any technology that either assists or replaces the judgment of human decision makers. These systems draw from fields like statistics, linguistics and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural networks.” This Canadian Government definition complements the 2019 OECD definition of an AI system in that decisionmaking impacts and legal rights should be considered when an AI system is used to aid decisionmaking, especially for public services, where such decisions affect people directly.
The UNESCO 2022 Recommendation on the Ethics of Artificial Intelligence seems to expound on the definition by adding “means,” “methods,” and operational designation. A section of the recommendation follows:
“AI systems are information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. AI systems are designed to operate with varying degrees of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations. AI systems may include several methods…”
Reception
This 2019 version of the OECD definition of AI System was used in:
- the Regulation (EU) 2024/1689 (the EU “AI Act”) — Article 3(1),
- NIST AI Risk Management Framework (AI RMF 1.0),
- National Telecommunications & Information Administration (NTIA) AI accountability/glossary pages and federal policy glossaries,
- G20 AI Principles (2019),
- Canadian Directive on Automated Decision-Making,
- UK Government White Paper “A pro-innovation approach to AI regulation,” and
- the US Executive Order 14110 (2023).
There have been some critiques of the definition. Joanna Bryson wrote in a 2022 Wired article that the definition was vague in its handling of autonomy and adaptiveness. Additionally, the OECD itself has discussed the difficulty of sourcing liability and accountability data for the implicit objectives of any AI system.
Although I cannot say with certainty, I would imagine that the directives housed in the EU AI Act would hold some sway in EU courts. Otherwise, courts and judges apply statutes or regulatory text, and not necessarily the OECD language, which could be considered a technology-neutral starting point for deliberations.
National Artificial Intelligence Initiative Act of 2020 (NAIIA)

Dr. Nathan C. Walker
First Amendment and Human Rights Educator at Rutgers University
Dr. Nathan C. Walker is an award-winning First Amendment and human rights educator at Rutgers University, where he teaches AI ethics and law as an Honors College faculty fellow. He is the principal investigator at the AI Ethics Lab, the founding editor of the AI & Human Rights Index, a contributing researcher to the Munich Declaration of AI, Data and Human Rights, and a non-resident research fellow at Stellenbosch University in South Africa. Dr. Walker is a certified AI Ethics Officer and has held visiting research appointments at Harvard and Oxford universities. He has also served as an Expert AI Trainer for OpenAI’s Human Data Team, where he applied his expertise in law and education to frontier AI models. He has authored five books on law, education, and religion, and presented his research at the UN Human Rights Council, the Italian Ministry of Foreign Affairs, and the U.S. Senate. Nate has three learning disabilities and earned his doctorate in First Amendment law and two master’s degrees from Columbia University. An ordained Unitarian Universalist minister, Reverend Nate holds a Master of Divinity from Union Theological Seminary. More at sites.rutgers.edu/walker
Definition Text
ARTIFICIAL INTELLIGENCE.—The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.
Lineage Definition
AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
Source text: National Artificial Intelligence Initiative Act of 2020
Place of origin: Federal, USA
Enforcing entity: Federal Agencies
Motivation
In President Trump’s first term, there were four competing understandings of artificial intelligence: (1) one legal definition in the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (McCain NDAA); (2) another in the United States’ support of a nonbinding international agreement with the Organisation for Economic Co-operation and Development (OECD) in 2019; (3) the notable absence of a substantive legal definition in President Trump’s first AI executive order in 2019; and (4) Congress’s alignment with the OECD’s definition when passing the 2020 National Artificial Intelligence Initiative Act (NAIIA, pronounced “nye-ah”), embedded in the William M. (Mac) Thornberry National Defense Authorization Act. The NAIIA aimed to provide a unified federal definition of artificial intelligence to standardize how US agencies identify and regulate AI systems.
This definition was written in response to the first executive order on AI, issued by President Donald Trump in February 2019, Maintaining American Leadership in Artificial Intelligence (No. 13,859), which called for a coordinated national AI strategy. The NAIIA acknowledged the earlier definition of AI in the McCain NDAA, which introduced the first statutory definition of AI in federal law under the section governing Department of Defense activities.
By contrast, the NAIIA expanded the legal scope to create a whole-of-government approach, coordinating both defense and civilian agencies (including the Department of Defense, the National Science Foundation, the National Institute of Standards and Technology, and the Department of Education). It also closely aligned the federal definition with the international standard endorsed by 48 OECD member and non-member states and the European Union.
The NAIIA’s revised definition sought to reconcile fragmented definitions across federal agencies. It removed anthropomorphic analogies and articulated three core functions: perceiving, abstracting, and inferring. Its emphasis on perception and modeling reflected technological advances such as deep learning, autonomous vehicles, AlphaGo, and conversational AI, like Siri and Alexa.
Approach
NAIIA is helpful in that it uses a technique-agnostic definition that does not specify any particular approach, such as machine learning, deep learning, or neural networks. Doing so illustrates the attempt to create a future-proof law that is flexible enough to evolve with new methods. Other advantages of the definition include avoiding technical and legal jargon and focusing on processes rather than speculative ideas about what constitutes human-like intelligence. Compared to its 2019 predecessor in the McCain NDAA, the NAIIA correctly removes anthropomorphic phrases such as “acts like a human” and “thinks or learns.” It acknowledges both “real and virtual environments”; however, it would be more accurate to adopt the OECD’s 2024 language of “physical and virtual,” because, although intangible, digital spaces are real domains where social, economic, and political activity occurs.
There are other disadvantages to the NAIIA’s 2020 definition of artificial intelligence. Most notable is the absence of the term “generate.” The NAIIA definition predates the boom in generative AI following the launch of ChatGPT in November 2022. President Biden’s 2023 executive order, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, was the only one of the nine executive orders on AI to provide a legal definition of generative AI. President Trump revoked it upon returning to the White House in 2025.
It can be argued, however, that even without explicit reference to “generate,” the NAIIA’s 2020 definition applies to generative AI. It addresses the pre-generative concept of AI by recognizing that predictive modeling underlies generation, by describing how humans provide inputs and participate in data training, and by emphasizing the perceive-abstract-infer clauses that describe model learning behavior. Without the term “generate,” though, the definition could be misinterpreted as excluding the creative, expressive, and stochastic outputs that fall outside human-defined objectives and stem from randomness, unpredictability, and confabulation.
Other limitations include the absence of terms such as “varying levels of autonomy” and “adaptiveness,” which are included in the OECD’s 2024 updated definition. Without these terms (along with “generate”), the NAIIA definition does not explicitly recognize that AI systems can generate new content, operate along a continuum of human oversight safeguards, and learn and change after deployment, thereby posing risks such as model drift, unintended bias amplification, and evolving decisionmaking behaviors.
The NAIIA definition removed the phrase “without significant human oversight” as a first step in broadening its scope to include autonomous systems. The NAIIA continued to emphasize “human-defined objectives,” which, although ideal for underscoring human responsibility, fails to account for the fact that AI agents are increasingly designed to define their own intentions and operate semi-independently. An updated law should account for these developments.
The 2020 definition also implies that the federal government will use AI as a general-purpose technology, with core methods applicable across sectors. However, this legal framing overlooks the various ways agencies rely on narrow, sector-specific AI systems tailored to distinct regulatory contexts. For example, computer vision or anomaly behavior-detection models can be considered general-purpose AI for use in surveillance, facial recognition, and fraud detection. In contrast, medical imaging and diagnostic algorithms, pollution monitoring, and energy grid optimization are highly sector-specific systems that require specialized datasets, subject-matter expertise, and agency oversight. By not making this distinction, the NAIIA definition blurs the policy line: general-purpose AI systems benefit from a horizontal government strategy through the proposed cross-agency frameworks (e.g., NIST or OMB), while sector-specific applications may require additional vertical, domain-grounded oversight (e.g., subject-matter experts in the FDA, EPA, or DOJ).
Overall, the NAIIA’s 2020 definition, which was built upon the OECD’s 2019 definition, marks an important moment in US policy, moving from a defense-centric approach to a whole-of-government strategy. Its strengths lie in its accessible, inclusive language, while its weaknesses include failing to distinguish between general-purpose and narrow AI and omitting AI systems’ generative, autonomous, and adaptive capabilities.
Recommendation
In conclusion, it is recommended that the federal definition of artificial intelligence be updated as follows to address the technical, legal, and ethical gaps in current law.
ARTIFICIAL INTELLIGENCE.—The term “artificial intelligence” means a machine-based system that can, for a given set of direct or indirect human-defined objectives, generate, predict, recommend, or decide outputs that influence physical or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive physical and virtual environments; (B) abstract such perceptions into models through analysis in an automated or adaptive manner; and (C) use model inference to formulate options for information or action, or to generate new content. Artificial intelligence systems operate with varying levels of autonomy and adaptivity after deployment, and may include general-purpose AI technologies applicable across agencies, as well as narrow AI applications designed for specific-sector functions.
Reception
The NAIIA’s 2020 definition, aligned with the 2019 OECD principles and language, received wide bipartisan support upon its passage on January 1, 2021. Between 2023 and 2025, President Biden and, upon his return to office, President Trump incorporated the NAIIA definition into six of their executive orders on artificial intelligence. To illustrate its influence, as of November 2025, several states have passed laws on artificial intelligence that adopt an NAIIA/OECD-style definition. From its inception, this NAIIA/OECD definition has received widespread industry support.
The primary concern, however, is that the legal definition in the 2020 NAIIA is now outdated; despite widespread bipartisan support at the federal and state levels and endorsements from leading AI companies, Congress has yet to replace it to align with the OECD’s 2024 definition.
Furthermore, Congress and federal agencies will need to carefully align their efforts, as even the Advancing American AI Act of 2022 and the Office of Management and Budget’s 2025 Memorandum (M-25-21) incorrectly reference the outdated definition of AI in the McCain NDAA 2018.
Additional Resources
- Explanatory Memorandum on the Updated OECD Definition of an AI System (2024)
- BSA Comments on Bill on Fostering Artificial Intelligence Industry and Securing Trust (2023)
The Software Alliance (BSA) explicitly “recommends adopting the OECD’s definition of AI.” The BSA includes major AI companies such as Adobe, Amazon Web Services, Cohere, Databricks, Elastic, IBM, Microsoft, OpenAI, Oracle, Salesforce, and SAP.
OECD Updated Recommendation

Dr. Margaret Mitchell
Researcher and Chief Ethics Scientist at Hugging Face
Dr. Margaret Mitchell is a researcher focused on machine learning (ML) and ethics-informed AI development. She has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of AI conversation generation and sentiment classification. She has been recognized in TIME’s Most Influential People; Fortune’s Top Innovators; and Lighthouse3’s 100 Brilliant Women in AI Ethics.
She currently works at Hugging Face as a researcher and Chief Ethics Scientist, driving forward work on ML data processing, responsible AI development, and AI ethics. She previously worked at Google AI, where she founded and co-led Google’s Ethical AI group to advance foundational AI ethics research and operationalize AI ethics internally. Before joining Google, she worked as a researcher at Microsoft Research and as a postdoc at Johns Hopkins.
She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She is most known for her work pioneering “Model Cards” for ML model reporting; developing “Seeing AI” to assist blind and low-vision individuals; and developing methods to mitigate unwanted AI biases.
She holds a PhD in Computer Science from the University of Aberdeen and a Master’s in Computational Linguistics from the University of Washington. She likes gardening, dogs, and cats.
Definition Text
AI system: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
Lineage Definition
AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
Source text: Revised Recommendation of the Council on Artificial Intelligence
Place of origin: International
Enforcing entity: N/A
Date of introduction: November 8, 2023
Date enacted or adopted: May 3, 2024
Current status: In force
Motivation
As written, the 2023 OECD definition of “AI system” appears crafted to bridge a gap between machine learning-specific terminology and more common language, building from an earlier definition provided by the OECD. Changes from the previous OECD definition are striking and problematic, inserting terms used in machine learning but using them in a confusing or inappropriate way, ultimately making the definition less suitable for defining AI.
The first difference between the current definition and the previous OECD definition is the replacement of “[can,] for a given set of human-defined objectives” with “for explicit or implicit objectives,” a change that on its surface sounds closer to what is happening under the hood in machine learning, but that makes the definition less clear for regulators and machine learning developers alike. For example, what is an implicit objective?
Before going further in untangling the OECD definition’s phrasing, let’s unpack some of the basics about what an “AI system” is. One approach for building an AI system involves machine learning, where what’s called a model uses an objective function and inference to learn how to connect inputs to outputs. In the definition provided by the OECD, the phrase “explicit or implicit objectives” may be an attempt to reference the concept of an objective function in machine learning, which is a mathematical expression, distinct from a human high-level goal, and generally explicitly defined by a developer. In contrast, an “implicit” objective is one that a system can define for itself based on goals explicitly defined by a human. As a very simple example, a human-defined high-level goal of “make me a sandwich” might be met by a system using implicit objectives of getting bread out of the refrigerator, cutting it, etc. But this explicit-implicit distinction is a bit “in the weeds” for a general definition, and it misses that a person still plays an explicit defining role even when implicit objectives are involved. Or perhaps “implicit objective” is referring to something else. It’s not at all clear.
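As a rough, minimal sketch (in Python with NumPy; the data, model, and learning rate are invented purely for illustration and come from no source text), here is what an explicitly defined objective looks like in machine learning: a mathematical loss function that training minimizes, followed by inference that maps a new input to an output. Nothing in it corresponds to a human’s high-level goal, which is exactly the gap between the technical and the colloquial readings of “objective.”

```python
# Minimal sketch: in machine learning, the "explicit objective" is a mathematical
# expression (here, mean squared error), not a high-level human goal.
# All data, parameters, and the learning rate below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)                         # inputs the model receives
y = 3.0 * X + rng.normal(scale=0.1, size=100)    # target outputs to learn from

def objective(w, b):
    """Explicit objective function: mean squared error of the model's predictions."""
    return np.mean((w * X + b - y) ** 2)

# Training: adjust parameters to minimize the explicit objective via gradient descent.
w, b, lr = 0.0, 0.0, 0.1
print("objective before training:", objective(w, b))
for _ in range(200):
    preds = w * X + b
    w -= lr * np.mean(2 * (preds - y) * X)
    b -= lr * np.mean(2 * (preds - y))
print("objective after training:", objective(w, b))

# Inference: the trained model maps a new input to an output (a prediction).
print("prediction for input 0.5:", w * 0.5 + b)  # ≈ 1.5, since the data follow y ≈ 3x
```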
Adding to the confusion is that this change from the previous OECD definition removes a key component of an AI system: the human developers who define it. Where the definition might benefit from being more human-centered (e.g., by highlighting that common systems are trained using human-created data), it instead removes the role of the human altogether, elevating the concept of an AI system to something that exists outside of human creation, imbued with a kind of mystique that can only further fuel misunderstandings and inappropriate hype.
The rest of the text is similar, using terminology that is hard to follow from a machine learning perspective, and that I can’t imagine is easier to parse from a non-machine learning perspective.
Approach
From the original OECD text of “can…make predictions…,” this definition substitutes “infers…how to generate…predictions,” adding terminology reminiscent of the concepts of inference and generation in machine learning, but applying it inappropriately and introducing multiple issues. Machine learning processes underlying many AI systems leverage an input-output relationship similar to the one provided, where a model can be said to “infer” (using inference) “from the inputs it receives, how to generate outputs.” However, in the context of defining an “AI system,” there are three main issues with this framing. The first is that outside of technical machine learning text, “infers” generally refers to a mentalistic concept that denotes reasoning, judgment, having thoughts, holding opinions, etc. (sources: Merriam-Webster Dictionary, Oxford English Dictionary, Cambridge Dictionary). While it shares some similarities with the technical term, the more general usage of “infer” afforded by the definition’s context brings with it the risk (if not the direct requirement) of anthropomorphism: if something can infer, then surely it can think. This problem is compounded by the colloquial mentalistic concepts now used in marketing AI systems, like “chain of thought reasoning,” which corresponds to algorithmic processes and is not a representation of the human mind.
The second issue is that many technologies referred to as “AI” are not based on machine learning and can’t be said to “infer” in the colloquial or technical sense; for example, systems that are rule-based (e.g., ELIZA, SYSTRAN) or statistical without using machine learning (e.g., Deep Blue). The third issue is that machine learning models are often just one part of an “AI system,” which is what this definition is attempting to define. In contrast to an isolated machine learning model, an AI system, such as ChatGPT, can incorporate a variety of other programming techniques, rules, and hand-designed algorithms.
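As a concrete illustration of the rule-based case, the following minimal sketch (the patterns and canned replies are invented for illustration) maps inputs to outputs with hand-written rules in the spirit of ELIZA. There is no training, no objective function, and no inference in the machine learning sense, yet such systems have long been called AI.

```python
# Minimal sketch of a rule-based "AI" in the spirit of ELIZA: hand-written
# pattern -> response rules, with no learning and no statistical inference.
# The rules and replies below are invented for illustration only.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    """Apply the first matching hand-written rule; otherwise fall back to a canned prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I feel ignored by this definition"))
# -> "Why do you feel ignored by this definition?"
```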
As such, the usage of “infers” here serves to further muddy and confuse the issue, blending machine learning-esque terminology (e.g., inference) with mentalistic, anthropomorphized concepts (e.g., thinking) to produce a phrasing that is stunningly unclear and fails to capture multiple types of AI systems. (To make the definition more clear, perhaps “infer” could be defined without using anthropomorphizing language.)
Examining the definition as a whole, there are further issues in what it includes and excludes. On the one hand, the definition is inappropriately inclusive of non-AI systems, such as a traditional weather forecasting system: it takes inputs such as sensor data (temperature, pressure, humidity, wind speed, satellite imagery), processes them with mathematical models grounded in physics (fluid dynamics, thermodynamics) to predict how weather systems evolve, and generates outputs in the form of forecasts (predictions about rain, temperature, storms, etc.) that influence physical environments (e.g., people wearing a coat, farmers deciding when to plant). Such systems are generally not considered to be AI, but they do fall under the given definition.
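A toy sketch of such a forecaster (the rule of thumb and thresholds below are drastically simplified and invented for illustration, not a real meteorological model) shows how a purely hand-coded, physics-inspired system already satisfies the definition: it takes sensor inputs, generates a prediction as output, and that output can influence a physical environment, with no machine learning involved.

```python
# Toy sketch of a non-AI forecaster: a hand-coded, physics-inspired rule of thumb
# (falling pressure plus high humidity suggests rain), not machine learning.
# Thresholds are invented for illustration and are not a real meteorological model.
def forecast(pressure_hpa_now: float, pressure_hpa_3h_ago: float, humidity_pct: float) -> str:
    """Generate a forecast (an output/prediction) from sensor inputs."""
    pressure_trend = pressure_hpa_now - pressure_hpa_3h_ago
    if pressure_trend < -3 and humidity_pct > 70:
        return "rain likely"
    if pressure_trend > 3:
        return "clearing skies"
    return "no significant change"

# The output influences a physical environment: someone decides to carry a raincoat.
prediction = forecast(pressure_hpa_now=1004.0, pressure_hpa_3h_ago=1009.5, humidity_pct=82.0)
print(prediction, "-> bring a raincoat" if prediction == "rain likely" else "")
```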
On the other hand, the definition problematically overlooks the importance of one of the most critical aspects of AI systems for policy: the role of training data. As discussed above, the current definition appears to align with machine learning AI systems, which leverage training data often created by humans at the expense of their rights. Yet how human rights are affected is something that is particularly important for politicians to be aware of when examining AI systems, and this key issue is erased in the definition.
Reception
I am not aware of any industry practitioners using this definition, nor have I come across it in industry circles; I do not think it is well accepted in industry. Notably, Bezerra et al. discuss how the definition is overly broad and poses a critical risk for the AI market. A related discussion of issues with the definition took place in the OSI forum.
Colorado AI Act

Dr. Rumman Chowdhury
Co-founder and Distinguished Advisor of Humane Intelligence
Dr. Rumman Chowdhury (she/her) is a globally recognized leader in data science and responsible AI, uniquely positioned at the intersection of industry, civil society, and government. She is a sought-after speaker at high-profile venues, including TED, the World Economic Forum, the UN AI for Good Summit, and the European Parliament, where she brings a rare combination of technical expertise and real-world implementation experience. She is the former US Science Envoy for AI at the US Department of State; the former Engineering Director of Machine Learning, Ethics and Transparency at Twitter; the founder and CEO of Parity AI (acquired by Twitter); and the former Managing Director of Responsible AI at Accenture. Rumman co-founded Humane Intelligence in 2022 and served as its CEO until August 2025. She stepped down to launch her new startup, details about which are coming soon. Rumman holds dual bachelor’s degrees from MIT and a master’s degree from Columbia University, and completed her PhD in Political Science at UC San Diego. She remains a Distinguished Advisor of Humane Intelligence, the nonprofit.
Definition Text
“Artificial Intelligence System” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
Lineage Definition
AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
Source text: An Act Concerning Consumer Protections for Interactions with Artificial Intelligence
Place of origin: Colorado, USA
Enforcing entity: State Attorney General
Date of introduction: April 10, 2024 in SB24-205
Date enacted or adopted: May 17, 2024
Current status: Enacted (but implementation has been postponed)
Motivation
The Colorado AI Act’s definition of “artificial intelligence system” was written in response to the growing need for regulatory clarity as AI technologies began to play a decisive role in consequential decisions for individuals—such as hiring, housing, healthcare, credit, and more. The definition targets the problem of “algorithmic discrimination,” situations where automated systems can yield unfair outcomes or treat individuals or groups differently based on protected attributes. By crafting a direct, functional definition, lawmakers sought to cover a broad range of present and future AI tools likely to impact consumer rights.
This definition appears to be a descendant of earlier frameworks such as the EU AI Act draft and the NIST AI Risk Management Framework, but it differs in some respects. The Colorado language explicitly references both explicit and implicit objectives and makes outputs—including non-decision content—central, reflecting how large language models and general-purpose AI systems (like ChatGPT, Gemini, and Claude) can influence user experience and outcomes. Events surrounding the rapid adoption of generative AI in 2023–2024, along with high-profile incidents of bias and opaque decisionmaking, clearly influenced both the need and the shape of this wording. The state balanced the need for innovation with robust consumer protections, aiming to future-proof the law against evolving AI capabilities while supporting transparency and legal accountability.
Approach
The Colorado AI Act takes a comprehensive, rights-focused approach to regulating artificial intelligence, targeting both the technologies themselves and their impact on consumers. It lays out a broad definition of “artificial intelligence system” while identifying explicit inclusions and exclusions that shape its scope, utility, and unique jurisdictional features.
Inclusions and Exclusions
The definition encompasses a wide array of current and future AI systems, from automated decisionmaking engines to generative content tools and algorithmic recommenders.
Included technologies are not limited by architecture or technique: anything with inferential capabilities that shapes outputs affecting users is covered, whether rule-based, statistical, or driven by machine learning. “High-risk artificial intelligence systems”—those used for consequential decisions in areas such as employment, education, healthcare, housing, finance, government services, insurance, or legal services—are the main regulatory target.
Yet, the Act carves out notable exclusions. Systems used solely for procedural tasks or for pattern detection with robust human oversight escape high-risk classification. Standard tools such as calculators, anti-virus software, spell-checkers, video games, web hosting, and certain natural language bots functioning strictly under non-discriminatory usage policies are excluded unless they make “consequential decisions.” Technologies governed by federal approval or certain sectoral regulations, and work done for federal agencies, are exempt if pre-existing standards are sufficiently strong. Small deployers (those with fewer than 50 employees that do not train the systems with their own data) face lighter requirements, recognizing resource constraints and risk profiles.
Stakeholders: Who is Regulated?
The Act clearly delineates “developers” (those who create or substantially modify AI systems) and “deployers” (entities using high-risk systems in Colorado). For developers, regardless of whether located in Colorado, obligations apply if their systems are used in the state. Deployers must conduct risk management, impact assessments, and consumer-facing disclosures before deployment of high-risk AI. Ultimately, the law aims to protect “consumers”—defined as individuals residing in Colorado—from algorithmic discrimination in critical life areas.
Jurisdictional and Legal Context
The language is closely tied to Colorado consumer protection statutes and civil rights laws, referencing “classification protected under the laws of this state or federal law” for discrimination claims. Procedural obligations and enforcement powers are vested in the Colorado Attorney General, distinctly contrasting with private right of action models seen in some states or federal proposals—“this part 17 does not provide the basis for … a private right of action.”
Terms such as “consequential decision,” “algorithmic discrimination,” “developer,” and “deployer” are defined in detail to limit ambiguity. Exemptions for entities operating under stringent federal or sectoral standards tie the Act to national regulatory infrastructure while maintaining state sovereignty.
Scope Through Companion Terms
Related terms that define the scope of the AI definition include:
- “Algorithmic discrimination”—directly linked to consumer protection and anti-discrimination law, setting out what constitutes harm.
- “High-risk artificial intelligence system”—anchoring the law’s main application to use-cases where decisions have “material legal or similarly significant effect.”
- “Intentional and substantial modification”—clarifying when developer obligations are triggered for updates or retraining, with exceptions for mere model drift or technical maintenance.
- “Substantial factor”—further specifying systems whose outputs can alter consequential decisions.
Differences from Predecessor Definitions
Compared to predecessor definitions (like the EU AI Act or NIST RMF), Colorado’s scope is simultaneously broader in capturing a wider set of inferential processes (“explicit or implicit objective”) and more targeted by focusing regulatory attention on high-risk consequential use-cases. The Act excludes low-impact infrastructure and procedural automation tools despite their “machine-based” nature. Colorado’s version also ties compliance to recognized risk management frameworks, permitting a safe harbor for organizations that follow standards like the NIST RMF or ISO/IEC 42001.
The language adjustments—such as “any explicit or implicit objective” and “can influence physical or virtual environments”—ensure future-proofing and accommodate novel AI modalities like generative and conversational systems. At the same time, the detailed list of exclusions is itself an innovation, striving to balance public protection with business practicality.
In summary, the Colorado AI Act combines broad definitional language with specific operational scope, targeting technologies and stakeholders most likely to impact consumers in material ways. Its layered approach to definitions, inclusion/exclusion criteria, and jurisdictional specificity position it at the forefront of US state-level AI regulation while highlighting both practical utility and ongoing complexity for compliance and enforcement.
Reception
Colorado’s Act is viewed as a pioneering model in US state-level regulation, prompting legislative interest in California, Illinois, and New York City, and influencing ongoing debates at the federal level and in industry forums. Other US state laws, such as recently proposed bills in California and Illinois, echo Colorado’s focus on algorithmic discrimination, consumer protection, and transparency, though few match its detailed controls or coverage of both developers and deployers. At the federal level, the White House’s AI Bill of Rights provides guideline principles that align with Colorado’s emphasis on protections in consequential decisions, but falls short in regulatory force.
Notable endorsements or support have emerged from privacy advocacy organizations and some legal scholars who praise the definition’s breadth and explicitness, particularly its coverage of both “explicit or implicit objectives” and its wide range of outputs, including content, predictions, and recommendations. Legal experts, such as the American Bar Association and practitioners in AI law, have highlighted Colorado’s approach as likely to shape future legislation nationwide, especially in regulating “high-risk” AI and aligning with anti-discrimination policies.
Industry practitioners and some technology industry groups, however, have raised significant critiques. The US Chamber of Commerce, the Chamber of Progress, and the Consumer Technology Association opposed the Act’s breadth, citing concerns that ambiguous and expansive mandates would create compliance challenges, stifle innovation, and be burdensome for smaller developers. The law’s complex definitions and procedural requirements have also led to “buyer’s remorse” among state leaders, who delayed implementation after encountering stakeholder confusion and industry pushback.
Colorado’s courts have not yet interpreted the definition, as the Act’s effective date is delayed, and enforcement will be in the hands of the state Attorney General rather than private litigants. Legislative bodies nationwide continue to debate how much of Colorado’s terminology and structure to copy, with some adopting similar concepts (“high-risk AI,” “consequential decisions”) but not the full definition.
Defining Technologies of our Time: Artificial Intelligence © 2026 by Aspen Digital. This work in full is licensed under CC BY 4.0.
Individual entries are © 2026 their respective authors. The authors retain copyright in their contributions, which are published as part of this volume under CC BY 4.0 pursuant to a license granted to Aspen Digital.
The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.

