Lineage Entries
Algorithmic Accountability Act of 2019 by Professor Ben Winters
Algorithmic Accountability Act of 2022 by B Cavello
California Privacy Protection Agency (CPPA) Rulemaking Draft by Professor Rashida Richardson
Algorithmic Accountability Act of 2019

Professor Ben Winters
Director of AI and Privacy at the Consumer Federation of America
Professor Ben Winters is the Director of AI and Privacy at the Consumer Federation of America. Ben leads CFA’s advocacy efforts related to data privacy and automated systems and works with subject matter experts throughout CFA to integrate concerns about privacy and AI in order to better advocate for consumers. Ben is also an adjunct professor at the University of the District of Columbia David A. Clarke School of Law.
Prior to CFA, Ben was an Attorney Advisor in the policy section of the Civil Rights Division of the Department of Justice, focusing on algorithmic harm in the civil rights context, and Senior Counsel at the Electronic Privacy Information Center (EPIC), where he led the AI/Human Rights project and advocated for accountability through legislative and direct legal action.
Ben is a graduate of the Benjamin N. Cardozo School of Law and SUNY Oneonta and is a member of the New York State and District of Columbia bars.
Definition Text
(1) AUTOMATED DECISION SYSTEM.—The term “automated decision system” means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.
This is a Lineage Definition
Source text: Algorithmic Accountability Act of 2019
Place of origin: Federal, USA
Enforcing entity: Federal Trade Commission (FTC)
Motivation
This was a very early attempt at regulating AI at the federal level in the US, so it had less precedent to build on. The John S. McCain NDAA and the Department of Defense had adopted definitions of AI itself, but this bill focused on the systems and the decisions they make, not just AI as a concept or broader category. The definition was written in light of stories including Amazon admitting that an algorithm it built led to discriminatory outcomes, Joy Buolamwini demonstrating that popular facial recognition programs cannot accurately recognize Black faces, and a case brought by the DOJ against Meta for its discriminatory housing ad targeting system. It also came in the wake of some of the first widely read books illustrating the common issues of algorithmic discrimination: Weapons of Math Destruction by Cathy O’Neil (2016), Technically Wrong by Sarah Wachter-Boettcher (2017), and Automating Inequality by Virginia Eubanks (2018). The bill was drafted before generative AI reached the market in popular, widely available consumer-facing tools, and it therefore reflects a tighter focus on decisionmaking systems.
All of these stories helped give shape and urgency to a previously more esoteric concept of algorithmic harm. Because it was a relatively new issue, for a period of time there was less industry pushback and more earnest engagement. Rashida Richardson, then at the New York Civil Liberties Union, published key information about ADS tools as part of the New York City Automated Decision System Task Force process and proposed a very similar definition (see page 20).
Approach
This proposal centered privacy and data use, building on the EU’s General Data Protection Regulation of 2016 and Sen. Wyden’s Mind Your Own Business Act of 2019. It was also inspired in part by an approach taken in New York City in 2017 to increase algorithmic transparency around the city’s use of AI.
There are three significant dynamics at the heart of this bill that remain central to ADS bills in 2025: what kinds of algorithmically driven decisions are covered, who bears responsibility when different entities create an automated system and deploy it (the “developer vs. deployer” question), and what kinds of entities are covered under the bill.
Qualifiers on what kinds of decisions are covered usually rely on a phrase like critical, rights-impacting, important, or consequential; in this bill, it is “high-risk.” The way high-risk is defined in this bill allows a fairly broad interpretation. However, the bill’s definitions also include the phrase “makes a decision or facilitates human decision making,” which significantly narrows the scope and makes it easy for entities to avoid being swept into coverage by deliberately characterizing their processes as less automated than they are.
In the definitions of this bill, the authors have also taken the approach of separately defining data protection impact assessments and ADS impact assessments.
Like many federal tech-related bills, this one would authorize the FTC to do the lion’s share of enforcement work, as well as some regulatory work that would shape what the bill looks like in practice. State attorneys general would also be given authority to enforce the law, but not to establish rules.
Reception
The bill was endorsed by advocacy groups including Data for Black Lives, the Center on Privacy and Technology at Georgetown, and the National Hispanic Media Coalition. Law professors Andrew Selbst and Margot Kaminski praised the bill in a New York Times op-ed for its ambition but noted it lacked clarity and enforceability in key areas. Industry groups, unsurprisingly, opposed the bill, arguing it was too broad and burdensome, as reflected in critiques like those from the Center for Data Innovation.
The bill’s definition of automated decision systems has since influenced similar legislation in Washington (2019), Connecticut SB2 (2024), and California’s AB-1018 (2025). That said, as the cultural and political understanding of “AI” has expanded since 2019, fewer bills today focus narrowly on this type of decisionmaking system—though the 2022 and 2025 versions of the same bill, with the same primary sponsors, build on and expand this definition, as discussed later in this resource.
Since the release of this bill in 2019, there has simultaneously been a reduced focus on present-day harms of the kind this bill addresses and an expanded focus on speculative harms. That said, the number of bills at both the federal and state levels that would affect the use of these types of systems has increased significantly overall.
Additional Resources
Algorithmic Accountability Act of 2022

B Cavello
Director of Emerging Technologies at Aspen Digital
B Cavello is a technology and facilitation expert who is passionate about creating social change by empowering everyone to participate in technological and social governance. They serve as director of emerging technologies for Aspen Digital, a program of the Aspen Institute. B also serves as assistant program chair for the Neural Information Processing Systems (NeurIPS) Conference, as a 2025 research fellow with Siegel Family Endowment, and as a board member of Metagov, an interdisciplinary research nonprofit promoting digital self-governance.
Previously, B advised Senator Ron Wyden on issues of privacy, internet governance, and algorithmic accountability. Prior to Congress, they were a research program lead at the Partnership on AI, senior engagement lead for IBM Watson, and director of both product development and community at Exploding Kittens. B has been recognized as an LGBT+ ‘Out Role Model’ and a Global Crowdfunding ‘All-Star,’ and was selected as an MIT-Harvard Assembly Fellow for the 2019 Ethics and Governance in Artificial Intelligence Initiative cohort.
Definition Text
(2) AUTOMATED DECISION SYSTEM.—The term “automated decision system” means any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.
Lineage Definition
(1) AUTOMATED DECISION SYSTEM.—The term “automated decision system” means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.
Source text: Algorithmic Accountability Act of 2022
Place of origin: Federal, USA
Enforcing entity: Federal Trade Commission (FTC)
Motivation
The Algorithmic Accountability Act of 2022 was a substantial revision and reintroduction of the Algorithmic Accountability Act of 2019. The 2019 bill was influential, hailed by many as a significant first step toward federal AI consumer protection. It was also criticized for being overly vague and for being introduced as a messaging bill. The 2022 update aimed to address these critiques with greater clarity and specificity.
One of the most significant changes in approach was a shift in emphasis from a focus solely on automated decision systems (ADSs) themselves to the augmented critical decision processes (ACDPs) in which ADSs are used. This new term, defined as “a process, procedure, or other activity that employs an automated decision system to make a critical decision,” was introduced in light of the reality that harms from automation do not arise only from flaws in the automated systems themselves. Rather, automation has the capacity to scale and speed up existing harmful processes, often while obfuscating the actual decisionmakers, making accountability more challenging.
The inclusion of ACDP was also intended to address some early uses of generative AI. While the Act was introduced before the release of ChatGPT, which elevated AI into the public consciousness, it was drafted when the precursor technology (the generative pre-trained transformer, or GPT) was already widely acknowledged in the AI research community.
Approach
Legal definitions of AI are often critiqued as being overly broad (often of the dismissive “this would cover a spreadsheet” variety). ADS in the Algorithmic Accountability Act of 2022 is no exception. One reason tech legislation uses intentionally broad language is an attempt to “future-proof” the text given the continually changing nature of many technologies. The Algorithmic Accountability Act of 2022 follows this rationale, and—with inspiration from the foundational research of Professor Rashida Richardson—recognizes that spreadsheets are, in fact, incredibly consequential technologies, if used to make consequential decisions.
If anything, the 2022 ADS definition is even more expansive than the 2019 version. The updated definition drops the phrase “that impacts consumers” altogether. Instead, the bill uses other defined terms and directions to the target agency (the FTC) to capture the consumer protection context. Similarly, the 2022 definition expands from merely “a computational process” to “any system, software, or process … that uses computation.” Arguably, these terms mean the same thing, but the latter is less susceptible to creative reinterpretations of the term “process” in the context of computing.
Instead of more narrowly defining ADS, the 2022 Act invokes other companion definitions like covered entity and critical decision to scope the application of the bill. However, one place where the 2022 ADS definition is specifically limited is in the exclusion of passive computing infrastructure, a term for “any intermediary technology that does not influence or determine the outcome of a decision.” Examples provided in the text for passive computing infrastructure include things like web hosting, networking, and data storage.
This narrowing language qualifies the final phrase of the definition: “the result of which serves as a basis for a decision or judgment.” This phrasing was updated from 2019’s “that makes a decision or facilitates human decisionmaking” because the earlier language anthropomorphized AI systems. The 2022 text clarifies more precisely how technology facilitates human decisionmaking (by producing results), but as an extra precaution, or to appease critics who claim that ADS covers everything under the sun, the passive computing infrastructure exclusion explicitly carves out anything whose results are not connected to the decision being made.
Another way the 2022 Act narrows the set of actors and technologies to which the bill applies is through the definition of the covered entities it is intended to target. The entities in question are defined by their size, determined either by gross receipts (think: revenue, investment, etc.) or data processing activity. This is a common pattern in tech policy, where legislators aim to target a specific (often newsworthy) set of notable actors, but don’t want to sweep up other actors (especially small businesses) in the process.
Because this bill is motivated by the speed and scale of impact that automation enables, defining covered entities in terms of size makes sense. The text has slightly different qualifying criteria for entities depending on whether they are employing ADS in processes for critical decisions themselves or whether they are offering tools that are intended or likely to be used by others who will. (Other policy writing in this space sometimes uses a “deployer” vs “developer” framing, but this bill does not.)
Finally, the bill navigates having such a broad ADS definition by clarifying critical decisions. Critical decisions in the bill are decisions that affect consumers’ “access to or the cost, terms, or availability of” education and vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, legal services, or similarly impactful areas of consumers’ lives. The definition of these decision categories echoes the EU AI Act, which was drafted during the same period, but is tailored specifically to the FTC context.
Decisions are a throughline of the Algorithmic Accountability Act of 2022. Indeed, the term automated decision system has it right in the name. Even so, because “decision” itself is undefined, it is possible that some interpretations of the term could unintentionally limit the applicability of the bill. (For example: whether “decision” is treated as a psychological process or as an outcome or ruling with consequence.) The critical decision definition gestures at an outcome rather than a process understanding of the term (“a decision or judgment that has any legal, material, or similarly significant effect”), but if enough were on the line, crafty lawyers might find creative arguments to challenge this framing.
As such, this language may need to be revisited in light of the rise in so-called “AI agents,” or automated systems that execute actions (even complex, multi-step actions) on behalf of a user. For what it’s worth, an original press release for the bill references examples that substantiate intent to cover these types of automated systems, following the outcomes framing of the term.
Reception
The 2022 Act was endorsed by Access Now; Accountable Tech; Center for Democracy and Technology (CDT); Color of Change; Consumer Reports; Credo AI; EPIC; Fight for the Future; IEEE; JustFix; Montreal AI Ethics Institute; OpenMined; Open Technology Institute (OTI); Parity AI; US PIRG; and others.
Critics of the bill raised concerns about the lack of pre-emption, ambiguity regarding the rulemaking decisions left to the FTC, and the usual objections about compliance burdens.
While the Algorithmic Accountability Act of 2022 did not pass, it was reintroduced in 2025 under the same name. Additionally, the revised ADS definition was referenced in a number of other policies, including the 2022 Blueprint for an AI Bill of Rights (White House OSTP), the 2023 No Robot Bosses Act (Sen. Casey), and the 2023 Transparent Automated Governance (TAG) Act (Sen. Peters), among others.
Additional Resources
California Privacy Protection Agency (CPPA) Rulemaking Draft

Professor Rashida Richardson
Distinguished Scholar of Technology and Policy at Worcester Polytechnic Institute
Professor Rashida Richardson is a Distinguished Scholar of Technology and Policy at Worcester Polytechnic Institute. Rashida is an internationally recognized expert in civil rights and artificial intelligence, and a legal practitioner on technology policy issues. Rashida has previously served as an Attorney Advisor to the Chair of the Federal Trade Commission and as a Senior Policy Advisor for Data and Democracy at the White House Office of Science and Technology Policy in the Biden Administration. She has worked on a range of civil rights and technology policy issues at the German Marshall Fund, Rutgers Law School, AI Now Institute, the New York Civil Liberties Union (NYCLU), and the Center for HIV Law and Policy. Her work has been featured in the Emmy Award-winning documentary The Social Dilemma and in major publications like the New York Times, Wired, MIT Technology Review, and NPR (national and local member stations). She received her BA with honors in the College of Social Studies at Wesleyan University and her JD from Northeastern University School of Law.
Definition Text
“Automated decisionmaking technology” or “ADMT” means any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.
(1) For purposes of this definition, to “substantially replace human decisionmaking” means a business uses the technology’s output to make a decision without human involvement. Human involvement requires the human reviewer to:
(A) Know how to interpret and use the technology’s output to make the decision;
(B) Review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and
(C) Have the authority to make or change the decision based on their analysis in subsection (B).
(2) ADMT includes profiling.
(3) ADMT does not include web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, anti-virus, anti-malware, spam- and robocall-filtering, spellchecking, calculators, databases, and spreadsheets, provided that they do not replace human decisionmaking.
Lineage Definition
(1) AUTOMATED DECISION SYSTEM.—The term “automated decision system” means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.
Source text: California Privacy Protection Agency Modified Text of Proposed Regulations (CCPA Updates, Cyber, Risk, ADMT, and Insurance Regulations)
Place of origin: California, USA
Enforcing entity: California Privacy Protection Agency
Date of introduction: May 9, 2025
Date enacted or adopted: September 22, 2025
Current status: Rulemaking Finalized
Motivation
Automated decisionmaking technology (“ADMT”) is a categorical term adopted by policymakers globally to refer to a wide range of technologies and applications, with varying degrees of automation, that are deployed to support or supplant human decisionmaking. As ADMT were increasingly integrated into enterprise systems and deployed across a range of contexts, they became an early focal point for policymakers concerned about their pervasiveness, opacity, and possible consequences.
In California, ADMT was initially referenced in Civil Code section 1798.185, subdivision (a)(15), which directed the California Privacy Protection Agency (CPPA) to issue regulations governing access and opt-out rights regarding businesses’ use of ADMT; however, the term was not defined. In November 2024, the CPPA issued a public notice of rulemaking to update CPPA regulations, including rules on consumers’ rights to access and opt out of businesses’ use of ADMT. Informed and influenced by a recent federal AI policy framework and academic scholarship that excogitated an approach to defining technical terms for technology policy proposals, the CPPA presented a multipronged ADMT definition.
This definition describes and focuses on the role that ADMT can play in human decisionmaking. In crafting this definition, the CPPA wanted to focus on what ADMT are actually doing and their impact, to clarify that such decisions are within the scope of the broader privacy law that the rulemaking sought to reform, and to ensure the underlying regulation delivers the comprehensive consumer protections provided by the law. Unlike the source text, this definition does not focus on or attempt to enumerate the techniques and processes that constitute autonomous or semi-autonomous technologies. Instead, it includes subprovisions that expound on key concepts and explicate what is included in or exempted from the definition.
The motivation and context of the CPPA’s ADMT definition are also noteworthy. Unlike the source text and other ADMT definitions, which were drafted to anchor legislation seeking to regulate these technologies or their use context, this CPPA definition exists to fill a gap and remedy a loophole in an existing privacy regulation. The definition was therefore drafted with a level of freedom and rationalization that is otherwise constrained by the more formal rules and norms of the legislative drafting process.
Approach
While predecessor ADMT definitions focused on the processing of personal data (GDPR) or the variegated techniques and processes that constitute technologies of interest (Algorithmic Accountability Act of 2019), this CPPA definition represents a major departure in its approach to tech policy legislative drafting. The CPPA presented a conceptual definition that attempts to describe the categorical term so that different stakeholders understand its general meaning for their context, rather than adopting or possibly misappropriating technical jargon in attempts to be prescriptive. In addition to providing a simplified statutory definition, the CPPA included subprovisions that further clarify the scope of the law and presumably thwart rulemaking comments and lobbying efforts that claim the definition is overinclusive.
The core definition focuses on the processing of personal information and the reliance on technology in decisionmaking. The processing of personal information grounds the definition in its underlying privacy regulation, while the emphasis on the role of technology in human decisionmaking, along with a subprovision that clarifies the level of human involvement that renders these technologies “automated,” highlights their key function and impact. The CPPA drafters intended to leave little ambiguity about their intentions and the meaning of ADMT, to ensure that common explanations of noncompliance cannot be leveraged and that subsequent (judicial) review does not distort its meaning and potentially render the regulation unenforceable.
The second subprovision of the definition clarifies that ADMT includes profiling, which is a term commonly used in US state privacy laws to refer to the automated processing of personal information to provide analysis or predictions about an individual. Although ADMT and profiling appear to be cognate terms, the explicit clarification of this subprovision is notable because it foreshadows a trend in state legislatures to indirectly regulate AI and ADMT by revising the definition of profiling or key legislative provisions related to profiling.
The third subprovision of the definition provides an illustrative and non-exhaustive list of technologies generally excluded from the definition of ADMT. It also clarifies that while the named technologies are generally excluded because they tend to facilitate operational needs, they cannot be used in decisionmaking to circumvent legal requirements for ADMT. Although this statutory definition is intentionally comprehensive to include common technologies used as ADMT, it was not drafted to subject every technical system or software to ADMT legal obligations. Including exemptions in the ADMT definition can also aid legal compliance by businesses, or oversight and enforcement of the regulation.
Notably, the rulemaking that provides this ADMT definition also includes a definition of artificial intelligence. This represents a stark difference from the predecessor definition, which includes artificial intelligence techniques as illustrative examples of the computational processes that constitute an ADMT. By defining these terms separately, the CPPA tacitly emphasizes that AI and ADMT are not interchangeable terms and that these technologies must be evaluated in the contextual settings in which they function. This distinction also has the practical effect of demonstrating the expansive and inclusive nature of ADMT.
Reception
Since this CPPA definition was part of a formal rulemaking process, it was drafted with awareness that both the definition and related Articles in the proposed regulation would be subject to intense debate and revision. Indeed, the proposed text in the notice of proposed rulemaking included several examples that sought to clarify the technologies that comprise ADMT, the types of technology-assisted decisionmaking that are covered by the definition, and how exemptions were context dependent. Nonetheless, there was significant pushback from industry groups, who claimed the definition was too broad and vague. Their primary complaints were that the scope of technologies covered was too broad, that the framing of ADMT use in human decisionmaking could capture too many business decisions, and that the exceptions were not fixed. Yet some business stakeholders were supportive. For instance, a technology lawyer and angel investor supported a broad conceptual definition to ensure broad regulatory protection and greater appreciation of the potential or cumulative implications of ADMT.
Additional Resources
- Initial Statement of Reasons (CCPA Updates, Cyber, Risk, ADMT, and Insurance Regulations) (2024)
- Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (2022)
- Defining and Demystifying Automated Decision Systems (2022)
- CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations (2025)
- Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force (2019)
Defining Technologies of our Time: Artificial Intelligence © 2026 by Aspen Digital. This work in full is licensed under CC BY 4.0.
Individual entries are © 2026 their respective authors. The authors retain copyright in their contributions, which are published as part of this volume under CC BY 4.0 pursuant to a license granted to Aspen Digital.
The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.

