John S. McCain National Defense Authorization Act

Mark Dalton
Senior Policy Director of Technology and Innovation at R Street
Mark Dalton leads R Street’s technology and cybersecurity policy portfolios as the organization’s senior policy director of technology and innovation.
Before joining R Street, Mark served as head of strategic planning at the U.S. Naval Undersea Warfare Center (NUWC) Division Newport, where he played a key role in aligning the organization’s acquisition and research initiatives with the strategic needs of the Navy and the Department of Defense. Prior to that, he served as NUWC’s chief engineer for undersea warfare cybersecurity, delivering cyber solutions across research, development, engineering, and testing. He also established NUWC’s inaugural portfolio of cybersecurity research and development, focused on leveraging artificial intelligence, machine learning, and formal mathematical methods to combat cyber threats.
As a computer scientist, Mark conducted research in the application of reinforcement learning to optimize behaviors in autonomous systems. He also managed numerous installations and tests of prototype technologies aboard U.S. Navy submarines worldwide.
Mark holds a master’s degree in national security and strategic studies from the U.S. Naval War College, a master’s degree in computer science from the New Jersey Institute of Technology, and a bachelor’s degree in information technology from the University of Massachusetts Lowell. He is currently pursuing a PhD in international relations at Salve Regina University.
Mark resides in Portsmouth, Rhode Island, with his wife, Nichole, and their 18-year-old daughter, Addison. His 3-year-old dog, Ziggy, is his loyal officemate.
Definition Text
ARTIFICIAL INTELLIGENCE DEFINED.—In this section, the term “artificial intelligence” includes the following:
(1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
(2) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
(3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
(4) A set of techniques, including machine learning, that is designed to approximate a cognitive task.
(5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.
This is a Lineage Definition
Source text: Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019
Place of origin: Federal, USA
Enforcing entity: Department of Defense
Date of introduction: April 13, 2018, in H.R. 5515
Date enacted or adopted: August 13, 2018
Current status: Enacted (codified at 10 U.S.C. note prec. § 4061)
Motivation
Section 238(g)’s definition of AI serves many purposes for the Department of Defense (DoD). Foremost, it enables organizational coordination across one of the world’s largest and most complex bureaucracies. DoD comprises military branches, defense agencies, and contractor networks staffed by a blend of military and civilian personnel. A shared understanding of AI is necessary to plan, budget, acquire, deploy, and maintain capabilities, and this definition establishes that common lexicon.
The definition also supports the establishment of a senior oversight role responsible for AI strategy across the entire DoD. This reflects formal recognition of AI as a transformative military technology with significant implications for national security. Such recognition has substantial budgetary consequences: by codifying AI as strategically important in statute, Section 238 justifies billions of dollars in research, development, and acquisition funding for the technology.
The legislative context is equally important. By the time the NDAA was drafted, there were significant concerns about the capabilities of emerging peer competitors on the world stage. The democratization of advanced technology, and China’s state-backed AI initiatives in particular, raised anxieties in defense policy circles about U.S. competitiveness in technology domains tied to military capabilities and future operations. Section 238 positioned the NDAA as an instrument for national-scale technology modernization, extending beyond traditional hardware procurement to address emerging digital capabilities that sit uneasily within conventional defense acquisition processes. The definition simultaneously serves operational, budgetary, and strategic signaling purposes.
Approach
Section 238(g) adopts a notably expansive approach to defining artificial intelligence, employing five distinct formulations rather than a single, unified definition. This multi-faceted structure suggests the Department of Defense grappled seriously with AI’s conceptual breadth and chose comprehensiveness over precision. The definition encompasses system-level characteristics, implementation methodologies, and theoretical frameworks drawn from decades of AI research, creating an umbrella broad enough to capture nearly any intelligent system while still calling out specific technological approaches.
The definition contains five elements: (1) systems that perform tasks under varying circumstances without significant human oversight or learn from experience, (2) systems that solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action, (3) systems designed to think or act like humans, including cognitive architectures and neural networks, (4) techniques like machine learning designed to approximate cognitive tasks, and (5) systems designed to act rationally, including intelligent software agents or embodied robots that achieve goals through perception, planning, reasoning, learning, communication, decision making, and action.
These five elements can be organized into three conceptual categories. System-level definitions include Elements (1), (2), and (3), which focus on capabilities and behavior: autonomous learning systems that operate without human oversight, systems that replicate human cognitive functions across multiple domains, and systems explicitly designed to emulate human thinking. Implementation-based definitions are captured in Element (4), which shifts focus from complete systems to methodologies, characterizing AI as “a set of techniques, including machine learning, that is designed to approximate a cognitive task.” This formulation emphasizes methods over outcomes and acknowledges that AI often involves approximation rather than true replication of cognition. Finally, rational agent definitions appear in Element (5), which draws from classical AI theory—particularly the rational agent paradigm articulated by Russell and Norvig—describing goal-oriented systems that execute a complete cognitive cycle.
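To make the rational agent paradigm concrete, the sketch below implements the classic perceive-plan-act cycle that Element (5) describes. It is a minimal illustration only; the grid world, goal coordinates, and function names are hypothetical and are not drawn from the statute or from DoD guidance.

```python
# A minimal sketch of the rational-agent cycle named in Element (5):
# perceive -> plan/reason -> decide -> act, repeated until a goal is met.
# The grid world, goal coordinates, and function names are hypothetical
# illustrations, not anything specified by the statute.

from dataclasses import dataclass

@dataclass
class Percept:
    position: tuple  # where the agent currently is
    goal: tuple      # where it is trying to go

def perceive(position, goal):
    """Sense the environment (here, just the agent's own state and objective)."""
    return Percept(position, goal)

def plan(percept):
    """Reason toward the goal and decide on an action: step along x, then y."""
    dx = percept.goal[0] - percept.position[0]
    dy = percept.goal[1] - percept.position[1]
    if dx != 0:
        return (1 if dx > 0 else -1, 0)
    if dy != 0:
        return (0, 1 if dy > 0 else -1)
    return (0, 0)  # already at the goal

def act(position, action):
    """Execute the chosen action, producing a new state."""
    return (position[0] + action[0], position[1] + action[1])

# The agent loop: a goal-oriented cycle of perception, planning, and action.
position, goal = (0, 0), (2, 3)
while position != goal:
    action = plan(perceive(position, goal))
    position = act(position, action)
    print(f"moved to {position}")
```

Even this trivial loop runs the full cycle Element (5) enumerates (perception, planning, reasoning, decision making, and action), which suggests how low the statutory bar for a “rational agent” could sit.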
This definitional breadth serves clear strategic advantages. The multi-pronged approach provides future-proofing as AI technology evolves, ensuring that novel techniques and architectures will likely fall under at least one element. For an organization focused on maintaining technological superiority over multi-decade timescales, this flexibility is valuable. The definition accommodates symbolic AI, statistical machine learning, neural networks, and robotics simultaneously, reflecting the reality that military AI applications span this entire spectrum. From a coordination perspective, the broad scope enables the designated senior official to exercise oversight across diverse AI initiatives without artificial categorical limitations. For budgetary purposes, the expansive definition supports funding requests across a wide range of programs, all legitimately classified as AI investments.
However, this breadth introduces significant problems. The definition risks severe over-inclusiveness, potentially encompassing conventional automation and traditional software systems that few practitioners would characterize as AI. Element (4)’s language “techniques…designed to approximate a cognitive task” is particularly vague and could arguably include recommendation algorithms, basic statistical models, or rule-based systems. This ambiguity creates substantial risk for “AI washing,” the practice of labeling conventional systems as AI-enabled to secure funding or prestige. The commercial sector has extensively demonstrated this phenomenon, with products ranging from streaming service recommendations to smart appliances marketed as “powered by AI” despite employing relatively simple algorithms.
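To illustrate how low the bar of Element (4) could sit, consider the deliberately simple sketch below: a hard-coded, rule-based recommender with no learning component that nonetheless “approximates a cognitive task” under a literal reading. The genre labels and the rule itself are hypothetical, invented purely for illustration.

```python
# A deliberately simple, hard-coded recommender with no learning component.
# Under a literal reading of Element (4), even this arguably "approximates a
# cognitive task" (suggesting content a person might choose). The genre labels
# and the rule itself are hypothetical, invented purely for illustration.

def recommend(watch_history):
    """Suggest a genre by counting past views; the 'cognition' is one max()."""
    counts = {}
    for genre in watch_history:
        counts[genre] = counts.get(genre, 0) + 1
    if not counts:
        return "drama"  # arbitrary default for an empty history
    return max(counts, key=counts.get)

print(recommend(["sci-fi", "sci-fi", "comedy"]))  # -> sci-fi
```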
The definition also lacks meaningful thresholds or boundaries. What constitutes “significant human oversight” in Element (1)? When does approximation become sufficient to qualify as AI under Element (4)? The absence of such specificity may facilitate organizational flexibility but undermines clarity for program managers, acquisition officials, and external stakeholders attempting to understand what qualifies as AI under DoD policy. The five elements exhibit substantial overlap, with most contemporary AI systems qualifying under multiple provisions simultaneously. Rather than providing definitional precision through multiple lenses, this redundancy may generate confusion about which element governs particular edge cases.
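The overlap is easy to see in miniature. The sketch below scores a single hypothetical system, an autonomous navigation aid, against informal paraphrases of the five prongs; the predicates are rough stand-ins for legal tests rather than authoritative interpretations, but even this toy example qualifies under four of the five elements at once.

```python
# A toy demonstration of the overlap problem: score one hypothetical system
# (an autonomous navigation aid) against informal paraphrases of the five
# statutory prongs. The predicates are rough stand-ins for legal tests,
# not authoritative interpretations of Section 238(g).

system = {
    "learns_from_data": True,        # trained on operational sensor data
    "operates_unsupervised": True,   # runs without constant human oversight
    "humanlike_perception": True,    # interprets imagery as a human would
    "mimics_human_thinking": False,  # not a cognitive architecture per se
    "uses_machine_learning": True,   # statistical model under the hood
    "goal_directed_agent": True,     # plans routes toward an objective
}

elements = {
    "(1) autonomous/learning system": system["operates_unsupervised"] or system["learns_from_data"],
    "(2) human-like task performance": system["humanlike_perception"],
    "(3) thinks/acts like a human": system["mimics_human_thinking"],
    "(4) cognitive-approximation technique": system["uses_machine_learning"],
    "(5) rational agent": system["goal_directed_agent"],
}

matched = [name for name, hit in elements.items() if hit]
print(f"Qualifies under {len(matched)} of 5 elements:")
for name in matched:
    print("  " + name)
```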
The definition’s complexity presents additional challenges. It employs technical terms like “machine learning,” “cognitive architectures,” and “neural networks” that themselves lack precise boundaries and may not clarify matters for policymakers, military leadership, or even technical personnel outside AI specializations. This approach assumes familiarity with AI’s academic lineage, drawing implicitly on traditions from early theorists like Marvin Minsky through the symbolic AI movement, connectionism, and evolutionary computation. While this intellectual heritage is appropriate for a technical definition, it may limit accessibility for the broader DoD audience that must operationalize these concepts.
Notably absent from the definition are considerations that might distinguish AI from conventional software: requirements for large-scale data, computational intensity, emergent capabilities, or specific architectural features. The definition also predates the current generation of large language models and generative AI systems, raising questions about whether these technologies fit comfortably within the existing framework or represent capabilities that strain its boundaries.
From a legal and jurisdictional perspective, this definition applies specifically within the DoD context and carries statutory force for defense programs and budgets. However, it is not binding on other federal agencies, which have developed their own AI definitions for regulatory, research, or operational purposes. This lack of government-wide standardization creates potential coordination challenges and reflects the difficulty of establishing universal AI definitions across diverse mission spaces and regulatory contexts.
Reception
Section 238(g)’s definition produced tangible organizational consequences within the Department of Defense, most notably the establishment of the Joint Artificial Intelligence Center (JAIC) in 2018, later reorganized as the Chief Digital and AI Office. The definition enabled systematic tracking of AI-related expenditures across the defense budget and justified substantial increases in AI investment. Its influence extended beyond DoD, serving as a template for subsequent federal AI definitions and strategies, though this cross-pollination has proven problematic when definitions optimized for budgetary and organizational purposes are applied to regulatory contexts requiring greater precision. The definition has not been subject to significant judicial interpretation. While it facilitated coordination within defense circles, critics have noted that its overbreadth creates opportunities for AI washing and fails to provide meaningful boundaries for what qualifies as artificial intelligence in operational or acquisition contexts.
Defining Technologies of our Time: Artificial Intelligence © 2026 by Aspen Digital. This work in full is licensed under CC BY 4.0.
Individual entries are © 2026 their respective authors. The authors retain copyright in their contributions, which are published as part of this volume under CC BY 4.0 pursuant to a license granted to Aspen Digital.
The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.

