Responsible Advanced Access for Frontier AI Models

May 12, 2026

In the early years of modern cybersecurity, vulnerability discovery followed no consistent rules. Researchers sometimes published vulnerabilities immediately, vendors occasionally denied flaws outright, and users bore the cost. Information regarding vulnerabilities was shared on mailing lists such as BugTraq, magazines like Phrack, and private forums. Researchers had limited economic incentive to find or report vulnerabilities; often they were treated as malicious actors, facing attempts to both penalize and censor third-party research.

Coordinated vulnerability disclosure (CVD) emerged in the late 1980s with the establishment of the Computer Emergency Response Team Coordination Center (CERT/CC) at Carnegie Mellon University. This development led to other organizations creating their own CERTs. To unify these individual entities, the U.S. Department of Energy’s (DOE) Computer Incident Advisory Capability (CIAC), CERT/CC, and other CERTs launched the Forum of Incident Response and Security Teams (FIRST). Over time, these organizations and others helped formalize vulnerability sharing practices into widely accepted norms and standards, including ISO/IEC 29147 on vulnerability disclosure and related guidance developed by the U.S. National Institute of Standards and Technology (NIST). The core principle was to coordinate vulnerability disclosure through predictable and transparent rules of engagement. This approach allowed defenders to act before attackers gained an advantage while providing legal protections to security researchers acting in good faith.

These new norms did not slow security progress. They accelerated it by aligning incentives, reducing panic, and ensuring discoveries translated into fixes rather than chaos. Furthermore, standardization led to legal protections for security researchers as well as a market for vulnerability detection and associated compensation through bug bounties. Governments, including the United States, created vulnerability equity processes to balance national security concerns with those of their citizenries.

Frontier AI models with advanced cyber capabilities pose a similar coordination problem, albeit at a different scale. When a single model can surface vulnerabilities across operating systems, network infrastructure, or widely used open‑source components, the question of who gets access, when, and under what conditions becomes a matter of public interest. Even when legally permissible, allowing advanced access decisions to rest solely on the internal deliberations of individual technology providers risks significant public harm in the absence of a Responsible Advanced Access framework.

Frontier AI model developers are mostly private enterprises and retain broad discretion over their work product and intellectual property. However, unrestricted and opaque access decisions can function as de facto market gatekeeping—shaping economic outcomes, privileging certain sectors or customers, and disadvantaging others without clear, risk‑based justification. The resulting economic harm extends to downstream customers and communities that rely on those enterprises for critical services and security. Framed through a technology fiduciary lens, organizations stewarding powerful AI systems carry responsibilities not only to shareholders but to the broader ecosystem their technologies influence.

A purely private early access regime creates three risks with significant public interest impact.

First, security coverage gaps. No single company, however well-intentioned, has visibility into the full range of critical infrastructure technologies deployed across sectors. Communications networks, energy systems, industrial controls, and financial platforms all rely on distinct software and hardware stacks. If advanced access is ad hoc, entire segments of critical infrastructure may be left vulnerable until similar capabilities become more widely available.

Second, market distortion and mistrust. When access decisions are opaque, excluded organizations are left to guess whether they were overlooked due to risk, relevance, or relationships. Over time, this inconsistency erodes confidence in both frontier AI model developers and the broader AI governance ecosystem—much as secretive vulnerability handling once did. Selective access can also distort competition, as granting privileged capabilities to only some actors within a sector—but not others based on ambiguous criteria—creates downstream commercial advantages that shape customer decisions and market outcomes.

Third, speed without coordination. The United States must move quickly to stay ahead of strategic competitors, particularly China, which is heavily investing in offensive cyber and AI capabilities. But cybersecurity history offers a cautionary lesson: haste without coordination can actually slow progress. Standardized advanced access frameworks ensure that acceleration is shared, sequenced, and operationalized across trusted participants, rather than fragmented through ad hoc or preferential releases. Releasing capabilities faster than defenders can absorb them exacerbates downstream costs and systemic risk—a dynamic that responsible access is specifically designed to mitigate by aligning timelines with readiness.

Calling for a public early access process does not mean publishing sensitive technical details or exposing models to misuse. It means making the rules of the system visible, much like modern vulnerability disclosure policies are today.

At minimum, a public framework for advanced access to frontier models with advanced cyber capabilities should answer five questions:

  1. Who is eligible? Eligibility should be risk‑based, focusing on organizations whose disruption would have national security, economic, or public safety consequences; or whose technologies are uniquely positioned to surface meaningful findings. The eligibility criteria and the application process for access should be made transparent for applicants.
  2. What are the expectations of participants? Responsible Advanced Access should come with clear responsibilities: secure handling, structured testing, timely sharing of lessons learned, and coordination with peers through existing information sharing regimes.
  3. How is access sequenced and scaled? Not every organization needs access at once. A phased approach—common in vulnerability disclosure—allows findings to be triaged and for remediation to begin before capabilities diffuse more broadly.
  4. Who convenes and coordinates? Executive‑level coordination already exists under current U.S. cybersecurity and national security policy. For example, the National Security and Telecommunications Advisory Committee (NSTAC) is an existing venue for public-private partnerships. Other examples include Sector Risk Management Agencies, Sector Coordinating Councils, and multi-stakeholder convenings led by NIST. The missing piece is not authority but operationalization: translating convening power into a repeatable, cross‑sector process.
  5. Who defines the rules? Responsible Advanced Access should be accompanied by clearly articulated rules of engagement. This includes explicit usage terms, misuse thresholds, enforcement responsibilities, and predefined consequences, ensuring the rules are understood, monitored, and consistently enforced.

None of these attributes require heavy regulation or new licensing regimes. They require clarity, predictability, and confidence that early access serves the same collective defense logic that responsible disclosure does.

Any Responsible Advanced Access regime must also address the question of cost. For infrastructure operators, advanced AI‑enabled vulnerability discovery accelerates remediation timelines, which increases near‑term costs. For developers of frontier AI models, early access programs impose additional burdens as well: enhanced oversight, security controls, and governance to ensure models are used responsibly during sensitive development phases. Yet, securing critical infrastructure against emerging cyber capabilities of frontier AI models is not a commercial perk; it is a public interest function, analogous to CVD. As CVD policies became accepted practice, governments eventually recognized this reality by providing legal safe harbors, supporting information‑sharing institutions, and—at times—funding remediation at scale. Similar solutions will be needed to enable Responsible Advanced Access.

The United States faces a genuine strategic race in AI. Moving slowly is not an option, but neither is repeating the mistakes of the early cybersecurity era, when breakthroughs sped ahead of the institutions needed to manage them.

Responsible vulnerability disclosure taught us that transparency about the process—not exposure of vulnerabilities themselves—was the key to faster, safer outcomes. Frontier AI access should follow the same path. A public, principled, and transparent Responsible Advanced Access framework would not constrain innovation. It would channel it, ensuring the most powerful new cyber tools strengthen shared defenses before they inevitably spread.

In vulnerability disclosure, coordination became the accelerator. Frontier AI should be no different.

The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.

Vaibhav Garg is the Executive Director of Cybersecurity & Privacy Research and Public Policy at Comcast Cable, where he leads work at the intersection of cybersecurity, artificial intelligence, privacy, and technology policy. He has more than a decade of experience bridging technical research with public policy, working across engineering, product, legal, and government‑facing teams to translate policy principles into operational systems and standards. He has authored more than 30 peer‑reviewed publications, with work cited by the National Institute of Standards and Technology (NIST), the National Security and Telecommunications Advisory Committee (NSTAC), the Communications Sector Coordinating Council (CSCC), and international standards bodies. He has held leadership roles across industry and advisory bodies, including serving as Working Group Lead for NSTAC’s post‑quantum cryptography workstream, Vice Chair of the Consumer Technology Association’s cybersecurity and privacy committee, and Co‑Chair of CSCC’s Emerging Technology and Cybersecurity Committees.

Elizabeth Chernow serves as Associate Vice President, Public Policy at Comcast Corporation. In this role, she focuses on the development of the company’s positions on a range of issues including broadband, cybersecurity, and artificial intelligence. She joined the company in 2010 and has nearly two decades of policy experience. Elizabeth holds a J.D. from American University Washington College of Law and a B.A. in Journalism from The George Washington University. She serves on the Board of The WICT Network: Washington DC/Baltimore Chapter. Elizabeth is a member of the D.C. Bar, an associate member of the Virginia State Bar, and a member of the Federal Communications Bar Association.

Jayati Dev is a cybersecurity researcher working at the intersection of policy and emerging technologies. She leads the inventorying workstream for the Post-Quantum Cryptography Center of Excellence at Comcast. She is also a Public Policy Researcher leading Comcast’s efforts on AI security public policy. She previously worked in the same team as a Privacy Engineer and helped build privacy threat modeling tools. Dr. Dev holds a PhD in Security Informatics from Indiana University Bloomington where she worked on privacy-preserving technologies in conversational platforms. She also holds a Bachelor of Technology degree in Electronics and Communication Engineering from West Bengal University of Technology. She was a Google Public Policy fellow in cybersecurity policy and a co-lead researcher in a National Science Foundation multi-year investigation into IoT privacy. She also serves on the board of SCTE’s New England Chapter. She is the co-chair for the Emerging Technology Committee within the Communications Sector Coordinating Council. She also Co-Chairs the Special Interest Groups (SIGs) on Academia and Research as well as Public Policy within M3AAWG.


Noopur Davis currently serves as the Chief Information Security and Product Privacy Officer for Comcast, a global Fortune 30 media and technology company. She leads teams responsible for product security and privacy, privacy operations, cloud security, information and infrastructure security, cybersecurity risk, security engineering, security incident response, the legal response center, and technical fraud. Prior to Comcast, Noopur was Vice President, Global Quality at Intel Security Group. She was a Visiting Scientist and Senior Member of Technical Staff at the Carnegie Mellon University Software Engineering Institute, Principal of a management consulting firm, and a software developer and leader at various Fortune 500 companies including Chrysler Corporation and Intergraph. Noopur holds a bachelor’s degree in Electrical Engineering from Auburn University and a master’s degree in Computer Science from the University of Alabama. She is a member of several trade associations and serves on the Board of Directors of Regions Financial, the Board of Directors of Entrust, the Board of Advisors of Immersive Labs, and the Board of Directors of the National Technology Security Coalition.