In an era of growing cyber threats, traditional defensive measures can prove insufficient against sophisticated or novel tactics. As a result, the question of whether private or public entities should engage in active defense or offensive “hack back” tactics has taken on greater urgency. The legality and permissible scope of such tactics, especially when adopted by private actors, remain open questions. In this essay, we trace the contours of the law as applied to different types of active defense and offensive cybersecurity measures.
A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are—by definition—not passive defensive measures.
Instead, hack back tactics fall on a spectrum between active defensive measures, which protect the potential victim’s system or data, and offensive measures, which interfere with the attacker’s system or data. Some tactics fall squarely at one of these two extremes, but many are “mixed” tactics, combining characteristics of both active defensive and offensive measures or beginning with active defense efforts that culminate in offensive measures.
Hack back tactics can help deter cyberattacks, as they could increase prospective risks and actual costs for attackers; if attackers begin weighing the possibility of a retaliatory counterattack that could wipe out their servers or networks, malware that would help law enforcement track them down, or the potential corruption or encryption of any data they do exfiltrate, they may decide that an attack’s risks outweigh its reward. As compared to purely offensive tactics, calculated hack backs—such as malware booby traps—can, in theory, be precisely targeted to harm only the attacker’s systems, reducing the collateral damage to innocent parties.
But these tactics also carry significant risks. First, and perhaps most significant, is the risk of misattribution and collateral damage to unwitting third parties. Threat actors engaged in organized hacking campaigns often use commercial cloud service providers, botnets of compromised home or small-office routers, VPNs, and other infrastructure controlled by unwitting third parties to obfuscate the source of their attacks. In this context, there is a risk that a counterattacker might retaliate against a system belonging to an unwitting and innocent third party. Second, even if the counterattacker tracks down the attacker, retaliation against the attacker might still cause collateral damage to connected networks and systems that played no role in the attack. Finally, deploying a hack back tactic against an attacker may prompt the attacker to retaliate in turn, producing a cycle of escalating countermeasures in which each party, believing it is deploying a form of active defense, launches new cyberattacks and increases the risk of conflict or harm to innocent parties.
The prevailing view is that there is a blanket prohibition on “hacking back” by private parties. The Computer Fraud and Abuse Act (“CFAA”), codified at 18 U.S.C. § 1030, is the primary federal computer crime statute and provides broad civil and criminal liability for hacking.1 While the full extent of the CFAA’s application to hack backs is uncertain, the law creates two major barriers to private-party hack backs:

- Section 1030(a)(2) prohibits intentionally accessing a computer without authorization, or in excess of authorization, and thereby obtaining information from a protected computer.2
- Section 1030(a)(5)(A) prohibits knowingly causing the transmission of a program, information, code, or command and, as a result, intentionally causing damage without authorization to a protected computer.3
Taken together, these two sections appear to strictly circumscribe a private entity’s ability to engage in hack back tactics that involve accessing a hacker’s systems, but they do not necessarily proscribe active defense measures undertaken entirely within the private entity’s own systems.
Interesting questions regarding the application of the CFAA’s prohibitions could be raised by a victim’s advance deployment of disguised tools on its own network, such as a beacon, booby trap, or malware, pre-programmed to trigger automatically upon exfiltration to an unrecognized host.
A victim defending such measures in litigation might argue that they had not “intentionally access[ed]” the attacker’s system, or “knowingly cause[d]” the transmission of the code to the attacker’s system in violation of the CFAA so long as the attacker was the one to cause its exfiltration to their own system, and the victim did not issue commands to the tools after they left the victim network.
While certain hack back strategies might violate the CFAA when undertaken by private parties acting alone, certain federal statutes contemplate that private action might be permissible when paired with government oversight or authorization. Specifically, 18 U.S.C. § 1030(f) provides that the CFAA “does not prohibit any lawfully authorized investigative, protective, or intelligence activity of a law enforcement agency,” which arguably could allow federal agencies to deputize private cybersecurity firms, authorize them to trace intrusions, or permit them to carry out certain countermeasures on the government’s behalf. Similarly, Section 104 of the Cybersecurity Information Sharing Act of 2015 (“CISA”) authorizes private entities to, “for cybersecurity purposes, monitor” or “operate a defensive measure” on their own information systems or, with written authorization, on the information system of “another non-Federal entity.” Although Section 104 expressly excludes from “defensive measures” any measures that would “render unusable” or “substantially harm[]” another entity’s information or information system, there arguably remains room for limited government-sanctioned private hack back tactics.
Questions about the legal permissibility of hack backs by private actors should begin with an evaluation of the broad range of hack back tactics. The following chart traces the existing authority for eight “hack back” tactics, ranging from pure active defense to offensive measures.
Tactic | Goal | Description | Legal Authority |
---|---|---|---|
Dye Packets | Active Defense: Data Encryption | This tactic is used to automatically encrypt data when it is accessed or exfiltrated by an attacker, denying them access to the underlying information. | Permissible. This tactic is likely permissible under the CFAA, as it involves only the private actor’s own systems or data and does not involve accessing the hacker’s computer. |
Full Packet Capture | Active Defense: Monitoring Traffic | This tactic is used to capture and store all inbound and outbound traffic on a network. | Permissible. This tactic is likely permissible under the CFAA because it is used to track traffic entirely within the private actor’s system and does not involve accessing the hacker’s computer. |
Canary Honeypots | Active Defense: Detection & Tracking Attackers | This tactic involves the use of decoy devices or files within a system to attract attackers, alert the private actor when accessed, and misdirect attackers from the private actor’s data. | Permissible. This tactic is likely permissible under the CFAA to the extent it is contained within the private actor’s own system and does not involve accessing the hacker’s system. (A minimal illustration of this kind of purely defensive measure appears after this table.) |
Social Engineering & Dark Web Intelligence Gathering | Mixed: Information Gathering | These tactics involve engaging in observation, impersonation, and misrepresentation in order to gain information on hacker motives, activities, and capabilities. | Likely Permissible. To the extent these tactics involve the use of public channels or the private actor’s own systems to obtain information, they are likely permissible under the CFAA, but a private actor’s use of information obtained to log into or otherwise access or damage the hacker’s systems would likely violate the CFAA. |
Attributional Cyber Beacons, Keyloggers, Screen-Grabbing Tools | Mixed: Tracking, Identifying, & Monitoring Hackers | These tactics are used to track an attacker’s location or otherwise monitor and collect intelligence on an attacker. | Likely Impermissible. While they do not necessarily harm the attacker’s systems, they do monitor the attacker beyond the victim’s own systems and thus could be read to violate the CFAA, especially under a broader interpretation of the statute. Creative legal arguments may be available depending on the way in which these tools are deployed and used. But these tactics arguably might be legalized for private parties when combined with government oversight or authorization; the government might authorize a private company’s application of the beacon, under certain conditions, and then itself take the subsequent steps to take down a server or website. |
Booby Traps | Mixed: Deleting Exfiltrated Data | This tactic is used to wipe the memory of a computer that has intruded into a victim’s network or exfiltrated a victim’s data. | Likely Impermissible. Because this tactic involves accessing the hacker’s computer and wiping its memory, it may violate both of the relevant CFAA provisions absent government authorization or oversight, though creative legal arguments may be available depending on the way in which these tools are deployed and used. |
Distributed Denial of Service Attacks | Offensive: Disrupting, Restricting, or Disabling Access | This tactic is used to flood an attacker’s network with malicious traffic from other sources and prevent the attacker’s access to their own networks. | Likely Impermissible. Because this tactic contemplates accessing the attacker’s network and/or blocking the attacker’s access, it likely violates both of the relevant CFAA provisions, as well as Section 104, absent government authorization or oversight. |
Ransomware & Malware Attacks | Offensive: Disrupting, Restricting, or Disabling Access | This tactic involves the use of malware to encrypt data on an attacker’s computer. It can be used to lock up or recover the stolen information, or to inflict greater harm on an attacker’s server or network. | Likely Impermissible. Because this tactic contemplates accessing the attacker’s network and/or restricting access to systems or files, it may violate both of the relevant CFAA provisions, as well as Section 104, absent government authorization or oversight, though creative legal arguments may be available depending on the way in which these tools are deployed and used. |
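To make the dividing line in the table concrete, the following is a minimal sketch, in Python, of the purely defensive end of the spectrum: a canary-style decoy file that is monitored entirely on the defender’s own host and that merely raises a local alert when touched. The file path, polling interval, and alerting mechanism are hypothetical placeholders; this is an illustrative sketch rather than a production honeypot, and it takes no action of any kind against another party’s systems.

```python
# Minimal, purely local sketch of a canary-style decoy file monitor. Illustrative
# only: the path and interval below are hypothetical placeholders, and the script
# touches nothing outside the defender's own host.

import os
import time
from datetime import datetime

CANARY_PATH = "/srv/shares/finance/q3_payroll_DECOY.xlsx"  # hypothetical decoy file; must already exist
POLL_SECONDS = 5


def snapshot(path: str) -> tuple:
    """Capture the decoy file's access time, modification time, and size."""
    st = os.stat(path)
    return (st.st_atime, st.st_mtime, st.st_size)


def main() -> None:
    baseline = snapshot(CANARY_PATH)
    print(f"[{datetime.now().isoformat()}] Watching canary file: {CANARY_PATH}")
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot(CANARY_PATH)
        if current != baseline:
            # On filesystems mounted with relatime/noatime, access times may not update
            # reliably; real deployments typically rely on OS audit facilities
            # (e.g., auditd on Linux) rather than polling stat() metadata.
            print(f"[{datetime.now().isoformat()}] ALERT: canary file was touched "
                  f"(metadata changed from {baseline} to {current}).")
            baseline = current


if __name__ == "__main__":
    main()
```

Anything beyond this point on the spectrum, such as code that executes on or sends commands to an attacker’s infrastructure, implicates the CFAA provisions discussed above.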
Absent further developments clarifying the permissible scope of private-sector hack backs, private actors should generally continue to operate with some form of governmental authorization or oversight—especially when seeking to engage in offensive operations to disrupt hackers’ systems.
Such authorization or oversight might involve direct cooperation with federal law enforcement, but it can also be obtained through the pursuit of private rights of action, as illustrated by three use cases from Microsoft’s Digital Crimes Unit discussed below.
In 2013, Microsoft’s Digital Crimes Unit, acting under a civil seizure warrant issued under a district court’s general equitable powers,4 and in coordination with the FBI, helped disrupt roughly 1,400 botnets created with “Citadel,” a malware toolkit used to steal more than $500 million from financial institutions. The operation, code-named “b54,” “marked the first time that law enforcement and the private sector [] worked together . . . to execute a civil seizure warrant as part of a botnet disruption operation.”
In 2020, Microsoft’s Digital Crimes Unit engaged in an investigation that allowed it to identify details about Trickbot, one of the world’s largest malware operations used to infect and control victim computers, including the IP addresses of Trickbot’s command and control servers. Acting pursuant to a temporary restraining order issued by the U.S. District Court for the Eastern District of Virginia,5 Microsoft disabled the IP addresses, rendered the content stored on the command and control servers inaccessible, suspended all services to the botnet operators, and blocked any effort by the Trickbot operators to purchase or lease additional servers. Microsoft was able to obtain the court order in connection with a complaint alleging violations of the Copyright Act, the Electronic Communications Privacy Act, the Lanham Act, and state tort law resulting in injuries to Microsoft, its customers, and the public.
In September 2025, Microsoft’s Digital Crimes Unit announced that it had disrupted the RaccoonO365 phishing service, which has been used to steal Microsoft 365 credentials. This disruption was possible because Microsoft was able to engage directly with the threat actor without revealing its identity in order to acquire phishing kits and additional information. This allowed the Microsoft team to conduct Bitcoin transaction analysis to identify the threat actor. After obtaining this information, and pursuant to a temporary restraining order issued by the U.S. District Court for the Southern District of New York,6 Microsoft seized 338 websites associated with the service.
As the law currently stands, certain purely defensive measures are permissible so long as they affect only the victim’s own systems or data.
At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then, parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation.
As for the broad range of hack back tactics that fall between active defense and offensive measures, private parties should continue to engage in them only with government oversight or authorization. These measures exist in a legal gray area and would benefit from amendments to the CFAA and CISA that clarify and carve out the parameters of authorization for specific self-defense measures. But in the absence of such amendments or other clarification of those laws’ scope, private actors can seek governmental authorization through an array of channels, whether by partnering with law enforcement or by seeking court authorization for more offensive tactics in connection with private litigation.
[1] This statute is not without controversy. Professor Orin Kerr has argued that the language is so broad that courts might feel obligated to adopt narrow constructions to avoid unconstitutional vagueness concerns. Orin Kerr, Vagueness Challenges to the Computer Fraud and Abuse Act, 94 Minn. L. Rev. 1561 (2010). Some have suggested that the law invokes the rule of lenity, which would give parties engaging in hack backs the benefit of the doubt under ambiguous terms of the statute. And others have suggested that common-law doctrines of self-help, under which victims of physical theft are authorized to take limited, non-violent steps to recover property, could counsel in favor of interpreting the statute narrowly to exclude hack backs by private victims or even provide a separate legal basis for such activity. Jeremy Rabkin and Ariel Rabkin, Hacking Back Without Cracking Up, Hoover Working Group on National Security, Technology, and Law 14 (2016).
[2] Although the statute formally protects only specific classes of computers, any computer that is “used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States” is covered. 18 U.S.C. § 1030(e)(2).
[3] The argument that hack backs serve constructive goals, such as identifying hackers or deterring data theft, might be irrelevant for CFAA purposes: In Van Buren v. United States, 593 U.S. 374, 383 (2021), the Supreme Court adopted a narrow interpretation of “authorization” for the purposes of the CFAA, holding that the purpose or goal of the access was irrelevant.
[4] See Microsoft Corp. v. Does, No. 3:13-cv-00319, 2013 U.S. Dist. LEXIS 168237, at *8 (W.D.N.C. Nov. 13, 2013).
[5] Ex parte Temporary Restraining Order, Microsoft Corp. v. Does, No. 1:20-cv-01171-AJT (E.D. Va. Oct. 6, 2020).
[6] Ex parte Temporary Restraining Order, Microsoft Corp. v. Ogundipe, No. 25-cv-7111 (S.D.N.Y. Sept. 16, 2025).
This piece is part of Aspen Digital’s Playing Offense project, which tackles how lawmakers and industry leaders alike should think about offensive cyber operations, including both the risks and opportunities.
The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.