Lost Legacy: The Risk of “End-of-Life” Technology

February 19, 2026
  • Eric Wenger
  • Senior Director for Cyber and Emerging Technology Policy, Cisco Systems

The 2024 year-end report from Cisco’s Talos Threat Intelligence Organization indicates that two of the top 10 exploited vulnerabilities in network devices, accounting for 34% of the total, targeted technology beyond its supported lifespan. A significant portion of this malicious activity was aimed at small office/home office (“SOHO”) routers widely used by small and medium-sized businesses for Internet connectivity. Significantly, these infected devices—almost all of which were well beyond the period of support from their manufacturers—served as a vector for pre-placement of malware on critical infrastructure systems in the campaign commonly referred to as “Volt Typhoon.”

The office networking devices leveraged to spread the attack in Volt Typhoon continued to function such that their owners likely never noticed an impact on system performance. Those same devices were leveraged by malicious actors to infect critical computer systems that treat water, distribute power, monitor roads, and route communications. While Volt Typhoon was identified before large-scale damage or outages occurred, the threat illustrates that the costly impact of end-of-life technology extends well beyond the devices and systems where it resides.

Another example: in 2023, the US and UK governments issued a joint advisory about a Russian state-sponsored advanced persistent threat (“APT”) actor that gained “unauthenticated access via a backdoor . . . into Cisco routers worldwide. This included a small number based in Europe, US government institutions and approximately 250 Ukrainian victims.” The alert states that the “vulnerability was first announced by Cisco on 29 June 2017, and patched software was made available.” Not only was the patch issued six years before these governments issued their alert, but even at that time the equipment was beyond the vendor support period.

A 2025 report from the U.S. Government Accountability Office puts the problem into stark relief. While the U.S. federal government spends more than $100B annually on IT, about 80% of that amount is consumed by operations and maintenance of increasingly expensive and obsolete systems. Technology beyond vendor support not only cannot be effectively secured; it also becomes increasingly expensive to operate over time. This dynamic consumes resources to the point where the government finds itself incapable of making the timely transitions necessary to capture the benefits of AI and to harden systems against the threat that scalable quantum computing will soon pose to classical encryption.

The problem is not limited to the U.S., either. “Industry estimates in 2020 were that globally, across business network infrastructure, almost half (48%) of assets were aging or obsolete [and] 60% of EU cyber breaches in 2022-2023 exploited known vulnerabilities for which there were patches, but which had not been applied.” (WPI and Cisco Systems Report 2020, p. 3, citing NTT (2020), Global Network Insights Report, Lifecycle management infographic.)

How did we get to the point where critical infrastructure continued to operate unpatched end-of-life technology for years after a fix was available? And how is it possible that so much critical telecommunications equipment remains not only unpatched, but also incapable of being secured at all? What can we do to change the paradigm? And how can a national effort to measure and reduce risk from the technology debt associated with unpatchable technology help avoid future attacks on critical networks?

Just in the past decade, we have witnessed an explosion of innovation in fields such as artificial intelligence and quantum computing—much of which is enabled by new models for developing, iterating, and delivering technology in software from the cloud. These new technologies increasingly connect with foundational infrastructure (e.g., physical networks that deliver the data packets) and operational technology (“OT”) (e.g., actuators and sensors that interact with the physical world). Points of contact between the world of things that think and the world of things that sense and move demand focused attention, given the need to patch, update, and configure any technology exposed to dynamic network environments.

Technologies supporting critical systems were once exclusively built for use in closed, static environments. The risks they encountered could be modeled in advance and reflected in engineering tolerances. For example, materials used in buildings and planes could be tested to ensure they tolerate the loads they are expected to bear.

The same was once true for computers that operate critical physical systems, like oil and gas pipelines or water treatment facilities. Regulations could incorporate these static requirements and mandate that the resulting infrastructure and systems never change—absent new information showing that the original models and estimates were inaccurate. However, in our modern hyper-digitized society, even highly critical operational technologies are tied to digital networks. Everything that can be connected, will be connected.

In contrast to traditional static systems, the risk environment for network-connected IT and OT devices is always changing as the rest of the technological ecosystem evolves. Threat actors learn from and adapt to changes made by defenders seeking to “patch,” or fix, newly discovered vulnerabilities. This dynamism in the risk landscape underscores the importance of regular maintenance for software-powered connected devices. It also highlights the significant risks presented by dependencies on unpatchable, end-of-life devices in critical systems. The future we are building is increasingly at risk given the growing fragility of the underlying IT infrastructure and our failure to account for critical connected devices powered by software that cannot be patched.

Software companies regularly release patches to fix vulnerabilities discovered in their code. This occurs throughout the supported lifespan for that software and is an important security practice. Software developers commonly use development resources, including blocks or “libraries” of code, across multiple generations and types of products. This practice is efficient and offers significant security benefits. Code that is more widely used and patched becomes higher quality over time, whereas software that is not managed will become riskier over time—particularly when connected to dynamic networks. Coding errors found in supported products are regularly reported to the developers, who can write patches to fix the errors, or implement other mechanisms to mitigate risk, including workarounds or configuration changes.

Once a patch is developed and published, it becomes urgent for all who use products relying on the vulnerable software to apply those fixes. However, as new patches are released, adversaries can compare the original vulnerable code with the revised version to identify changes and “reverse engineer” exploits aimed at the flaw. Failure to patch leaves products even more vulnerable, because malicious actors can easily scan Internet-facing devices to find potential victims exposed to available exploits. Disturbingly, the time required for malicious actors to reverse engineer patches and launch successful attacks is becoming ever shorter. Google Mandiant research shows the average time from patch release to exploit detection dropped from 63 days in 2018 to 5 days in 2023.

AI will dramatically amplify this problem because of its ability to rapidly analyze code changes, generate reverse-engineered logic, scale vulnerability discovery, and automate exploit launch in ways that were previously infeasible. The longer a technology’s lifespan, the more likely there will have been significant patches to its underlying software, and therefore more data for attackers to analyze and create exploits against.

Once a product is beyond its supported lifespan, developers cease producing patches for the underlying software. However, other products still within the supported lifespan could share a code base with technology that has been retired from support. As patching occurs for supported technology products, this process actively highlights weak spots in products now too old to support or patch.

This creates a two-fold long-term security risk: (1) every vulnerability fixed today can expose unsupported technology from years ago to attack tomorrow, and (2) every end-of-support technology that no longer receives patches remains exploitable through an ever-growing number of vulnerabilities. As a result, the continued use of end-of-life technology becomes riskier over time, and the risk only compounds the longer a technology remains beyond its supported lifespan.

A commonly repeated aphorism states “what gets measured, gets managed.” Measures can help us to understand whether a problem actually exists at a given moment. We must be clear-eyed about the costs of failing to understand, much less effectively manage, risks from critical infrastructure powered by IT and OT too old to update and secure.

In cybersecurity, risk management is a foundational concept. But just as with the management of financial resources, it is impossible to eliminate all risk without forfeiting the opportunity to achieve desired beneficial outcomes. As the technology environment evolves, threat actors adapt to defensive actions and mitigations, and the resources available for security are limited. Choices have to be made and then periodically adjusted to account for the inherently dynamic nature of cyber risk.

What current risk management frameworks or reporting requirements might address the specific risks stemming from end-of-life technologies? Assuming such tools can be identified, it will become necessary to assess whether they are adequate for the task—or perhaps whether they need adjustments to reflect our current understanding of the risks. Finally, it will then be useful to ascertain whether existing frameworks need to be more widely leveraged—and which policy tools would be necessary to incentivize the desired level of utilization to shift behaviors from the status quo. But the current situation is untenable: critical systems that protect our access to clean water, reliable electricity, emergency communications, and safe transportation are increasingly vulnerable to attacks exploiting software that is not only unpatched, but too old to patch. Measuring the problem is a key first step to managing it.

AI-powered tools may help us develop the necessary inventories to assess dependencies on end-of-life technology. This work may soon also receive a significant boost from an industry-led non-profit effort called Open EOX. The project represents a global coalition of cybersecurity leaders from organizations including Cisco, Dell Technologies, IBM, Microsoft, Oracle, and Red Hat who are developing standardized machine-readable mechanisms to communicate the status of technology as it moves from being sold and supported by a vendor through its end-of-life. This level of transparency will facilitate not only measurement of the problem but the development of metrics to assess progress over time—and potentially highlight areas where new policy levers are needed to effectuate change.
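To make the idea of a machine-readable lifecycle record concrete, here is a minimal illustrative sketch in Python. The field names and values are hypothetical assumptions for illustration only; they do not represent the actual Open EOX schema, which is still under development.

```python
import json
from datetime import date

# A hypothetical vendor-published lifecycle record. Field names are
# illustrative assumptions, not the real Open EOX format.
record_json = """
{
  "vendor": "ExampleCo",
  "product": "edge-router-1000",
  "end_of_sale": "2019-06-30",
  "end_of_support": "2023-06-30"
}
"""

def support_status(record: dict, today: date) -> str:
    """Classify a product's lifecycle stage from its dated milestones."""
    end_of_sale = date.fromisoformat(record["end_of_sale"])
    end_of_support = date.fromisoformat(record["end_of_support"])
    if today > end_of_support:
        return "end-of-life"   # no more patches: plan replacement
    if today > end_of_sale:
        return "support-only"  # still patched, but no longer sold
    return "supported"

record = json.loads(record_json)
print(support_status(record, date(2026, 2, 19)))  # prints "end-of-life"
```

Because records like this can be fetched and parsed automatically, an asset-inventory tool could flag every device in a fleet that has crossed its end-of-support date, turning the measurement problem described above into a routine automated check.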

There are now hopeful signs that policy makers are seeing the importance of these efforts. Last year, the UK government issued guidance for assessing and managing legacy IT risk in public sector systems. In the U.S., the National Defense Authorization Act for fiscal year 2026 mandates a new framework for integrating technical debt assessment, tracking, and management into IT investment decisions and budget justification materials. And in February 2026, the DHS Cybersecurity and Infrastructure Security Agency (CISA) issued a new requirement for federal civilian agencies not only to apply patches, but to affirmatively retire unsupported edge network devices over the next year. Even better, this new CISA directive points explicitly to the industry-led efforts in Open EOX as a mechanism for describing which technology is beyond manufacturer support.

Technology vendors have an obligation to adopt secure development practices that will yield products that are secure by design and default, which will reduce the number of vulnerabilities during their supported lifespan. Customers also need more efficient ways to understand what technology they rely upon, how long it will be supported, and when and how to patch. Operators of technology systems need to apply available patches, use known secure configurations, and effectively adopt “zero-trust” architectures—and they need a plan to manage the ever-increasing risks from reliance on end-of-life technology. Open EOX provides a potential pathway to help manufacturers and their customers effectively communicate about the support status of technology products. AI-powered tools may also help provide compensating controls through additional surveillance, monitoring, isolation, and segmentation on the fly for technology that is vulnerable and cannot be readily replaced because of cost or other reasons.

A pragmatic conclusion about the state of this problem is that there may be room for optimism about future developments in technology, but we will need metrics to fully understand the current state and progress towards a better future. The status quo—where endless use of end-of-life technology is viewed as essentially without cost—is unacceptable. The shared nature of code over generations means that long after a product has been retired from support by its manufacturer, new vulnerabilities will be found and subject to exploitation.

There is hope that AI can help us to find and apply additional protections to technology that is “unpatchable” and where replacement is not feasible in the near term. Industry-led cooperative efforts like Open EOX also provide a hopeful path forward where machine-readable information can be shared more effectively, and risks can be managed better between vendors and their customers. Policy makers should foster such efforts and consider requiring their use to help track and shrink this problem over time.

As Rush’s legendary drummer and lyricist Neil Peart wrote, “if you choose not to decide, you still have made a choice.” Regardless of the frameworks or methodologies we choose, the riskiest action is inaction. It will take a collective assumption of responsibility to ensure we are actively building a more sustainably secure technological future.

The views represented herein are those of the author(s) and do not necessarily reflect the views of the Aspen Institute, its programs, staff, volunteers, participants, or its trustees.