

There’s a version of cybersecurity that exists in policies and frameworks and then there’s the version that exists in real environments.
In the ideal version, every system is up to date, vulnerabilities are patched on time, and compliance is simply a matter of following a checklist. But in real-world environments, especially those built over years or even decades, things are rarely that straightforward.
You inherit systems that were never designed to be updated, that quietly run critical operations, and that no one wants to touch because even a small change could create a much bigger problem. Yet the expectation of staying compliant never goes away.
In this blog, we’ll explore what happens when patching isn’t an option, why this situation is more common than most teams admit, and how you can stay compliant by managing risk in a way that actually works in real environments.
The uncomfortable truth: some systems are built to stay unchanged
Legacy systems didn’t survive this long by accident. In many cases, they continue to exist because they were embedded into the operation early on and businesses cannot afford downtime.
Replacing them is rarely a simple upgrade. It often means:
- Rebuilding integrations
- Retesting entire workflows
- Accepting downtime or operational risk
That’s why many organizations choose to maintain their legacy systems rather than replace them, an operationally rational decision with real cybersecurity consequences.
This trade-off between operational continuity and delayed modernization has direct implications for both cybersecurity and operations. Keeping legacy systems running avoids downtime and disruption, but it also increases the risk of unpatched vulnerabilities, limits visibility, and slows response to emerging threats. Meanwhile, security expectations continue to evolve while these systems remain the same, and the gap widens as time passes.
When patching stops being the safest option
Patching is widely considered the best practice, and in most modern systems, it absolutely is. But legacy devices introduce a different kind of complexity.
In these environments, applying a patch can sometimes create more risk than leaving the system unchanged, and sometimes patching simply is not an option because:
- The vendor no longer provides updates
- The system runs on unsupported software
- There is no safe environment to test changes
This shifts the decision from a technical one to a strategic one. The question is no longer simply “Is the device patched?” but “What is the operational risk of applying the patch, and what is the impact of not patching?”
IBM’s guidance on risk-based patching recommends that patching decisions always consider operational impact, not just technical vulnerability. In certain environments, especially those tied to critical processes, even a well-tested patch can introduce instability or unintended side effects.
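The trade-off described above can be sketched as a simple comparison: the risk of deferring a patch against the risk of applying it. This is an illustrative sketch only; the weights and factor names below are assumptions for demonstration, not a standard scoring formula.

```python
# Illustrative risk-based patch decision (all weights and scales are assumptions).

def patch_decision(cvss, exploit_likelihood, patch_failure_risk, downtime_cost):
    """Compare the risk of deferring a patch against the risk of applying it.

    cvss: vulnerability severity, 0-10
    exploit_likelihood: estimated probability the flaw is exploited, 0-1
    patch_failure_risk: probability the patch destabilizes the system, 0-1
    downtime_cost: relative operational impact of an outage, 0-10
    """
    risk_of_deferring = (cvss / 10) * exploit_likelihood
    risk_of_patching = patch_failure_risk * (downtime_cost / 10)
    if risk_of_deferring > risk_of_patching:
        return "patch"
    return "defer and apply compensating controls"

# A severe, likely-exploited flaw on a resilient system favors patching:
print(patch_decision(cvss=9.8, exploit_likelihood=0.6,
                     patch_failure_risk=0.05, downtime_cost=3))
# A fragile, process-critical system may favor deferral:
print(patch_decision(cvss=6.5, exploit_likelihood=0.1,
                     patch_failure_risk=0.4, downtime_cost=9))
```

The point is not the arithmetic itself but that both sides of the decision are made explicit and comparable, which is what auditors and operations teams each need to see.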
The gap no one prepares you for: compliance vs reality
Most compliance frameworks, including patch management guidance from the Cybersecurity and Infrastructure Security Agency (CISA) and NIST, as well as standards like IEC 62443, clearly stress the importance of timely updates and vulnerability remediation.
But these frameworks don’t always reflect the constraints of OT environments, and this disconnect creates a gap, one that many teams struggle to explain.
On one side, there are clear expectations to fix vulnerabilities, keep systems updated, and reduce exposure. On the other side, there are operational realities: uptime requirements, systems that cannot be safely patched because of vendor end-of-life, an unsupported OS, or the absence of a test environment, and dependencies on OEMs, integrated applications, and connected infrastructure that are too complex to risk disrupting.
Without a structured way to bridge this gap, teams often end up either forcing changes that introduce new risks or avoiding the issue altogether, and neither approach holds up well in the long run.
The shift that changes everything: from patching to control
The turning point comes when you stop treating patching as the goal and start focusing on what compliance is actually trying to achieve.
At its core, compliance is about control: ensuring that vulnerabilities, whether patched or not, do not turn into real incidents.
Once you start looking at it from the perspective of operational continuity and practicality, the conversation becomes more realistic, and you start asking:
- Can this system be accessed easily?
- Is it exposed to the wrong network?
- Would we know if something went wrong?
This shift, from fixing vulnerabilities to controlling outcomes, is what allows organizations to move forward without forcing unrealistic solutions.
4 ways to stay compliant when patching isn’t possible
When patching is off the table, risk doesn’t disappear; it just needs to be handled differently. The most effective approach is not a single fix, but a combination of controls that work together to reduce exposure and improve visibility.
- Contain the system before you try to fix it
If a system is vulnerable, the first priority is to restrict that vulnerability’s exposure as much as possible.
Network segmentation plays a key role here. Isolating legacy systems from the rest of the environment and limiting their communications reduces the likelihood of lateral movement.
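The principle behind segmentation is default-deny: nothing talks to the legacy system unless it is explicitly approved. A minimal sketch of that idea, using hypothetical host names and a Modbus/TCP port as illustrative assumptions:

```python
# Hypothetical allow-list for a legacy segment: deny by default, permit
# only explicitly approved (source, destination, port) flows.
# Host names and the Modbus/TCP port are illustrative assumptions.

ALLOWED_FLOWS = {
    ("eng-workstation", "legacy-plc", 502),  # Modbus/TCP from engineering only
    ("historian",       "legacy-plc", 502),  # historian polling the PLC
}

def is_permitted(src, dst, port):
    """Default-deny: a flow is allowed only if it is on the approved list."""
    return (src, dst, port) in ALLOWED_FLOWS

print(is_permitted("eng-workstation", "legacy-plc", 502))  # True
print(is_permitted("office-laptop", "legacy-plc", 502))    # False
```

In practice this logic lives in firewall rules or conduit definitions between zones, but the design choice is the same: enumerate the few flows the process actually needs and block everything else.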
- Reduce access until it becomes predictable
Most real-world incidents do not start with a vulnerability. They start with uncontrolled, unrestricted access.
You can minimize the chances of exploitation by locking down who is allowed to access the system: enforcing multi-factor authentication, replacing shared accounts with individual credentials, and implementing role-based access. Even when the underlying system remains the same, these measures reduce its risk profile.
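Combining these measures means an access request must clear every gate, not just one. A minimal sketch, assuming hypothetical role names and targets:

```python
# Hypothetical access check combining role-based access and MFA.
# Role names and target hosts are illustrative assumptions.

ROLE_PERMISSIONS = {
    "ot-engineer": {"legacy-plc"},  # engineers may reach the legacy host
    "operator": set(),              # operators get no direct access to it
}

def can_access(user_role, target, mfa_verified):
    """Grant access only to authorized roles that have completed MFA."""
    if not mfa_verified:
        return False  # MFA is a hard gate regardless of role
    return target in ROLE_PERMISSIONS.get(user_role, set())

print(can_access("ot-engineer", "legacy-plc", mfa_verified=True))   # True
print(can_access("ot-engineer", "legacy-plc", mfa_verified=False))  # False
print(can_access("operator", "legacy-plc", mfa_verified=True))      # False
```

Because every grant maps to a named role and a verified identity, the same structure that blocks attackers also produces the per-user audit trail that shared accounts can never provide.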
- Add visibility where there was none
Legacy systems often operate without active logging, monitoring, or clear visibility into network activity. This lack of visibility creates blind spots, especially in OT environments, where undetected threats can persist and escalate without being noticed.
Implementing monitoring through solutions such as network intrusion detection systems (NIDS), security information and event management (SIEM) platforms, or passive network monitoring tools helps detect unusual activity and potential threats. In environments where patching is limited, this visibility becomes a critical second line of defense, enabling faster detection and response even when prevention is not fully possible.
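One common passive technique is baselining: learn which hosts normally talk to the legacy system, then flag anything outside that set. A simplified sketch, with addresses and the baseline invented for illustration:

```python
# Simplified passive-monitoring sketch: flag connections to a legacy host
# from sources absent in the learned baseline. Addresses are illustrative.

BASELINE_PEERS = {"10.0.5.10", "10.0.5.11"}  # assumed known-good talkers

def detect_anomalies(connections, baseline=BASELINE_PEERS):
    """Return source addresses that contacted the legacy host but do not
    appear in the historical baseline of known-good peers."""
    return sorted({src for src, dst in connections if src not in baseline})

observed = [
    ("10.0.5.10", "legacy-plc"),    # expected engineering workstation
    ("192.168.1.44", "legacy-plc"), # unexpected talker -- worth an alert
]
print(detect_anomalies(observed))  # ['192.168.1.44']
```

Real NIDS and SIEM tooling adds protocol awareness and correlation on top, but even this simple new-talker rule catches a large class of lateral-movement attempts against systems whose traffic patterns rarely change.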
- Build layers of protection around the weakness
When a vulnerability cannot be removed, the focus shifts to reducing the likelihood of it being exploited. This is achieved by implementing compensating controls around the vulnerable system.
Compensating controls, such as network segmentation, strict access controls, firewall rules, and traffic filtering, are widely recognized in frameworks like NIST and IEC 62443 as effective ways to manage risk when direct remediation is not feasible. These controls do not eliminate the vulnerability itself, but they significantly reduce exposure and limit the potential impact of an attack.
Over time, combining these controls creates a layered defense strategy, where multiple safeguards work together to contain risk. In such an environment, the presence of a vulnerability does not automatically translate into a successful exploit, as attackers encounter multiple barriers that restrict access and movement.
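The intuition behind layering can be made concrete with simple arithmetic: if each control independently stops an attack with some probability, the chance that every layer fails is the product of the individual bypass probabilities. The numbers below are invented for illustration, and real attacks can violate the independence assumption, but the compounding effect is the point.

```python
# Illustrative layered-defense arithmetic. Bypass probabilities are invented,
# and layers are assumed independent, which real attacks may violate.

def residual_exploit_probability(bypass_probs):
    """Probability that an attacker bypasses every layer in sequence."""
    p = 1.0
    for bp in bypass_probs:
        p *= bp
    return p

# Hypothetical layers: segmentation, access control, monitored response.
layers = [0.2, 0.3, 0.5]
print(round(residual_exploit_probability(layers), 3))  # 0.03
```

Each individual layer here is far from perfect, yet together they cut the residual exploit probability to a fraction of the weakest single control, which is exactly why compensating controls are evaluated as a set rather than one by one.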
Why a risk-based, compensating control approach works
Operators and engineers often assume that auditors expect fully patched, up-to-date systems with no exceptions. In reality, auditors are looking for something more practical and defensible: clear risk understanding, structured controls, and demonstrable evidence.
When patching is not possible, a risk-based approach using compensating controls ensures that vulnerabilities are still actively managed. The expectation is not to eliminate the issue, but to clearly document why patching is not feasible, define the associated risks, implement alternative controls, and continuously monitor their effectiveness.
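The documentation itself can be structured rather than free-form. A hypothetical sketch of an exception record, with field names invented to show the kind of information a defensible entry carries:

```python
# Hypothetical structure for a documented patching exception -- the kind of
# defensible record auditors look for. Field names and values are illustrative.

from dataclasses import dataclass

@dataclass
class PatchException:
    system: str                    # affected asset
    vulnerability: str             # tracked identifier (placeholder below)
    reason_not_patched: str        # why remediation is not feasible
    compensating_controls: list    # controls reducing likelihood/impact
    review_date: str               # when the exception is re-evaluated

    def audit_summary(self):
        controls = ", ".join(self.compensating_controls)
        return (f"{self.system}: {self.vulnerability} not patched "
                f"({self.reason_not_patched}); controls: {controls}; "
                f"next review {self.review_date}")

record = PatchException(
    system="legacy-plc",
    vulnerability="CVE-XXXX-YYYY",  # placeholder identifier
    reason_not_patched="vendor end-of-life, no test environment",
    compensating_controls=["segmentation", "MFA on jump host", "NIDS"],
    review_date="2025-06-01",
)
print(record.audit_summary())
```

A review date is deliberately part of the record: an exception without a re-evaluation trigger is how "temporary" unpatched systems quietly become permanent.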
When this is done properly, unpatched systems are no longer seen as compliance failures. Instead, they become managed and defensible risks, supported by the clear documentation and evidence needed for audit.
Common mistakes that weaken compliance efforts
Even when the right controls are in place, certain patterns can quietly undermine compliance if not addressed properly.
One common mistake is treating legacy systems as temporary exceptions. In reality, these systems remain in operation for years, and without a long-term strategy, vulnerability gaps accumulate. Without consistent control implementation you cannot assess risk, and without risk assessment your monitoring lacks a baseline to work from.
Another issue is implementing controls without structure. Measures like segmentation, access restrictions, and monitoring are effective only when they are consistently applied, properly documented, and integrated into a broader security control framework, such as IEC 62443 or a site-level defense-in-depth architecture. Without this, a system can fail during an actual incident despite being well documented for audit.
A third critical weakness is failing to produce evidence of control effectiveness. Controls that cannot be demonstrated effectively do not exist, not just from a compliance perspective but during an actual incident as well. When something goes wrong, the absence of logs, configurations, or change history leaves engineers without the context needed to investigate, contain, or recover quickly.
This turns what could have been a manageable event into a prolonged outage or uncontrolled escalation. Logs, configurations, and change records are not administrative overhead, they are operational safeguards that enable faster troubleshooting, informed decision-making, and effective incident response. Without them, both compliance and real-world resilience break down.
Designing around reality, not against it
Legacy systems are not going away overnight. In OT environments, including water utilities, oil and gas, electricity, and manufacturing, they are part of the long-term operational reality, quietly supporting operations while newer systems are built around them.
The organizations that handle this well are not the ones trying to force ideal solutions. They are the ones that honestly identify their constraints, build control structures that work within those constraints, and document everything so that their decisions are defensible.
Where a structured approach makes the difference
Legacy systems don’t fit neatly into modern security advice, which typically emphasizes regular patching, continuous updates, and rapid vulnerability remediation. While these practices are effective in modern IT environments, they fall short in legacy OT environments, where systems are fragile, critical, deeply interdependent, or difficult to modify, and the expectation to patch everything is unrealistic.
What matters more is not removing every vulnerability but understanding and controlling risk. This requires a shift toward a more practical, risk-based approach, one that focuses on gaining visibility, limiting unnecessary access and connectivity, and implementing compensating controls such as segmentation and monitoring to reduce overall risk.
At the same time, these measures should not be treated as permanent solutions. They need to be part of a broader, forward-looking strategy that includes gradual modernization, improved testing environments, and stronger alignment between IT and OT security practices. This ensures that legacy systems are not just maintained but progressively reduced as a long-term risk.
Ultimately, managing legacy systems is not about forcing them to meet modern standards they were never designed for. It is about applying structured, defensible controls that reflect real operational constraints.
If you’re working with unpatched systems, the next step is to evaluate your environment, identify gaps in access, visibility, and control, and take targeted action to strengthen these areas to ensure both security and compliance.
To learn more about how ACET Solutions helps organizations achieve OT compliance with local and international regulations, visit our website.
Frequently asked questions

How do you secure a legacy system that cannot be patched?
Securing legacy systems that cannot be patched requires a layered approach aimed at reducing exposure rather than removing the vulnerability. This normally begins with network segmentation, isolating the system so it cannot be easily reached from other parts of the environment. Access is then restricted so that only a small number of authorized users can connect to it. Monitoring is introduced to identify unusual activity, which is vital when prevention cannot be complete. Over time, further protective measures such as traffic filtering or additional perimeter safeguards are added. The overall aim is to make the vulnerability hard to exploit and any attempt easy to detect by putting several barriers in place.
Can you pass a compliance audit with unpatched systems?
Yes, a compliance audit can be passed even when some systems are not patched, provided that the risks are managed and reported accordingly. Compliance frameworks are usually concerned with risk management rather than rigid technical actions. When patching is not possible, organizations should adopt compensating controls that minimize the probability and consequences of exploitation. Such controls should be well documented, with evidence that they are in operation and in use. Auditors will normally seek a clear explanation of why patching could not be done, what risks remain, and how those risks are being managed. Done correctly, even unpatched systems can be part of a compliant environment.
What are compensating controls?
Compensating controls are alternative security measures employed when a primary control, like patching, cannot be implemented. They aim to minimize the probability of a vulnerability being exploited rather than eliminating it. For example, an outdated system that cannot be patched may be segregated from the primary network, limited to a small number of users, and closely monitored for suspicious activity. These measures combine to reduce risk, even though the root cause does not go away. From a compliance standpoint, compensating controls are generally accepted provided they are effective, well implemented, and backed by supporting evidence. They offer a pragmatic means of ensuring security when optimal solutions are not attainable.
Why is patching sometimes not an option for critical systems?
Patching may not be an option in essential systems because the risk of downtime can outweigh the benefits of the update. When systems run critical processes, even a minor failure can cause outages, financial loss, or safety issues. Moreover, older systems may depend on outdated software or complicated integrations that are not compatible with new patches. Testing updates in such environments can also be challenging when there is no dedicated test environment. Because of this, organizations should carefully consider whether a patch will enhance security or introduce new vulnerabilities, and in certain situations the less risky alternative is to leave the system as is and handle the risk differently.
How does network segmentation help?
Network segmentation limits how far an exploited vulnerability can spread. By placing a legacy system in a separate network segment, it is no longer reachable from the rest of the environment, eliminating many of the routes through which an attacker could access it. It also helps ensure that other systems are not compromised by a possible breach. Strict access controls can be used in conjunction with segmentation to decrease exposure even further, so that only specific users or systems can communicate with the legacy system. This practice is widely recommended because it addresses risk at the network level, one of the most effective means of securing systems that cannot be patched.
What evidence do auditors expect for unpatched systems?
Auditors require clear, organized evidence showing that the risks of unpatched systems are understood and managed. This includes documentation of why the system cannot be patched, for example operational limitations or loss of vendor support. They also look for proof of compensating controls, such as network configurations, access control policies, and monitoring setups. It is particularly important that logs and reports show the controls are not only implemented but continuously upheld. The goal is to present a complete picture in which the risk has been recognized, justified, controlled, and supported by evidence, making it clear that the organization is not ignoring the issue but handling it in an accountable way.