

OT environments don’t fail because the Purdue Model is misunderstood.
They fail because it’s only half applied.
Most environments don’t get exposed because segmentation breaks down.
They get exposed because segmentation is quietly bypassed in ways no one tracks.
Everything looks correct on paper. Levels are defined. Diagrams are approved.
But in the live environment, assets drift, exceptions accumulate, and boundaries that were supposed to protect operations quietly weaken over time.
Take a real-world scenario from a power generation plant.
A historian server, originally placed in Level 3, is opened up for enterprise analytics. IT needs visibility. A vendor needs remote access. A few exceptions are made, nothing unusual. Over time, that system starts behaving less like an OT asset and more like an IT-connected service sitting at the Level 4 boundary.
Nothing breaks immediately.
Network monitoring still shows normal cyclic traffic. Modbus polling intervals remain stable. No alarms are triggered in the control system.
Until one day, unusual traffic starts flowing in.
Not enough to trigger an alert. But enough to create doubt.
IT sees patterns that look like reconnaissance.
OT sees a running turbine that cannot afford disruption.
And now the real question isn’t what the asset is. It’s where it actually belongs, and what it’s allowed to trust.
According to the Cybersecurity and Infrastructure Security Agency, weak segmentation between IT and OT remains one of the most common contributors to industrial cyber incidents, not because controls don’t exist, but because boundaries are inconsistently enforced.
In practice, that gap between design and behavior is where most OT risk lives.
In this blog, we’ll look at how to map assets across Purdue Levels 0–5 in a way that reflects real operational environments, where uptime, safety, and security are constantly competing, and where a diagram is only as accurate as the last exception.
Why Purdue Mapping Fails in Real OT Environments
On paper, the Purdue Model is clean:
- Level 0: Physical process
- Level 1: Basic control (PLCs, RTUs)
- Level 2: Supervisory control (SCADA, HMIs)
- Level 3: Operations management
- Level 4: Enterprise IT boundary
- Level 5: External networks (internet, cloud, third-party access)
But real environments don’t behave like clean layers.
In oil & gas, remote telemetry units often send data directly to cloud analytics platforms, creating implicit Level 1/2 → Level 5 paths.
In power systems, vendor access introduced during outages frequently bypasses intended boundaries and remains long after the event.
The issue isn’t that teams don’t understand Purdue.
It’s that operational pressure continuously reshapes it, and those changes are rarely reflected in how assets are mapped.
A Real OT Incident: Ransomware Through Boundary Drift
Let’s walk through how this actually unfolds.
The Setup (Before the Incident)
A manufacturing environment is structured like this:
- Level 0: Sensors measuring pressure and temperature
- Level 1: PLCs controlling valves and actuators
- Level 2: SCADA system monitoring the process
- Level 3: Historian and engineering workstations
- Level 4: Enterprise IT + boundary systems (patch server, remote access gateway)
- Level 5: External networks (email, vendor systems, internet)
Everything looks compliant.
But over time:
- The historian begins serving enterprise analytics (Level 3 behaving like Level 4)
- The patch server pulls updates from external repositories (Level 5 connectivity)
- Vendors access systems through remote gateways into Level 4
No one updates the architecture.
The Incident (What Actually Happens)
The attack doesn’t start inside the organization.
It starts at Level 5.
A phishing email enters through an external email service and reaches a user in Level 4.
The user clicks. Malware establishes persistence.
From there:
- The attacker gains a foothold in Level 4 (enterprise IT)
- The attacker scans for trusted connections and identifies a patch server bridging IT and OT
- Using that trusted pathway, the attacker pivots into Level 3 (operations network)
- From Level 3, it reaches:
- Historian systems
- Engineering workstations
This movement doesn’t break segmentation.
It follows it.
The attacker uses:
- Scheduled patch synchronization over SMB and RPC
- Approved remote management channels
- Allowlisted communication paths that were never meant to be inspected closely
From the network’s perspective, this is normal traffic.
From an attacker’s perspective, it’s a pre-built bridge.
What OT Actually Sees
At this point, OT doesn’t see “ransomware.”
It sees signals that don’t quite fit.
- Unexpected OPC UA read requests hitting the historian outside normal polling cycles
- A spike in SMB traffic from Level 4 into Level 3, which should not exist under strict segmentation
- Historian queries increasing without any operator interaction in the HMI
- PLC logic remains unchanged, but access patterns around it shift
None of this justifies shutting down a process.
But together, it points to something that isn’t operational.
It’s architectural.
Left unchecked, this kind of access pattern allows attackers to enumerate tag structures, understand process dependencies, and identify which systems can be influenced without triggering immediate alarms.
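To make the “outside normal polling cycles” signal concrete, here is a minimal Python sketch of cadence-based flagging. The timestamps, expected interval, and tolerance are all invented for illustration; a real deployment would baseline per client and per tag.

```python
# Sketch: flag historian read requests that fall outside the expected
# polling cadence. Timestamps (seconds) and tolerance are illustrative.

EXPECTED_INTERVAL = 10.0   # normal polling period, seconds
TOLERANCE = 0.2            # 20% jitter allowed

# Observed request times from one client, e.g. parsed from historian logs.
request_times = [0.0, 10.1, 20.0, 23.4, 30.2, 31.0, 40.1]

def off_cycle(times, expected, tol):
    """Yield requests whose gap from the previous one breaks cadence."""
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if abs(gap - expected) > expected * tol:
            yield cur, gap

for t, gap in off_cycle(request_times, EXPECTED_INTERVAL, TOLERANCE):
    print(f"request at t={t}s: gap {gap:.1f}s deviates from {EXPECTED_INTERVAL}s cycle")
```

On its own, a single off-cycle read means nothing. A cluster of them from one client, with no matching operator activity, is the architectural signal described above.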
The Real Problem
The issue isn’t the malware.
It’s that:
- The historian is no longer behaving like a Level 3 asset
- The patch server has become a bridge between Level 5, Level 4, and Level 3
- Trust boundaries are unclear
So, when IT recommends isolation, OT hesitates.
Because isolating Level 3 affects:
- Operator visibility
- Engineering workflows
- Active production
This is what Purdue drift looks like in reality.
Not failure.
Ambiguity under pressure.
In a power generation environment, even a few minutes of hesitation at this stage can delay isolation decisions enough to affect grid stability or force operators into manual control under uncertainty.
Mapping Assets Correctly (Without Breaking Reality)
You don’t fix this by redrawing diagrams.
You fix it by aligning mapping with actual behavior.
- Map by Function, Not Ownership
Assets belong to levels based on what they do, not who owns them.
A historian feeding enterprise analytics is no longer purely Level 3.
Example:
- A historian accessed by enterprise analytics → drifting toward Level 3.5
- An HMI exposed via remote desktop → no longer purely Level 2
If behavior crosses boundaries, classification must follow.
How to implement this:
Use passive network monitoring to observe real communication patterns. In most OT environments, this is essential because active scanning can disrupt fragile devices or legacy controllers.
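As a rough sketch of what behavior-based classification can look like, the Python below compares documented Purdue levels against the levels of observed communication peers. The asset names, level table, and flow records are all hypothetical; in practice the flows would come from a passive monitoring sensor’s export.

```python
# Minimal sketch: flag assets whose observed peers imply a different
# Purdue level than the one documented for them. All names are illustrative.

# Documented placement (normally pulled from an asset inventory).
DOCUMENTED_LEVELS = {
    "historian-01": 3,
    "scada-hmi-02": 2,
    "patch-srv-01": 4,
}

# Flows observed passively: (source asset, destination asset, dest level).
observed_flows = [
    ("historian-01", "bi-analytics-01", 4),   # enterprise analytics pull
    ("historian-01", "scada-hmi-02", 2),      # expected supervisory traffic
    ("patch-srv-01", "vendor-repo", 5),       # external update repository
]

def drift_report(flows, documented):
    """Report assets that regularly talk more than one level away."""
    findings = []
    for src, dst, dst_level in flows:
        src_level = documented.get(src)
        if src_level is None:
            findings.append(f"{src}: not in inventory (talks to {dst})")
        elif abs(src_level - dst_level) > 1:
            findings.append(
                f"{src} (documented L{src_level}) communicates with "
                f"{dst} at L{dst_level} -- possible drift"
            )
    return findings

for line in drift_report(observed_flows, DOCUMENTED_LEVELS):
    print(line)
```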
- Treat Level 4 as an Enforced Boundary, Not a Convenience Layer
The Level 4 boundary is where IT and OT meet, and where most risk accumulates.
It should enforce separation.
But in practice, it becomes:
- A shortcut for data access
- A bridge for vendor connectivity
In power utilities, it’s common to see patch servers, AV servers, and jump hosts all sitting in Level 4, each with different trust assumptions.
That’s not segmentation. That’s aggregation of risk.
How to implement this:
Apply strict allowlisting at firewalls, restrict bidirectional traffic, and log all cross-boundary communication. Every connection should be intentional, not inherited. In practice, this often fails because temporary rules added during outages or vendor access are never removed, slowly turning the boundary into a permanent bridge.
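One way to keep “temporary” rules from becoming permanent is to require every cross-boundary rule to carry an owner and an expiry, then audit against that. Below is a minimal sketch, assuming a simplified rule export; real firewall exports differ, and all fields here are placeholders.

```python
from datetime import date

# Hypothetical export of cross-boundary firewall rules. Real firewall
# exports vary; the fields here are illustrative.
rules = [
    {"id": "r-101", "desc": "historian -> analytics", "owner": "ot-team",
     "expires": date(2024, 6, 30), "temporary": True},
    {"id": "r-102", "desc": "vendor jump host", "owner": None,
     "expires": None, "temporary": True},
    {"id": "r-103", "desc": "patch sync L4 -> L3", "owner": "it-ops",
     "expires": None, "temporary": False},
]

def audit(rules, today=None):
    """Flag temporary rules that lack an owner/expiry or have expired."""
    today = today or date.today()
    for r in rules:
        if r["temporary"] and r["expires"] is None:
            yield f'{r["id"]}: temporary rule with no expiry ({r["desc"]})'
        elif r["expires"] and r["expires"] < today:
            yield f'{r["id"]}: expired on {r["expires"]} but still active'
        if r["owner"] is None:
            yield f'{r["id"]}: no accountable owner'

for finding in audit(rules):
    print(finding)
```

The audit itself is trivial; the discipline it encodes is not. A rule with no owner and no expiry is exactly the kind of inherited connection this section warns about.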
- Map Data Flows, Not Just Assets
In many environments, data skips layers entirely.
A Level 2 system talking directly to Level 5 invalidates segmentation, regardless of diagrams.
Mapping must include:
- Who initiates communication
- What protocols are used
- Whether the flow is persistent or temporary
How to implement this:
Use flow monitoring (NetFlow/SPAN) to map real traffic paths and identify undocumented communication channels.
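As an illustration, here is a small sketch that maps flow records to Purdue levels by subnet and flags flows that skip a level. The subnets, addresses, and protocol labels are invented; real records would come from a NetFlow collector or a SPAN capture.

```python
import ipaddress

# Illustrative mapping of subnets to Purdue levels; yours will differ.
SUBNET_LEVELS = {
    ipaddress.ip_network("10.2.0.0/24"): 2,   # SCADA / HMI
    ipaddress.ip_network("10.3.0.0/24"): 3,   # operations
    ipaddress.ip_network("10.4.0.0/24"): 4,   # enterprise boundary
    ipaddress.ip_network("0.0.0.0/0"): 5,     # everything else = external
}

def level_of(ip):
    """Return the Purdue level of an address, most-specific subnet first."""
    addr = ipaddress.ip_address(ip)
    for net in sorted(SUBNET_LEVELS, key=lambda n: n.prefixlen, reverse=True):
        if addr in net:
            return SUBNET_LEVELS[net]

# Simplified flow records (src, dst, protocol), e.g. from NetFlow.
flows = [
    ("10.2.0.15", "10.3.0.8", "opcua"),     # L2 -> L3: expected
    ("10.2.0.15", "52.10.1.9", "https"),    # L2 -> external: skips layers
]

for src, dst, proto in flows:
    s, d = level_of(src), level_of(dst)
    if abs(s - d) > 1:
        print(f"layer-skipping flow: L{s} {src} -> L{d} {dst} ({proto})")
```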
- Validate Boundaries Under Failure Conditions
Architectures don’t fail during normal operations.
They fail under stress.
Always ask:
- What happens if Level 3 goes offline?
- Can Level 2 still operate safely?
- Does Level 3.5 become a bypass path during outages?
In power systems, fallback modes often introduce temporary connectivity that never gets removed.
How to implement this:
Run controlled isolation tests (e.g., disconnect Level 3) and observe whether lower levels operate safely without unintended bypass paths.
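A minimal harness for such a test might look like the sketch below: a list of reachability expectations checked with simple TCP probes. The hosts and ports are illustrative, and probes like this should only run in a scheduled maintenance window, never blindly against fragile devices.

```python
import socket

# Reachability expectations during a planned Level 3 isolation test.
# Hosts and ports are hypothetical; adapt to your own architecture.
CHECKS = [
    # (description, host, port, expected_reachable)
    ("HMI -> PLC (must survive isolation)", "10.1.0.10", 502, True),
    ("HMI -> historian (expected to drop)", "10.3.0.8", 4840, False),
    ("HMI -> enterprise share (bypass path?)", "10.4.0.20", 445, False),
]

def reachable(host, port, timeout=2.0):
    """Simple TCP connect probe; returns True if the port answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for desc, host, port, expected in CHECKS:
    actual = reachable(host, port)
    status = "OK" if actual == expected else "VIOLATION"
    print(f"[{status}] {desc}: reachable={actual}, expected={expected}")
```

The interesting failures are the “reachable when it shouldn’t be” cases: those are the bypass paths that only show up under stress.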
- Re-map After Every “Temporary” Change
Most drift starts with one sentence:
“Just for now. We’ll allow access.”
It usually looks like this:
- A vendor needs temporary access to diagnose a fault
- Emergency patching requires an unusual connectivity path
- A system upgrade creates a transient dependency that was never formally closed
You need to reassess:
- Has the asset’s behavior changed?
- Has its trust level shifted?
- Has the temporary path been removed?
If yes, the Purdue mapping must change too.
How to implement this:
Integrate segmentation validation into change management. Every vendor access, patch cycle, or emergency change should trigger a re-evaluation of asset placement.
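A lightweight way to wire this in is a hook that inspects each change record and decides whether a re-mapping review is required before the ticket can close. The ticket fields and trigger types below are hypothetical:

```python
# Sketch of a change-management hook: decide whether a change record
# should trigger a Purdue re-mapping review. Field names are illustrative.

REMAP_TRIGGERS = {"vendor_access", "emergency_patch", "new_integration",
                  "firewall_exception"}

def needs_remap_review(change: dict) -> bool:
    """True if the change touches trust boundaries or asset behavior."""
    if change.get("type") in REMAP_TRIGGERS:
        return True
    # Any change that adds connectivity across levels also qualifies.
    return change.get("adds_cross_level_path", False)

ticket = {
    "id": "CHG-4182",
    "type": "vendor_access",
    "summary": "Temporary OEM access to historian for fault diagnosis",
    "adds_cross_level_path": True,
}

if needs_remap_review(ticket):
    print(f"{ticket['id']}: re-evaluate asset placement before closing")
```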
How Purdue Mapping Aligns with IEC 62443
The Purdue Model defines structure.
IEC 62443 defines enforcement.
- Purdue Levels → where assets belong
- Zones → which assets share trust
- Conduits → how communication is controlled
For example:
- Level 3 systems form an operations zone
- Level 2 systems form a control zone
- The boundary between them becomes a conduit with strict rules
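To make zones and conduits concrete, here is a minimal Python sketch of the evaluation logic: a flow is allowed within a zone, or across zones only if a conduit explicitly permits the protocol. Zone names, assets, and protocols are illustrative, not taken from the standard.

```python
# Minimal zone/conduit model in the spirit of IEC 62443.
# Zones, members, and conduit rules below are all invented examples.

ZONES = {
    "control":    {"scada-hmi-02", "plc-07"},
    "operations": {"historian-01", "eng-ws-03"},
    "enterprise": {"patch-srv-01", "bi-analytics-01"},
}

# Conduits: (from zone, to zone) -> allowed protocols.
CONDUITS = {
    ("control", "operations"): {"opcua"},
    ("operations", "enterprise"): {"https"},   # outbound only
}

def zone_of(asset):
    return next((z for z, members in ZONES.items() if asset in members), None)

def flow_allowed(src, dst, proto):
    """Allowed inside a zone, or through a matching conduit; else denied."""
    sz, dz = zone_of(src), zone_of(dst)
    if sz is None or dz is None:
        return False            # unknown assets trust nothing
    if sz == dz:
        return True
    return proto in CONDUITS.get((sz, dz), set())

print(flow_allowed("plc-07", "scada-hmi-02", "modbus"))         # True: same zone
print(flow_allowed("historian-01", "bi-analytics-01", "https")) # True: conduit
print(flow_allowed("bi-analytics-01", "historian-01", "https")) # False: no reverse conduit
```

Note that the conduit table is directional: allowing operations → enterprise does not allow the reverse. That is exactly the distinction drifting boundaries erase.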
Most organizations document levels, but only a few enforce zones and conduits properly.
That’s where drift turns into exposure, and it’s why many mature environments move toward IEC 62443 zone-based segmentation, where trust is enforced continuously rather than assumed from static architecture diagrams.
Conclusion: Purdue Isn’t About Levels. It’s About Trust
The biggest misconception is that Purdue is about placement. It’s not.
It’s about:
- What each system can trust
- What it can communicate with
- And what happens when that trust is misused
In real OT environments, the model is only as accurate as its last exception.
The goal isn’t to eliminate exceptions.
It’s to make sure they don’t silently redefine your architecture.
Because most incidents don’t begin with a failure.
They begin with a boundary that already moved, and no one realized it in time.
And in OT environments, you’re not just protecting systems.
You’re protecting processes that don’t fail safely when assumptions are wrong.
If you want to explore how organizations are continuously validating Purdue alignment in live OT environments, you can take a closer look at our OT cybersecurity assessment solutions.
Frequently Asked Questions

What is the Purdue Model, and how is it used?
The Purdue Model is a framework used to structure industrial control systems into hierarchical levels, from physical processes (Level 0) up to enterprise IT systems (Levels 4/5). It defines how systems should communicate and where trust boundaries should exist. In practice, it is used to enforce segmentation between operational technology and IT networks, reducing the risk of cyber threats spreading into critical control environments.

Why does asset mapping fail?
Asset mapping often fails because environments evolve faster than documentation. Systems get temporary access, new integrations are added, and data flows change, but mappings are not updated. Over time, assets behave differently than their assigned Purdue level, creating gaps between architecture and reality. These gaps are where most security and operational risks emerge.

How do you map assets correctly?
Correct mapping requires focusing on how an asset behaves, not just where it was originally placed. That means analyzing its communication paths, dependencies, and access patterns. If a system interacts with higher-level networks or external services, its classification may need to change. Continuous validation is key, especially after system updates, vendor access, or infrastructure changes.

What is Purdue Model drift?
Purdue Model drift occurs when systems shift away from their assigned level as a result of operational changes, such as added connectivity, remote access, or integration with enterprise systems. The drift is usually unintended and goes unnoticed until it causes a security or operational problem. Managing it requires ongoing review of asset placement and data flows.

What are the benefits of accurate mapping?
Accurate mapping strengthens segmentation, reduces the attack surface, and keeps critical control systems separated from less trusted networks. It also helps teams make faster decisions during incidents, because boundaries and accountabilities are clear. When mapping reflects real-world behavior, it becomes a practical tool for both security and operational resilience rather than a theoretical exercise.