Most systems do not behave according to their design documentation, their architecture diagrams or the expectations expressed during discussions.
They behave according to the sum of their runtime conditions — conditions that are rarely fully visible, rarely fully controlled, and often only understood after something goes wrong.
This gap between intended behaviour and actual behaviour is one of the core reasons why modern systems surprise their operators, drift over time and resist prediction.
This article explains where real behaviour comes from and why the intended architecture almost never matches reality.
1. Architecture Describes Intent, Not Behaviour
Architecture diagrams describe:
- how components should interact
- which responsibilities should exist
- which flows should be followed
- which boundaries should be clear
- which constraints should apply
This is intent.
But behaviour emerges from:
- actual runtime state
- effective configuration
- inherited defaults
- side effects
- timing
- load patterns
- network conditions
- human interventions
- historical leftovers
Architecture is a design.
Behaviour is an outcome.
The two align less often than we assume.
2. Behaviour Emerges From Layers Nobody Sees
Runtime behaviour is not dictated by the visible configuration alone.
It emerges from the combined influence of:
2.1 Explicit configuration
Everything teams intentionally set.
2.2 Inherited configuration
Machine-wide, global or role-based values that override or silence explicit settings.
2.3 Runtime defaults
The invisible layer that shapes 80–90% of all system behaviour.
2.4 Fallback logic
Rules like “if X is missing, use Y” that nobody documents.
2.5 Version-dependent changes
Library updates, patch-level changes, framework behaviours.
2.6 Historical artefacts
Leftovers from earlier incidents or temporary fixes.
2.7 Operational shortcuts
Manual tweaks applied under pressure.
2.8 Environment conditions
Load, latency, disk pressure, CPU scheduling, entropy, available services.
When these layers interact, behaviour becomes emergent, not purely deterministic.
An architecture diagram cannot capture any of this.
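A small sketch can make the layering concrete. The layer names, keys and merge order below are invented for illustration, not taken from any specific product; the point is only that the effective result emerges from the merge, not from any single file.

```python
# Illustrative only: three configuration layers merging into one
# effective configuration. Keys and values are invented for the sketch.
RUNTIME_DEFAULTS = {"timeout_s": 30, "retries": 3, "tls": "system"}
EXPLICIT = {"retries": 10}        # what the team intentionally set
INHERITED = {"timeout_s": 5}      # e.g. a machine-wide policy applied later

def effective_config(*layers):
    """Later layers override earlier ones; missing keys fall back silently."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

print(effective_config(RUNTIME_DEFAULTS, EXPLICIT, INHERITED))
# {'timeout_s': 5, 'retries': 10, 'tls': 'system'}
# The team set retries, never touched timeout_s or tls, and the
# machine-wide policy quietly wins over whatever they expected.
```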
3. Systems Behave Differently Because They Have Different Histories
Two systems deployed from the same pipeline can still behave differently:
- one had a hotfix
- one was rebooted at a different time
- one has leftover configuration
- one had a manual adjustment
- one inherited a different default
- one is missing a dependency
- one uses a different path through fallback logic
- one has different transient runtime conditions
Same intent.
Different history.
Different behaviour.
Over time, history becomes the dominant factor in behaviour — more than architecture, more than design, more than documentation.
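As a hypothetical example of what that divergence looks like once it is actually captured: the keys and values below are invented, but the pattern of diffing two effective snapshots is the point.

```python
# Hypothetical effective-state snapshots of two hosts built from the
# same pipeline. Keys and values are invented for illustration.
host_a = {"app_version": "2.4.1", "hotfix_applied": True,
          "tls_min": "1.2", "worker_count": 8}
host_b = {"app_version": "2.4.1", "hotfix_applied": False,
          "tls_min": "1.0", "worker_count": 4}

def drift(a, b):
    """Return every key whose effective value differs between two hosts."""
    return {k: (a.get(k), b.get(k))
            for k in sorted(a.keys() | b.keys())
            if a.get(k) != b.get(k)}

print(drift(host_a, host_b))
# {'hotfix_applied': (True, False), 'tls_min': ('1.2', '1.0'), 'worker_count': (8, 4)}
```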
4. Behaviour Depends on Timing and State, Not Just Configuration
Even if two systems have identical configuration and identical history, behaviour can still diverge due to:
- race conditions
- event ordering
- asynchronous tasks
- startup sequence variation
- resource contention
- unpredictable scheduling
- cache warm-up patterns
- transient network instability
Runtime is not deterministic.
It is influenced by dozens of non-configurable factors.
Architecture does not predict these factors.
But behaviour is built on them.
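A minimal sketch of that divergence, assuming nothing beyond the Python standard library: two startup tasks race for the same piece of state, and which one wins depends purely on scheduling jitter.

```python
import asyncio
import random

# Two startup tasks touch the same shared state. Which one "wins"
# depends only on timing and scheduling, never on configuration.
shared_state = {}

async def init_from_cache():
    await asyncio.sleep(random.uniform(0, 0.01))   # simulated I/O jitter
    shared_state["source"] = "cache"

async def init_from_remote():
    await asyncio.sleep(random.uniform(0, 0.01))
    shared_state["source"] = "remote"

async def startup():
    await asyncio.gather(init_from_cache(), init_from_remote())
    # Identical code, identical configuration: different runs can
    # still end up with a different effective value here.
    print("effective source:", shared_state["source"])

asyncio.run(startup())
```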
5. Behaviour Is Often Driven by Defaults, Not Explicit Settings
Most teams assume that behaviour comes from the settings they configured.
In reality, behaviour often comes from settings they did not configure:
- .NET ThreadPool defaults
- IIS pipeline defaults
- HTTP timeout defaults
- crypto defaults
- process model defaults
- garbage collection defaults
- connection pool defaults
- OS policy defaults
These defaults vary by:
- OS version
- library version
- patch level
- installed roles
- hardware
- dependencies
Defaults drift silently over time.
Behaviour drifts with them.
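The same pattern exists in every runtime. As a Python-flavoured illustration (not the .NET and IIS defaults named above), here are values that shape behaviour even though nobody ever set them, and several of them change with the interpreter, the OS or the installed OpenSSL.

```python
import gc
import os
import socket
import ssl
import sys

# None of these values appear in the application's configuration,
# yet all of them influence runtime behaviour, and several of them
# change with the interpreter version, the OS or the patch level.
print("socket default timeout:", socket.getdefaulttimeout())  # usually None: block forever
print("gc thresholds         :", gc.get_threshold())
print("recursion limit       :", sys.getrecursionlimit())
print("cpu count             :", os.cpu_count())
print("openssl version       :", ssl.OPENSSL_VERSION)
print("python version        :", sys.version.split()[0])
```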
6. Systems Behave According to Their Constraints, Not Their Diagrams
The real forces shaping behaviour are often invisible:
- memory pressure encourages fallbacks
- CPU saturation triggers scaling behaviour
- network latency changes algorithmic paths
- missing dependencies trigger degraded modes
- service outages activate retry storms
- overloaded systems drop to minimal functionality
This is the real operational world.
It is dynamic, messy and mostly undocumented.
No diagram captures the operational constraints under which the system actually runs.
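A hypothetical example of a branch the diagram never shows: one code path for the intended case, and a second, undocumented one that activates when an operational constraint bites. The function and data are invented for illustration.

```python
def fetch_recommendations(user_id, dependency_available):
    """The diagram shows one path; operations regularly sees two."""
    if not dependency_available:
        # Degraded mode: serve a generic result instead of failing outright.
        return {"user": user_id, "items": [], "mode": "degraded"}
    return {"user": user_id, "items": ["a", "b", "c"], "mode": "normal"}

# During an outage every request silently takes the second path.
print(fetch_recommendations(42, dependency_available=False))
# {'user': 42, 'items': [], 'mode': 'degraded'}
```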
7. Systems Behave According to Human Decisions
Behaviour is also shaped by:
- manual edits
- unrecorded incident fixes
- local optimizations
- exceptions made “just this once”
- tribal rules
- shortcuts under pressure
- temporary hacks that become permanent
Every operational team knows this:
People change systems faster than documentation can track.
Human behaviour becomes part of system behaviour.
8. Implicit Coupling Drives Emergent Behaviour
Teams design systems as if they were modular and decoupled.
In reality they are:
- implicitly coupled
- sensitive to load
- dependent on order
- dependent on timing
- dependent on defaults
- dependent on external state
A system that is designed to be decoupled can behave as if it were tightly coupled simply due to invisible runtime dependencies.
Predictability collapses.
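A concrete Python instance of that invisible coupling: one module adjusts a process-global default and, without any declared dependency, changes the behaviour of every other module in the same process.

```python
import socket

# Module A believes it is only tuning its own networking behaviour ...
def module_a_startup():
    socket.setdefaulttimeout(2.0)   # process-global, not module-local

# ... but module B, written and deployed independently, now inherits
# that timeout for every socket it creates from this point on.
def module_b_default_timeout():
    return socket.getdefaulttimeout()

module_a_startup()
print("module B now sees default timeout:", module_b_default_timeout())  # 2.0
```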
9. Intent vs Behaviour Creates Operational Surprises
The gap between design intent and runtime behaviour produces:
- “unpredictable” failures
- differing behaviour across environments
- strange performance regressions
- configuration that seems to be ignored
- intermittent issues
- behaviour that only shows under load
- inconsistent outcomes
These are not mysteries.
They are side effects of behaviour emerging from hidden layers.
10. Why This Matters
If systems do not behave as intended:
- architecture loses predictive power
- documentation misleads
- CMDBs become fictional
- incident analysis becomes guesswork
- IaC governs only a fraction of reality
- automation assumes a world that does not exist
- governance operates on narratives, not truth
The only way to understand a system is to observe it as it behaves — not as it was designed.
11. Ways to Address This
Architecture intent will always drift away from behaviour.
We cannot eliminate that.
But we can expose it.
11.1 Treat effective configuration as the truth
Not files.
Not repos.
Not CMDBs.
Not architecture documents.
11.2 Observe runtime, not just configuration
Use telemetry and APIs to extract:
- effective config
- defaults
- fallback behaviour
- dependency usage
- timing
- state
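A minimal snapshot sketch, using only the Python standard library. The `APP_` environment prefix is an assumed naming convention, and a real implementation would pull far more from the application's own APIs.

```python
import json
import os
import platform
import socket
import sys
from datetime import datetime, timezone

def runtime_snapshot():
    """Capture a small slice of effective runtime state: the kind of
    information that lives in no repo and no CMDB."""
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
        "platform": platform.platform(),
        "python": sys.version.split()[0],
        "cpu_count": os.cpu_count(),
        "socket_default_timeout": socket.getdefaulttimeout(),
        # Assumed convention: application overrides live in APP_* variables.
        "env_overrides": {k: v for k, v in os.environ.items()
                          if k.startswith("APP_")},
    }

print(json.dumps(runtime_snapshot(), indent=2))
```

Snapshots like this, taken regularly and diffed over time, are what turn "the system drifted" from a suspicion into evidence.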
11.3 Accept behaviour as emergent
Stop expecting architecture to perfectly predict behaviour.
11.4 Capture history
History explains divergence more than design does.
11.5 Minimize manual pathways
Every manual interaction adds entropy.
11.6 Make behaviour observable
Expose drift, defaults, performance patterns, fallback activations, degraded modes.
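One small pattern that helps, sketched here with invented names: never let a fallback fire silently. Count it and log it, so that behaviour which only shows up under load at least leaves a trace.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("behaviour")

# Invented helper: every time a default or fallback is used, it is
# recorded instead of passing silently.
fallback_counter = Counter()

def get_setting(explicit, name, default):
    if name in explicit:
        return explicit[name]
    fallback_counter[name] += 1
    log.info("fallback used for %s -> %r", name, default)
    return default

config = {"retries": 10}
timeout = get_setting(config, "timeout_s", 30)   # falls back, gets recorded
retries = get_setting(config, "retries", 3)      # explicit value, no record
print(dict(fallback_counter))                    # {'timeout_s': 1}
```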
Closing
Systems do not behave the way they are designed.
They behave the way they run.
Design is a plan.
Behaviour is physics.
Understanding this difference is essential if we want infrastructure to be predictable, governable and explainable.
Follow-Up Questions
- How can we systematically extract behaviour that emerges only under load or specific timing conditions?
- What telemetry is required to distinguish between intended behaviour and fallback behaviour?
- How can we model the interaction between explicit configuration, defaults and runtime state?
- Can architecture be made more behaviour-aware without becoming unreadable?
- How do we capture system history in a meaningful, analyzable way?
- What part of “unexpected behaviour” is truly unexpected — and what part is simply unobserved?