Introduction:
"Everything that is not explicitly authorized is forbidden." This statement is ubiquitous in discussions about information system security.
It appears in training materials, documentation, and operational guidelines, particularly in the context of network filtering and firewall configuration.
It expresses a legitimate intention: to reduce the exposure surface by strictly limiting what is accessible.
Used as a methodological framework, it makes sense. But its actual scope is very often overestimated.
Because behind the statement itself, the implication is misleading.
The formulation suggests that a rigorously configured system could reach a state of complete safety: that by closing everything that is not explicitly opened, one would fully control the system's behavior.
In an ideal world, where software would be free of defects, this reasoning would hold.
But such a world does not exist.
The illusion of total control:
"Everything that is not explicitly authorized is forbidden" is a familiar maxim among network architects and firewall administrators.
And its use is justified.
It defines an essential defensive posture: deny by default.
But a posture is not a guarantee.
A firewall does not govern a system.
It applies rules to known flows: ports, protocols, addresses, session states.
It operates at a given layer, with an inevitably partial view.
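To make that concrete, here is a minimal sketch, in Python, of what "deny by default" amounts to at that layer. The rules, ports, and addresses are hypothetical; the point is that the decision is taken on connection metadata alone.

```python
# Minimal sketch of a default-deny filter. Rules, ports, and addresses are hypothetical.
# The decision uses only connection metadata: protocol, destination port, source.
ALLOW_RULES = [
    {"proto": "tcp", "dst_port": 443, "src": "any"},        # public HTTPS
    {"proto": "tcp", "dst_port": 22,  "src": "10.0.0.0/8"}, # SSH from the LAN only
]

def is_allowed(proto: str, dst_port: int, src_in_lan: bool) -> bool:
    """Return True only if an explicit rule matches; everything else is denied."""
    for rule in ALLOW_RULES:
        if rule["proto"] != proto or rule["dst_port"] != dst_port:
            continue
        if rule["src"] == "any" or (rule["src"] == "10.0.0.0/8" and src_in_lan):
            return True
    return False  # default deny: no explicit authorization, no passage

print(is_allowed("tcp", 443, src_in_lan=False))  # True: explicitly authorized
print(is_allowed("udp", 53,  src_in_lan=True))   # False: never authorized, denied by default
```

The rule decides whether a flow may exist. It says nothing about what the authorized flow will actually carry.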
Yet a modern information system extends far beyond this perimeter.
Complexity as a source of uncertainty:
The larger a software codebase is, the higher the probability of errors.
This is a statistical fact, confirmed by experience.
Neither unit tests, nor code reviews, nor formal methods, nor continuous integration have ever succeeded in completely eliminating bugs.
And application code is only one layer.
The compiler itself may introduce anomalies.
Third-party libraries may exhibit unexpected behaviors.
The operating system is a stack of historical layers, adjusted, patched, and sometimes worked around.
Deeper still, the hardware itself is not free from defects, as illustrated by Spectre, Meltdown, Rowhammer, and others.
Each additional layer expands the space of possible states.
And with it, the extent of what escapes knowledge.
The complexity of an integrated circuit is on a scale that defies intuition.
A diagram helps illustrate this:
The vertical axis does not represent a simple increase in “difficulty”.
Bottom: scale complexity
→ A vast number of combinations, but within a closed, well-defined system.
The rules are fixed. The space is immense, yet stable.
Top: systemic complexity
→ The problem changes in nature.
Cross-interactions, conflicting constraints, emergent effects.
The system reacts to the solution one attempts to impose.
The transition from formal games to standard-cell placement on a chip represents a change in scale, but above all a change in the nature of the problem. While chess or Go operate in enormous yet closed state spaces, cell placement on a chip faces an astronomical search space, exceeding 10^90,000 possible configurations, shaped by interdependent physical, temporal, and technological constraints: wire length, timing, power consumption, noise, density, routability, thermal dissipation, and so on. This combinatorial explosion is not merely quantitative: each local decision alters the global balance of the system, making exhaustive exploration unrealistic and rendering any notion of an absolute optimal solution inapplicable. What is sought is no longer a correct answer, but an acceptable compromise within a space continuously distorted by the material reality of the chip.
This is without even accounting for quantum effects that may arise at these scales (between 45 and 50 nanometers).
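To get a feel for such orders of magnitude, the short sketch below counts, on a logarithmic scale, the ways of assigning cells to distinct sites on an idealized grid, ignoring every physical constraint. The sizes are purely illustrative and the model is far cruder than real placement; it is only meant to show how quickly exponents of this kind appear.

```python
import math

def log10_placements(cells: int, sites: int) -> float:
    """log10 of the number of ways to assign `cells` to distinct `sites`:
    sites! / (sites - cells)!, with no physical constraints at all."""
    return (math.lgamma(sites + 1) - math.lgamma(sites - cells + 1)) / math.log(10)

# Purely illustrative sizes: even a modest netlist dwarfs the state space of Go.
for cells, sites in [(100, 400), (10_000, 40_000), (1_000_000, 4_000_000)]:
    print(f"{cells:>9} cells on {sites:>9} sites: ~10^{log10_placements(cells, sites):,.0f} placements")
```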
We therefore do not lack uncertainty in computing systems.
Note: In the future, AI may help us place the components of a chip in a near-optimal way… perhaps better than we ever could ourselves.
The domain of the known and the domain of the unknown:
This is where a persistent confusion arises: the confusion between security and safety.
Security belongs to the domain of the known.
It deals with identified, modelable, and verifiable scenarios.
- A disk fails → a RAID is implemented.
- A power supply fails → redundancy is added.
These events are predictable, quantifiable, and can be integrated into risk models.
Security is an engineering discipline rooted in probability.
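A worked example of that probabilistic logic, with deliberately illustrative figures: if two mirrored disks fail independently, the probability of losing both is the product of the individual probabilities. The failure rates below are assumptions, not measured values.

```python
# Illustrative annual failure probabilities: assumptions, not measured values.
p_disk = 0.02          # probability that a single disk fails within the year
p_single = p_disk      # no redundancy: data loss if the one disk fails
p_mirror = p_disk ** 2 # RAID-1 mirror, assuming independent failures

print(f"single disk : {p_single:.4%} chance of data loss per year")
print(f"RAID-1 pair : {p_mirror:.4%} chance of data loss per year")
# The model is only as good as its assumptions: correlated failures
# (same batch, same power event) are exactly what it does not capture.
```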
Safety begins where this logic reaches its limits.
Safety: managing what has not been anticipated:
Safety concerns the unforeseen.
What has not been tested.
What has not yet been discovered.
- A vulnerability lying dormant for years.
- An unexpected interaction between otherwise compliant components.
- An emergent behavior resulting from a combination of states considered impossible.
No serious actor can claim that a complex system has been exhaustively tested.
No guarantee exists regarding the absence of unknown vulnerabilities.
In this domain, there are no absolute rules.
Only principles: impact limitation, containment, and resilience.
Why “deny by default” is not enough:
Stating that "everything that is not explicitly authorized is forbidden" implies that:
- what is authorized is necessarily under control,
- and that any dangerous behavior must pass through something unauthorized.
These assumptions do not hold up against reality.
Exploitable vulnerabilities are almost always reached through legitimate paths:
- an open port,
- an expected service,
- a planned feature,
- a compliant flow.
The problem is not the opening itself.
The problem is what we do not know about what passes through that opening.
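A small sketch makes the point concrete. The endpoint, parameter, and payloads below are invented for illustration: both requests target the same authorized port and service, and the tuple a network filter evaluates is identical in both cases.

```python
# Hypothetical example: two requests to the same authorized service (TCP/443).
benign  = {"proto": "tcp", "dst_port": 443, "path": "/export",
           "params": {"file": "report.pdf"}}
hostile = {"proto": "tcp", "dst_port": 443, "path": "/export",
           "params": {"file": "../../etc/passwd"}}  # traversal through a planned feature

def filter_view(request: dict) -> tuple:
    """What a network filter actually evaluates: metadata, not intent."""
    return (request["proto"], request["dst_port"])

print(filter_view(benign) == filter_view(hostile))  # True: same decision for both
```

Whether the service behind the authorized port handles the second request safely is a property of the application, not of the rule that let it through.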
Toward a more lucid approach: cyber safety:
The term cybersecurity has become a buzzword.
It reassures, it simplifies, and it gives the illusion of a fully controllable perimeter.
Yet it masks a far more uncomfortable reality:
information systems fall as much under safety as under security.
This is why it is far more accurate to speak of cyber safety.
Talking about cyber safety is not an exercise in pessimism.
It is acknowledging that:
- any exposed system is actively explored,
- any attack surface is alive and constantly evolving,
- any protection is, by nature, temporary,
- and that, much to the dismay of some, the real question is not if attackers will get in, but when. They do get in, sooner or later. The fight never ends.
Defense does not consist solely in preventing.
It also requires observing, understanding, detecting, reacting, and above all learning.
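By way of illustration, observation and detection can start very small. The sketch below compares observed outbound destinations against a known baseline; the hosts and the baseline itself are invented for the example, and real detection obviously goes far beyond this.

```python
# Minimal detection sketch: flag outbound destinations never seen in the baseline.
# Hosts and baseline are invented for the example.
baseline = {"updates.example.com", "api.example.com", "time.example.org"}

observed = [
    "api.example.com",
    "updates.example.com",
    "exfil.unknown-host.net",   # never seen before: worth a look
]

for destination in observed:
    if destination not in baseline:
        print(f"ALERT: unexpected outbound destination {destination}")
```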
In short: security protects. Safety understands, monitors, and governs.
Conclusion:
"Everything that is not explicitly authorized is forbidden" remains a relevant starting point.
But it will never be an end point. Far from it.
Maturity lies less in believing that all doors are closed than in understanding what happens when one gives way, or when a bypass exploits a weakness that was ignored.
And above all, in accepting that there is always at least one flaw that went unnoticed.
In this context, and for reasons that I will develop in a dedicated article,
the use of open source, as opposed to the closed-source model, represents an additional line of defense in terms of cyber safety.
Not as an absolute promise, but as a lever for transparency, auditability, and understanding software behavior in the face of the unknown.
In my view, one of the most rational defenses we have lies precisely in Open Source and the entire philosophy that stems from it.