The Principles of Cyber Risk Mitigation

(1)  One cannot create a perfectly secure system

(2)  Active Defense requires active intelligence

(3)  Sense globally to defend locally

When I first started writing software, my colleagues would frequently ask a question roughly framed as, “Why don’t you just write better software to begin with, so that we would not have these problems?”  The problem is more complicated than that question suggests, and the reasons why are important to understanding how systems might be made more secure in the future.  The answer depends on two factors.

First, we have adversaries.  Anytime one has adversaries - adversaries who adapt their weapons, tactics, and deployment to the development of one’s technology - one has a classic arms race.  I would respond to my colleagues, “I will create a computer system that will remain secure as soon as you build an airplane that cannot be shot down.”  This is the dynamic of adversarial relationships: nothing static will remain secure.  Systems must learn and adapt as the adversary learns and adapts.


The second factor in building secure systems is that we are building incredibly complicated systems, systems far more complex than our ability to completely model or understand.  Verifying a system’s output for every possible input in every possible state is computationally infeasible.
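To put a rough number on that infeasibility, consider a back-of-the-envelope calculation (the device and its state size are hypothetical, chosen only for illustration):

```python
# Illustrative arithmetic (numbers are hypothetical, not from the text):
# exhaustively testing a system's response to every input in every internal
# state is infeasible even for a tiny system.

STATE_BYTES = 128  # a hypothetical device with 128 bytes of internal state
n_states = 2 ** (STATE_BYTES * 8)  # 2^1024 distinct states, before any inputs

digits = len(str(n_states))
print(f"2^{STATE_BYTES * 8} states: a {digits}-digit number")
```

Even before considering inputs, a 128-byte state space is a 309-digit number of configurations; no test campaign can enumerate it.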


These two factors guarantee that vulnerabilities will be found in the computer system, that weapons and tactics will be developed to exploit those vulnerabilities, and that the system will be successfully compromised at some point in the future.  The system must change and adapt to the adversary’s behavior.

Since we must place highly complex computer systems - computer systems that cannot be definitively tested - in the presence of adversaries, we need to approach computer security in a new way.  We must acknowledge that we cannot build computer systems that are secure and will remain so in perpetuity.  Rather, we must build computer systems that are adaptable, configurable, and give us the ability to anticipate and respond to our adversary’s behavior.  It is the difference between the Maginot Line and maneuver warfare.  We need to create the equivalent of maneuver warfare in cyberspace.


To make systems adaptable, we need to be able to change their behavior in response to input.  As it is likely that an adversary will, at some time, identify and exploit a previously unknown vulnerability, we must approach the problem differently.  We must make the use of an unknown cyber weapon prohibitively expensive, in a broad sense, for our adversary.  While we have no choice but to allow our adversary the first use of a new weapon, we should immediately sense the attack and then rapidly adapt every other computer system in the enterprise to be immune to the weapon’s future use - that is, change its behavior to a set of input.  This change in philosophy implies a level of sensors, communications, and what is now called active defense that is rarely found today.  It is, however, obtainable with current technology, but it depends on knowing what your adversary is doing.  It depends on intelligence.
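The “allow the first use, immunize everyone else” idea can be sketched in a few lines. This is an illustrative toy model, not an implementation from the text; the `Node` class and its methods are invented for the example, and a content hash stands in for a real signature:

```python
# Minimal sketch: the first node to observe an unknown payload is compromised,
# but it derives a signature and disseminates it, so the same weapon fails
# against every peer in the enterprise.

import hashlib

class Node:
    def __init__(self, name, enterprise):
        self.name = name
        self.blocked = set()          # signatures of known weapons
        self.enterprise = enterprise  # list of peers to immunize
        enterprise.append(self)

    def receive(self, payload: bytes) -> str:
        sig = hashlib.sha256(payload).hexdigest()
        if sig in self.blocked:
            return "blocked"          # weapon already known; attack fails
        # Novel weapon: this node is hit, but the intelligence is immediately
        # pushed to all peers so the weapon can never succeed again.
        for peer in self.enterprise:
            peer.blocked.add(sig)
        return "compromised"

enterprise = []
a, b = Node("a", enterprise), Node("b", enterprise)
first = a.receive(b"novel exploit")   # the weapon's one success
second = b.receive(b"novel exploit")  # every later use is blocked
```

The economics are the point: the adversary spends the full development cost of the weapon for exactly one success.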

The term “sense” is used in its broadest context.  A sensor is some mechanism, software, or process that provides information about the state of the system in which it is deployed.  This information is then used to generate indications and warnings - that is, to generate intelligence.  Ultimately, intelligence is derived from a rich and diverse population of sensors and is aggregated and correlated for maximum usefulness.
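One simple form of that aggregation and correlation can be sketched as follows (the function, report format, and threshold are illustrative assumptions, not from the text): an indicator reported independently by several diverse sensors is promoted to a warning.

```python
# Illustrative correlation: promote an indicator to a warning only when it is
# reported by at least `threshold` distinct sensors.
from collections import defaultdict

def correlate(reports, threshold=3):
    """reports: iterable of (sensor_id, indicator) pairs.
    Returns the set of indicators seen by >= threshold distinct sensors."""
    seen = defaultdict(set)
    for sensor, indicator in reports:
        seen[indicator].add(sensor)
    return {ind for ind, sensors in seen.items() if len(sensors) >= threshold}

reports = [("ids-1", "10.0.0.9"), ("honeypot-2", "10.0.0.9"),
           ("netflow-3", "10.0.0.9"), ("ids-1", "10.0.0.7")]
warnings = correlate(reports)  # only the widely-reported indicator survives
```

Requiring diversity among reporting sensors is one way to trade a single noisy sensor’s false positives for higher-confidence intelligence.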


When organizations develop a cyber-situational awareness capability, they typically instrument their IT environment as a sensor.  Usually this includes the deployment of intrusion detection systems, anti-virus systems, network traffic analysis systems, and the like.  What we have found over time is that these are the worst type of sensors for situational awareness.  They give no indication of what our adversary is planning, they can sometimes show that we are under attack, and they are excellent only for forensics after an attack has occurred.  To put this into a military metaphor, this is akin to not knowing you are under attack until your adversary is in the foxhole with you.  Or, in my naval context, it is to not know one is under attack until the missile enters the skin of the ship.  This is a little too late.  In no other war-fighting domain do we sense only at the assets that we are trying to protect.  We know that it does not work.  It is like asking the air force not to use radar.


The question, then, is how to develop and deploy sensors that can give us indications and warnings of our adversary’s intentions and actions - that is, provide intelligence.  You have to sense far beyond the assets that you wish to defend, as we do in all other domains of war.  While this may seem like an impossible task, there are practical sensors with this characteristic.  Honeypots are an example of one such sensor.  If placed in desirable locations outside of one’s own network, they may provide information about attack tools and weapons.  If they are designed to be immune to known attacks, then the only successful attacks will be ones that are hitherto unknown.  They can capture new tools and techniques and send that information to analysts or analysis machines, where patches, signatures, settings, and/or heuristics can be developed.  The goal would be to rapidly disseminate those patches, signatures, settings, and heuristics to all other enterprise components to make them immune to the same attack.  Thus, the only successful attack using a new weapon would be the attack on the honeypot.  To do this, one must sense globally to get the intelligence to defend locally.
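The capture side of that pipeline can be as simple as a listener that accepts a connection it should never legitimately receive and records the raw payload for analysts. The sketch below is illustrative only; a real honeypot would add hardening, protocol emulation, and secure forwarding of captures:

```python
# Minimal TCP honeypot sketch (illustrative, not production code): any
# connection to this service is by definition suspicious, so the source
# address and raw payload are captured for analysis.

import socket
import threading
import time

def run_honeypot(host="127.0.0.1", port=0):
    """Listen for one connection; return (bound_port, capture_list)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))       # port 0 = let the OS pick a free port
    srv.listen(1)
    captured = []

    def serve():
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(4096)        # the attack payload, verbatim
            captured.append((addr[0], data))
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1], captured

# Simulate an attacker probing the honeypot:
port, captured = run_honeypot()
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"GET /exploit HTTP/1.1\r\n")
time.sleep(0.2)  # give the server thread a moment to record the payload
# captured[0] now holds the attacker's address and raw payload
```

From here the captured payload would feed the signature-and-dissemination step described above, closing the loop from global sensing to local defense.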