This is the seventh post in a series of ten. The previous post explored the sixth law: Security is not stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.
Humans are the weakest link for many reasons. Often, they do not understand security or have a skewed perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent it when it obstructs the fulfillment of their task, and they will not apply security policies and procedures. Moreover, they do not believe that they are a worthwhile target for cyber-attacks.
Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Employees put their company at risk through bad decisions.
Humans are the weakest link because they have intrinsic limitations. Human memory is feeble; thus we end up with weak passwords, or with complex passwords written on a post-it. Humans do not handle complexity well. Unfortunately, security is complex.
Humans are the weakest link because they can be easily deceived. Social engineers use social interaction to influence people and convince them to perform actions that they would not normally perform, or to share information that they are not supposed to disclose. For instance, phishing is an effective infection vector.
How can we mitigate the human risk?
- Where possible, make decisions on behalf of the end user; as end users are not necessarily able to make rational decisions on security issues, the designer should make those decisions whenever possible. Whenever the user has to decide, the consequences of the decision should be made clear to guide it.
- Define secure defaults; the default value should always correspond to the highest, or at least an acceptable, security level. Security, not user-friendliness, should drive the default value.
- Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is only possible by educating the employees about this kind of attack. Training employees increases their security awareness and thus raises their engagement.
- Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits. Therefore, it is imperative that the security personnel be aware of the latest techniques. Operational security staff should have a significant part of their work time dedicated to continuous training.
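The "secure defaults" rule above can be illustrated with a short sketch. The `ConnectionConfig` class below is hypothetical; the point is that a careless caller who never touches the configuration still ends up with the secure behavior, and weakening security requires an explicit opt-out.

```python
from dataclasses import dataclass

@dataclass
class ConnectionConfig:
    # Secure values are the defaults; weakening them requires an explicit opt-out.
    verify_tls: bool = True           # never silently skip certificate checks
    min_tls_version: str = "TLSv1.2"  # refuse legacy protocol versions
    timeout_seconds: int = 10         # bound resource consumption

# A careless caller who accepts all defaults still gets a secure configuration.
config = ConnectionConfig()
assert config.verify_tls is True
```

The same principle applies to any product setting: shipping with the safe value selected means user friendliness costs an explicit action, rather than security costing one.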
Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?
If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, from Springer or Amazon.
“Si vis pacem, para bellum” (i.e., “who wants peace, prepares for war”) is a Latin adage adapted from a statement found in Book 3 of the Roman author Publius Flavius Vegetius Renatus’s tract “De Re Militari” (fourth or fifth century). Many centuries before, the Chinese general Sun Tzu had already claimed in his famous treatise “The Art of War”:
He will win who, prepared himself, waits to take the enemy unprepared.
Cyber security is a war between two opponents. On one side, security designers and practitioners defend assets. On the other, cyber attackers attempt to steal, impair, or destroy these assets. Most of the traditional rules of warfare apply to cyber security. Thus, “The Art of War” is a work that any security practitioner should have read.
Be proactive; a static target is easier to defeat than a dynamic one. Security defense should be active rather than reactive where possible. Furthermore, security ages. Thus, defenders must prepare new defenses and attempt to predict the next attacks. The next generation of defense should be ready before the occurrence of any severe attack and, of course, must differ from the previous versions. The new defense mechanisms do not need to be deployed immediately. In most cases, their deployment may be delayed until their impact is optimal. The optimal time may be immediately after the occurrence of an attack, once the incurred loss would exceed the cost of deploying the new version, or when it hurts the attackers the most. For instance, a new generation of Pay TV smart cards may be activated just before a major broadcast event.
Being proactive is also a rule for day-to-day defense. Do not wait for a hack to be detected before checking your logs. Do not wait for an exploit to hit your system before learning about the latest attacks and new tools. Do not wait for a hack to exploit unpatched systems; patch them as soon as possible.
Design for renewability; according to Law 1, any secure system may be compromised one day. The only acceptable method to address this risk is renewable security. Every secure system must be renewable in the case of a successful hack. Without renewable security in its design, a system is doomed. Nevertheless, to ensure secure renewability, the kernel that handles renewal cannot itself be updated in the field. This kernel must ensure that attackers can neither misuse the renewability mechanism for their own purposes nor prevent the renewal. It must also make sure that an attacker cannot roll back the updated system to the previously vulnerable version. One element of your trust model is probably that this kernel is secure.
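A rough sketch of such a renewal kernel follows. All names are illustrative, and `verify_signature` is a placeholder for a real cryptographic check (e.g., a signature verified against an immutable trusted key); the point is the two refusal paths, one against forged updates and one against rollback.

```python
class UpdateKernel:
    """Sketch of an update kernel enforcing authentication and anti-rollback.

    `verify_signature` is a hypothetical placeholder: a real kernel would
    verify a cryptographic signature over the image with an immutable key.
    """

    def __init__(self, current_version: int, trusted_key: bytes):
        self.current_version = current_version
        self.trusted_key = trusted_key

    def verify_signature(self, image: bytes, signature: bytes) -> bool:
        # Placeholder for a real signature verification (e.g., Ed25519).
        return True

    def apply_update(self, version: int, image: bytes, signature: bytes) -> bool:
        if not self.verify_signature(image, signature):
            return False  # attackers cannot inject arbitrary updates
        if version <= self.current_version:
            return False  # attackers cannot roll back to a vulnerable version
        self.current_version = version
        return True

kernel = UpdateKernel(current_version=5, trusted_key=b"immutable-public-key")
assert not kernel.apply_update(4, b"old image", b"sig")  # rollback refused
assert kernel.apply_update(6, b"new image", b"sig")      # forward update accepted
```

In real devices, the "current version" would live in a monotonic counter or fuse bank that the updatable part of the system cannot rewind.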
Do not rest on your laurels; complacency is not an acceptable mindset for security practitioners. They must remain constantly vigilant. Attackers adapt quickly to new defenses and are creative; some are brilliant. The absence of a detected breach does not necessarily mean that the system is secure. It may simply mean that the breach has not yet been detected.
This is the fourth post in a series of ten posts. The previous post explored Law 3: No Security Through Obscurity. The fourth law is one of my favorites. The most futile reason is that I am an X-Files fan (I want to believe), and it was a recurrent tagline. The serious reason is that trust is a key element of every secure system. Usually, my first test for detecting snake oil is to ask the vendor what the trust model of the system is. If the vendor does not have a clear answer, it smells bad. And it becomes worrying if the vendor has no idea what a trust model is.
Trust is the cornerstone of security. Without trust, there is no secure system; it is the foundation of all secure systems. But what is trust? I like to use Roger Clarke’s definition:
Trust is confident reliance by one party on the behavior of other parties.
In other words, trust is the belief that the other parties will be reliable and that they are worthy of confidence. Other parties may be people, organizations such as Certification Authorities (CA), systems, software or hardware components.
Know your security hypotheses; it is not possible to build a secure system without knowing the security assumptions. These assumptions define the minimal set of hypotheses that are supposed to always be true: this is the trust model. It is mandatory to identify and document the trust model thoroughly. Whenever one of these hypotheses is no longer true, the system may no longer be secure. Any change in the environment or context may invalidate the assumptions. Therefore, the hypotheses must be continuously monitored as the system evolves to check whether they are still valid. If they have changed, then the design should accommodate the new security hypotheses.
An example is the current trust model of the Internet when using TLS. The basic assumption is that the CA is trustworthy. Unfortunately, with modern browsers, this assumption is weak. By default, most browsers trust many root CAs: the current Internet has more than 1,500 CAs that browsers trust. A single wrongdoer amongst them is sufficient to weaken HTTPS.
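One way to narrow this trust model is for a deployment that only talks to known servers to trust a single CA (or a pinned certificate) instead of the full default bundle. A minimal sketch with Python's standard `ssl` module, where the CA file path is an assumption of this example:

```python
import ssl

def make_pinned_context(cafile: str) -> ssl.SSLContext:
    """Build a TLS client context that trusts only one CA certificate.

    `cafile` is a hypothetical path to the single CA (or pinned server
    certificate) the deployment actually needs, replacing the default
    bundle of hundreds of root CAs.
    """
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
    context.check_hostname = True            # bind the certificate to the hostname
    context.load_verify_locations(cafile=cafile)
    return context
```

The trust model then shrinks from "no wrongdoer among 1,500+ CAs" to "this one CA is trustworthy," a much easier hypothesis to monitor.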
Minimize the attack surface; medieval castle builders knew this consequence of the fourth law. They designed small, thin apertures in the walls that allowed defenders to observe besiegers and fire arrows at them. The smaller the aperture, the harder it was for an attacker to hurt the defender with a lucky arrow strike. Attackers will probe all possible avenues for breaking a system. The more possibilities available for the attacker to try, the higher the likelihood that the attacker will find an existing vulnerability. It is thus paramount to reduce the attack surface, i.e., the space of possible attacks available to an attacker. For instance, the current migration to the public cloud increases the attack surface compared to the traditional approach based on private data centers. This migration stretches the trust model.
Provide minimal access; a secure system should grant access only to the resources and data that the principal needs to perform its function. Access to any additional resources or data is useless and creates an unnecessary potential risk. The consequence is that the role and function of each principal have to be clearly defined and thoroughly documented. For instance, there is no valid reason why the accounting team should have access to the shared repositories of the technical teams. This rule is very similar to the principle of least privilege and is an illustration of minimizing the attack surface.
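A minimal sketch of this rule, using a hypothetical role-to-resource mapping and a default-deny check (all role and resource names are illustrative):

```python
# Each role is granted only the resources its function requires, nothing more.
PERMISSIONS = {
    "accounting": {"invoices", "payroll"},
    "engineering": {"source_code", "build_servers"},
}

def has_access(role: str, resource: str) -> bool:
    # Default deny: an unknown role or an unlisted resource gets no access.
    return resource in PERMISSIONS.get(role, set())

assert has_access("accounting", "invoices")
assert not has_access("accounting", "source_code")  # no valid reason to grant it
```

The important design choice is the default-deny stance: anything not explicitly granted is refused, so forgetting an entry fails safe rather than open.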
Keep it simple; complexity is the enemy of security. The more complex a system, the higher the likelihood of an error in its design or implementation. Such an error may turn into a security vulnerability that an attacker can exploit. Many vulnerabilities are due to software bugs. Protocols such as TLS have become increasingly complex, making their implementations more vulnerable.
Be aware of insiders; while the attack usually comes from outside the trusted space, sometimes, unfortunately, the attacker may be an insider. The attacker will either accomplish the attack herself or knowingly be an accomplice in it. Sometimes, the insider may even be tricked into facilitating the attack involuntarily, for instance, through social engineering. Therefore, the trust within the trusted space should not be blind. Due to their privileged position, insiders are powerful attackers. Any security analysis has to tackle the insider threat.
Of course, security must trust some elements. There is no security without a root of trust. Never trust blindly. Trust wisely and sparingly.
This is the first post of a series of ten. The order of the ten laws is not meaningful except for this first one, for three reasons:
- It is the most important law, as it has never failed. It should be engraved deeply in the mind of every security practitioner.
- It is my favorite law. In 1996, when I founded the Thomson Security Laboratories, this law allowed us to enter the Hollywood arena. We were the first to claim it systematically in front of the MPAA members. At that time, it was not obvious. In 1998, DVD Jon illustrated its pertinence with DeCSS, and studios started to listen to us. A side effect of the first law is that the world will always need good security practitioners, which is reassuring.
- If somebody claims his or her system is unbreakable, then I already know that the system is snake oil.
No secure system is infallible. Any secure system is doomed to fail; attackers will always find a way to defeat it. This was true even in ancient mythologies: invulnerable heroes such as the Greek Achilles or the Nordic Siegfried had a vulnerable spot. Throughout history, this law has held. Invincible Roman legions were defeated. The unsinkable RMS Titanic sank. Bletchley Park decrypted the German Enigma. Mobile devices are jailbroken.
The only cryptographic system that has been demonstrated to be unbreakable in theory is Shannon’s One Time Pad. Unfortunately, it is not practical. The symmetric key must be truly random and of the same size as the cleartext. Then, you have the problem of distributing the symmetric key securely, i.e., by secure sneaker net. Not very useful for everyday usage.
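The scheme itself is trivially small; the entire burden lies in the key handling. A minimal sketch in Python, where the key is as long as the message, truly random, and must never be reused:

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    # One Time Pad: XOR each byte with a truly random key byte.
    # The key must be random, used exactly once, and as long as the data.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # truly random, same length as the message
ciphertext = otp_encrypt(message, key)

# Decryption is the same XOR operation with the same key.
assert otp_encrypt(ciphertext, key) == message
```

The sketch makes the impracticality visible: to send fourteen bytes, fourteen bytes of fresh random key must already have been shared securely with the recipient.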
There is a strong asymmetry between security defenders and attackers. The attacker needs to succeed only once, whereas the defender has to succeed every time. The attacker benefits from all the security technologies and tools that the defender may use. The attacker may put a lot of effort, resources, and time into an exploit, as, for instance, with high-profile Advanced Persistent Threats (APT). Nature favors the attacker: the second law of thermodynamics states that entropy tends not to decrease, which highlights that it is easier to break a system than to build it. Creating increases order and thus reduces entropy, whereas breaking increases chaos and thus increases entropy. This is the sad, cruel reality of security.
Security designers must never deny the first law, but rather put this heuristic at the heart of their design.
The designer must expect the attackers to push the limits.
Any design operates within a set of limits defined by its initial requirements. The system should work correctly within these boundaries and should be tested within these limits. Unfortunately, an attacker may attempt to operate outside these boundaries to trigger unexpected behavior. The security designer should ensure either that these limits are out of reach or, at least, that the system detects the violation of these boundaries and reacts accordingly. Typical examples are buffer overflows and SQL injections.
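The SQL injection case is easy to demonstrate. The sketch below uses an in-memory SQLite table (table name and data are illustrative) to contrast a query built by string concatenation, which lets the input rewrite the query's logic, with a parameterized query that treats the input strictly as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the input is spliced into the query text, so the payload
# escapes the string literal and makes the WHERE clause always true.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query keeps the input outside the query's syntax.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

assert len(vulnerable) == 1  # injection succeeded: every row leaked
assert len(safe) == 0        # injection failed: no user has that literal name
```

The attacker operated outside the designer's assumed boundary (names are plain text); the parameterized query enforces that boundary instead of trusting the caller to respect it.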
Systems will have vulnerabilities.
Publishing vulnerabilities is one of the best methods to reach a safer cyber world. Not only will the solution provider close the holes, but the publication of the vulnerability will also educate designers. Obscurity is dangerous for security (we will address it with Law 3). Nevertheless, implementers must have a reasonable amount of time to fix the issue before the public disclosure of the vulnerability. This is called responsible vulnerability disclosure.
As any system will be broken, the designed system must be ready to survive by updating its defense mechanisms. Without renewability, the system will be definitively dead. Renewability is a mandatory security requirement. A side effect is that the hacking scene must be monitored to learn about breaches and vulnerabilities as soon as possible.
As any defense will fail, a secure system should implement multiple defenses. Medieval builders knew this: Middle Age castles had several bulwarks protecting the keep, each one higher than the previous one, forming successive obstacles that the attacker had to cross. Diversity in protection makes an exploit harder to perform. A little ranting: one of the current buzz messages of some vendors is “forget about firewalls and anti-viruses, use new method X.” Perimetric defense is, of course, no longer sufficient against modern threats. Nevertheless, the old-fashioned tools are still necessary for defense in depth. Were you to get rid of firewalls, your network would become the weakest point of your system, and attackers would simply bypass new method X.
As any system will be broken one day, data may be corrupted or lost. Regular, frequent, air-gapped backups of all non-constructible data are the ultimate defense. Backup is today the only effective answer to ransomware (unless you critically need the data immediately, as, for instance, in hospitals). The air gap is important to protect against a new generation of ransomware that encrypts remote or cloud-based servers.
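A related, simple safeguard is to record a cryptographic fingerprint of each backup when it is made, so that corruption or tampering is detected before a restore. A minimal sketch (file names and paths are illustrative):

```python
import hashlib
import pathlib
import tempfile

def fingerprint(path: pathlib.Path) -> str:
    # Hash the backup so any later corruption or tampering is detectable.
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Back up some data and record its fingerprint alongside it (ideally
# stored separately, e.g., with the air-gapped copy).
backup_dir = pathlib.Path(tempfile.mkdtemp())
backup = backup_dir / "customers.bak"
backup.write_bytes(b"critical non-constructible data")
recorded = fingerprint(backup)

# Before restoring, verify the backup has not been altered.
assert fingerprint(backup) == recorded
```

A fingerprint does not replace the air gap, but it turns a silently corrupted restore into a detected failure.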
As a conclusion, never ask the question “if the system is broken, …” but rather “when the system WILL be broken, …”. The work of the security practitioner is to limit the risks of a breach, to detect its occurrence, and to mitigate its impact. The following laws will help in this difficult task.
Microsoft recently published a paper titled “Shared Responsibilities for Cloud Computing.” Its aim is to explain that, when migrating to the cloud, reaching a secure deployment does not rest solely on the shoulders of the cloud provider. This reality is too often forgotten by cloud customers. Too often, when assessing the security of systems, I hear the statement “but cloud provider X is Y-compliant.” Unfortunately, even if this declaration is true, it is only valid for the parts that the cloud provider believes are under its responsibility.
The golden nugget of this document is its figure, which graphically highlights the distribution of responsibilities. Unfortunately, I think there is a missing row: the security of the application executing in the cloud. If the application is poorly written and riddled with vulnerabilities, then it is game over. In the case of SaaS, this security is the responsibility of the SaaS provider. In the other cases, it is the responsibility of the entity that designed the service/application.
The explanations in the core of the document are not extremely useful, as many elements are advertising for Microsoft Azure (which is fair, as it is a Microsoft document).
The document can be used to increase the awareness of the mandatory distribution and sharing of responsibilities.
Once more, the die has been cast. Yesterday, I sent the final version of the manuscript of my second book to Springer.
The title is “Ten Laws for Security.” For 15 years, together with my previous security team, I have defined and refined a set of ten laws for security. These laws are simple but powerful. Over the years, when meeting other security experts, solution providers, potential customers, and students, I discovered that these laws were an excellent communication tool. These rules allowed us to benchmark quickly whether both parties shared the same vision of security. Many meetings successfully started with me introducing these laws, which helped build reciprocal respect and trust between teams. Over time, I found that these laws were also an excellent educational tool. Each law can introduce different technologies and principles of security. They constitute an entertaining way to present security to new students or to introduce security to non-experts. Furthermore, these laws are mandatory heuristics that should drive any design of secure systems. There is no valid, rational reason for a system to violate one of these rules. The laws can be used as a checklist for a first-level sanity check.
Each chapter of this book addresses one law. The first part of each chapter starts with examples. These anecdotes either illustrate an advantageous application of the law or outline the consequences of not complying with it. The second part of the chapter explores the different security principles addressed by the law. Each chapter introduces at least one security technology or methodology that illustrates the law or that is paramount to it. The last section deduces from each law some associated rules that are useful when designing or assessing a security system. As in my previous book, inserts entitled “The Devil is in the details” illustrate the gap between theory and real-world security.
The book should be available this summer.