Category Archive: Ten laws

Nov 20 2016

Law 7 – You Are the Weakest Link

This post is the seventh in a series of ten. The previous post explored the sixth law: Security is no stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent it whenever it obstructs the fulfillment of their tasks, and will not apply security policies and procedures. They also do not believe that they are a worthwhile target for cyber attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Through bad decisions, employees put their company at risk.

Humans are the weakest link because they have intrinsic limitations. Human memory is feeble; thus, we end up with weak passwords, or with complex passwords written on a Post-it note. Humans do not handle complexity well. Unfortunately, security is complex.
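One way to ease the memory burden is to generate passphrases from random common words rather than random symbols; below is a minimal sketch using Python's `secrets` module. The short word list is purely illustrative; a real deployment would draw from a large published list such as the EFF Diceware list.

```python
import secrets

# Illustrative word list only; a real system would use a large,
# published word list (assumption for this sketch).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "maple", "quartz", "lantern", "cobalt", "ember", "tundra"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Generate a random passphrase that is easier to memorize
    than a string of random symbols, using a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))
```

With a 7,776-word list, five words already provide roughly 64 bits of entropy while remaining memorable.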

Humans are the weakest link because they can be easily deceived. Social engineers use social interaction to influence people and convince them to perform actions that they are not expected to perform, or to share information that they are not supposed to disclose. For instance, phishing is an effective infection vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; since end users are not necessarily able to make rational decisions on security issues, the designer should make these decisions whenever possible. Whenever the user must decide, the consequences of the decision should be made clear to guide the choice.
  • Define secure defaults; the default value should always correspond to the highest, or at least an acceptable, security level. Security, not user friendliness, should drive the default value.
  • Educate your employees; the best answer to social engineering is enabling employees to recognize an ongoing social engineering attack. This detection is only possible if employees are educated about this kind of attack. Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits.  Therefore, it is imperative that the security personnel be aware of the latest techniques.  Operational security staff should have a significant part of their work time dedicated to continuous training.
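The "secure defaults" rule above can be sketched in a few lines. The settings below are hypothetical, but the point is that every default favors the more secure option, so a careless caller still gets safe behavior:

```python
from dataclasses import dataclass

@dataclass
class ConnectionConfig:
    # Hypothetical settings for illustration; each default is
    # chosen for security, not convenience.
    verify_tls: bool = True                 # certificate checking on by default
    min_tls_version: str = "1.2"            # refuse legacy protocol versions
    timeout_seconds: int = 10               # bound resource usage
    allow_plaintext_fallback: bool = False  # never silently downgrade

cfg = ConnectionConfig()  # a caller who configures nothing is still safe
```

Insecure options remain available, but the user must opt in to them explicitly.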

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, from Springer or Amazon.

Sep 25 2016

Law 6 – Security Is No Stronger Than its Weakest Link

This is the sixth post in a series of ten. The previous post explored Law 5: Si vis pacem, para bellum. The sixth law is one of the least controversial. Security is the result of many elements and principals that interact to build the appropriate defense. Consequently, security can be no stronger than its weakest element. Once more, the Chinese general Sun Tzu explained it perfectly.

So in war, the way is to avoid what is strong and to strike at what is weak.

A smart attacker analyzes the full system and looks for the weakest points, then focuses on them. For instance, in 2012, about 80% of the cyber incidents involving a data breach were opportunistic. Furthermore, they did not require proficient hacking skills. The targets were not properly protected, and attackers went after these easy targets.

Another example of attacking the weakest link is the use of side-channel attacks. Side-channel attacks are devastating, non-intrusive attacks that reveal secret information. The information leaks through an unintentional channel in a given physical implementation of an algorithm. These channels are the result of physical effects of the actual implementation. They may, for instance, be timing characteristics, power consumption, generated audio noise, or electromagnetic radiation.
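A classic illustration of a timing side channel is secret comparison. The sketch below contrasts a naive byte-by-byte check, whose running time leaks how many leading bytes of a guess are correct, with Python's constant-time `hmac.compare_digest`:

```python
import hmac

def naive_check(secret: bytes, guess: bytes) -> bool:
    # Returns as soon as a byte differs: the running time leaks
    # how many leading bytes of the guess are correct.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_check(secret: bytes, guess: bytes) -> bool:
    # compare_digest examines every byte regardless of mismatches,
    # closing the timing channel.
    return hmac.compare_digest(secret, guess)
```

An attacker measuring the naive version can recover a secret byte by byte; the constant-time version denies that foothold.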

As a general rule, defenders have to know their defense mechanisms. When trying to strengthen the defense, the designer must first focus on its weakest elements. For both the defender and the attacker, the difficulty is identifying these weakest elements. They may take many forms: humans (see Law 7), design errors, bad implementations, limitations, and so on. White-box testing is a good way to identify some weak points.

Know the hardware limitations; in this digital world, most of the technical effort goes into developing software. The focus is often on protecting the executed piece of code. Nevertheless, the code executes on hardware. Hardware introduces constraints that are often unknown to contemporary software developers. Ignoring these constraints may open interesting attack surfaces that a seasoned attacker will, of course, exploit. A typical example is the deletion of data in memory. Hardware memories have persistence even when erased or powered off. For instance, some data may remain in DRAM for several minutes after power-off. Memories may also behave unexpectedly when used in extreme conditions. The RowHammer attack is a perfect illustration.

Patch, patch, patch; security ages. New vulnerabilities are disclosed every week. As a result, manufacturers and publishers regularly issue patches. These patches are useless if they are not applied. Unfortunately, too many deployed systems are not properly patched. Smart attackers look first for unpatched targets.

Always protect your keys; keys are probably the most precious security assets of any secure digital system. Their protection should never be the weakest link. Ideally, these protections should be the strongest link, as they defend the ultimate treasure. Keys need protection not only at rest but also in use. A software implementation of cryptographic algorithms has to be carefully crafted, especially when operating in a hostile environment. In some contexts, the hardware implementation must resist side-channel attacks. Secure implementation of cryptography is expert work.
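As a small illustration of protecting keys at rest, a key can be derived on demand from a passphrase with a published, memory-hard KDF instead of being stored in plaintext. The sketch below uses Python's `hashlib.scrypt`; the cost parameters are illustrative, not a production recommendation:

```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # scrypt is a published, memory-hard KDF; the cost parameters
    # below (n, r, p) are illustrative only.
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = secrets.token_bytes(16)  # non-secret, stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)  # never written to disk
```

The raw key exists only in memory while in use; an attacker who steals the storage gets the salt but must still brute-force the passphrase through an expensive KDF.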

Sep 11 2016

Law 5 – Si Vis Pacem, Para Bellum

“Si vis pacem, para bellum” (i.e., “who wants peace, prepares for war”) is a Latin adage adapted from a statement found in Book 3 of the Roman author Publius Flavius Vegetius Renatus’s tract “De Re Militari” (fourth or fifth century). Many centuries earlier, the Chinese general Sun Tzu had already claimed in his famous treatise “The Art of War”:

He will win who, prepared himself, waits to take the enemy unprepared.

Cyber security is a war between two opponents. On one side, security designers and practitioners defend assets. On the other, cyber attackers attempt to steal, impair, or destroy these assets. Most traditional rules of warfare apply to cyber security. Thus, “The Art of War” is a text that every security practitioner should have read.

Be proactive; a static target is easier to defeat than a dynamic one. Security defense should be active rather than reactive where possible. Furthermore, security ages. Thus, the defenders must prepare new defenses and attempt to predict the next attacks. The next generation of defense should be available before the occurrence of any severe attack and must, of course, differ from the previous versions. The new defense mechanisms do not need to be deployed immediately. In most cases, their deployment may be delayed until their impact is optimal. The optimal time may be immediately after the occurrence of an attack, or only once the loss incurred would be higher than the cost of deploying the new version. The optimal time may also be when it hurts the attackers the most. For instance, a new generation of Pay TV smart card may be activated just before a major broadcast event.

Being proactive is also a rule for day-to-day defense. Do not wait until a hack has been detected to check your logs. Do not wait for an exploit to hit your system before learning about the latest attacks and new tools. Do not wait for a hack to exploit unpatched systems; patch the system as soon as possible.

Design for renewability; according to Law 1, any secure system may be compromised one day. The only acceptable method to address this risk is renewable security. Every secure system must be renewable in the case of a successful hack. Without renewable security in its design, a system is doomed. Nevertheless, to ensure secure renewability, the kernel that handles renewability cannot be updated in the field. This kernel must ensure that attackers cannot misuse this renewability mechanism for their own purpose and that attackers cannot prevent the renewal. This kernel must also make sure that the attacker cannot roll back the updated system to the previously vulnerable version. One element of your trust model is probably that this kernel is secure.
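The checks such an update kernel must enforce, authenticity and anti-rollback, can be sketched as follows. The HMAC-based scheme and the key below are purely illustrative; a real kernel would typically verify an asymmetric signature from immutable code:

```python
import hashlib
import hmac

# Illustrative only: in practice this would be a verification key
# embedded in the immutable update kernel, not a shared secret.
KERNEL_KEY = b"device-unique-key"

def verify_update(payload: bytes, version: int, tag: bytes,
                  current_version: int) -> bool:
    """Accept an update only if it is authentic (valid tag over
    version + payload) and newer than what is installed (no rollback)."""
    msg = version.to_bytes(4, "big") + payload
    expected = hmac.new(KERNEL_KEY, msg, hashlib.sha256).digest()
    authentic = hmac.compare_digest(tag, expected)
    return authentic and version > current_version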

Do not rest on your laurels; complacency is not an acceptable mindset for security practitioners. They must constantly be vigilant. The attackers are adapting quickly to new defenses and are creative. Some attackers are brilliant. If the defender did not detect a breach in the system, it does not necessarily mean that this system is secure. It may be that the breach has not yet been detected.

Aug 28 2016

Law 4 – Trust No One

This is the fourth post in a series of ten. The previous post explored Law 3: No Security Through Obscurity. The fourth law is one of my favorites. The most frivolous reason is that I am an X-Files fan (I want to believe), and it was a recurrent tagline. The serious reason is that trust is a key element of every system. Usually, my first test for detecting snake oil is asking the vendor what the trust model of the system is. If the vendor does not have a clear answer, it smells bad. And it becomes worrying if the vendor has no idea what a trust model is.

Trust is the cornerstone of security. Without trust, there is no secure system; it is the foundation of every secure one. But what is trust? I like to use Roger Clarke’s definition.

Trust is confident reliance by one party on the behavior of other parties.

In other words, trust is the belief that the other parties will be reliable and that they are worthy of confidence. Other parties may be people, organizations such as Certification Authorities (CA), systems, software or hardware components.

Know your security hypotheses; it is not possible to build a secure system without knowing the security assumptions. They define the minimal set of hypotheses that are supposed always to be true. This is the trust model. It is mandatory to identify and document the trust model thoroughly. Whenever one of these hypotheses no longer holds, the system may no longer be secure. Any change in the environment or context may invalidate the assumptions. Therefore, hypotheses must be continuously monitored as the system evolves to check whether they are still valid. If they have changed, then the design should accommodate the new security hypotheses.

An example is the current trust model of the Internet when using TLS. The basic assumption is that the CA is trustworthy. Unfortunately, with modern browsers, this assumption is weak. By default, most browsers trust many root CAs. The current Internet has more than 1,500 CAs that browsers trust. A single wrongdoer amongst them is sufficient to weaken HTTPS.
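You can inspect the size of this trusted set yourself. The sketch below loads the same root store that a Python TLS client would trust by default; the exact count varies per system:

```python
import ssl

# create_default_context() loads the system's default trusted root
# certificates, the same set a client connection would rely on.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(f"This system trusts {len(roots)} root certificates by default")
```

Every one of those roots can vouch for any site; the trust model is only as strong as the least trustworthy entry in that list.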

Minimize the attack surface; medieval castle builders knew this consequence of the fourth law. They designed small, thin apertures in the walls that allowed defenders to observe besiegers and fire arrows at them. The smaller the aperture, the harder it was for an attacker to hurt the defender with a lucky arrow strike. Attackers will probe all possible avenues for breaking a system. The more possibilities available to the attacker, the higher the likelihood that the attacker will find an existing vulnerability. It is thus paramount to reduce the attack surface, i.e., the space of possible attacks available to an attacker. For instance, the current migration to the public cloud increases the attack surface compared to the traditional approach based on private data centers. This migration stretches the trust model.

Provide minimal access; a secure system should grant access only to the resources and data that a principal needs to perform its function. Access to any additional resources or data is useless and creates unnecessary risk. Consequently, the role and function of each principal have to be clearly defined and thoroughly documented. For instance, there is no valid reason why the accounting team should have access to the shared repositories of the technical teams. This rule is very similar to the principle of least privilege and is an illustration of minimizing the attack surface.
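A minimal default-deny access table sketches this rule; the roles and resources below are hypothetical:

```python
# Hypothetical role table: each role lists only the resources
# it needs to perform its function, nothing more.
PERMISSIONS = {
    "accounting": {"invoices", "payroll"},
    "engineering": {"source-code", "build-servers"},
}

def can_access(role: str, resource: str) -> bool:
    # Default deny: an unknown role or an unlisted resource gets nothing.
    return resource in PERMISSIONS.get(role, set())
```

The key design choice is default deny: anything not explicitly granted is refused, so forgetting an entry fails safe rather than open.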

Keep it simple; complexity is the enemy of security. The more complex a system is, the higher the likelihood of an error either in its design or its implementation. The error may turn into a security vulnerability that an attacker may exploit. Many vulnerabilities are due to software bugs. Protocols such as TLS become more and more complex, making their implementations more vulnerable.

Be aware of insiders; while the attack usually comes from outside the trusted space, unfortunately, sometimes the attacker may be an insider. The attacker will either accomplish the attack herself or knowingly be an accomplice in it. Sometimes, the insider may even be tricked into facilitating the attack involuntarily, for instance, through social engineering. Therefore, the trust within the trusted space should not be blind. Due to their privileged position, insiders are powerful attackers. Any security analysis has to tackle the insider threat.

Of course, security must trust some elements. There is no security without a root of trust. Never trust blindly. Trust wisely and sparingly.

Aug 14 2016

Law 3 – No Security Through Obscurity

This is the third post in a series of ten posts. The previous post analyzed Law 2: Know the Assets to Protect.

Undoubtedly, this law is one of the most famous laws of security. Unfortunately, its interpretation is too often oversimplified. The law is rooted in the 19th century. In his “La Cryptographie Militaire”, Auguste Kerckhoffs, a Dutch cryptographer, stated: “A [cryptographic system] must not require secrecy, and should not create problems should it fall into the enemy’s hands.” In other words, the robustness of a cryptographic system should rely on the secrecy of its keys rather than on the secrecy of its algorithm. The strong assumption is that even if an attacker knows the algorithm used, she gains only a minimal advantage.

Prefer known published algorithms; a design should only use known, published cryptographic algorithms and protocols and avoid proprietary solutions. Most recent cryptosystems, such as AES or SHA-3, have been selected through a public process with extensive scrutiny from the cryptography community. Protocols such as PKCS and TLS are public and constantly examined, and vulnerabilities are regularly reported.
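In practice, preferring vetted primitives is easy because standard libraries already expose them. The Python sketch below uses SHA3-256, selected through NIST's public competition, and the standard HMAC construction rather than any home-grown scheme:

```python
import hashlib
import hmac

# SHA3-256 went through NIST's public selection process; prefer such
# scrutinized primitives over any proprietary hash.
digest = hashlib.sha3_256(b"message").hexdigest()

# Likewise, authenticate with standard HMAC rather than an ad-hoc
# "hash of key plus message" construction.
tag = hmac.new(b"key", b"message", hashlib.sha3_256).hexdigest()
```

Both calls use only published, widely analyzed algorithms; the secrecy lives entirely in the key, exactly as Kerckhoffs demands.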

Protect the key; keys are the most valuable assets in cryptography. Key management is the most critical and complicated task when designing a secure system. Symmetric keys and private keys have to be protected by all possible means. Hardware vaults, such as HSMs or smart cards, require more effort to penetrate than software-based vaults.

A very common misconception about Kerckhoffs’s law is that this law mandates publishing everything, including the source code of the implementation. The implementation details are dependent on the trust model. If the trust model is similar to the model of the following figure, then open source libraries are optimal.

Unfortunately, in some contexts, the trust model looks more like the following figure. Alice cannot trust Bob. Or, even if Alice can trust Bob, she cannot trust the system on which he runs. Ruth may have compromised the system. Digital Rights Management is an example of such a hostile environment.

In such a configuration, open source libraries are not suitable. Indeed, the attacker knows exactly where and when the secrets are handled and can thus extract them without much effort. Under these conditions, proprietary implementations are preferable. In a hostile environment, the implementation should use obfuscation techniques or white-box cryptography.

Security should never rely exclusively on obscurity. Nevertheless, obscurity may be one component of the defense. However, whenever the secret or ruse obfuscated by the obscurity is disclosed, the system should remain secure. In other words, obscurity should not be the centerpiece of the security of any system.


May 24 2016

Law 2: Know the Assets to Protect

This is the second post of a series of ten posts. The first one analyzed Law 1: attackers will always find their way.

The primary goal of security is to protect. But protect what? “What are the assets to protect?” is the first question that every security analyst should answer before starting any design. Without a proper answer, the resulting security mechanism may be ineffective. Unfortunately, answering it is tough.

The identification of the valuable assets enables defining the ideal and most efficient security systems. The identification should specify the attributes of each asset that need protection (confidentiality, integrity, anti-theft, availability, and so on). Assets come in many forms: humans, physical goods, information goods, resources, and intangible goods. The first four categories are often well treated. Unfortunately, that is not the case for the last one. Intangible goods are the intangible concepts that define the value of a company. They encompass notions such as brand, reputation, trust, fame, reliability, intellectual property, and knowledge. For instance, a tarnished reputation may have serious business impacts.
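This identification step could be captured in a simple inventory that records, for each asset, its category, relative value, and the attributes needing protection. The structure and entries below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    # Hypothetical inventory entry: name, category, relative value,
    # and the security attributes that actually need protecting.
    name: str
    category: str        # human, physical, information, resource, intangible
    value: int           # relative worth, feeds the later threat analysis
    attributes: set = field(default_factory=set)

inventory = [
    Asset("customer database", "information", 9, {"confidentiality", "integrity"}),
    Asset("brand reputation", "intangible", 8, {"integrity"}),
]
```

Even this toy structure forces the analyst to state, per asset, which attributes matter; confidentiality of a database is a different problem from availability of a build server.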

Once the assets are identified, the second step is to value them. Not all assets have the same value. For instance, not all documents of your company should be classified as confidential. If you classify too many documents as confidential, users will become lax, and the very notion of confidentiality will become diluted.

Once all the assets are valued, it is time to perform a threat analysis for the most valuable ones. Knowing what to protect is not sufficient to design a proper defense. For that purpose, it is key to identify the potential attackers. According to General Sun Tzu in his “Art of War”, it is paramount to know your opponents.

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Knowledge of the enemies and their abilities is paramount to any successful security system. This knowledge can be collected by continuously surveying the Darknet and hacking forums, and by attending security conferences (Black Hat, DEF CON, CCC, …). There are many available classifications of attackers. For instance,

  • IBM proposed three categories: clever outsiders, who are often brilliant people; knowledgeable insiders, who have specialized education; and funded organizations, which can recruit teams of complementary world-class experts.
  • The Merdan Group defines an interesting five-scale classification: Simple manipulation, Casual hacking, Sophisticated hacking, University challenge and Criminal enterprise.
  • At CloudSec 2015, the FBI disclosed a motivation-driven gradation: Hacktivism, Insider, Espionage, Terrorism, and Warfare.

The practitioner selects the classification that best fits the problem to analyze.

Once the threat analysis is completed, the design of the countermeasures can start. An important heuristic to keep in mind: “in most cases, the cost of protection should not exceed the potential loss.” Usually, defense is sufficient if the cost of a successful attack is equivalent to or higher than the potential gain for the attacker. Similarly, defense is adequate if its expense does not exceed the possible loss in the case of a successful attack.
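These two heuristics can be written down directly. The function below is a toy encoding of them; the threshold logic is illustrative, not a formal risk model:

```python
def defense_is_adequate(attack_cost: int, attacker_gain: int,
                        protection_cost: int, potential_loss: int) -> bool:
    """Toy encoding of the two heuristics above:
    - attacking should cost at least as much as it gains (deterrence);
    - protection should not cost more than the loss it prevents."""
    deters_attacker = attack_cost >= attacker_gain
    economically_sound = protection_cost <= potential_loss
    return deters_attacker and economically_sound
```

In real analyses these quantities are rough estimates, but even rough numbers expose defenses that are either too weak to deter or too expensive to justify.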

Remember: know what you must ultimately protect, and against whom.

Mar 16 2016

Alea Jacta Est (3): Ten Laws of Security

Once more, the die has been cast. Yesterday, I sent the final version of the manuscript of my second book to Springer.

The title is Ten Laws of Security. For 15 years, together with my previous security team, I have defined and refined a set of ten laws for security. These laws are simple but powerful. Over the years, when meeting other security experts, solution providers, potential customers, and students, I discovered that these laws were an excellent communication tool. They allowed us to quickly benchmark whether both parties shared the same vision of security. Many meetings successfully started with me introducing these laws, which helped build reciprocal respect and trust between teams. Over time, I found that these laws were also an excellent educational tool. Each law can introduce different technologies and principles of security. They constitute an entertaining way to present security to new students or to introduce security to non-experts. Furthermore, these laws are mandatory heuristics that should drive any design of secure systems. There is no valid, rational reason for a system to violate one of these rules. The laws can be used as a checklist for a first-level sanity check.

Each chapter of this book addresses one law. The first part of each chapter starts with examples. These anecdotes either illustrate an advantageous application of the law or outline the consequences of not complying with it. The second part of the chapter explores different security principles addressed by the law. Each chapter introduces at least one security technology or methodology that illustrates the law or that is paramount to it. From each law, the last section deduces some associated rules that are useful when designing or assessing a security system. As in my previous book, inserts entitled “The Devil is in the details” illustrate the gap between theory and real-world security.

The book should be available this summer.
