Law 9 – Quis custodiet ipsos custodes?

This post is the ninth post in a series of ten posts. The previous post explored the eighth law: If You Watch the Internet, the Internet Is Watching You. This Latin phrase, from the Roman poet Juvenal, can be translated as “Who will guard the guards themselves?” Every element of a system should be monitored, including the monitoring functions themselves. As parts of the security model often rely on the detection of anomalies, it is essential that this detection be effective and trustworthy.

Any security process should have one last phase that monitors the effectiveness of the implemented practices. This phase creates the feedback loop that corrects any deficiency or inefficiency in the security process. The quality and probity of this last phase strongly influence the overall robustness of the security. For instance, the COBIT framework has one control point dedicated to this task: ME2 – Monitor and Evaluate Internal Control.

The beauty of Bitcoin’s model is that every user is a guard who watches over the other users. The Bitcoin system assumes that a majority of users operate faithfully. Proof of Work is the consensus mechanism that, in theory, enforces this assumption: mining is costly, and amassing the majority of the hashing power should be out of reach of any single actor. This assumption becomes questionable for new cryptocurrencies that do not yet have a significant number of users, and with the advent of mining pools.
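To make the mechanism concrete, here is a minimal, hypothetical Proof of Work sketch in Python (the difficulty value and block format are illustrative, not Bitcoin's actual parameters): finding a valid nonce requires brute-force hashing, while checking it requires a single hash, which is what lets every user cheaply guard the others.

```python
import hashlib

DIFFICULTY = 20  # required number of leading zero bits; illustrative value only

def mine(block_data: bytes) -> int:
    """Brute-force a nonce until the block hash meets the difficulty target."""
    target = 2 ** (256 - DIFFICULTY)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # costly to find...
        nonce += 1

def verify(block_data: bytes, nonce: int) -> bool:
    """...but cheap for every other participant to check."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - DIFFICULTY)

nonce = mine(b"block payload")
assert verify(b"block payload", nonce)
```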

Separate the roles; divide and conquer. The scope of controlling and managing roles should be kept as small as possible. Guards should have a limited scope of surveillance and restricted authority. This reduces the impact of a malicious insider or of an attacker who hijacked an administrator or controller account. Where possible, the scopes of roles should partly overlap or be redundant across several individuals. This increases the chances of detecting an error or mischief by an insider, as a successful attack would require collusion.

For instance, reduce the scope of system administrators, as they hold the keys to the kingdom. Nobody should have all the keys of the kingdom. After the Snowden incident, the NSA drastically reduced the number of its system administrators.

Read the logs; logfiles are an essential element for monitoring and auditing the effectiveness of security. They are useful for detecting and understanding security incidents. Nevertheless, they reach their full value only when they are regularly analyzed to detect anomalies. Ideally, they should be analyzed proactively. Relying only on a posteriori log analysis is a weak security stance. Logfiles are not to be used solely for forensic purposes.
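As an illustration, here is a minimal sketch of proactive log analysis in Python; the log pattern, file path, and threshold are assumptions to adapt to the actual environment.

```python
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10  # failed attempts per source address; tune to your environment

def scan(logfile: str) -> list[str]:
    """Return source addresses with suspiciously many failed logins."""
    failures = Counter()
    with open(logfile) as fh:
        for line in fh:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return [ip for ip, count in failures.items() if count >= THRESHOLD]

for ip in scan("/var/log/auth.log"):
    print(f"possible brute-force attempt from {ip}")
```

Running such a scan on a schedule, and alerting on its output, turns the logfile from a forensic archive into a detection tool.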

If you find this post interesting, you may also be interested in my second book “Ten Laws for Security.” Chapter 10 explores this law in detail. The book is available, for instance, at Springer or Amazon.

Law 8 – If You Watch the Internet, the Internet Is Watching You

This post is the eighth post in a series of ten posts. The previous post explored the seventh law: you are the weakest link. With our increasingly connected world, this law becomes more and more important. Most connections are bidirectional; the consequence is that information flows both ways. If you receive information from the Internet, the Internet may collect information from you. If your device reaches out to the Internet, the Internet may reach back into your device. Some of these ingress connections may not be solicited.

Controlling what is exchanged and monitoring who is using the connections is the role of network security. Fortunately, network security is a rather mature discipline. Thus, the first rule is the following.

Do not connect directly to the Internet; access to the Internet should be carefully controlled. It should pass at least through a firewall and anti-malware filtering. When possible, implement a Demilitarized Zone (DMZ) to create an isolation buffer between the Internet and your network that may deter attackers from intruding into it. Not everybody needs, or is able, to install a DMZ, especially at home. However, everybody should install a firewall between their network and the Internet. In a consumer environment, the firewall should by default block every ingress connection.
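For illustration only, here is a conceptual default-deny sketch in Python; in practice this filtering belongs in the firewall or packet filter, not in application code, and the allowlisted address is hypothetical.

```python
import socket

ALLOWED_SOURCES = {"192.0.2.10"}  # hypothetical allowlist; everything else is denied

def serve(port: int = 8080) -> None:
    """Accept connections only from explicitly allowed addresses; deny by default."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", port))
    listener.listen()
    while True:
        conn, (source_ip, _) = listener.accept()
        if source_ip not in ALLOWED_SOURCES:
            conn.close()          # default stance: refuse the connection
            continue
        conn.sendall(b"hello\n")  # only allowlisted peers receive service
        conn.close()
```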
Many specialists claim (rightly) that the notion of perimeter defense is outdated. This does not mean that local networks should not be protected against intrusions or leaks. These network security mechanisms remain necessary but are not sufficient.

Thou wilt be traced; the digital world increasingly keeps records of all the activities of users. Many Web enterprises build their business model on monetizing the results of this data collection. This data collection may be known and announced, but sometimes it is also hidden. For instance, spying techniques such as canvas fingerprinting stealthily collect information when people visit web pages. A recent study disclosed that more than 5% of the sites used canvas fingerprinting. This constant monitoring is a threat to privacy and also a potential mine of information for attackers. Some tools, such as the Tor Browser, help preserve anonymity on the Internet.
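Canvas fingerprinting itself runs as JavaScript inside the browser; the Python sketch below only illustrates the general fingerprinting idea with hypothetical attributes: several individually innocuous characteristics are combined into a hash that is stable enough to track a device across visits.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from individually innocuous attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080",
    "timezone": "Europe/Paris",
    "fonts": ["Arial", "DejaVu Sans", "Liberation Mono"],
}
print(fingerprint(visitor))  # same attributes give the same identifier on every visit
```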

If you find this post interesting, you may also be interested in my second book “Ten Laws for Security.” Chapter 9 explores this law in detail. The book is available, for instance, at Springer or Amazon.

Law 7 – You Are the Weakest Link

This post is the seventh post in a series of ten posts. The previous post explored the sixth law: security is no stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent security when it obstructs the fulfillment of their task, and they will not apply security policies and procedures. They also do not believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Employees put their company at risk through bad decisions.

Humans are the weakest link because they have intrinsic limitations. Human memory is often feeble; thus we end up with weak passwords, or with complex passwords written on a post-it note. Humans do not handle complexity well. Unfortunately, security is too complex.
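A back-of-the-envelope calculation shows the gap between what humans can memorize and what resists guessing; assuming uniformly random characters (human-chosen passwords are weaker still), entropy is roughly the length times log2 of the alphabet size.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Upper bound on password entropy for uniformly random characters."""
    return length * math.log2(alphabet_size)

print(entropy_bits(8, 26))    # ~37.6 bits: 8 lowercase letters
print(entropy_bits(12, 95))   # ~78.8 bits: 12 random printable ASCII characters
```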

Humans are the weakest link because they can be easily deceived. Social engineers use social interaction to influence people and convince them to perform actions that they are not expected to do, or to share information that they are not supposed to disclose. For instance, phishing is an effective infection vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; as end users are not necessarily able to make rational decisions on security issues, the designer should decide for them whenever possible. Whenever the user has to decide, the consequences of the decision should be made clear to guide the choice.
  • Define secure defaults; the default value should always correspond to the highest or, at least, an acceptable security level. User friendliness should not drive the default value; security should (see the sketch after this list).
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is only possible by educating the employees about this kind of attack.  Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits.  Therefore, it is imperative that the security personnel be aware of the latest techniques.  Operational security staff should have a significant part of their work time dedicated to continuous training.
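As a sketch of the second point above, here is a hypothetical Python configuration object whose defaults favor security: a caller who forgets a setting still gets the safe behavior and must opt out of security explicitly.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceConfig:
    """Hypothetical configuration object whose defaults favor security."""
    require_tls: bool = True            # encrypted transport unless explicitly disabled
    verify_certificates: bool = True    # never skip verification by omission
    session_timeout_minutes: int = 15   # short sessions by default
    allowed_origins: list[str] = field(default_factory=list)  # deny-all until populated

config = ServiceConfig()                # a forgotten setting stays on the safe side
assert config.require_tls and config.verify_certificates
```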

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, at Springer or Amazon.

Law 6 – Security Is No Stronger Than its Weakest Link

This is the sixth post in a series of ten posts. The previous post explored Law 5: Si vis pacem, para bellum. The sixth law is one of the least controversial ones. Security is the result of many elements and principals that interact to build the appropriate defense. As a consequence, security cannot be stronger than its weakest element. Once more, the Chinese general Sun Tzu explained it perfectly.

So in war, the way is to avoid what is strong and to strike at what is weak.

A smart attacker analyzes the full system, looks for the weakest points, and focuses on them. For instance, in 2012, about 80% of the cyber incidents involving a data breach were opportunistic. Furthermore, they did not require proficient hacking skills. The targets were not properly protected, and attackers went after these easy targets.

Another example of attacking the weakest link is the use of side-channel attacks. Side-channel attacks are devastating, non-intrusive attacks that reveal secret information. The information leaks through an unintentional channel in a given physical implementation of an algorithm. These channels are the result of physical effects of the actual implementation. They may, for instance, be timing characteristics, power consumption, generated audio noise, or electromagnetic radiation.
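A classic software example of a timing side channel is secret comparison: a naive byte-by-byte comparison returns as soon as a byte differs, so response times reveal how many leading bytes of a guess are correct. The sketch below contrasts it with the constant-time comparison offered by Python's standard library.

```python
import hmac

def naive_equal(secret: bytes, guess: bytes) -> bool:
    """Leaky: returns at the first mismatching byte, so timing reveals progress."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def safe_equal(secret: bytes, guess: bytes) -> bool:
    """Constant-time comparison; timing does not depend on where the bytes differ."""
    return hmac.compare_digest(secret, guess)
```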

As a general rule, defenders have to know their defense mechanisms. When trying to strengthen the defense, the designer must first focus on its weakest elements. Both for the defender and the attacker, the difficulty is to identify these weakest elements. They may take many forms: humans (see Law 7), design errors, bad implementations, limitations… White-box testing is a good way to identify some weak points.

Know the hardware limitations; in this digital world, most of the technical effort goes into developing software. The focus is often on protecting the executed piece of code. Nevertheless, the code executes on hardware. Hardware introduces constraints that are often unknown to contemporary software developers. Ignoring these constraints may create interesting attack surfaces that a seasoned attacker will, of course, use. A typical example is the deletion of data in memory. Hardware memories have persistence even when erased or powered off. For instance, some data may remain in DRAM several minutes after the power is cut. Memories may also behave unexpectedly when used in extreme conditions. The RowHammer attack is a perfect illustration.

Patch, patch, patch; security is aging. New vulnerabilities are disclosed every week. As a result, manufacturers and publishers regularly issue patches. These patches are useless if they are not applied. Unfortunately, too many deployed systems are not properly patched. Smart attackers look first for unpatched targets.

Always protect your keys; keys are probably the most precious security assets of any secure digital system. Their protection should never be the weakest link. Ideally, it should be the strongest link, as keys defend the ultimate treasure. Keys need protection not only at rest but also while in use. A software implementation of cryptographic algorithms has to be carefully crafted, especially when operating in a hostile environment. In some contexts, the hardware implementation must resist side-channel attacks. Secure implementation of cryptography is expert work.

Law 5 – Si Vis Pacem, Para Bellum

“Si vis pacem, para bellum” (i.e., “if you want peace, prepare for war”) is a Latin adage adapted from a statement found in Book 3 of the Roman author Publius Flavius Vegetius Renatus’s tract “De Re Militari” (fourth or fifth century). Many centuries earlier, the Chinese general Sun Tzu had already claimed in his famous treatise “The Art of War”:

He will win who, prepared himself, waits to take the enemy unprepared.

Cyber security is a war between two opponents. On one side, the security designers and practitioners defend assets. On the other, cyber attackers attempt to steal, impair, or destroy these assets. Most of the traditional rules of warfare apply to cyber security. Thus, “The Art of War” is a treatise that any security practitioner should have read.

Be proactive; a static target is easier to defeat than a dynamic one. Security defense should be active rather than reactive where possible. Furthermore, security is aging. Thus, the defenders must prepare new defenses and attempt to predict the next attacks. The next generation of defense should be available before the occurrence of any severe attack and must, of course, differ from the previous versions. The new defense mechanisms do not need to be deployed immediately. In most cases, their deployment may be delayed until their impact is optimal. The optimal time may be immediately after the occurrence of an attack, or only once the incurred loss would exceed the cost of deploying the new version. The optimal time may also be when it hurts the attackers the most. For instance, a new generation of Pay TV smart card may be activated just before a major broadcast event.

Being proactive is also a rule for day-to-day defense. Do not wait for a hack to be detected before checking your logs. Do not wait for an exploit to hit your system before learning about the latest attacks and new tools. Do not wait for a hack to exploit unpatched systems; patch them as soon as possible.

Design for renewability; according to Law 1, any secure system may be compromised one day. The only acceptable method to address this risk is renewable security. Every secure system must be renewable in the case of a successful hack. Without renewable security in its design, a system is doomed. Nevertheless, to ensure secure renewability, the kernel that handles renewability cannot be updated in the field. This kernel must ensure that attackers cannot misuse this renewability mechanism for their own purpose and that attackers cannot prevent the renewal. This kernel must also make sure that the attacker cannot roll back the updated system to the previously vulnerable version. One element of your trust model is probably that this kernel is secure.
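The following minimal sketch shows the two checks such a renewal kernel cannot skip; for simplicity, an HMAC with a pre-provisioned device key stands in for a real signature scheme, and the key and version counter shown here are hypothetical.

```python
import hmac
import hashlib

TRUSTED_KEY = b"pre-provisioned key baked into the renewal kernel"  # hypothetical
installed_version = 7  # monotonic counter stored in protected non-volatile memory

def accept_update(payload: bytes, version: int, tag: bytes) -> bool:
    """Accept an update only if it is authentic and strictly newer than what runs."""
    expected = hmac.new(TRUSTED_KEY, version.to_bytes(4, "big") + payload,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False            # rogue or corrupted update
    if version <= installed_version:
        return False            # rollback attempt to an older, vulnerable version
    return True
```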

Do not rest on your laurels; complacency is not an acceptable mindset for security practitioners. They must constantly be vigilant. The attackers are adapting quickly to new defenses and are creative. Some attackers are brilliant. If the defender did not detect a breach in the system, it does not necessarily mean that this system is secure. It may be that the breach has not yet been detected.

Law 4 – Trust No One

This is the fourth post in a series of ten posts. The previous post explored Law 3: No Security Through Obscurity. The fourth law is one of my preferred ones. The most futile reason is that I am an X-Files fan (“I want to believe”) and “Trust no one” was a recurrent tagline. The serious reason is that trust is a key element of every secure system. Usually, my first test for detecting snake oil is asking the vendor what the trust model of the system is. If the vendor does not have a clear answer, it smells bad. And it becomes worrying if the vendor has no idea what a trust model is.

Trust is the cornerstone of security. Without trust, there is no secure system. It is the foundation of all secure systems. But what is trust? I like to use Roger Clarke’s definition.

Trust is confident reliance by one party on the behavior of other parties.

In other words, trust is the belief that the other parties will be reliable and that they are worthy of confidence. Other parties may be people, organizations such as Certification Authorities (CA), systems, software or hardware components.

Know your security hypotheses; it is not possible to build a secure system without knowing the security assumptions. They define the minimal set of hypotheses that are supposed to be always true: this is the trust model. It is mandatory to identify and document the trust model thoroughly. Whenever one of these hypotheses is no longer true, the system may no longer be secure. Any change in the environment or context may invalidate the assumptions. Therefore, hypotheses must be continuously monitored as the system evolves to check whether they are still valid. If they have changed, then the design should accommodate the new security hypotheses.

An example is the current trust model of the Internet when using TLS. The basic assumption is that the CA is trustworthy. Unfortunately, with modern browsers, this assumption is weak. By default, browsers trust many root CAs. The current Internet has more than 1,500 CAs that browsers trust. Only one wrongdoer amongst them is sufficient to weaken HTTPS.
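To get a feel for the size of this trust base, the short snippet below counts the root certificates in the local system trust store that Python would accept by default; the figure varies per machine, and browsers ship their own, typically larger, stores on top of it.

```python
import ssl

context = ssl.create_default_context()   # loads the platform's default trust store
roots = context.get_ca_certs()
print(f"{len(roots)} root certificates are trusted by default on this machine")
```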

Minimize the attack surface; medieval castle builders knew this consequence of the fourth law. They designed small, thin apertures in the walls that allowed defenders to observe besiegers and fire arrows at them. The smaller the aperture, the harder it was for an attacker to hurt the defender with a lucky arrow. Attackers will probe all the possible avenues for breaking a system. The more possibilities the attacker can try, the higher the likelihood that the attacker will find an existing vulnerability. It is thus paramount to reduce the attack surface, i.e., the space of possible attacks available to an attacker. For instance, the current migration to the public cloud increases the attack surface compared to the traditional approach based on private data centers. This migration stretches the trust model.

Provide minimal access; a secure system should grant access only to the resources and data that the principal needs to perform his or her function. Access to any additional, unnecessary resources or data is useless and creates needless potential risk. The consequence is that the role and function of each principal have to be clearly defined and thoroughly documented. For instance, there is no valid reason why the accounting team should have access to the shared repositories of the technical teams. This rule is very similar to the principle of least privilege and is an illustration of minimizing the attack surface.
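As a toy illustration (roles and resources are hypothetical), minimal access can be expressed as an explicit role-to-resource mapping where anything not listed is denied by default.

```python
# Explicit, documented mapping of roles to the only resources they need.
ROLE_PERMISSIONS = {
    "accounting": {"invoices", "payroll"},
    "engineering": {"source_repos", "build_servers"},
    "security": {"audit_logs"},
}

def may_access(role: str, resource: str) -> bool:
    """Grant access only if explicitly listed; everything else is denied."""
    return resource in ROLE_PERMISSIONS.get(role, set())

assert not may_access("accounting", "source_repos")  # no valid business need
assert may_access("engineering", "source_repos")
```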

Keep it simple; complexity is the enemy of security. The more complex a system is, the higher the likelihood of an error in its design or its implementation. Such an error may turn into a security vulnerability that an attacker may exploit. Many vulnerabilities are due to software bugs. Protocols such as TLS become more and more complex, making their implementations more vulnerable.

Be aware of insiders; while the attack usually comes from outside the trusted space, the attacker may unfortunately sometimes be an insider, who either accomplishes the attack alone or knowingly acts as an accomplice. Sometimes, the insider may even be tricked into facilitating the attack involuntarily, for instance, through social engineering. Therefore, the trust within the trusted space should not be blind. Due to their privileged position, insiders are powerful attackers. Any security analysis has to tackle the insider threat.

Of course, security must trust some elements. There is no security without a root of trust. Never trust blindly. Trust wisely and sparingly.

Law 3 – No Security Through Obscurity

This is the third post in a series of ten posts. The previous post analyzed Law 2: Know the Assets to Protect.

Undoubtedly, this law is one of the most famous laws of security. Unfortunately, its interpretation is too often oversimplified. This law is rooted in the 19th century. In his “La Cryptographie Militaire,” Auguste Kerckhoffs, a Dutch cryptographer, stated: “A [cryptographic system] must not require secrecy, and should not create problems should it fall into the enemy’s hands.” In other words, the robustness of a cryptographic system should rely on the secrecy of its keys rather than on the secrecy of its algorithm. As such, a strong assumption is that if an attacker knows the algorithm used, she should gain only a minimal advantage.

Prefer known published algorithms; a design should only use known, published cryptographic algorithms and protocols and avoid proprietary solutions. Most recent cryptosystems, such as AES or SHA-3, have been selected through a public process with the extensive scrutiny of the cryptographic community. Protocols such as PKCS and TLS are public and constantly examined, and vulnerabilities are regularly reported.
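In practice, this means calling vetted implementations of published algorithms rather than hand-rolling anything proprietary; for instance, Python's standard library directly exposes the publicly scrutinized SHA-3 and HMAC constructions.

```python
import hashlib
import hmac
import secrets

message = b"release-1.4.2.tar.gz contents"

digest = hashlib.sha3_256(message).hexdigest()              # published, publicly scrutinized hash
key = secrets.token_bytes(32)                               # fresh random key, not a home-made scheme
tag = hmac.new(key, message, hashlib.sha3_256).hexdigest()  # standard keyed construction
print(digest, tag)
```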

Protect the key; keys are the most valuable assets in cryptography. Key management is the most critical and complicated task when designing a secure system. Symmetric keys and private keys have to be protected by all possible means. Hardware vaults such as HSMs or smart cards require more effort to penetrate than software-based vaults.
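Nothing in pure software can match a hardware vault, but a common hygiene pattern is to keep the data-encryption key only in wrapped form. The sketch below uses the third-party cryptography package and a hypothetical environment variable standing in for the vault that holds the key-encryption key.

```python
import os
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# The key-encryption key would live in an HSM or smart card; an environment
# variable merely stands in for that vault in this sketch.
kek = Fernet(os.environ["KEY_ENCRYPTION_KEY"])

# Generate a fresh data-encryption key, use it, and only ever store it wrapped.
dek = Fernet.generate_key()
wrapped_dek = kek.encrypt(dek)          # the only form that may be written to disk
ciphertext = Fernet(dek).encrypt(b"precious asset")

# Later: unwrap the data key with the vault-protected key, then decrypt.
recovered_dek = kek.decrypt(wrapped_dek)
plaintext = Fernet(recovered_dek).decrypt(ciphertext)
assert plaintext == b"precious asset"
```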

A very common misconception about Kerckhoffs’s law is that it mandates publishing everything, including the source code of the implementation. The implementation details depend on the trust model. If the trust model is similar to the model of the following figure, then open source libraries are optimal.

Unfortunately, in some contexts, the trust model looks more like the following figure. Alice cannot trust Bob. Or, even if Alice can trust Bob, she cannot trust the system on which he runs: Ruth may have compromised the system. Digital Rights Management is an example of such a hostile environment.

In such a configuration, open source libraries are not suitable. Indeed, the attacker knows exactly where and when the secrets are handled and thus can extract them without much effort. Under these conditions, proprietary implementations are preferable. In a hostile environment, the implementation should use obfuscation techniques or white-box cryptography.

Security should never rely exclusively on obscurity. Nevertheless, obscurity may be one component of the defense. However, whenever the secret or ruse obfuscated by the obscurity is disclosed, the system should remain secure. In other words, obscurity should not be the centerpiece of the security of any system.