Law 2: Know the Assets to Protect

This is the second post of a series of ten posts. The first one analyzed Law 1: attackers will always find their way.

The primary goal of security is to protect. But protect what? “What are the assets to protect?” is the first question every security analyst should answer before starting any design. Without a proper answer, the resulting security mechanism may be ineffective. Unfortunately, answering it is hard.

Identifying the valuable assets enables the design of the ideal and most efficient security systems. The identification should specify the attributes of the asset that need protection (confidentiality, integrity, anti-theft, availability, and so on). Assets come in many forms: humans, physical goods, information goods, resources, and intangible goods. The first four categories are usually well handled. Unfortunately, this is not the case for the last one. Intangible goods are the intangible concepts that define the value of a company. They encompass notions such as brand, reputation, trust, fame, reliability, intellectual property, and knowledge. For instance, a tarnished reputation may have serious business impact.

Once the assets are identified, the second step is to value them. Not all assets have the same value. For instance, not every document in your company should be classified as confidential. If you classify too many documents as confidential, users will become lax, and the very notion of confidentiality will be diluted.
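
To make these two steps concrete, here is a minimal, purely illustrative sketch of an asset register that records each asset’s category, the attributes to protect, and a rough value; every name and figure in it is hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative asset register; names, categories, and figures are hypothetical.
@dataclass
class Asset:
    name: str
    category: str                 # human, physical, information, resource, or intangible
    protected_attributes: set = field(default_factory=set)  # e.g. {"confidentiality"}
    value: int = 0                # estimated loss if compromised, in arbitrary units

inventory = [
    Asset("customer database", "information", {"confidentiality", "integrity"}, 500),
    Asset("brand reputation", "intangible", {"integrity"}, 900),
    Asset("office printer", "physical", {"availability"}, 5),
]

# The threat analysis of the next step focuses on the most valuable assets first.
for asset in sorted(inventory, key=lambda a: a.value, reverse=True):
    print(asset.value, asset.name, sorted(asset.protected_attributes))
```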

Once all the assets are identified and valued, it is time to perform a threat analysis for the most valuable ones. Knowing what to protect is not sufficient to design a proper defense; it is also key to identify the potential attackers. According to General Sun Tzu in his “Art of War”, it is paramount to know your opponents.

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Knowledge of the enemies and their abilities is paramount to any successful security system. This knowledge can be collected by continuously surveying the darknet and hacking forums, and by attending security conferences (Black Hat, Defcon, CCC, …). There are many available classifications of attackers. For instance,

  • IBM proposed three categories: clever outsiders, who are often brilliant people; knowledgeable insiders, who have specialized education; and funded organizations, which can recruit teams of complementary world-class experts.
  • The Merdan Group defines an interesting five-level classification: Simple manipulation, Casual hacking, Sophisticated hacking, University challenge, and Criminal enterprise.
  • At CloudSec 2015, the FBI disclosed a motivation-driven gradation: Hacktivism, Insider, Espionage, Terrorism, and Warfare.

The practitioner selects the classification that best fits the problem to analyze.

Once the threat analysis is completed, the design of the countermeasures starts. An important heuristic to keep in mind: “in most cases, the cost of protection should not exceed the potential loss.” Usually, defense is sufficient if the cost of a successful attack is equivalent to or higher than the potential gain for the attacker. Similarly, defense is adequate if its expense does not exceed the possible loss in the case of a successful attack.
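
As a rough, purely illustrative encoding of these two rules of thumb (every figure below is hypothetical):

```python
# Illustrative sketch of the cost heuristic; all figures are hypothetical.
def defense_is_reasonable(defense_cost, potential_loss, attack_cost, attacker_gain):
    """True when both rules of thumb hold:
    - a successful attack costs the attacker at least as much as it would earn,
    - the defense costs no more than the loss it prevents."""
    deters_attacker = attack_cost >= attacker_gain
    worth_the_expense = defense_cost <= potential_loss
    return deters_attacker and worth_the_expense

# Example: a 50k control protecting an asset whose compromise would cost 400k,
# raising the cost of a successful attack to 120k for an expected gain of 100k.
print(defense_is_reasonable(50_000, 400_000, 120_000, 100_000))  # True
```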

Remember: know what you must ultimately protect, and against whom.

Law 1 – Attackers Will Always Find Their Way

This is the first post of a series of ten posts. The order of the ten laws is not meaningful, except for this first one, for three reasons:

  • It is the most important law, as it has never failed. It should be engraved deeply in the mind of every security practitioner.
  • It is my favorite law. In 1996, when I founded the Thomson Security Laboratories, this law allowed us to enter the Hollywood arena. We were the first to claim it systematically in front of the MPAA members. At that time, it was not obvious. In 1999, DVD Jon with DeCSS illustrated its pertinence. Studios started to listen to us. A side effect of the first law is that the world will always need good security practitioners. This is a reassuring effect.
  • If somebody claims his or her system is unbreakable, then I already know that the system is snake oil.

No secure system is infallible. Any secure system is doomed to fail. Attackers will always find a way to defeat it. Even in ancient mythologies, this was true: invulnerable heroes such as the Greek Achilles or the Nordic Siegfried had a vulnerable spot. Throughout history, this law has held. Invincible Roman legions were defeated. The unsinkable RMS Titanic sank. Bletchley Park decrypted the German Enigma. Mobile devices are jailbroken.

The only cryptographic system that has been demonstrated to be unbreakable in theory is Shannon’s One Time Pad. Unfortunately, it is not practicable. The symmetric key must be truly random and of the same size as the cleartext. Then you have the problem of distributing the symmetric key securely, e.g., by a secure sneakernet. Not very useful for everyday use.
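
For the curious, here is a minimal sketch of the scheme; note that Python’s secrets module is only a stand-in for the true randomness the proof requires.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # The pad must be as long as the message, truly random, and never reused.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # CSPRNG stand-in for a true random pad
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message    # XOR with the same pad decrypts
```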

There is a strong asymmetry between security defenders and attackers. The attacker needs to succeed only once, whereas the defender has to succeed every time. The attacker benefits from all the security technologies and tools that the defender may use. The attacker may put a lot of effort, resources, and time into the exploit, as, for instance, with high-profile Advanced Persistent Threats (APTs). Nature favors the attacker. The second law of thermodynamics states that entropy tends not to decrease. It highlights that it is easier to break a system than to build it. Creating increases order and thus reduces entropy, whereas breaking increases chaos and thus increases entropy. This is the sad, cruel reality of security.

Security designers must never deny the first law, but rather put this heuristic at the heart of their design.

The designer must expect the attackers to push the limits.
Any design operates within a set of limits defined by its initial requirements. The system should work correctly within these boundaries and should be tested within these limits. Unfortunately, an attacker may attempt to operate outside these boundaries to trigger unexpected behavior. The security designer should ensure either that these limits are out of reach or, at least, that the system detects the violation of these boundaries and reacts accordingly. Typical examples are buffer overflows and SQL injections.
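
A minimal sketch of the SQL injection case, using Python’s built-in sqlite3 module and a made-up users table, shows how input that crosses the intended boundary rewrites the query, and how a parameterized query keeps the input in the data domain:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # crafted to escape the intended boundary

# Vulnerable: the attacker's quotes become part of the SQL statement itself.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # returns every row

# Safer: a parameterized query keeps the input strictly in the data domain.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```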

Systems will have vulnerabilities.
Publishing vulnerabilities is one of the best methods to reach a safer cyber world. Not only will the solution provider close the holes, but the publication of the vulnerability will also educate designers. Obscurity is dangerous for security (we will address it with Law 3). Nevertheless, implementers must have a reasonable amount of time to fix the issue before the public disclosure of the vulnerability. This is called responsible vulnerability disclosure.

As any system will be broken, the designed system must be ready to survive by updating its defense mechanisms. Without renewability, the system will be definitively dead. Renewability is a mandatory security requirement. A side effect is that the hacking scene must be monitored to learn about breaches and vulnerabilities as early as possible.

As any defense will fail, a secure system should implement multiple defenses. Medieval builders knew this: Middle Age castles had several bulwarks protecting the keep, each one higher than the previous one. A secure system should likewise construct successive obstacles that the attacker has to cross. Diversity in protection makes the exploit harder to perform. A little rant: one of the current buzz messages of some vendors is “forget about firewalls and antivirus, use new method X.” Perimeter defense is of course no longer sufficient against modern threats. Nevertheless, the old-fashioned tools are still necessary for defense in depth. If you got rid of firewalls, your network would become the weakest point of your system, and attackers would simply bypass new method X.

As any system will be broken one day, data may be corrupted or lost. Regular, frequent, air-gapped backup of all non-reconstructible data is the ultimate defense. Backup is today the only effective answer to ransomware (unless the data is needed immediately and any delay is critical, as for instance in hospitals). The air gap is important to protect against a new generation of ransomware that encrypts remote or cloud-based servers.
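
As a small illustration of why the offline copy matters, the sketch below (the paths are hypothetical) compares hashes of the live data against the last air-gapped copy before the media is rotated; unexpected differences can reveal silent corruption or encryption.

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

live = snapshot(Path("/data"))            # hypothetical live data
offline = snapshot(Path("/mnt/backup"))   # hypothetical mounted offline copy
changed = [name for name in offline if live.get(name) != offline[name]]
print(f"{len(changed)} files differ from the offline copy")
```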

In conclusion, never ask the question “if the system were to be broken, …” but rather “when the system WILL be broken, …”. The work of the security practitioner is to limit the risk of a breach, to detect its occurrence, and to mitigate its impact. The following laws will help in this difficult task.

Alea Jacta Est (3): Ten Laws of Security

Once more, the die has been cast. Yesterday, I sent the final version of the manuscript of my second book to Springer.

The title is Ten Laws of Security. For 15 years, together with my previous security team, I have defined and refined a set of ten laws for security. These laws are simple but powerful. Over the years, when meeting other security experts, solution providers, potential customers, and students, I discovered that these laws were an excellent communication tool. These rules allowed us to benchmark quickly whether both parties shared the same vision of security. Many meetings started successfully with me introducing these laws, which helped build reciprocal respect and trust between teams. Over time, I found that these laws were also an excellent educational tool. Each law can introduce different technologies and principles of security. They constitute an entertaining way to present security to new students or to introduce security to non-experts. Furthermore, these laws are mandatory heuristics that should drive any design of secure systems. There is no valid, rational reason for a system to violate one of these rules. The laws can be used as a checklist for a first-level sanity check.

Each chapter of this book addresses one law. The first part of the chapter always starts with examples. These anecdotes either illustrate an advantageous application of the law or outline the consequences of not complying with it. The second part of the chapter explores different security principles addressed by the law. Each chapter introduces, at least, one security technology or methodology that illustrates the law, or that is paramount to the law. From each law, the last section deduces some associated rules that are useful when designing or assessing a security system. As in my previous book, inserts, entitled “The Devil is in the details,” illustrate the gap between theory and real-world security.

The book should be available this summer.

Alea Jacta Est (2)

Four years ago, I sent the manuscript of my first book to Springer. This weekend, it was the turn of my second book, “Ten Laws of Security.” It covers the ten laws. Now, Springer will start the copy editing and, once I have approved it, the book will go to print. I hope it will be available in the first half of 2016.

I will keep you informed of the progress.

Lenovo, Superfish, Komodia: a Man In The Middle story

Lenovo made the headlines this week with the alleged malware Superfish. Lenovo delivered some PCs loaded with the “bloatware” Superfish. Superfish provides a solution that performs visual search. Seemingly, Superfish designed software that places contextual ads into the web browsing experience. To perform this hijacking, Superfish uses a software stack from Komodia: SSL Digestor. According to Komodia’s site:

Our advanced SSL hijacker SDK is a brand new technology that allows you to access data that was encrypted using SSL and perform on the fly SSL decryption. The hijacker uses Komodia’s Redirector platform to allow you easy access to the data and the ability to modify, redirect, block, and record the data without triggering the target browser’s certification warning.

How does Komodia perform the decryption without triggering the browser’s certificate validation? CERT disclosed the trick on Thursday in its vulnerability note VU#529496.

Komodia Redirector with SSL Digestor installs non-unique root CA certificates and private keys, making systems broadly vulnerable to HTTPS spoofing

Komodia stealthily installs its own root certificate in the browsers’ CA repository. The stack holds its private key, which allows it to ‘self-sign’ certificates and thus forge SSL connections. The software then mounts a typical Man In The Middle attack. Although the private key was encrypted, it was possible to extract the corresponding private keys (the password was easy to guess: komodia). This means that as long as the root certificate is not erased from the browsers’ repository, an attacker may use the corresponding private key. The attacker may sign malware that the machine would accept, or generate phony certificates for phishing. In other words, principals other than Superfish may use this hack to infect Lenovo computers.
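
As a small illustration, the sketch below (Windows only, detection only, and assuming the third-party cryptography package is installed) enumerates the system’s trusted roots and flags any certificate whose subject mentions Superfish:

```python
# Windows-only sketch: flag trusted roots whose subject mentions Superfish.
# Assumes the third-party "cryptography" package; it only detects, it does not remove.
import ssl
from cryptography import x509

for der_bytes, encoding, trust in ssl.enum_certificates("ROOT"):
    if encoding != "x509_asn":            # skip non-X.509 entries
        continue
    subject = x509.load_der_x509_certificate(der_bytes).subject.rfc4514_string()
    if "superfish" in subject.lower():
        print("Suspicious root certificate:", subject)
```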

Lenovo provided a patch that removes the Superfish application. Unfortunately, the patch does not erase the malicious certificate. Microsoft provided such a patch, and Mozilla should soon revoke the certificate.

This is a perfect example of a supply chain attack. The main difference is that the supplier voluntarily infected its product. Never forget Law 4: Trust No One.

PS: at the time of writing, the Komodia site was down, allegedly due to a DoS. It may also be because too many people are trying to visit the site.

Who is monitoring your baby?

The data watchdog announced that a Russian website featured a database listing about 73,000 streaming IP webcams or CCTV cameras whose owners are not aware that their camera is broadcasting video. The webcams are located all over the world. They are used in offices, for baby monitoring, for shop monitoring, in pubs, etc. All major manufacturers were present amongst the breached webcams. The webcams were discovered by scanning the Internet and trying the default password. This is a good illustration of Law 8: If you watch the Internet, the Internet is watching you. The UK Information Commissioner’s Office recommends changing the camera’s default password and disabling remote access when it is not needed.

The site claims to do this for educational purposes. This is what the site displays when you access it. It seems to be effective, as there are fewer and fewer listed feeds.

Sometimes administrator (possible you too) forgets to set the default password on security surveillance system, online camera or DVR. This site now contains access only to cameras without a password and it is fully legal. Such online cameras are available for all internet users. To browse cameras just select the country or camera type.

This site has been designed in order to show the importance of the security settings. To remove your public camera from this site and make it private the only thing you need to do is to change your camera default password.

Several interesting lessons:

  • As usual, default passwords are incriminated. Users, and even professionals (it seems that CCTV systems are also listed), do not change the default password. Manufacturers may not want to enforce changing the default password, as it creates issues when users forget their password, but they should at least propose it the first time the user boots the device (a minimal sketch of such a first-boot check follows this list).
  • People are not good with security.  With the Internet of Things (IoT), there will be more and more connected devices.  This means that there will be more and more vulnerable devices on the Net.  IoT may make the Internet more brittle.
  • Who will inform the owners of these webcams that they are being spied on? The remedy is simple, but the victims must at least be aware that they need to apply it.
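
The first-boot check mentioned above could be as simple as the following sketch (the default credential and the flow are hypothetical):

```python
# Hypothetical first-boot check: refuse to enable remote access while the
# factory default credential is still in place.
FACTORY_DEFAULT = "admin"   # hypothetical default password

def remote_access_allowed(current_password: str) -> bool:
    if current_password == FACTORY_DEFAULT:
        print("Please change the default password before enabling remote access.")
        return False
    return True

assert not remote_access_allowed("admin")
assert remote_access_allowed("correct horse battery staple")
```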

By the way, did you change the default password of all your devices? If not, I urge you to do so.

Target and FireEye

At the beginning of December 2013, the US retailer Target suffered a huge data leak: information on 40 million valid credit cards was sent to Russian servers. This leak will have a serious financial impact on Target, as more than 90 lawsuits have already been filed against the company.

Target is conducting a deep investigation to understand why this data breach occurred. Recently, an interesting fact popped up. On 30 November, FireEye, a sophisticated commercial anti-malware system, detected unknown malware spreading within Target’s IT system. It spotted the customized malware being installed on the points of sale to collect credit card numbers before sending them to three compromised Target servers. Target’s security experts based in Bangalore (India) reported it to the US Security Operations Center in Minneapolis. The alert level was FireEye’s highest. The center did not react to this notification. On 2 December, a new notification was sent without generating any reaction.

The exfiltration of the stolen data started after 2 December. Thus, had the Security Operations Center reacted to the alert, it might not have stopped the collection, but at least it would have stopped the exfiltration to the Russian servers.

As we do not have the details of the daily volume of alerts reported from Bangalore to the Security Operations Center, it is difficult to blame anybody. Nevertheless, this is a good lesson, with the following conclusions:

  • Law 10: Security is not a product but a process. You may have the best tools (and FireEye is an extremely sophisticated one: it mirrors the system, runs the input data within the mirror, and analyzes the reactions in order to detect malicious activities). If you do not manage the feedback and alerts of these tools and take the proper decisions, then these tools are useless. Unfortunately, the false-positive rate is too high to let current tools take such decisions on their own.
  • Law 6: You are the weakest link. The Security Operations Center decided not to react. As FireEye was not yet fully deployed, we may suppose that the operators did not fully trust it. The human decision was wrong this time.