Password complexity

Password complexity is one of the most contentious topics in security. According to NIST, many companies over-complicate their password policies.

In 2003, Bill Burr (NIST) wrote a set of password guidelines calling for complex passwords. Since then, many policies have required these lengthy passwords mixing letters, digits, and special characters. Burr has since confessed that he regrets having written these guidelines. In June 2017, NIST published a revised version of the document, NIST SP 800-63B. The new guidelines are far more user-friendly.

In a nutshell, if the user chooses the password, it should be at least eight characters long. If the service provider generates the password, it should be at least six characters long and may even be purely numerical, provided the service provider uses a NIST-approved random number generator. The chosen or generated password must be checked against a blacklist of compromised values. There are no other constraints on the selection.
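For illustration, here is a minimal sketch of such an acceptance check in Python. The blocklist file name and the function names are my own assumptions; NIST specifies requirements, not an implementation.

```python
# Sketch of a NIST-SP-800-63B-style password acceptance check.
# Assumption: "compromised.txt" holds one known-bad password per line.

def load_blocklist(path="compromised.txt"):
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f}

def is_acceptable(password, chosen_by_user=True, blocklist=None):
    minimum = 8 if chosen_by_user else 6   # the 8/6 minimums summarized above
    if len(password) < minimum:
        return False
    if blocklist and password in blocklist:
        return False                        # reject compromised values
    return True  # no composition rules: any characters are allowed
```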

On the user-friendly side, NIST recommends:

  • The password should not have to be changed unless there is evidence that it may be compromised.
  • The user should be allowed to use the “paste” command, to favor the use of password managers.
  • The user should be able to request the temporary display of the typed password.

Additional constraints apply to the implementation of the verifier. The verifier shall not offer any password hint. The verifier must implement a rate-limiting mechanism to thwart online brute-force attacks. The password shall be stored as a salted hash using an approved key derivation function such as PBKDF2 or Balloon, with enough iterations (at least 10,000 for PBKDF2).
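As an illustration, a minimal sketch of such storage with Python's standard library could look as follows. The choice of SHA-256, the 16-byte salt, and the 100,000 iterations are mine; the guideline only mandates an approved function with at least 10,000 iterations.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # well above the 10,000 minimum

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); store both with the account record."""
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored)  # constant-time comparison
```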

Appendix A of the NIST document provides rationales for this simplification. For instance,

Many attacks associated with the use of passwords are not affected by password complexity and length. Keystroke logging, phishing, and social engineering attacks are equally effective on lengthy, complex passwords as simple ones.

Or

Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules [Policies]. For example, a user that might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number, or “Password1!” if a symbol is also required.

A few cautionary notes: the addressed threat model is an online attack. It does not adequately cover offline attacks, where the attacker has gained access to the hashed passwords. The quality of the implementation of the salted-hash mechanism is paramount for resisting offline attacks. Furthermore, one must hope that the theft of a salted-hash database would be detected and would trigger the immediate reset of all passwords, thus mitigating the impact of the leak. NIST recommends using memorized secrets only for Authenticator Assurance Level 1 (AAL1), i.e.,

AAL1 provides some assurance that the claimant controls an authenticator bound to the subscriber’s account. 

Higher assurance levels require multi-factor authentication methods. The guidelines explore them in depth. They may be the topic of a future post.

NIST is a reference in security; we may trust its judgment. As we will not get rid of the password login mechanism any time soon, we may perhaps revisit our password policies to make them more user-friendly and implement the proper background safeguard mechanisms.

I wish you a happy, secure new year.

Law 9 – Quis custodiet ipsos custodes?

This post is the ninth in a series of ten. The previous post explored the eighth law: If You Watch the Internet, the Internet Is Watching You. This Latin phrase from the poet Juvenal can be translated as “Who will guard the guards themselves?” Every element of a system should be monitored, including the monitoring functions themselves. As parts of the security model often rely on the detection of anomalies, it is key that this detection be effective and trustworthy.

Any security process should always include one last phase that monitors the effectiveness of the implemented practices. This phase creates the feedback loop that corrects any deficiency or inefficiency of the security process. The quality and probity of this last phase strongly influence the overall robustness of the security. For instance, the COBIT framework has one control point dedicated to this task: ME2 – Monitor and Evaluate Internal Control.

The beauty of Bitcoin’s model is that every user is the guard who watches the other users. The Bitcoin system assumes that a majority of users will operate faithfully. Proof of Work is the consensus mechanism that enforces, in theory, this assumption: mining is costly, and amassing the majority of the hashing power should be impossible for one actor. This assumption may be questionable for new cryptocurrencies that do not yet have a significant number of users, and with the advent of mining pools.

Separate the roles; divide and conquer. The scope of controlling and managing roles should be kept as small as possible. Guards should have a limited scope of surveillance and restricted authority. This reduces the impact of a malicious insider or of an attacker who hijacked an administrator or controller account. Where possible, the scopes of roles should partly overlap or be redundant across several individuals. This increases the chances of detecting an error or insider mischief, as success would require collusion.

For instance, reduce the scope of system administrators, as they have the keys to the kingdom. Nobody should have all the keys to the kingdom. After the Snowden incident, the NSA drastically reduced the number of its system administrators.

Read the logs; logfiles are an essential element for monitoring and auditing the effectiveness of the security. They are useful for detecting and understanding security incidents. Nevertheless, they reach their optimal efficiency only when they are regularly analyzed to detect anomalies. Ideally, they should be analyzed proactively. Applying only a posteriori log analysis is a weak security stance. Logfiles are not to be used only for forensic purposes.
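As a toy illustration of proactive analysis, the following sketch counts failed SSH logins per source address in a syslog-style file and flags abnormal peaks. The log path, message format, and alert threshold are assumptions for the example.

```python
import re
from collections import Counter

# Hypothetical syslog-style auth log; the pattern matches OpenSSH's
# "Failed password for <user> from <ip>" messages.
LOG = "/var/log/auth.log"
PATTERN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20  # arbitrary alert threshold for this sketch

failures = Counter()
with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"possible brute force from {ip}: {count} failures")
```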

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security.” Chapter 10 explores this law in detail. The book is available, for instance, at Springer or Amazon.

What the Public Knows About Cybersecurity

In June 2016, the US Pew Research Center asked 1,055 US adults 13 questions to evaluate their knowledge of cybersecurity. The questions ranged from identifying a suitable password to identifying a two-factor authentication system.

The readers of this blog would belong to the 1% of the sample (i.e., 11 individuals) who made no mistake. To be honest, non-US readers might fail the question related to the US credit score (“Americans can legally obtain one free credit report yearly from each of the three credit bureaus.”). This question is not directly related to cybersecurity and is purely US-specific. I must confess that before moving to the US, I would have had no idea of the right answer.

Not surprisingly, the success ratio was low. The average number of correct answers was 5.5 out of 13 questions! Three-quarters of the interviewees correctly identified a strong password. About half of the individuals accurately spotted a simple phishing attack. Only one-third knew that HTTPS means the connection is encrypted.

There was a clear correlation between the level of education and the ratio of correct answers. Those with college degrees or higher averaged 7 correct answers. The impact of age was less conclusive.

You may take the quiz.

Lessons:

The results are not a surprise. We do not see an increase in awareness. This study should be a reminder that we, the security practitioners, must educate the people around us. It is our civic responsibility and duty. This education is the first step towards a more secure connected world. Without it, the connected world will become a hell for most people.

The paper may be a useful addition to your library.

Ref:

Olmstead, Kenneth, and Aaron Smith. “What the Public Knows About Cybersecurity,” March 22, 2017. http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/03/17140820/PI_2017.03.22_Cybersecurity-Quiz_FINAL.pdf

Security assessment: white or black box?

White or black: which is the best choice? Is white box testing cheating? To me, the answers seem trivial. Nevertheless, recent discussions highlighted that they are not clear to everybody.

White box testing means that the security assessor has access to the entire documentation, the source code, and sometimes even an instrumented target of evaluation (TOE). Black box testing means that the security assessor has access only to the TOE and publicly available information. Thus, black box testing mimics the configuration encountered by hackers. Of course, hackers will use any possible means to collect related non-public information, for instance, through social engineering.

Many people believe that black box evaluation is the solution of choice. As it is similar to the hackers’ configuration, it should provide a realistic assessment of the security of the TOE. Unfortunately, this assumption is wrong. The main issue is the asymmetry of the situation. On one side, the black box evaluation is performed by one evaluator or a small team of reviewers. On the other side, a legion of hackers probes the product. They may outnumber the evaluators by several orders of magnitude, and they spend more time looking for vulnerabilities. Mathematically, their chance of finding an exploit is higher than the likelihood that the evaluators find the same vulnerability.

Do you evaluate your security to know whether an attacker may breach your system? According to Law 1, this will ineluctably happen. If your evaluation team has not found any problem, you may only conclude that there were no blatant vulnerabilities. You have no idea whether (or rather when) a hacker will find one.

Alternatively, do you evaluate the security to make your product more secure, i.e., with fewer vulnerabilities? In that case, you will give the evaluators the best tools to discover vulnerabilities. Therefore, white box testing will be your choice. For the same effort, white box testing finds more security issues than black box testing. Furthermore, the team will pinpoint the faulty part of the system, whereas black box testing discloses how the attacker may succeed but not where the actual issue lies. Let us assume that the white box assessment discovered a vulnerability through code review. The white box tester just has to explain the mistake and its consequences. For the same vulnerability, the black box tester has to blindly explore random attacks until he or she finds the vulnerability, and then has to write the exploit that demonstrates the feasibility of the attack. The required effort is far greater. Thus, in terms of Return On Investment (ROI), white box testing is far superior to black box testing: more discovered vulnerabilities for the money!

Fixing a vulnerability discovered by white box testing may also be cheaper. It is well known that the earlier a bug is found in the development cycle, the cheaper it is to fix. The same goes for vulnerabilities. As some white box security testing can occur before final integration (design review, code review…), fixes come earlier and are thus cheaper than with black box testing, which occurs only after final integration.

White box security testing is compliant with Kerckhoffs’s law. The selection of new cryptographic primitives uses white box testing: the algorithm is published for cryptanalysts to break. ISO 27001 is a kind of white box evaluation.

When a company claims that a third party audited the security of its product, in addition to the identity of this third party, it would be great to disclose whether the audit was white or black box testing. I would favor the former.

Thus, when possible, prefer white box security testing over black box testing. It is the wise, winning choice. Bounty programs are the only exception: they operate as black box testing. They should be a complement to white box testing but never a replacement. Their ROI is high, as you pay only for success.

Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, has made the headlines of many news sites and blogs. Thus, I had to dive in and read the paper. This team has already disclosed covert channels in the past, such as temperature. Three researchers, GURI M., ZADOV B., and ELOVICI Y., have devised a new way to breach an air gap. They use the hard-drive (HDD) activity LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light, which carries the covert channel.
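For illustration only, a crude on-off keying over the LED could look like the following sketch: sustained reads keep the LED lit for a “1,” while idling leaves it dark for a “0.” The file path, bit duration, and read size are my assumptions; the paper describes more elaborate modulation schemes.

```python
import os
import time

BIT_DURATION = 0.5  # seconds per bit; the paper uses far shorter slots
CHUNK = 4096        # 4 KB reads, as in the paper's experiments

def transmit(bits, path="/tmp/dummy"):
    # Note: disk caching must be disabled (the paper suggests
    # /proc/sys/vm/drop_caches on Linux); otherwise reads are
    # served from RAM and the LED stays dark.
    fd = os.open(path, os.O_RDONLY)  # path to an existing file on the HDD
    try:
        for bit in bits:
            end = time.monotonic() + BIT_DURATION
            if int(bit):
                while time.monotonic() < end:  # continuous reads: LED on
                    os.pread(fd, CHUNK, 0)
            else:
                time.sleep(BIT_DURATION)       # no I/O: LED off
    finally:
        os.close(fd)

# transmit("1010011")
```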

They experimented with several ordinary computers and concluded that they were able to reach about 4K bit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not capture more than about 15 bit/s, depending on their frames per second (fps): remember the Nyquist–Shannon sampling theorem, a camera sampling at 30 fps can recover a signal of at most 15 bit/s. Thus, the researchers used a photodiode or specialized amplified light detectors. Only this kind of equipment can guarantee a good detection rate.

Using HDD activity for a covert channel is not a new method. At Black Hat 2008, FILIOL E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their talk is an interesting presentation of many covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The first issue is the malware itself. The malware has to be the exclusive entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will interfere with the emitted message. Therefore, the researchers recommend turning off disk caching (drop_caches on Linux). What is the likelihood that an air-gapped computer can run a piece of malware as the exclusive process without being noticed? One of the characteristics of malware is that it should be stealthy, and thus probably not alone in accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid scenario (or evil drone) does not seem realistic. The malware should execute only in the presence of the spy; otherwise, it will be noticed (due to its exclusive access to the HDD). The spy cannot signal its presence to the malware, as the malware is air-gapped and thus cannot receive any incoming message. Thus, either they have to agree in advance on some rendezvous, or the malware has to run repeatedly for long periods, reducing its stealthiness. If the spying device is “fixed,” using cameras is not realistic due to their low bandwidth, which would require the malware to run for long periods. Nevertheless, the spy may have installed special equipment that records everything, and may later analyze the recorded light, looking for the sequences emitted when the malware wakes up and plays. The spying device will then have to stealthily exfiltrate a far larger message than the covert message, increasing the risk of detection.

The attack is possible but seems more complex than publicized. The paper’s proposed countermeasures hint at the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.
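A minimal sketch of such a noise-generating process could be the following; the file path, sizes, and timing are my assumptions.

```python
import os
import random
import time

# Background noise generator: random reads and writes jam the LED
# covert channel by mixing the malware's signal with random noise.
NOISE_FILE = "/tmp/led_noise"  # hypothetical scratch file on the HDD

def make_noise():
    with open(NOISE_FILE, "w+b") as f:
        while True:  # run as a background daemon
            f.seek(random.randrange(0, 1 << 20))  # random offset
            if random.random() < 0.5:
                f.write(os.urandom(4096))  # random write
                f.flush()
                os.fsync(f.fileno())       # force the write to disk
            else:
                f.read(4096)               # random read
            time.sleep(random.uniform(0.0, 0.05))
```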

As already noted, I believe that in most cases, more than one process will be executing and accessing the HDD. And if you are paranoid, you can always hide the LED.

Reference:

Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-It-GO Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017. http://cyber.bgu.ac.il/advanced-cyber/system/files/LED-it-GO_0.pdf

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008. https://www.researchgate.net/publication/228801499_Passive_and_Active_Leakage_of_Secret_Data_from_Non_Networked_Computer

Law 7 – You Are the Weakest Link

This post is the seventh post in a series of ten posts. The previous post explored the sixth law: Security Is Not Stronger Than Its Weakest Link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent security when it obstructs the fulfillment of their tasks, and they will not apply security policies and procedures. They do not believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Employees put their company at risk through bad decisions.

Humans are the weakest link because they have intrinsic limitations. Human memory is often feeble; thus, we end up with weak passwords or complex passwords written on post-it notes. Humans do not handle complexity well. Unfortunately, security is too complex.

Humans are the weakest link because they can be easily deceived. Social engineers use social interaction to influence people and convince them to perform actions they are not expected to perform, or to share information they are not supposed to disclose. For instance, phishing is an efficient contamination vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; as end users are not necessarily able to make rational decisions on security issues, the designer should make the decisions whenever possible. Whenever the user has to decide, the consequences of the decision should be made clear to guide the choice.
  • Define secure defaults; the default value should always be set for the highest or, at least, an acceptable security level. Security, not user-friendliness, should drive the default value.
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is possible only by educating the employees about this kind of attack. Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits. Therefore, it is imperative that the security personnel be aware of the latest techniques. Operational security staff should have a significant part of their work time dedicated to continuous training.

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, at Springer or Amazon.