Preventing weak passwords by reading your mind

This is what the site Telepathwords proposes. The site estimates the strength of a password. The interesting part of this Microsoft Research project is the heuristics it uses.

After each typed character, it attempts to guess what the next character will be. If it guesses right, the character is considered weak (indicated by a red cross). How does it guess the characters?

Telepathwords tries to predict the next character of your passwords by using knowledge of:

  • common passwords, such as those made public as a result of security breaches
  • common phrases, such as those that appear frequently on web pages or in common search queries
  • common password-selection behaviors, such as the use of sequences of adjacent keys

It considers the password strong if it has at least six non-guessable characters.
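To illustrate the idea, here is a toy sketch in Python (the wordlist and function names are my own; Telepathwords' real dictionaries and prediction models are far richer):

```python
from collections import Counter

# Tiny stand-in for Telepathwords' dictionaries (illustrative only).
COMMON = ["password", "password1", "qwerty", "azerty", "letmein", "123456"]

def predict_next(prefix):
    """Most likely next character among common passwords sharing this prefix."""
    counts = Counter(w[len(prefix)] for w in COMMON
                     if w.startswith(prefix) and len(w) > len(prefix))
    return counts.most_common(1)[0][0] if counts else None

def strong_char_count(password):
    """Count the characters the predictor failed to guess."""
    return sum(1 for i, ch in enumerate(password)
               if predict_next(password[:i]) != ch)

print(strong_char_count("password"))  # 0: every character was predictable
```

With this toy dictionary, "password" scores zero strong characters, which is exactly the behaviour the site displays with its red crosses.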

Of course, the strength of the system relies on the richness of its dictionaries of common passwords and common phrases. Obviously, the game was to play with it. My first thought was that it would be purely English-centric. Thus, I tried French, and the first attempt was azerty. Azerty of course was weak. “abrutifrançais” (or French idiot) was a strong password, even without the special character ç. “Je pense donc je suis” rated medium (as it guessed the ending). Let's go further and switch to Latin: “CogitoErgoSum” was weak, as was “venividivici”. But “aleajactaest” was extremely robust!

For fun, I checked consistency with Microsoft Password Checker. The answers are not consistent. For instance, “CogitoErgoSum” turns out to be strong whereas “aleajactaest” is medium.

As always, it is rather easy to trick this type of site. Nevertheless, the site clearly explains that it will not detect all weak passwords, especially those from languages other than English.

Has NSA broken the crypto?

With the continuous flow of revelations by Snowden, there is not one day without somebody asking me whether crypto is dead.  Indeed, if you read some simplistic headlines, it looks like the Internet is completely insecure.


Last Friday, Bruce Schneier published an excellent paper in The Guardian: “NSA surveillance: a guide to staying secure.”  For two weeks, he analyzed documents provided by Snowden.   From this analysis, he draws some conclusions and provides some recommendations.  Given Bruce's security pedigree, we may trust the outcome.  I recommend readers read the article.

My personal highlights from this article:

  • The documents did not reveal any outstanding mathematical breakthrough.   Thus, algorithms such as AES are still secure.
  • To “crack” encrypted communications, the NSA uses the same tools as hackers, but at a far higher level of sophistication.   They have a lot of money.  The tricks used:
    • Looking for weak algorithms in use
    • Looking for weak passwords with dictionary attacks
    • Powerful brute-force attacks
  • The two most important means are:
    • Implementing back doors and weakening commercial implementations (poor random generators, weak parameters in Elliptic Curve Cryptosystems (ECC), leaking keys…).   The same is true for hardware.

As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about.

    • Compromising the computer that will encrypt or decrypt.  If you have access to the data before it is secured, then you do not care about the strength of the encryption.

These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.

His recommendations are common sense.   The most interesting one is to avoid using ECC, as the NSA seems to have influenced the choice of curves and constants towards weak ones.


His final statement

Trust the math.

is OK, but I would add “Do not trust the implementation.”  Always remember law 4: Trust No One.

Do users care about security warnings?

This is an important question.   The common belief in the community is that people are oblivious to security issues; they will not care.   Akhawe (Berkeley) and Felt (Google) conducted an empirical study, observing more than 25 million real interactions with security warnings in the Chrome and Firefox browsers. This recent study was conducted during May and June 2013.    They collected information using the in-browser telemetry system.  As a reminder, the telemetry system is switched on voluntarily by users.   The researchers studied phishing warnings, malware warnings and SSL warnings.  They measured the click-through ratio, i.e. the number of times users clicked through to view the corresponding page.

First some raw data extracted from their paper.

           Firefox   Chrome
Malware    7.2%      23.2%
Phishing   9.1%      28.1%
SSL        32.2%     73.3%

The good news is that the majority of users take security warnings into account in the case of malware or phishing.  As the detection mechanism uses Google's Safe Browsing list, the ideal ratio should be near 0%, because the false-positive rate of the list is extremely low.   For SSL warnings, the ratio is significantly higher.   Of course, many legitimate sites generate such warnings (server misconfiguration, self-signed certificates…).  Thus, the ideal ratio may not be zero.  Nevertheless, the ratio seems high.

Interestingly, Chrome has a higher click-through ratio than Firefox.  In other words, Chrome users pay less attention to the warnings.  In the case of SSL, the huge difference (+40%) can be explained by the fact that, for several reasons, Chrome users receive more warnings.  For instance, by default, Firefox remembers an accepted SSL warning whereas Chrome repeatedly presents the same warning.

Some interesting findings:

  • Consistently, Linux users had a higher click-through ratio than users of other operating systems. Two reasons may explain it:
    • They feel more confident in their skill set because they are tech-savvy, and have less risk aversion than average users.
    • They feel that being on Linux protects them from security issues.  Unfortunately, that is not true for phishing or SSL.
  • The number of clicks needed to go through the warning did not impact the ratio.  To bypass a malware or phishing warning, you need one click with Firefox and two clicks with Chrome.
  • Users who discarded the warnings spent less time on the page (1.5 s) compared to users who heeded them (3.5 s).

In any case, a good reading…

D. Akhawe and A.P. Felt, “Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness,” 2013 available at http://research.google.com/pubs/archive/41323.pdf.

If your power adapter could recover your lost password?

This is the idea that Apple has protected with a patent.   The basic idea is that a familiar peripheral could serve as a vault for the recovery of lost credentials.

Claim 1: A method of storing a password recovery secret on a power adapter, the method comprising:

  • receiving a password recovery secret associated with a computing device at an electrical power adapter via an interface with the computing device; and
  • storing the password recovery secret on a memory in the electrical power adapter.

The peripheral would store the memorized password encrypted with an identifier unique to the main device.   This means that there is a pairing between the device and the peripheral.  In other words, stealing the peripheral alone is useless for extracting the stored password.  The claims specifically cite an electrical power adapter and a non-transitory computer-readable storage medium.

To recover the lost password, you have to start a recovery procedure.   The recovery procedure returns the encrypted password, which can be decrypted only if recovered by the proper device.   The secret can also be shared between the peripheral and a remote server.
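The pairing idea can be sketched as follows. This is my own illustration, not Apple's actual scheme: the device identifier, function names and parameters are assumptions, and a real design would use an authenticated cipher such as AES-GCM rather than the XOR keystream used here for brevity (which also limits the secret to 32 bytes).

```python
import hashlib
import os

def derive_key(device_id, salt):
    # Key derived from an identifier unique to the paired device
    return hashlib.scrypt(device_id, salt=salt, n=2**14, r=8, p=1, dklen=32)

def seal(secret, device_id):
    """Encrypt a recovery secret (<= 32 bytes) for storage on the adapter."""
    salt = os.urandom(16)
    stream = hashlib.sha256(derive_key(device_id, salt)).digest()
    return salt + bytes(a ^ b for a, b in zip(secret, stream))

def unseal(blob, device_id):
    """Recover the secret; only the paired device derives the right key."""
    salt, ct = blob[:16], blob[16:]
    stream = hashlib.sha256(derive_key(device_id, salt)).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

blob = seal(b"my recovery secret", b"device-serial-0042")
assert unseal(blob, b"device-serial-0042") == b"my recovery secret"
assert unseal(blob, b"another-device") != b"my recovery secret"
```

The last assertion shows why stealing the adapter alone is useless: without the paired device's identifier, the stored blob decrypts to garbage.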

You may already have spotted the tricky part of the game:  how do you trigger the recovery procedure?  The patent does not tackle this issue.   If Alice can trigger it merely because she has access to both the laptop and the power adapter, then of course it is game over.   Steal both of them, and you can get access to the computer by recovering the secret and changing the password.   That would make the system even weaker than before.  If Alice needs a secret to trigger it, then we are back to square one: the likelihood that she forgets this recovery secret is even higher than that of forgetting the day-to-day password!    By the way, this is always one of the difficult parts of every recovery system.

Will we see that in one of the next MacBook generations?

LinkedIn password leak

OK, by now everybody should be aware that about 6.5 million hashed passwords leaked from LinkedIn.   On 6 June, the information started to buzz around.   The same day, LinkedIn confirmed that some of the alleged leaked passwords were real.  Soon the leak was confirmed, and LinkedIn requested the compromised users to change their passwords.  I was among the happy compromised users.

What is the problem?   You should never store passwords in the clear on a computer.  In fact, the good practice is to store a hash of the password rather than the password itself.

 hashedPassword=Hash(password)

To test the validity of a password, you check whether the hash of the proposed password matches the stored hashed password.  The hash is the result of a one-way function (SHA-1 and MD5 are examples of such one-way functions).  It is extremely difficult (although no longer impossible in the case of SHA-1) to create a valid input of a one-way function that matches a given hash value.  In other words, given a hashed password, it is extremely difficult to guess a valid password matching it.

So are we safe?  No.  Where is the problem?   It comes from rainbow tables.  Rainbow tables are huge sets of precomputed hash values for a given known hash function.  Ophcrack is an example of a tool using rainbow tables.  If the password is part of this dictionary, then it is extremely fast to find it.
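A simplified version of the attack (a plain lookup table rather than a true rainbow table, which uses hash chains to trade time for space, but the principle is the same; the wordlist is my own toy example):

```python
import hashlib

# Precompute hashes of candidate passwords once...
WORDLIST = ["123456", "password", "linkedin", "qwerty", "letmein"]
TABLE = {hashlib.sha1(w.encode()).hexdigest(): w for w in WORDLIST}

# ...then invert any leaked unsalted hash with a single dictionary lookup.
leaked = hashlib.sha1(b"linkedin").hexdigest()
print(TABLE.get(leaked))  # linkedin
```

The cost of hashing the wordlist is paid once; after that, every leaked unsalted hash in the dictionary is cracked instantly.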

Indeed, the good security practice for storing passwords is to use a salted hash.  A salted hash is a one-way function that uses an additional “secret” value called the salt.  In that case, the usual rainbow tables no longer work.

hashedPassword=Hash(password + salt)
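A minimal sketch of salted storage and verification (the function names are mine; note also that a modern scheme would use a deliberately slow function such as bcrypt or scrypt rather than the plain SHA-256 shown here):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, hash); a fresh random salt defeats precomputed tables."""
    if salt is None:
        salt = os.urandom(16)  # unique per user
    return salt, hashlib.sha256(salt + password.encode()).digest()

def verify_password(password, salt, expected):
    return hash_password(password, salt)[1] == expected

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("123456", salt, stored)
```

Because each user gets a different random salt, an attacker would have to rebuild the whole precomputed table once per user, which destroys the economics of the rainbow-table attack.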

Unfortunately, it seems that LinkedIn did not use salted hashes.

Lesson: always use security best practices.  If we use so many tricks, it is often because there are good reasons, often lessons learned from a hack.

Sony once more under fire, but proper reaction

Philip Reitinger, CISO of Sony, has announced that about 93,000 accounts on Sony's systems have been compromised.  Sony monitored a suspicious, massive wave of login/password attempts.  Most of them were unsuccessful, but about 93,000 succeeded.  Most probably, the attackers got access to a database of login/passwords from another web site (such information is available on the darknet).

Some people use the same login/password for different sites.  These people may be the victims of this attack.

We must congratulate Sony for its reaction:

  • Transparency: they were clear about what happened and provided the data.  The reaction of customers was extremely positive.
  • Monitoring: this proves that Sony carefully monitors activities to detect strange behaviours or patterns.  This is key in security.

Lessons:

  • Customers are ready to hear the truth in case of an attack.  I would even guess that they would rather be made aware than learn about it once it is far too late.
  • Do not use the same password for all sites, at least not for the critical ones.

Where Do Security Policies Come From?

In a paper presented at the 6th Symposium on Usable Privacy and Security, Dinei Florencio and Cormac Herley of Microsoft Research examined the policies ruling the passwords of 75 Internet sites. The types of websites ranged from very popular sites and services, such as Facebook or PayPal, to more confidential ones, such as governmental agencies.

They evaluated the strength of the enforced policy with the formula N·log2(C), where N is the minimum size of the password and C is the cardinality of the allowed character set. Obviously, this formula is not a perfect evaluation of the constraints because it does not take into account requirements such as the mandatory use of digits or special characters. Nevertheless, the result is simple (and perhaps not too surprising):

The size of the site, the number of user accounts, the value of the resources protected, and the frequency of non-strength related attacks all correlate very poorly with the strength required by the site.

In other words, the sites with the most constraining policies are not necessarily the sites most at risk. For instance, Gmail or PayPal do not have strong constraints. Most often, the sites with the most constraining policies have no incentive to attract numerous visits, or have a captive “audience”. The constraints were driven more by the need to attract visitors than by security itself.
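The N·log2(C) metric from the paper is easy to compute; a quick sketch (the example charset sizes are my own assumptions):

```python
import math

def policy_strength_bits(min_length, charset_size):
    """Strength of a password policy in bits: N * log2(C)."""
    return min_length * math.log2(charset_size)

# 8 characters, lowercase letters only (C = 26) vs. the full printable
# keyboard set (C = 94): the larger charset buys about 15 extra bits.
print(round(policy_strength_bits(8, 26), 1))  # 37.6
print(round(policy_strength_bits(8, 94), 1))  # 52.4
```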

It is the usual trade-off between security and usability. Facebook, which is funded by advertising, needs frequent visitors. A too-complex password policy may put off many users and thus make the site less attractive.

The authors advocate that there is most probably no need for a strong password policy, because strategies to defeat online brute-force attacks should be deterrent enough. They cite Twitter, which recently banned the 370 most common passwords. According to them, strong passwords are most probably only useful in case of access to the hashed password files (remember the use of rainbow tables).

Their view on the trade-off between usability and security is interesting.

When the voices that advocate for usability are absent or weak, security measures become needlessly restrictive.

I let you savor this statement. Any reactions?

The paper is available here.