“Securing Digital Video” is now available!

My book, “Securing Digital Video: Techniques for DRM and Content Protection”, is now on sale.   It can be ordered directly from Springer (about one week’s delay), from US Amazon (two to four weeks’ delay), and from French Amazon (available only in August).

This is the last step of a long process.  I hope that readers will enjoy it and that it will be useful to the community.   More details on the book are available here.

I would be glad to hear your suggestions and feedback (even negative), and to answer any questions.  For that, please use the address book@eric-diehl.com.  I will always answer.

HADOPI: a little inside view

In May 2011, the French HADOPI mandated an expert, David Znaty, to evaluate the robustness of the system that tracks infringers on P2P networks.  The objectives were to:

  1. Analyze the method used to generate fingerprints
  2. Analyze the method used to compare sample candidates with these fingerprints
  3. Analyze the process that collects the IP addresses
  4. Analyze the workflow

On January 16, 2012, Mr. Znaty delivered his report.  A version without the annexes was published on the HADOPI site for public dissemination.  The report concluded that the system was secure.

Conclusion : en l’état, le processus actuel autour du système TMG est FIABLE.  Les documents constitués du procès verbal (saisine), et si nécessaire du fichier complet de l’oeuvre (stockée chez TMG) associé au segment de 16Ko constituent une preuve ROBUSTE.

Le mode opératoire utilisé permet donc l’identification sans équivoque d’une oeuvre et de l’adresse IP ayant mis à disposition cette oeuvre.

An approximate translation of this conclusion is:

Conclusion: as it stands, the current process around TMG’s system is RELIABLE.  The documents, consisting of the official report (referral) and, if necessary, the complete file of the work (stored at TMG) associated with the 16 KB segment, constitute ROBUST proof.

The workflow used therefore allows unambiguous identification of a work and of the IP address that made it available.

Content owners quickly complained that sensitive information might leak from this report.  Therefore, it was interesting to have a look at it.

The report is no longer available on the HADOPI site.  The links are still present, but there is no actual download.    Sniffing around, you can easily find copies of the original report (for instance here).   Once we have it, what is leaking out?

Most probably, for the experts, nothing really interesting.   We learn a lot about the process of identifying the rights owners of a piece of content; this part is well described in the document.  On the technical side, however, there are no details: the expert was consistently told that the technology providers would not disclose anything about their algorithms.   Therefore, to validate the false-positive rate, the expert checked whether any content inside the reference database shared the same fingerprint.  The answer was no (except for one case where the same master had been fed twice).   Conclusion: no false positives!  I let you draw your own conclusion.

The annexes that may contain some details were not published.  I have not found a copy on the net.  What bits of information can we glean?

  • There are two technology providers for the fingerprints.  They are “anonymized” in the document for confidentiality  (sigh!)  We can guess that the audio fingerprint provider is not French, as a quoted answer was in English.  This is not a surprise: to the best of my knowledge, there is no French technology commercially available.
  • They look for copyrighted content on P2P networks using keywords.  Once a piece of content is spotted, its fingerprint is extracted and compared to the master database.  If the content matches, its hash code is recorded (most probably its MD5 hash).   Then, TMG can look for this MD5 sample and record the IP address.
  • The content is recognized if there is an ordered sequence of matching fingerprints.   The length of the sequence seems to depend on the type of content and the rights owner.  For audio, it is 80% of the duration; for video, in the case of ALPA, 35 minutes…
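The matching step described above can be sketched in a few lines of Python.  This is only an illustration of the principle (fingerprint match first, then an exact hash recorded for later lookup); the names, the reference entries, and the use of a plain dictionary are my own assumptions, not details from the report.

```python
import hashlib

def md5_of(data: bytes) -> str:
    """Return the hex MD5 digest of a content sample."""
    return hashlib.md5(data).hexdigest()

# Hypothetical reference database: MD5 hashes of already-identified
# infringing files, mapped to the title of the work.
reference_db = {
    md5_of(b"master copy of some protected work"): "Some Protected Work",
}

def identify(sample: bytes):
    """Return the matched title if the sample's MD5 is known, else None."""
    return reference_db.get(md5_of(sample))

print(identify(b"master copy of some protected work"))  # exact copy matches
print(identify(b"a different file"))                    # no match: None
```

Note that an exact-hash lookup only matches bit-identical copies; that is precisely why the fingerprinting step, which is robust to re-encoding, comes first.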

In conclusion, not a great deal…


The older, the more security concerned

This is the conclusion of a study performed by Dimensional Research for ZoneAlarm (a division of Check Point).   As a reminder, ZoneAlarm offers a free antivirus and firewall, as well as two paid security suites.

The result is not surprising.  One of the questions asked respondents to rate the relative importance of computer-related activities among Community, Entertainment, Information, Productivity, and Security.  The following picture summarizes the ratings.

Without surprise, for younger generations the computer is mainly used for entertainment and community (40%, compared to 8% for the baby boomers).    Security and privacy will be sacrificed if they interfere with access to community.   This is normal in view of the addictive behavior related to social networking.   I would guess that this trend will climb the age pyramid as more and more people are enrolled in social networks (Facebook has more than 1 billion accounts, and Twitter more than 500 million).

Interestingly, Gen Y (18-25 years old) believes itself to be more knowledgeable about security than the baby boomers (63% versus 59%), yet it suffered more security incidents over the last two years.   This most probably comes from different activities and greater exposure to risk through riskier sites.

And, without surprise, the cost of security is one excuse for not implementing security solutions.   This highlights that some vendors, such as ZoneAlarm or Avast, are not doing a good job of communication, as they all offer free versions of their tools.  Across generations, half of the respondents estimated that security should be free.

Lessons:

  • Ideal security should be transparent for users (in price and ease of use).  It must not impair the user experience.
  • Expect many more attacks on social networks in the future.  Many people will not sacrifice their community for a more secure environment.   This is usual with addiction.


LinkedIn Password Leak (2)

After the leak of 6.5 million unsalted LinkedIn password hashes, here is a new episode in the story.  Katie Szpyrka, an Illinois resident and premium member of LinkedIn, is suing LinkedIn for

… failing to properly safeguard its users’ digitally stored personally identifiable information (“PII”), including e-mail addresses, passwords, and login credential;

She claims that LinkedIn failed to properly encrypt its users’ PII:

… LinkedIn failed to adequately protect user data because it stored passwords in unsalted SHA1 hashed format.  The problem with this practice is two-fold.  First, SHA1 is an outdated hashing function, first published by the National Security Agency in 1995.  Secondly, storing users’ passwords in hashed format without first “salting” the passwords runs afoul of conventional data protection methods, and poses significant risks to the integrity users’ sensitive data.

The second statement is true.  I would be more cautious with the first one.  There are known attacks on SHA1; that is why there is a competition to find a replacement (SHA3).  Nevertheless, these attacks are neither easy nor simple.  Using SHA1 was not the problem.  Using salted SHA1 to store passwords will remain a reasonable practice for several years.

She also complains that the attack used an SQL injection and that the site was not properly protected against this type of attack, despite the existence of a NIST checklist to prevent them.

An interesting statement:

… free account users buy products  and services by paying LinkedIn in the form of contact information (first name, last name, and an email address)

That’s true.  I would even add: the user also pays with his or her network information, which allows LinkedIn to better profile the user.

The outcome of this action will be interesting.   How many websites would be under the same threat?  The main problem is to decide whether this is pure negligence or a vulnerability such as there will always be in websites (or in any product).  Zero vulnerabilities will never exist.  If each breach were to end up in a class action, it would most probably be the end of the Internet.

The filing is available here.

Yahoo private key in the field

At the end of May, Yahoo released its new browser, AXIS, and its extensions for Chrome and Firefox.  As usual, the extensions were signed.  Unfortunately, Yahoo made a mistake.  On 23rd May, security researcher Nik Cubrilovic disclosed that the private key used to sign the applet was present in the applet itself!

Private keys should never be published.  They are the secret part and the root of trust.  You sign the applet with your private key to prove that you are the originator.  Any principal who has your public key can check whether the signature is valid.  Thus, you should never disclose your private key.  If an attacker has your private key, she can impersonate you.  This means that, with the Yahoo private key, anyone can sign any piece of software and pretend it was issued by Yahoo.  A perfect tool for malicious applets!
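The sign-with-private, verify-with-public principle can be shown with a toy, textbook-RSA sketch.  The tiny parameters below are deliberately INSECURE and purely illustrative (real code signing uses full-size RSA or elliptic-curve keys and padded signature schemes); they only show why leaking the private exponent is fatal.

```python
import hashlib

# Toy RSA parameters (insecure, for illustration only).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: must stay secret

def digest(message: bytes) -> int:
    # Reduce a hash of the message into the RSA modulus range.
    return int.from_bytes(hashlib.sha1(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the holder of d can produce this value.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public pair (n, e) can check it.
    return pow(signature, e, n) == digest(message)

sig = sign(b"applet code")
print(verify(b"applet code", sig))      # True: genuine signature
print(verify(b"malicious code", sig))   # the signature does not transfer
```

Whoever learns `d` can call `sign` on any payload, which is exactly the power the leaked Yahoo key handed to attackers.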

Yahoo quickly released a new version using a different signing key and published the corresponding certificate.  Yahoo will revoke the leaked key.

It is difficult to understand how such a mistake could happen.  There are at least two errors:

  • The developer(s) who wrote the applet did not understand what a signature is.  There is no rationale for a private key to be in the code.
  • The signing private key was available in the clear to developers.  Good practice is to keep the private key in a hardware security module (HSM), such as a smart card.  The module performs the signature; thus, the private key never leaves the module.

Law 6:  You are the weakest link.  Once more, a human error.

LinkedIn password leak

OK, by now everybody should be aware that about 6.5 million hashed passwords leaked out from LinkedIn.   On 6th June, the information started to buzz around.   The same day, LinkedIn confirmed that some of the alleged leaked passwords were real.  Soon the leak was confirmed, and LinkedIn requested the compromised users to change their passwords.  I was among the happy compromised users.

What is the problem?   You should never store passwords in the clear on a computer.  In fact, the good practice is to store the hashed password rather than the password itself:

hashedPassword = Hash(password)

To test the validity of a password, you check whether the hash of the proposed password matches the stored hashed password.  The hash is the result of a one-way function (SHA1 and MD5 are examples of such one-way functions).  It is extremely difficult (though no longer impossible in the case of SHA1) to create a valid input of a one-way function that matches a given hash value.  In other words, given a hashed password, it is extremely difficult to guess a valid password matching it.
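A minimal Python sketch of this practice, assuming SHA1 as the hash function (the names and the example password are illustrative):

```python
import hashlib

def hash_password(password: str) -> str:
    """Hash(password): what the server stores instead of the password."""
    return hashlib.sha1(password.encode()).hexdigest()

stored = hash_password("correct horse")   # kept in the user database

def check_login(attempt: str) -> bool:
    """Compare Hash(attempt) against the stored hash; never the clear text."""
    return hash_password(attempt) == stored

print(check_login("correct horse"))  # True
print(check_login("wrong guess"))    # False
```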

So, are we safe?  No.  Where is the problem?   It comes from rainbow tables.  Rainbow tables are huge precomputed tables of hash values for a given hash function.  Ophcrack is one example of a tool based on rainbow tables.  If the password is part of this dictionary, then it is extremely fast to find it.
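The effect of such precomputation can be illustrated with a tiny lookup table (real rainbow tables use hash chains to trade time for space, but the outcome is the same; the password list here is made up):

```python
import hashlib

# Precompute once: hash every password in a dictionary of likely candidates.
common_passwords = ["123456", "password", "linkedin", "letmein"]
precomputed = {hashlib.sha1(p.encode()).hexdigest(): p for p in common_passwords}

# A leaked, unsalted hash is then reversed by a simple dictionary lookup.
leaked_hash = hashlib.sha1(b"linkedin").hexdigest()
print(precomputed.get(leaked_hash))  # instantly recovers "linkedin"
```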

Indeed, the good security practice for storing passwords is to use salted hashes.  A salted hash is a one-way function that uses an additional “secret” value called the salt.  In that case, usual rainbow tables no longer work:

hashedPassword = Hash(password + salt)
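A sketch of the salted scheme, assuming one random salt per user stored alongside the hash (a common arrangement; the names are illustrative).  Since every user has a different salt, an attacker would need one precomputed table per salt, which defeats the approach.

```python
import hashlib
import os

def store_password(password: str):
    """Return (salt, hash) to store: a fresh random salt per user."""
    salt = os.urandom(16)
    h = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt, h

def check_password(attempt: str, salt: bytes, stored_hash: str) -> bool:
    """Re-hash the attempt with the stored salt and compare."""
    return hashlib.sha1(salt + attempt.encode()).hexdigest() == stored_hash

salt, stored = store_password("correct horse")
print(check_password("correct horse", salt, stored))  # True
print(check_password("wrong guess", salt, stored))    # False
```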

Unfortunately, it seems that LinkedIn did not use salted hashes.

Lesson: always use the best practices in security.  If we use so many tricks, it is often because there are good reasons, often the result of a lesson learned from a hack.

Hacking reCaptcha

reCaptcha is Google’s captcha.  The hacking team DefCon 949 (DC949) disclosed at the LayerOne conference their method to break reCaptcha.  The announced accuracy is an astonishing 99%.  Some interesting lessons from this hack:

  • The method to break reCaptcha attacked the audio part.  Normally, reCaptcha proposes challenges made of altered scanned words from books, and you have to type them.  Thus, it has a large space of challenges.  The trick: reCaptcha has a mode for visually impaired people, where the challenge is audio, with words over a noisy background.  The vocabulary is limited to 58 words, and the background is a mix of a limited number of audio sequences.  Thus, there were far fewer audio challenges than visual challenges, and the attackers went after the easiest challenge.  As a cryptographic metaphor, they had the choice between a large key and a small key for the same final result.
    A nice illustration of law 6: “Security is not stronger than its weakest link”.   The audio challenge was the weakest link.
  • Before the conference, Google updated its algorithms, thus defeating the hack.  This spoiled the presentation a little bit.  Nevertheless, it took nothing away from the quality of the attack.  When reading blog coverage, I sometimes had the feeling that some people thought this was unfair behavior.  No!  It is the right thing to do.  It is always a cat-and-mouse game, and the mouse has to run fast.

The presentation is available on YouTube:
http://www.youtube.com/watch?v=rfgGNsPPAfU