Watermark and privacy

The Center for Democracy & Technology (CDT) issued an interesting paper titled "Privacy Principles for Digital Watermarking". CDT has published similar principles for other technologies such as RFID and DRM.

The document proposes eight principles:
1. Privacy by design; interestingly, in this principle CDT recommends that digital watermarking technology providers contractually bind application designers to respect privacy. This is highly ethical, but is it realistic in a business environment?
2. Avoid embedding independently useful identifying information directly in the watermark; in other words, the payload should look random to anyone without access to the relevant information
3. Provide notice to end users; CDT provides an interesting rationale for informing end users when the watermark is used against copyright infringement: end users should secure their content to prevent theft by third parties, or they may face legal action.
4. Control access to reading capability
5. Respond appropriately when algorithms are compromised; their recommendation is not to renew the algorithms, as technologists would advise. Rather, CDT recommends publishing a notice if the hack allows watermark forging. I am not sure technology providers will love this.
6. Provide security and access control for back-end databases
7. Limit uses for secondary purposes
8. Provide reasonable access and correction procedures for personally identifiable information

The principles are sound and many of them apply to other security-related techniques. Of course, given its publisher's goals, some recommendations are utopian. This document is worth reading.

FBI warning against counterfeit Cisco routers

In early May, the FBI issued a warning about counterfeit Cisco routers. The US government, universities, and companies had been purchasing top-notch routers from Cisco. In fact, their resellers were sourcing counterfeit material in China. As a result, more than 3,500 counterfeit devices were installed in critical places.

The problem is that nobody knows whether trapdoors were installed in these routers. Backdoors in sensitive places would be very strong weapons for any attacker. Currently, we do not know whether this is part of information warfare or just a traditional counterfeiting operation.

To limit expenses, more and more governments and even armies use mainstream devices for their infrastructure. They no longer build their own equipment. This means they have changed their trust model. They now use the same trust assumption as we, common mortals, do: trust your supplier.

Of course, with counterfeit material, this assumption is extremely weak. The risk is not only the presence of trapdoors, but simply the quality of the device or software itself. In critical equipment, reliability may be lower than expected.

Nevertheless, is this assumption true even for genuine equipment? This reminds me of the accusation of an NSA trapdoor in Microsoft's cryptographic API: researchers discovered the presence of a key called _NSAKEY! (see cryptome.org). This led some governments to require exclusively open-source software in parts of their IT infrastructure to avoid potential trapdoors.

To view the FBI's presentation, visit abovetopsecret.com

History: the secure line between Kremlin and Elysée

In January 1968, France sold the USSR equipment to encrypt the direct line between the Kremlin and the Élysée (the French equivalent of the US White House). The equipment cost about 125,000 francs. The simplified description of the equipment clearly shows that it was based on a one-time pad: the devices encrypted and decrypted using random tapes ("appareils de chiffrement et de déchiffrement par bandes aléatoires").

It was common knowledge that the direct line between the White House and the Kremlin was protected by a one-time pad. The same was true for the line between France and the USSR, but with French equipment.

Are they still using one-time pads, or systems that are less theoretically secure but more user-friendly?

For more information, read Quand l’Elysée équipait le Kremlin (in French)

Predictable random generator in Debian’s OpenSSL

On 13 May, Debian announced that Luciano Bello had discovered a weakness in the random generator used by OpenSSL. A line of code had been removed "for quality reasons".

/*
* Don't add uninitialised data.
MD_Update(&m,buf,j); /* purify complains */
*/

Checking tools such as Purify or Valgrind complained that the variable buf was not initialised, so it was decided to remove this line. Unfortunately, the random generator seeded itself from two sources: the process ID and this random buffer buf! A process ID can take only 32,768 values. In other words, without the contribution of buf, the seed of the random generator was far too small. The random generator was predictable, and thus the keys generated by Debian's OpenSSL were predictable and weak.

Of course, the mistake was immediately corrected. The first weak version had been published in September 2006. All cryptographic keys generated by these versions of OpenSSL should be treated as compromised material, and new keys should be generated with the latest version. Other distributions of OpenSSL are not affected. Nevertheless, they may handle Debian-generated keys and thus be in danger when using them.

Conclusions:

  • Quality-checking tools are useful. Nevertheless, their results have to be used with judgment. This is especially true in the field of security, where it is sometimes mandatory to "violate" quality heuristics. A typical example is code obfuscation, whose objective is to artificially increase the complexity of software (whereas quality guidelines ask to reduce complexity).
  • It took more than 18 months for somebody to detect the impact of this modification.
  • Being paranoid, I would say this delay is more than enough for a well-organized attacker to maliciously add a reasonably smart trapdoor to an open-source package and then exploit it against her target.
  • Open source allowed this weakness to be detected :-) but open source also allowed it to be introduced :-( Nevertheless, I believe the pros outweigh the cons. There is probably a critical mass of reviewers to reach before gaining some confidence.
  • Not everybody is able to write (and understand) security code.

Thanks to Gomor for the link.

Mashup security

A new trend in Web design is to add mashup gadgets to Web2.0 sites. Many sites offer huge libraries of such mashups. Adding mashups to a site is extremely simple. Mashups easily add more features, a more professional look, and so on. Unfortunately, they also add potential vulnerabilities.

A mashup is a piece of source code (often JavaScript using an Ajax framework). It has a known, "documented" set of features. But are there hidden features? Potentially, some code could leak data. It is interesting to see that people may be very careful with incoming mail, yet totally unaware of mashups, accepting any of them as long as they look good. Once more, it is a question of trust. Do you trust the developer of the mashup?

IBM has proposed an authentication framework for mashups: SMash. It is an open-source project. This is a first step, but an authenticated source does not mean the mashup carries no bad payload. The questions should be: do you know the authenticated entity? Do you trust it? Can you examine the code?

Other companies such as Microsoft are also working on the topic. No doubt mashup security will soon become a hot topic once the first malicious mashups become mainstream.

Is open source more secure?

Still in the same issue of 2600, phundie describes an attack on GnuPG, an open-source signature program. He used the Linux LD_PRELOAD mechanism to override a shared library. By analyzing passphrase.c from the GnuPG distribution, he spotted the use of the functions read() and memcpy(). He wrote a library that overrides them and dumps the data to a file. It was then rather simple to spot the typed passphrase.

In the paper, he proposes several countermeasures, such as using only statically linked binaries, rewriting one's own versions of these procedures, or verifying that LD_PRELOAD has not been set.

This paper clearly illustrates that open source is not adapted to hostile environments. It gives a strong advantage to an attacker who controls the host. It would be interesting to write a good paper analyzing the trust model of open-source software and highlighting its assumptions. Any volunteer to co-author?

Book: The Big Switch

Nicholas Carr is the author of Does IT Matter?. In that first book, he questioned the future role of IT and forecast its end. In this new book, he continues his predictions with the advent of cloud computing.

He forecasts that computing power will become a utility, like electric power, drawing a parallel with the transition to electricity. Big companies such as Amazon (Elastic Compute Cloud, EC2) and Google are offering grid computing to external companies. The interesting part of the book is its analysis of the impact this will have in conjunction with the advent of Web2.0. It has already allowed small companies to succeed without a huge IT infrastructure.

The book also highlights the current trends of Web2.0. Chapter 7, "From the Many to the Few", is extremely interesting: it describes how companies such as YouTube or PlentyOfFish use, for next to nothing, crowds of willing "content creators". Chapter 8, "The Great Unbundling", is about the transformation of content consumption. He predicts that the future of the Internet will not be as bright as expected.
“But it’s clear that two of the hopes most dear to the Internet optimists-that the Web will create a more bountiful culture and that it will promote greater harmony and understanding-should be treated with skepticism. Cultural impoverishment and social fragmentation seem equally likely outcomes.”(extract)

The security threats highlighted in the book are the typical malware and privacy issues.

A book worth reading, because it sheds a provocative light on the future of the Internet.