Newsletter #11 is available!

After some delay, we have issued the fall edition. This quarter you will discover some news; more about the Defcon and Black Hat conferences (from one attendee with a liking for reverse engineering); more about the famous DNS weakness; and some thoughts about fighting piracy on P2P networks (with some explanations of the French HADOPI law, a story regularly followed on this blog).

You may find it at Security Newsletter #11.
Do not hesitate to post your comments and remarks here.

Feedback from ACM DRM Workshop

On Monday, I attended the 8th ACM DRM workshop. Here is my feedback on the workshop.

There were two invited talks.
Robert KAHN (CNRI) presented The Role of Identifiers in Information Access. The talk was about the Digital Object Architecture (DOA). The idea is to recast the Internet from a communication-centric system into a digital-object-centric one. Every digital object would be identified by a unique handle, and servers/proxies would resolve it into the actual location of the repository (does that remind you of something? :Wink: Kahn is behind TCP/IP). This is what DOI uses.
The link with DRM? The message was that it is important to separate the terms and conditions (expressed as metadata) from the actual enforcement. I fully agree. His attempt to apply it to the Broadcast Flag was more dubious.
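For readers who have never met the handle idea, here is a minimal sketch of the two-step resolution it implies: a global registry maps the handle prefix to a local resolver, which maps the full handle to the object's current location, so the identifier stays stable while the repository moves. The handle, registry entries, and URL below are all invented, and the real Handle System uses distributed resolver services, not in-memory dictionaries.

```python
# Toy handle resolution in the spirit of the Handle System / DOI.
# All identifiers and URLs below are made up for illustration.
GLOBAL_REGISTRY = {'10.1145': 'acm-local-resolver'}

LOCAL_RESOLVERS = {
    'acm-local-resolver': {
        '10.1145/1456520': 'https://repo.example/drm08',
    },
}

def resolve(handle):
    """Prefix -> local resolver -> current location of the object."""
    prefix = handle.split('/', 1)[0]
    local = LOCAL_RESOLVERS[GLOBAL_REGISTRY[prefix]]
    return local[handle]

print(resolve('10.1145/1456520'))  # https://repo.example/drm08
```

When the repository moves, only the local resolver's entry changes; everyone who stored the handle is unaffected.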

The second invited speaker was Yacov YACOBI, the lead of Microsoft’s anti-piracy group. He presented Content Identification and tackled three issues: piracy versus counterfeiting, a new DRM approach, and the economics of the fight against counterfeiters. His distinction between pirated goods and counterfeited goods did not seem very sound to me: a counterfeited good is a physical good that looks like the original and is sold at about the same price, so the sorting is mainly on price.
His new DRM approach was the use of media hashing (what we now call fingerprinting or perceptual hashing). Clearly, he was not aware of the state of the art in the field, either of existing solutions or of approaches like the one proposed by Philips many years ago.
In the last part, he presented a complex economic model for determining the optimal effort to spend striking back at counterfeiters. It would have been more interesting to devote the whole presentation to this topic alone.
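For readers unfamiliar with perceptual hashing, here is a minimal average-hash ("aHash") sketch, one of the simplest schemes (not necessarily what Yacobi or Philips use): unlike a cryptographic hash, small pixel perturbations leave the hash essentially unchanged, so two encodings of the same content land at a small Hamming distance. The 4x4 "images" are invented grayscale values.

```python
def average_hash(pixels):
    """One bit per pixel: 1 where the pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 35, 225],
            [ 8, 198, 28, 230]]

# Re-encoded copy: per-pixel noise, but the same overall structure.
reencoded = [[12, 195, 33, 218],
             [14, 212, 22, 210],
             [10, 208, 38, 228],
             [ 9, 200, 26, 233]]

h1, h2 = average_hash(original), average_hash(reencoded)
print(hamming(h1, h2))  # 0: the noisy copy hashes to the same value
```

Real systems hash downscaled frames or spectral coefficients and tolerate cropping and transcoding, but the matching principle (compare hashes by distance, not equality) is the same.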

About the other papers:
Hei JIN (IBM) presented Adaptive Traitor Tracing for Anonymous Attack. The starting point is the sequence-key traitor tracing scheme of AACS. The paper is an extensive analysis of how many movies you have to retrieve in order to safely incriminate one infringer within a non-cooperating coalition. The figures are still very high. As we stated many years ago, sequence keys will probably never be useful in AACS. Furthermore, the analysis assumes that the infringer does not collude with other members by mixing their copies; were I an attacker, that is exactly what I would do. Nevertheless, it is nice theoretical work using probability.
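As a back-of-the-envelope illustration of the kind of question the paper refines: against a single non-colluding infringer, each recovered movie reveals which of q variants the pirate copy used, so an innocent device matches the whole sequence only with probability q^-n. All numbers below (16 variants per movie, a billion devices, a 10^-6 tolerated false-accusation level) are invented for illustration and are not AACS's actual parameters.

```python
import math

def movies_needed(variants_per_movie, population, max_false_accusations):
    # An innocent device matches the pirate copy's variant for one movie
    # with probability 1/q, hence a whole n-movie sequence with q**-n.
    # Require: population * q**-n <= max_false_accusations, i.e. the
    # expected number of wrongly matching innocent devices is negligible.
    n = math.log(population / max_false_accusations, variants_per_movie)
    return math.ceil(n)

print(movies_needed(16, 1e9, 1e-6))  # 13 movies under these toy numbers
```

Even this optimistic single-traitor model needs more than a dozen recovered movies; coalitions and collusion push the real figures much higher, which is the paper's point.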

M. YUNG (Microsoft) presented Public-Key Traitor Tracing from Efficient Decoding and Unbounded Enrollment, a traitor tracing scheme based on El Gamal. I will let Marc JOYE comment :Wink:

Pramod JAMKHEDKAR presented Formal Modeling of Rights. He proposed a model meant to encompass any Rights Expression Language. Compared to the work of GUTH or CHONG, it adds obligations: external conditions that have to be fulfilled before an action is granted.

DOERR (with Ton KALKER) presented Design Rules for Interoperable Domains: Controlling Content Dilution and Content Sharing. It presented two interesting concepts of CORAL: the rights token (a REL that is independent of the underlying DRMs) and the management of domains. The most interesting part was the ideas on how to control domain size and dilution. He proposed three mechanisms: proximity, cardinality, and time-out. I think we did not dare to embed time-outs for content within DVB-CPCM; I am not sure people would appreciate it.
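Two of the three mechanisms can be sketched in a few lines. Below is a toy domain controller combining cardinality (a cap on member count) and time-out (members must renew or fall out); the proximity check, which would measure something like round-trip time, is left out. All parameters (5 devices, 30-day renewal) are invented for illustration, not taken from CORAL or DVB-CPCM.

```python
import time

class Domain:
    """Toy domain controller: cardinality cap + renewal time-out."""

    def __init__(self, max_devices=5, timeout_s=30 * 24 * 3600):
        self.max_devices = max_devices
        self.timeout_s = timeout_s
        self.members = {}  # device_id -> timestamp of last renewal

    def _expire(self, now):
        # Time-out rule: drop devices that have not renewed recently,
        # bounding dilution when devices silently leave the domain.
        self.members = {d: t for d, t in self.members.items()
                        if now - t <= self.timeout_s}

    def join(self, device_id, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        # Cardinality rule: refuse new members once the domain is full.
        if device_id not in self.members and len(self.members) >= self.max_devices:
            return False
        self.members[device_id] = now  # join, or renew an existing slot
        return True

d = Domain(max_devices=2, timeout_s=10)
print(d.join('tv', now=0), d.join('phone', now=0))  # True True
print(d.join('laptop', now=5))    # False: the domain is full
print(d.join('laptop', now=20))   # True: the two others timed out
```

The demo shows why the combination matters: the cardinality cap alone would block the laptop forever, while the time-out frees slots abandoned by stale devices.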

The discussions were extremely interesting, including a long one with the representative of the EFF (but that is another story).

And of course, I presented my paper A Four-Layer Model for Security of DRM.

Is Wi-Fi still secure?

This week, two news items seemed to shake the foundations of Wi-Fi security: the first concerned WPA/WPA2 and the second WEP.

ElcomSoft is a company that designs tools to recover lost passwords. Their latest product adds two new features. First, it distributes the workload across many computers. Second, it can use NVIDIA Graphics Processing Units (GPUs) to gain a factor of 20 in processing speed over a plain CPU. They announced a speed-up of 100 for cracking WPA/WPA2 passwords.

Of course, the press immediately “reported” this exploit, often without much insight. I have even seen blogs reporting a speed-up of 10,000. ElcomSoft’s “exploit” is simply the use of GPUs and distributed computing. This is not new; remember the use of several PS3s with Cell processors to create SHA-1 collisions (see Security Newsletter #9). ElcomSoft still uses brute force against WPA/WPA2. Thus, good luck and a lot of patience.
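For context on why brute force stays slow even with GPUs: WPA/WPA2-Personal derives its 256-bit pre-shared key with PBKDF2-HMAC-SHA1 over 4096 iterations, salted with the SSID. A minimal sketch (the passphrases and SSID are made up):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-Personal Pre-Shared Key: PBKDF2-HMAC-SHA1,
    # 4096 iterations, SSID as salt, 256-bit output.
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

# Each candidate passphrase costs 4096 HMAC-SHA1 computations, and the
# SSID salt prevents sharing precomputed tables across network names.
target = wpa_psk('correct horse', 'MyHomeAP')
for candidate in ['password', '12345678', 'correct horse']:
    if wpa_psk(candidate, 'MyHomeAP') == target:
        print('found:', candidate)
```

A factor of 100 in the loop's speed shortens the search, but the keyspace of a decent passphrase still makes exhaustion hopeless; that is the whole point of the iterated, salted derivation.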

The second news item is that a Japanese researcher, Masakatsu MORII, succeeded in cracking a WEP key in less than one second. He announced this exploit at CSS2008. The Japanese presentation is available at http://srv.prof-morii.net/~morii/image/CSS2008/CSS081010_WEP_slide.pdf (password WPE2008). We will have to wait some time for an English version. It will be interesting to analyze the attack to see whether it opens new methods for breaking keys. It is a drastic acceleration compared to the previous exploit, which took 6 minutes. Nevertheless, WEP has been considered too weak to protect Wi-Fi for many years; this just nails WEP’s coffin once more.

Was the security of Wi-Fi reduced this week? Clearly not by these announcements. The first one seems to be mostly a promotional trick to increase awareness of ElcomSoft. The second one attacks an already dead algorithm. By the way, check that you do not use WEP to protect your personal wireless network. I am sure you are already using WPA2.

The economics of information security

Ross ANDERSON and Tyler MOORE wrote an interesting state of the art on the economics of information security. Why does economics matter? The obvious answer is that it is about money, and money is a major driving factor of the software industry. The paper highlights a more compelling argument: many security failures come from misaligned incentives rather than from bad design. For instance, it is I who suffer when the operating system fails to prevent a virus from crashing my computer, not the OS’s vendor (especially if it holds a dominant position). As another example, the vendor of a player that reads AACS-protected content does not bear the losses due to content piracy.

The survey explores many fields of information security and shows how economic analysis can help to understand failures or to strengthen security. For instance, to trigger a network effect, it may be economically wise to lower security (at the least, security should not get in the way of potential customers) in order to become more attractive. Once the threshold is passed, strong security can be a good way to lock in the market (the second part of a good network effect). Another interesting topic is secure software development: it seems one should have a few extremely competent developers (in security) and a lot of testers.

I am not fully aligned with the conclusions on DRM and Trusted Computing. But here one may object that we do not have the same incentives :Happy:.

Definitely a paper to read. Furthermore, taking economics into account at design time is probably a good thing. I will have to dive into game theory.

The paper is available here.

The DRM game

Heileman G. and Jamkhedkar P. are regular contributors to the ACM DRM workshop: for many years, they have presented a paper at each edition, and their papers are worthwhile.

Last year, they presented an interesting paper (http://portal.acm.org/citation.cfm?id=1314287). It analyzed the possible strategies of Vendor and Consumer using game theory. The model was rather simplistic, so there were no big surprises in the outcomes, especially in the baseline game (section 2): were DRM unbreakable, Vendor should always sell protected content, and it is important for Vendor to decrease the utility of downloaded content versus sold content. Only common sense.
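The flavor of such a baseline game can be shown with a tiny pure-strategy Nash equilibrium check. The payoffs below (vendor utility, consumer utility) are invented for illustration and are not the paper's figures; they merely encode "protection lowers the value of downloading".

```python
# Hypothetical payoffs (vendor, consumer) for each strategy pair.
payoffs = {
    ('protect',   'buy'):      (3, 2),
    ('protect',   'download'): (0, 1),   # DRM degrades the pirated copy
    ('unprotect', 'buy'):      (2, 3),
    ('unprotect', 'download'): (-1, 4),
}

def pure_nash(payoffs):
    """Return strategy pairs where neither player gains by deviating alone."""
    vendor_moves = {v for v, _ in payoffs}
    consumer_moves = {c for _, c in payoffs}
    equilibria = []
    for v, c in payoffs:
        uv, uc = payoffs[(v, c)]
        v_best = all(uv >= payoffs[(v2, c)][0] for v2 in vendor_moves)
        c_best = all(uc >= payoffs[(v, c2)][1] for c2 in consumer_moves)
        if v_best and c_best:
            equilibria.append((v, c))
    return equilibria

print(pure_nash(payoffs))  # [('protect', 'buy')]
```

With these toy numbers the only equilibrium is (protect, buy), which matches the common-sense outcome above: protection pays off exactly when it sufficiently degrades the downloaded alternative.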

The paper becomes more interesting in section 3, where it analyzes the sub-games: what does the consumer do with the content, and how does Vendor react? One outcome is that the higher the penalty, the fewer Consumers Vendor has to sue. The interesting part is the description of a distribution mechanism with a trust valuation that defines the cost of the content and the associated bonuses. This is a trend initiated many years ago by Philips labs, based on the use of forensic watermarks.

I have always had a problem with that philosophy, because it relies on the rather strong assumption that the trust evaluation will work. I have many doubts about that, especially with B2C traitor tracing. Today, there are only a limited number of sources on P2P networks, and they do not collude. Now suppose that Consumers understand that they may cheat if they either collude or issue more source instances just to dilute the system… And I am not even speaking about attacking the reputation system itself (look at electronic auctions).

Nevertheless, game theory seems an interesting tool for exploring strategies. We may expect future papers with more complex models. I would like to see one that differentiates authors from vendors/distributors, and vendors from authorities.

Securing Virtual Worlds

Dr Igor MUTTIK of McAfee Labs edited a document entitled “Securing Virtual Worlds Against Real Attacks”. The document is interesting but very IT-security oriented, in that it addresses the traditional problems of IT in a client-server environment. The only point specific to virtual worlds is the fight against cheating applets. Thus, it offers good advice, but nothing revolutionary. In other words, he did not explore the new threats specific to virtual worlds (and there are many).

Nevertheless, he gives interesting advice for potential researchers on in-game threats. They will need:

  • better-than-average knowledge of the environment;
  • better access to the environment;
  • clearance from their employer to run tests for malware in various gaming environments (this need for clearance applies to any researcher who handles malware);
  • enough demand from customers to justify research and development of such security solutions. For him, the customers are the gamers; this is a typical bias of an anti-virus company, for which the customer is the end user. I believe the customers are the game editors: if their game is plagued by security flaws that make it not fun to play, gamers will escape to another world.

Virtual worlds will come under fire from typical malware, but also from new threats specific to them. Gamers will expect their favorite virtual world to be safe (from the computing-environment point of view, not the game-play point of view). There is a need to study these problems with a scope larger than traditional IT security.

Risky IT managers

The company Cyber-Ark has surveyed 300 IT managers. According to their press release, there are some interesting (and worrying) findings:

  • 88% of the interviewed IT managers admit that they would steal sensitive data if they were laid off. A third of them would leave with the list of privileged passwords that give access to root resources!
  • More than a quarter of the interviewed managers reported having faced problems of leaked or stolen data. Economic intelligence (a nice euphemism for industrial espionage) is a reality.

The report seems to show that bad practices with sensitive data and passwords are still widespread.

The 88% figure is awfully worrying. With the generalization of outsourced storage (SharePoint, …) and outsourced computing power (cloud computing), this problem will only grow. Outsourcing is changing the trust model of IT, and some trust hypotheses may weaken. Will you trust the IT administrators of an outsourcing company as much as your own? Are you sure they can be trusted? Will you audit their security policies, and their compliance with them? Storing sensitive data will become more and more complex.

I have not read the report yet. I will try to get access to it (it is not directly available on their site) and will come back to you with the best parts.