Security Newsletter #16 is available

The summer edition of Technicolor Security Newsletter is available.

Our guest is Chris Carey, the CTO of Paramount. He presents the new threats and types of piracy that studios are facing. Extremely interesting.

Stéphane Onno describes some vulnerabilities of deployed embedded devices. Patrice Auffret and Mohamed Karroumi shed some light on the latest attack on OpenSSL. Olivier Courtay and Antoine Monsifrot will introduce you to the basics of trusted platforms.

I hope that you will enjoy reading it. Do not hesitate to provide feedback.

To subscribe, send an email to security.newsletter@technicolor.com

Identifying providers and downloaders in BitTorrent

A team of five INRIA researchers presented an interesting paper at the 3rd USENIX Workshop on Large-Scale Exploits and Emergent Threats: Spying the World from your Laptop – Identifying and Profiling Content Providers and Big Downloaders in BitTorrent. The title says it all.

Using a single machine and some “flaws” in the BitTorrent protocol, they collected and analyzed 148 million IP addresses involved in more than 2 billion downloads. Then, they tried to identify the content providers and the big downloaders.

For instance, to find the content providers (i.e., the persons who generate the first torrent of a piece of content), they spied on the tracker sites to identify new torrents. If a torrent appeared with only one source address, then it was the address of the initial content provider!
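The heuristic described above can be sketched in a few lines. This is a toy illustration, not the researchers' actual tool; the peer-list snapshots are invented, and a real implementation would poll the tracker for each new torrent.

```python
def probable_initial_provider(peer_lists):
    """Given successive peer-list snapshots for a freshly published
    torrent, return the single IP seen alone at the start, or None."""
    first = peer_lists[0]
    if len(first) == 1:
        return first[0]  # a single source: likely the initial seeder
    return None

# Example: the torrent first appears with a single peer, then grows.
snapshots = [
    ["203.0.113.7"],                  # just after publication
    ["203.0.113.7", "198.51.100.2"],  # downloaders join later
]
print(probable_initial_provider(snapshots))  # -> 203.0.113.7
```

If the torrent already has several peers when first observed, the heuristic simply gives up, which is why spying early and continuously matters.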

Unsurprisingly, they discovered that most of the infringing content is provided by a limited number of content providers. The distribution has a pronounced long tail: the top 100 contributors provide about 30% of the content on BitTorrent! The hosting centers of the initial seeds are mostly in France and Germany, but the content providers themselves were from other countries.

Interestingly, they discovered that big downloaders were often hidden behind proxies, Tor or VPNs. They also identified some monitoring “sites”.

A nice view of the P2P activity.

Intellectual Property: Observations on Efforts to Quantify the Economic Effects of Counterfeit and Pirated Goods

How much do piracy and counterfeiting cost the industry? This is an extremely valuable question. Depending on whom you listen to, the figures vary wildly.

Are there any reliable figures? This was the question that the United States Government Accountability Office (GAO) tried to answer following a request from Congress. Last month, GAO published its 41-page answer.

What is the answer? I will quote an excerpt of the executive summary.

We determined that the U.S. government did not systematically collect data and perform analysis on the impacts of counterfeiting and piracy on the U.S. economy and, based on our review of literature and interviews with experts, we concluded that it was not feasible to develop our own estimates or attempt to quantify the economic impact of counterfeiting and piracy on the U.S. economy.

In other words, according to GAO, it is not possible to have reliable data. Nevertheless, the report makes an exhaustive review and analysis of the numerous reports proposing figures. Each time, GAO explains the weaknesses in the methodology. The report also offers an interesting, exhaustive bibliography of existing reports on piracy.

At no moment does GAO take a position on whether the published data underestimate or overestimate the real figures; it just states that there is no reliable way to estimate them. Which is totally logical: how can you estimate something that you cannot measure and do not know? If the institutions had precise knowledge, they would be in a capacity to stop it.

In addition, the report gives a good qualitative analysis of the consequences of piracy. The “positive” effect is rather anecdotal, although the argument that the IT and telco industries benefited from digital piracy was already made by Olivier BOMSEL.

Conclusion: piracy and counterfeiting are real. They have negative effects, but nobody can give a reliable estimation of their real impact.

Facebook – Another breach in the wall

This is the title of a presentation that George Petre gave recently at the MIT spam conference. George is the head of the Threat Intelligence Team of the anti-virus company BitDefender.

His team experimented with the use of social networks as a spam vector. And the results are impressive (frightening?). Social networks are great for spam.

One of the side results of the study is the evaluation of user acceptance of new “friends”. They created three types of profiles: the first one had the minimal allowed details (without picture), the second one had a picture and some more details, and the third one was extremely complete.

Just one hour after starting to add people to each profile, we managed 23 connections with the 1st profile, 47 with the 2nd profile and 53 with the 3rd profile.

Amazing! You don’t even need to be a social engineer.

And of course, once you are a friend, people have a natural tendency to trust you and accept any of your proposed links.

The full paper is available here. If you are worried about social networks, read this paper and you will be even more worried. The remedy seems simple: accept as friends only people that you know and trust. Unfortunately, this is contrary to the drive to have a high friend count.

Security Newsletter #15 is available

The new issue of the Technicolor Security newsletter is available. It comes with a new skin that fits our new branding: Technicolor.

I am proud that our guest was Bruce SCHNEIER. I suppose that I do not need to introduce him. As usual, we sometimes invite people who do not totally share our views. Obviously, Bruce’s position on DRM is not aligned with mine. Nevertheless, exchanging points of view is how the world evolves.

The other topics are the TLS renegotiation vulnerability, a presentation about free DNS, and the last part on forensics.

I hope that you will enjoy reading it.

The next issue is due in June 2010.

Privacy notices as “Nutrition” Labels

Reading privacy notices on online sites is a difficult task. Currently, they are displayed as lengthy textual pages full of legal mumbo-jumbo. How many brave people try to complete this unpleasant reading? I suppose that, except for privacy lawyers, almost nobody.

As a consequence, people give up their privacy and accept the privacy rules without knowing what they are.

Under the lead of Lorrie Cranor, a team of researchers from Carnegie Mellon proposes, in a paper to be presented at CHI 2010, an interesting approach: let’s display the privacy policy in a way similar to nutrition labels.

We are now all familiar with nutrition labels that allow you to have a look at carbs, proteins… (at least if you are concerned about your figure and/or health). They propose a table whose rows indicate the potentially collected data and whose columns define the potential uses. Each cell takes one of five color-coded values: will use, opt in, opt out, will likely not use, and will not use.
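The row/column/value structure can be illustrated with a small mock-up. The policy data and the text rendering below are invented for illustration; the actual proposal uses colors, not text columns.

```python
# The five cell values from the proposed label.
VALUES = ["will use", "opt in", "opt out", "will likely not use", "will not use"]

# Invented example policy: rows are collected data types, columns are uses.
policy = {
    "contact info": {"marketing": "opt out", "profiling": "will not use"},
    "location":     {"marketing": "opt in",  "profiling": "will use"},
}

def render_label(policy, uses):
    """Render the policy as a simple text grid: one row per data type,
    one column per potential use."""
    lines = ["data type".ljust(14) + "".join(u.ljust(22) for u in uses)]
    for data_type, cells in policy.items():
        lines.append(data_type.ljust(14) + "".join(cells[u].ljust(22) for u in uses))
    return "\n".join(lines)

print(render_label(policy, ["marketing", "profiling"]))
```

The point of the standardized grid is that two sites' policies become directly comparable at a glance, cell by cell.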

They compared different forms of policy displays. Guess what? The standardized privacy label won.

This proposal is clearly progress. Now, a more worrying question: how many people would choose their social network depending on its privacy policy? How many people would refuse to join the latest hot, need-to-be-there social network due to privacy issues? I’m afraid not so many.

Nevertheless, people would have at least the possibility to choose. This would be better than the current situation.

ReFormat: Automatic Reverse Engineering of Encrypted Messages

Five researchers, Z. WANG, X. JIANG, W. CUI, W. WANG and M. GRACE, presented, in my opinion, a nice piece of work at ESORICS 2009.

The objective was to automatically reverse engineer encrypted messages without breaking the cryptographic algorithms. The basic idea is simple: when a piece of software receives an encrypted message, it performs two steps (regardless of the cryptographic algorithms and protocols used). First, it decrypts the message, and then it processes the clear message. This means that the message sits in the clear in memory for a while. If you identify the location of this buffer, and when it is used, then game over.

To succeed, they used two tricks. The first was to distinguish between decryption routines and normal processing routines. Cryptographic functions use far more bitwise and arithmetic operations than normal software. They measured (on OpenSSL) that more than 80% of the operations in cryptographic functions were bitwise and arithmetic. The rate dropped below 25% for normal processing. This heuristic allows them to detect the encryption/decryption phases.

The second trick was to locate the buffer containing the clear text. They identify all the buffers that are written during the decryption phase. Then, they identify all the buffers that are read during the processing phase. The expected buffer should be in the intersection of the two sets.
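The two tricks can be sketched over a fake execution trace. The 80%/25% thresholds come from the article; the trace format and buffer names are invented for illustration, as the real tool works on binary instruction traces.

```python
def is_decryption_phase(ops, threshold=0.8):
    """A phase is classified as 'decryption' if the share of
    bitwise/arithmetic instructions exceeds the threshold."""
    ratio = sum(1 for op in ops if op == "bitarith") / len(ops)
    return ratio > threshold

def locate_plaintext(written_in_decrypt, read_in_process):
    """The plaintext buffer is written during decryption and read
    during normal processing: intersect the two sets."""
    return written_in_decrypt & read_in_process

# Phase 1 is 90% bitwise/arithmetic ops (crypto); phase 2 is only 20%.
phase1_ops = ["bitarith"] * 9 + ["other"]
phase2_ops = ["bitarith"] * 2 + ["other"] * 8
assert is_decryption_phase(phase1_ops)
assert not is_decryption_phase(phase2_ops)

# Decryption wrote buffers A and B; processing later read B and C.
print(locate_plaintext({"A", "B"}, {"B", "C"}))  # -> {'B'}
```

Buffer B, written by the crypto code and consumed by the normal code, is the recovered plaintext.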

Obviously, there are many ways to deter this attack. For instance, code obfuscation may change the ratio. Dynamic code encryption is of course a must. Nevertheless, I found the approach extremely clever.

Once more, it proves that writing secure implementations is extremely difficult. And it clearly requires a twisted mindset.

If you are interested in tamper resistance, you have to read this paper. It is available here.