A glimpse at hacking mentality

While reading the spring 2008 issue of the hacker magazine 2600, I had fun with the paper "Password Memorization Mnemonic" by Agent Zero. The paper in itself is not extraordinary. Agent Zero has reinvented the notion of key derivation. He proposes, in a non-formalized way, to use a password-generating function that takes the name of each site as a parameter. He ends up with passwords in the format <site name><code name><number>. This is a typical trick, and you may devise your own function, adding for instance special characters.
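To make the scheme concrete, here is a minimal Python sketch of what such a mnemonic derivation function could look like. The code name "falcon" and the number 7 are invented placeholders for illustration, not values from the article.

```python
# Hypothetical reconstruction of the mnemonic scheme:
# password = <site name><code name><number>.
# "falcon" and 7 are illustrative placeholders only.
def mnemonic_password(site: str, code_name: str = "falcon", number: int = 7) -> str:
    """Derive a per-site password in the <site><code name><number> format."""
    return f"{site}{code_name}{number}"

print(mnemonic_password("ebay"))    # -> ebayfalcon7
print(mnemonic_password("amazon"))  # -> amazonfalcon7
```

Note how one leaked password (e.g. "ebayfalcon7") immediately betrays the whole pattern.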

Is it a good trick? In fact, it is hardly more secure than using the same strong password on all sites. The security relies on the secrecy of the <code name> and of the algorithm (Kerckhoffs!). And with such a weak algorithm (necessarily weak, because it must be memorable), if you know the password for one site, it is not difficult to guess the algorithm.
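For contrast, a real key-derivation approach keeps all the secrecy in a key rather than in the algorithm, in line with Kerckhoffs's principle. A sketch using only Python's standard library (the iteration count and output length are illustrative choices):

```python
import base64
import hashlib

def derived_password(master_secret: str, site: str, length: int = 16) -> str:
    """Derive a site-specific password from one master secret.
    Unlike the mnemonic trick, knowing one site's password reveals
    nothing about the master secret or other sites' passwords."""
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        master_secret.encode(),
        ("site:" + site).encode(),  # the site name acts as the salt
        100_000,                    # illustrative iteration count
    )
    return base64.b64encode(raw).decode()[:length]
```

The derivation is deterministic, so the same master secret always regenerates the same per-site password; only the master secret must stay in your head.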

The interesting point comes at the end of the paper. Some sites, for instance MySpace, limit the length of passwords, which ruins the algorithm. A normal user would propose a derived function that truncates the result to fit the requested length. But Agent Zero is a hacker, therefore he proposes:
1. Find a similar site with a better password policy.
2. Crack the webpage, system, or server. Show the webmaster or system administrator just how weak their current policy is, thereby spurring them to strengthen it. Admittedly, this is a more extreme-not to mention illegal-road to take, but it has been taken, and it has gotten results.
(Extract)
I love option 2. Definitely another mentality  :Wink:

Book: The Big Switch

Nicholas Carr is the author of Does IT Matter? In that first book, he questioned the future role of IT and was forecasting the end of IT. In this new book, he continues his prediction with the advent of cloud computing.

He forecasts that computing power will become a utility, like the power supply, and draws a parallel with the transition to centrally generated electricity. Big companies such as Amazon (Elastic Compute Cloud, EC2) or Google are offering grid computing to external companies. The interesting part of the book is the analysis of the impact this will have in conjunction with the advent of Web 2.0. It has already allowed small companies to succeed without huge IT infrastructures.

The book also highlights the current trends of Web 2.0. Chapter 7, "From the Many to the Few", is extremely interesting. It describes how companies such as YouTube or PlentyOfFish are using, for next to nothing, crowds of willing "content creators". Chapter 8, "The Great Unbundling", is about the transformation of content consumption. He predicts that the future of the Internet will not be as bright as expected.
“But it’s clear that two of the hopes most dear to the Internet optimists-that the Web will create a more bountiful culture and that it will promote greater harmony and understanding-should be treated with skepticism. Cultural impoverishment and social fragmentation seem equally likely outcomes.”(extract)

The security threats highlighted in the book are the typical malware and privacy issues.

A book to read, because it sheds a provocative light on the future of the Internet.

Oracle wants secure coding aware students

In her blog, Mary Ann Davidson, CSO at Oracle, highlights a weakness in the software supply chain. She castigates US universities for not training software students in secure coding. She is absolutely right, and the problem is not limited to US universities. Secure coding should be part of the normal curriculum of software development, like methodologies, algorithmics, and languages. Very few students have this secure coding background when joining the industry, while security concerns are becoming pervasive.

Even if students had secure coding lectures, it would not mean that they would become good at secure coding. That requires a particular mindset (hacker-minded?). Nevertheless, we could expect some benefits:

  • They would apply some elements of secure coding in their day-to-day work.
  • They would avoid some basic errors in their production code.
  • And most important, they would be security aware. They would ask knowledgeable people to put the right solution in place. They would avoid writing software with highways for hackers. They would be more robust against social engineering.
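As an illustration of the kind of "basic error" such lectures would cover, here is the classic SQL injection mistake and its fix, sketched in Python with an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Classic injectable query: user input is pasted into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver keeps data and code separate.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# The injection string turns the WHERE clause into a tautology.
print(lookup_unsafe("x' OR '1'='1"))  # leaks every secret
print(lookup_safe("x' OR '1'='1"))    # returns nothing
```

A student who has never seen this pattern will happily ship the first version; one lecture is enough to make the second version a reflex.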

One of the challenges of teaching secure coding is that it is not as formalized as other areas of software engineering. Secure coding is very much based on heuristics and a few pinches of black art. Academic communities should invest more in this field, and more conferences should treat this topic. Furthermore, practitioners should teach in universities; only real practical knowledge can generate secure code. Industry should help universities with this challenge.

She also proposes to have students hack each other's solutions. This would be a revolution, but a good practice: it creates the right mindset. Hackers are used to such contests at conferences like DefCon, Black Hat, or the Chaos Communication Camp. Even some governments experiment with such challenges (see Défi Sécurité: Système d'Exploitation Cloisonné et Sécurisé pour l'Internaute). Should we not have such hacking challenges between universities?

I would just like to cite a dreadful statement, unfortunately true:
"We simply – and collectively – must evolve to defensive mindsets delivering defensible code lest none of us survive in a hostile world."

Second Life: An additional frontier to secure the enterprise?

On 3 April, IBM and Linden Lab (LL) made an interesting announcement: IBM will host its own private islands of Second Life. See the Reuters news.
If you acquire or rent land in Second Life (SL), you may define who can access it. If you expect to open a shop, then it will be open to the public. If you want it to become the headquarters of your guild of hackers, then you will grant access only to the members of the guild. So a company may have meeting rooms for virtual meetings only accessible to the avatars of its employees. The access control is performed by LL's servers.

In the case of IBM, the server(s) managing IBM's islands will be behind IBM's firewall, i.e., within IBM's cybersphere and no longer LL's. When the avatar of an IBM employee navigates the public SL, it is managed by LL. Once it enters IBM's island, it is managed by IBM's dedicated server.

Of course, this should bring greater control and security for IBM. There are some interesting problems behind that:

  • In theory, an avatar can bring a virtual asset from the public SL into the private island.
  • In theory, an avatar cannot bring a virtual asset from the island to the public SL.

For that to be true, there would have to be total isolation between the two worlds. Ideally, the avatar in the island should be different from the avatar in the public SL. The public avatar could pass his/her clothes and belongings to the island one, but the island one could not pass anything to the public one. This also means that nothing that happened on the island would be fed back to the public SL. Every transfer from the island to the public domain is a potential leakage channel (through scripting, …)
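The one-way rule discussed above can be sketched as a tiny transfer policy; the zone names and the Asset class are invented for illustration, not part of any LL or IBM API.

```python
# Hypothetical sketch of the one-way transfer rule: assets may flow
# from the public grid into the private island, never the other way.
from dataclasses import dataclass

PUBLIC, ISLAND = "public", "island"

@dataclass
class Asset:
    name: str
    zone: str

def transfer(asset: Asset, destination: str) -> bool:
    """Allow public -> island; deny island -> public."""
    if asset.zone == ISLAND and destination == PUBLIC:
        return False  # would be a potential information leak
    asset.zone = destination
    return True
```

Even this toy policy shows the asymmetry: the inbound direction stays open, and that is precisely where a forged asset can slip in.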

In any case, allowing an avatar to bring a virtual asset into the island is a potential breach of security: a forged virtual asset could contain a virus or a Trojan. Of course, we may expect the servers to sit in a firewalled domain within IBM's infrastructure. By the way, even while in the public domain, SL may already have a foot inside IBM's firewall, through the computer of the avatar's owner.

Would it not have been safer for IBM to create its own virtual meeting world, totally independent from SL (even if using LL software)? But it would probably be less glamorous.

Wide distribution of fingerprints

In issue 92 of "Die Datenschleuder", the official magazine of the Chaos Computer Club (CCC), you may find, on a plastic foil, the fingerprint of German interior minister Wolfgang Schäuble. According to the CCC, applying the foil to the biometric readers to be used for the German passport may impersonate the minister. The CCC could not test it against the passport system itself; nevertheless, the hackers claim that they have experimented with such foils.

One of the challenges of biometrics is to verify that the measured traits come from a living principal. For instance, new generations of fingerprint readers measure the temperature of the finger, blood pressure, or the resistivity of the skin, which may allow them to detect fake fingers. Of course, another potential weakness is impersonation after the physical capture; in that case, all the additional measurements are useless.
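A toy liveness check might combine such measurements as follows; all threshold values below are invented for illustration and do not come from any real reader.

```python
# Illustrative liveness check (thresholds invented): newer readers
# combine several physiological measurements to reject fake fingers.
def looks_alive(temperature_c: float, resistivity_kohm: float) -> bool:
    """Accept only measurements plausible for a living finger."""
    finger_temp_ok = 25.0 <= temperature_c <= 40.0
    skin_resist_ok = 20.0 <= resistivity_kohm <= 3000.0
    return finger_temp_ok and skin_resist_ok
```

A plastic foil at room temperature would fail the first test; a foil worn over a warm, living finger might pass both, which is exactly why liveness checks only raise the bar rather than close the hole.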

This story, regardless of its veracity, highlights an inherent limitation of biometrics. It is possible to revoke a compromised key; it is impossible to revoke a compromised biometric identity. If your fingerprint is out in the wild for a given technology, there is no way to stop its use.

If this risk of capturing biometric data is real, then biometrics should only be used in two-factor authentication. In this configuration, the compromise of a biometric identity can be partly compensated by the second factor. In fact, in this case, authentication is reduced (for the compromised identities) to one-factor authentication, which is better than nothing. An upgrade of the biometric method that coped with the attack would allow the biometric factor to be re-validated.
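The graceful-degradation idea can be sketched as follows; the template strings, the revocation list, and the function names are all invented for illustration.

```python
# Illustrative two-factor check where a known-compromised biometric
# template is revoked, degrading that user to one-factor (PIN-only)
# authentication instead of locking them out entirely.
revoked_templates = {"fp-template-42"}  # e.g. a published fingerprint

def authenticate(fp_template: str, fp_presented: str,
                 pin_expected: str, pin_entered: str) -> bool:
    pin_ok = pin_entered == pin_expected
    if fp_template in revoked_templates:
        # Biometric factor is known-compromised: ignore it and
        # fall back to the remaining factor only.
        return pin_ok
    return pin_ok and fp_presented == fp_template
```

Once the biometric method is upgraded to defeat the attack, the template could simply be removed from the revocation list, restoring full two-factor strength.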

In any case, the generalization of biometrics will open a new black market: forged biometric identities.

PS: “Die Datenschleuder” could be translated as “the data sling”.

The crusade: DRM sucks

There is a terrible crusade against DRM. Many bloggers try to illustrate that "DRM sucks". As in all crusades, the arguments are sometimes true and sometimes wrong.

A famous blogger claimed that he had a perfect example of why DRM sucks. Following the death of HD DVD, it seems that the newest version of CyberLink's PowerDVD, one of the most widely used DVD software players, no longer supports HD DVD. That was fast. According to the blogger, this was the fault of DRM.

Unfortunately, this is the worst possible example. HD DVD and Blu-ray share the same basic DRM: AACS. Of course, Blu-ray has BD+ in addition; nevertheless, the basic DRM is identical. The lack of interoperability is due to intrinsically different formats at every level (physical, organization, coding) except for the DRM.

I suggest a better historical example of sucking DRM: VHS and Betamax  :Wink:

Establishing end to end trust

Microsoft issued an extremely interesting white paper: Establishing End to End Trust. It was presented at RSA 2008 and is worth reading. The main idea is that a trusted stack (encompassing hardware trust, OS trust, application trust, data trust, and persona trust), together with the ability to audit for accountability, should make a more secure Internet.

It is interesting to note the extreme caution Microsoft takes on the topic of privacy and identity. Section IV is a fully dedicated cautionary note. Clearly, Microsoft fears that this initiative could be seen as a Big Brother initiative. This is probably a sequel to the backlash against Palladium.

I will focus on the notion of the trusted stack. This is an addition to a previous post on the Xbox hack. The trusted stack is based on signatures. According to the paper, there will be three categories:
“Even if code is signed, however, it will still fall into one of three buckets. There will be code that is signed by a known entity (e.g., Microsoft, Oracle, Adobe) that is trusted due to past experience, brand reputation or some other factor; there will be code that is signed but known to be malware (e.g., spyware, which can then be blocked); and there will be code signed by entities that are not known to the user.”
The paper clearly highlights the importance of the criteria for obtaining a signature: if they are weak, then the trust is weak. The concept of signature relies on the fact that an authority, often called a trusted third party, provides signing keys and associated certificates only to compliant and trusted principals. We expect the trusted third party to do its job correctly. One of the strengths of the PC is the wealth of available shareware and freeware; there are thousands of small software publishers in the world. Thus, it will never be possible for the authority to know whether they are all trustworthy. Will these publishers be allowed to sign?
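The three buckets from the quotation can be sketched as a simple classification; the publisher lists below are illustrative placeholders taken from the quote, not an actual policy.

```python
# Sketch of the paper's three buckets for signed code.
# The sets are illustrative placeholders, not a real trust policy.
KNOWN_GOOD = {"Microsoft", "Oracle", "Adobe"}      # trusted by reputation
KNOWN_MALWARE = {"EvilSpyware Inc"}                # invented example

def classify(signed: bool, publisher=None) -> str:
    if not signed:
        return "unsigned"
    if publisher in KNOWN_MALWARE:
        return "signed-malware"       # signed but known bad: block it
    if publisher in KNOWN_GOOD:
        return "trusted"              # known entity, past experience
    return "unknown-publisher"        # the decision falls to the user
```

The whole question raised above is how a small shareware author ever moves from the last bucket to the first.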

To compensate, Microsoft proposes a reputation platform. Unfortunately, as with all reputation systems, it has limitations. Reputation will only grow with the number of users recommending the software, i.e., the number of people taking the risk. Furthermore, many people will not check (the same people who do not use an antivirus or do not update their software).
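A toy model of such a reputation platform, with an invented vote threshold, illustrates the bootstrapping problem: a new publisher stays "unknown" until enough users have already taken the risk.

```python
# Toy reputation model (threshold and score cutoff are invented).
from collections import defaultdict

votes = defaultdict(list)  # publisher -> list of 1/0 recommendations

def recommend(publisher: str, positive: bool) -> None:
    votes[publisher].append(1 if positive else 0)

def reputation(publisher: str, min_votes: int = 10) -> str:
    sample = votes[publisher]
    if len(sample) < min_votes:
        return "unknown"   # nobody has taken the risk yet
    score = sum(sample) / len(sample)
    return "trusted" if score > 0.8 else "suspect"
```

The chicken-and-egg is visible in the code: the first ten users get no help from the system at all.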

Furthermore, as explained in a previous post, a signature does not mean that the software is secure. Only peer auditing of the software before signing may give this assurance.

In other words, the trusted stack as described will end up with the following situations:

  • Signed software that we trust because it is open source or from a publisher we trust.
  • Signed software that we do not know whether we can trust.

It is still up to the user to decide whether to take the risk. In other words, we are not far from the existing situation. The only difference is that, with a trusted stack based on a TPM, an application may trust and use the secure elements of the lower layers and interact with other trusted principals.

There are also many things to be said about audit. This is for another post.