Oracle wants secure-coding-aware students

In her blog, Mary Ann Davidson, CSO at Oracle, highlights a weakness in the software supply chain. She castigates US universities for not training software students in secure coding. She is absolutely right, and the problem is not limited to US universities. Secure coding should be part of the normal software development curriculum, like methodologies, algorithmics and languages. Very few students have this secure coding background when joining the industry, yet security is becoming pervasive.

Having secure coding lectures would not necessarily make students good at secure coding. It requires a particular mindset (a hacker's mindset?). Nevertheless, we could expect some benefits:

  • They would apply some elements of secure coding in their day-to-day work.
  • They would avoid some basic errors in their production code.
  • Most important, they would be security aware. They would ask knowledgeable people to put the right solutions in place. They would avoid writing software with highways for hackers. They would be more robust against social engineering.

One of the challenges of teaching secure coding is that it is not as formalized as other elements of software engineering. Secure coding is very much based on heuristics and a few pinches of black art. Academic communities should invest more in this field, and more conferences should treat this topic. Furthermore, practitioners should teach in universities: only real practical knowledge can generate secure code. Industry should help universities with this challenge.

She also proposes to have students hack each other's solutions. This would be a revolution, but a good practice: it creates the right mindset. Hackers are used to such contests at conferences like DefCon, Black Hat or the Chaos Communication Camp. Even some governments experiment with such challenges (see the French challenge Défi Sécurité: Système d’Exploitation Cloisonné et Sécurisé pour l’Internaute). Should we not have such hacking challenges between universities?

I would just like to cite a dreadful statement, unfortunately true:
“We simply – and collectively – must evolve to defensive mindsets delivering defensible code lest none of us survive in a hostile world.”

Second Life: An additional frontier to secure the enterprise?

On April 3, IBM and Linden Lab (LL) made an interesting announcement: IBM will host its own private islands in Second Life. See the Reuters news.
If you acquire or rent land in Second Life (SL), you may define who can access it. If you open a shop, it will be open to the public. If you want it to become the headquarters of your guild of hackers, you will grant access only to the members of the guild. Thus, a company may have virtual meeting rooms accessible only to the avatars of its employees. The access control is performed by LL's servers.

In the case of IBM, the server(s) managing IBM’s islands will be behind IBM’s firewall, i.e., within IBM’s cybersphere and no longer within LL’s. When the avatar of an IBM employee navigates the public SL, it is managed by LL. Once it enters IBM’s islands, it is managed by IBM’s dedicated servers.

Of course, this should bring greater control and security to IBM. Still, there are some interesting problems behind it:

  • In theory, an avatar can bring a virtual asset from the public SL into the private island.
  • In theory, an avatar cannot bring a virtual asset from the island into the public SL.

For this to be true, there would have to be total isolation between the two worlds. Ideally, the avatar on the island should be different from the avatar in the public SL. The public avatar could pass his/her clothes and belongings to the island one, but the island one could not pass anything back to the public one. This also means there would be no feedback from what happened on the island to the public SL. Every transfer from the island to the public domain is a potential leakage (through scripting, …).
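
To make the required isolation concrete, here is a minimal sketch, in Python, of such a one-way transfer policy. All names are invented for illustration; this is not Linden Lab's actual interface.

    # One-way asset transfer policy between the public grid and a private
    # island. Invented names; not Linden Lab's actual API.
    ALLOWED_TRANSFERS = {("public", "island")}  # one way: public -> island only

    def may_transfer(origin: str, destination: str) -> bool:
        """Allow a transfer only if the (origin, destination) pair is whitelisted."""
        return (origin, destination) in ALLOWED_TRANSFERS

    assert may_transfer("public", "island")      # the avatar brings its belongings in
    assert not may_transfer("island", "public")  # nothing may flow back out

Any exception added to this whitelist is exactly the kind of potential leakage mentioned above.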

In any case, allowing an avatar to bring a virtual asset onto the island is a potential security breach: a forged virtual asset could contain a virus or a Trojan. Of course, we may expect the servers to sit in a firewalled domain within IBM’s infrastructure. By the way, even in the public domain, SL may already have a foot inside IBM’s firewall through the computer of the avatar’s owner.

Would it not have been safer for IBM to create its own virtual meeting world, totally independent from SL (even if built with LL’s software)? But that would probably be less glamorous.

Establishing end to end trust

Microsoft issued an extremely interesting white paper: Establishing End to End Trust. It was presented at RSA 2008 and is worth reading. The main idea is that a trusted stack (encompassing hardware trust, OS trust, application trust, data trust and persona trust), together with the ability to audit for accountability, should make the Internet more secure.

It is interesting to note the extreme caution Microsoft takes on the topic of privacy and identity. Section IV is a fully dedicated cautionary note. Clearly, Microsoft fears that this initiative will be perceived as a Big Brother initiative. This is probably a sequel to the backlash against Palladium.

I will focus on the notion of trusted stack; this is an addition to my previous post on the Xbox hack. The trusted stack is based on signatures. According to the paper, signed code will fall into three categories:
“Even if code is signed, however, it will still fall into one of three buckets. There will be code that is signed by a known entity (e.g., Microsoft, Oracle, Adobe) that is trusted due to past experience, brand reputation or some other factor; there will be code that is signed but known to be malware (e.g., spyware, which can then be blocked); and there will be code signed by entities that are not known to the user.”
The paper clearly highlights the importance of the criteria for obtaining a signature: if they are weak, then the trust is weak. The concept of signature relies on the fact that an authority, often called a trusted third party, provides signature keys and associated certificates only to compliant and trusted principals. We expect the trusted third party to do its job correctly. One of the strengths of the PC is the wealth of available shareware and freeware. There are thousands of small software publishers in the world, so the authority will never be able to know whether they are all trustworthy. Will these publishers be allowed to sign?
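
As a minimal sketch of the three buckets described in the quote above (in Python; the signer lists are invented placeholders for whatever reputation data the platform would hold):

    # The paper's three buckets for signed code. Signer lists are invented.
    TRUSTED_SIGNERS = {"Microsoft", "Oracle", "Adobe"}  # trusted by reputation
    KNOWN_MALWARE_SIGNERS = {"EvilSoft"}                # signed, but blacklisted

    def classify(signer: str, signature_valid: bool) -> str:
        if not signature_valid:
            return "invalid signature: reject"  # outside the three buckets
        if signer in KNOWN_MALWARE_SIGNERS:
            return "bucket 2: signed malware, block"
        if signer in TRUSTED_SIGNERS:
            return "bucket 1: signed by a known, trusted entity, run"
        return "bucket 3: signed by an unknown entity, ask the user"

The interesting question is who feeds the two lists, which is exactly where the weak criteria mentioned above come into play.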

To compensate, Microsoft proposes a reputation platform. Unfortunately, like all reputation systems, it has limitations. Reputation increases only with the number of users recommending the software, i.e., with the number of people taking the risk. Furthermore, many people will not check it (the same people who do not use an antivirus or do not update their software).
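
A toy illustration of this limitation, with an invented scoring rule: a new piece of software has no usable reputation until enough users have already taken the risk.

    # Toy reputation score (invented rule): fraction of positive reports,
    # undefined until a minimum number of users have reported at all.
    def reputation(positive: int, total: int, min_reports: int = 50):
        if total < min_reports:
            return None              # cold start: nothing to check against
        return positive / total

    print(reputation(3, 4))          # None: the first users carry all the risk
    print(reputation(480, 500))      # 0.96: a score exists only after many tried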

Furthermore, as explained in a previous post, a signature does not mean that the software is secure. Only peer auditing of the software before signing may give this assurance.

In other words, the trusted stack as described will end up with the following situations:

  • Signed software that we trust because it is open source or comes from a publisher we trust.
  • Signed software that we do not know whether we can trust.

It is still up to the user to decide whether to take the risk. In other words, we are not far from the existing situation. The only difference is that, with a trusted stack based on a TPM, an application may trust and use secure elements of the lower layers and interact with other trusted principals.

There is also much to be said about audit, but that is for another post.

Chain of trust

Yesterday, I highlighted the focus on the chain of trust. I would like to come back to it.
The chain of trust is based on the concept that an authority is trusted. This authority then delegates its trust by signing a certificate for another authority. This is how Public Key Infrastructures (PKI) work: a Certification Authority holds a root of trust, and all certificates cascade back to it.

In the case of downloaded or loaded software, it works in a similar way. The software is signed with the private key of an authority. The host that loads the software checks the signature using the corresponding certified public key, and the certificate may be part of a hierarchical signature scheme. This seems extremely sound. Where is the problem?
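
As a minimal sketch of such a two-link chain, using Python's cryptography package and simplifying the certificate to a raw signed public key (real systems use X.509 certificates and revocation):

    # Two-link chain of trust: the root key certifies an intermediate key
    # (by signing its raw public bytes, a simplification of a certificate),
    # and the intermediate key signs the software.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def sign(key, data):
        return key.sign(data, padding.PKCS1v15(), hashes.SHA256())

    def verify(pub, signature, data):
        pub.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())  # raises if bad

    root = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    intermediate = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The root authority vouches for the intermediate authority.
    inter_pub = intermediate.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
    cert_sig = sign(root, inter_pub)

    # The intermediate authority signs the software to be loaded.
    software = b"binary image to be loaded"
    code_sig = sign(intermediate, software)

    # The host trusts only the root public key and walks the chain down.
    verify(root.public_key(), cert_sig, inter_pub)         # link 1: certificate
    verify(intermediate.public_key(), code_sig, software)  # link 2: code signature
    print("chain verified: software accepted")

Note that the whole verification rests on the host holding a genuine root public key, which is exactly the second assumption discussed below.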

From the cryptographic point of view, the trust model rests mainly on two assumptions (in addition to the traditional use of robust, proven algorithms and secure implementations):
1- No private key involved in the signature scheme leaks. In reality, the assumption is mainly that the private root key does not leak; other leakages can be handled through revocation.
2- The attacker cannot replace or add a new root public key.
The second assumption is often forgotten. Nevertheless, so far this seems sound.

Unfortunately, the trust model is more complex. It adds a third assumption:
3- If a piece of software is signed, then it is safe.

In an ideal world, assumption 3 means that the signing authority carefully checked the software and certifies that it is safe. Any developer knows how difficult it is to carefully review even a small piece of software for flaws, let alone a complete application…

In the real world, where a host may receive many applications, for instance on game consoles or with future TPM-based software for computers, we may assume that the signing authority will sign any piece of software presented by a software editor it trusts. This means that the trust model has a fourth assumption:
4- A piece of software provided by a known software editor can be trusted.

Unfortunately, this assumption is rather weak. Many attacks or errors can invalidate it (malware insertion, security flaws, impersonation of the editor, …). This is why the chain of trust is not as effective as we could expect in an environment that handles many applications.

The chain of trust may be stronger in more restricted environments such as set-top boxes.

Open source and Kerckhoffs' law

In a recent post at TechRepublic, Chad Perrin argued that open source is definitely a better security solution than a proprietary one because it complies with Kerckhoffs' law.

Although this is true most of the time, it is not an absolute truth in security (as usual); it depends on the trust model of the security system. Let me take an example: OpenSSL. The trust model of SSL is that Alice and Bob trust each other and want to prevent Eve from spying on them or tampering with their messages. For this, OpenSSL uses cryptographic algorithms. The OpenSSL cryptographic toolbox is well studied and perfect, but only under the above-mentioned trust model.

Let us now suppose that Alice wants to control Bob's access to information stored on Bob's computer. She does not trust Bob. Thus, she encrypts the information with a secret key and gives a decryption program to Bob. Nevertheless, for obvious reasons, she wants to keep the secret key secret from Bob. She cannot use the cryptographic toolbox of OpenSSL (although it is good and has no flaws) because Bob, being a good hacker, will easily extract the secret key by knowing where and when it is used in OpenSSL.
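
As a toy illustration in Python (Fernet, from the cryptography package, stands in for any open, standard toolbox; the content and key handling are invented):

    # Toy "DRM client" Alice ships to Bob. The cipher itself is sound, but
    # the program must contain the key to decrypt, so Bob can simply read it.
    from cryptography.fernet import Fernet

    SECRET_KEY = Fernet.generate_key()  # in Alice's shipped program this would
                                        # be hard-coded, i.e. visible to Bob

    def play(protected_content: bytes) -> bytes:
        """What Alice intends: decrypt only for rendering."""
        return Fernet(SECRET_KEY).decrypt(protected_content)

    token = Fernet(SECRET_KEY).encrypt(b"premium content")  # Alice's server side
    print(play(token))                        # Bob's player works as intended...
    print(Fernet(SECRET_KEY).decrypt(token))  # ...and Bob can decrypt directly,
                                              # since the key is in his hands

Bob never needs to break the cipher; he only needs to read the key out of the program he was given.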

This example is a simplified illustration of the problem of DRM. This is why it is impossible to design an open source DRM for B2C or B2B applications: the final user is not trusted. It may make sense in a C2C (consumer-to-consumer) model.
Open source is perfect if the trust model of the system assumes that the “owner” or “operator” of the corresponding software is trusted. If that is not the case, then open source is not the right answer, and we enter the realm of secure coding and tamper-resistant software, which is another story.

Nevertheless, even for proprietary implementations, it is recommended to use well-known and well-studied algorithms and protocols; here, security by obscurity is bad. Implementation issues are another story (remember the AACS hack).

Confidential data and P2P

Last year, Pfizer had a serious security breach: the personal records of 17,000 current and former employees were available on a peer-to-peer (P2P) network. The wife of a Pfizer employee had installed file sharing software on her husband’s company laptop. The configuration was badly set, and confidential information leaked. This type of leakage is rather common. In Security Newsletter n°4, I reported on a virus using P2P software to distribute random files from a hard disk. Japanese defense plans leaked!

The first-thought recommendation would be to ban P2P software from the company's computers. This recommendation has limits:

  • P2P software may be useful in some contexts (and will probably become more prevalent in the future).
  • There is no serious way to prevent users from installing such software and using it outside the firewalled environment of the company. It is possible to block the installation of software by users, but this quickly becomes a problem for the IT department (cost of installing new software, upgrades, patches, …). It is often not practical except in highly secure environments. In any case, IT-aware users will usually bypass the control.

Thus, the best recommendation I can give is to encrypt all confidential files on the laptop. This answers this threat, because what is shared is encrypted, i.e. useless, data; it also answers many other threats, such as laptop theft. Obviously, the choice of the encryption tool is important (we will report on the latest hack of encryption tools in the next security newsletter, to be published in a fortnight).
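
As a minimal sketch of the idea, again with Fernet from Python's cryptography package (the file name is invented; a real deployment would rely on a vetted disk or file encryption product with proper key management):

    # Encrypt a confidential file so that, if a misconfigured P2P client
    # shares it, only useless ciphertext leaks.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice: derived from a passphrase or
                                 # kept in a protected key store, never on disk
    fernet = Fernet(key)

    with open("employee_records.csv", "rb") as f:        # invented file name
        ciphertext = fernet.encrypt(f.read())

    with open("employee_records.csv.enc", "wb") as f:    # this is what could leak
        f.write(ciphertext)

    plaintext = fernet.decrypt(ciphertext)  # only a key holder can recover this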

It is also important to remember that you are at risk at home with your private data too. If you, or your relatives, use P2P software on your personal computer, check its configuration carefully to strictly sandbox the sharing space, hoping that there is no backdoor that allows changing it ;-)

In the referenced article, I also found the data mining performed on P2P network queries interesting. Privacy leaks even through P2P network usage!

Social networks and privacy

Recently, Facebook enhanced its privacy controls: users are supposed to be able to control who can access personal data, for instance personal pictures. Nevertheless, a security hole allowed access to personal pictures regardless of these control rules. A journalist from the Associated Press (AP) was able to browse personal pictures (see the AP news). Facebook quickly fixed the hole.

Once more, this news raises the question of privacy and social networks. Social networks are no different from traditional web sites: data stored on their servers are vulnerable and may be exposed. Social networks, due to their social role, amplify the problem, since information posted on them is by nature personal and thus potentially sensitive.

Data on social networks (or any other type of site) have two characteristics:

  • They are vulnerable: they may leak or be stolen.
  • They are persistent: the Internet has a huge memory. Ten-year-old data is still somewhere in cyberspace, available to be revealed.

The consequences are:

  • Information that you do not want to be public may become public.
  • Information that is unimportant today may become embarrassing in the future; it will still be available and may ruin a reputation.

Thus, one rule: never post personal information that you do not want to become public one day. It may become public.