Nov 23 2015

Attackers are smart

In 2010, Steven MURDOCH, Ross ANDERSON, and their team disclosed a weakness in the EMV protocol. Most credit/debit cards equipped with a chip use the EMV (Europay, MasterCard, Visa) protocol. The vulnerability made it possible to bypass the cardholder verification phase for a given category of transactions: the card does not condition transaction authorization on successful cardholder verification. At the time of disclosure, Ross's team created a proof of concept using an FPGA. The device was bulky. Thus, some people downplayed the criticality.

The team of David NACCACHE recently published an interesting paper disclosing an exemplary piece of work on a real attack exploiting this vulnerability: "when organized crime applies academic results." The team performed a non-destructive forensic analysis of forged smart cards that exploited this weakness. The attackers combined in one plastic smart card the chip of a stolen EMV card (in green in the picture) and the chip of another smart card, a FUN card. The FUN chip performed a man-in-the-middle attack: it intercepted the communication between the Point of Sale (PoS) and the actual EMV chip and filtered out the VerifyPIN commands. The EMV card never verified the PIN and thus was not blocked when wrong PINs were presented. On the other side, the FUN chip acknowledged the PIN to the PoS, which continued the fraudulent transaction.
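The filtering logic can be sketched as follows. This is a hypothetical illustration in Python, not the FUN chip's actual firmware; what is factual is that, in ISO 7816-4, the VERIFY command uses instruction byte 0x20 and 0x9000 is the success status word.

```python
# Hypothetical sketch of the "chip in the middle" logic: the FUN chip
# relays every APDU to the stolen EMV chip, except VERIFY (INS 0x20,
# the PIN check), which it answers itself with the success status word.

SW_SUCCESS = bytes([0x90, 0x00])  # ISO 7816-4 "OK" status word
INS_VERIFY = 0x20                 # ISO 7816-4 VERIFY instruction

def chip_in_the_middle(apdu, real_card):
    """Relay `apdu` to the real card, but spoof the PIN verification."""
    ins = apdu[1]  # APDU layout: CLA, INS, P1, P2, ...
    if ins == INS_VERIFY:
        # Never forward the PIN: the stolen card cannot block itself,
        # and the PoS believes the PIN was correct.
        return SW_SUCCESS
    return real_card(apdu)

# Toy stand-in for the stolen EMV chip: it would reject any PIN.
def stolen_card(apdu):
    if apdu[1] == INS_VERIFY:
        return bytes([0x63, 0xC0])  # verification failed
    return SW_SUCCESS

verify_apdu = bytes([0x00, INS_VERIFY, 0x00, 0x80, 0x08]) + b"\x24\x12\x34\xFF\xFF\xFF\xFF\xFF"
print(chip_in_the_middle(verify_apdu, stolen_card).hex())  # → 9000
```

The stolen card alone would answer 0x63C0 (verification failed); with the FUN chip in between, the PoS only ever sees 0x9000.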

Meanwhile, PoS terminals have been updated to prevent this attack.

This paper is an excellent example of forensic analysis as well as of responsible disclosure: it was published after the problem was solved in the field. It discloses an example of a new potential class of attacks: Chip in The Middle.

Law 1: Attackers will always find their way. Moreover, they even read academic publications and use them.

Nov 15 2015

Cloud Security: a Metaphor

Last year, at the annual SMPTE Technical Conference, I presented a paper, "Is the Future of Content Protection Cloud(y)?"  I explained that the trust model of the public cloud was theoretically weaker than the trust model of a private cloud or private data center.  The audience argued that, on the contrary, the security of the public cloud may be better than the security of most private implementations.  As usual in security, the answer is never Manichean.

Metaphors are often good tools to introduce complex concepts.  Analogies with the real world help to build proper mental models.  The pizza-as-a-service metaphor that explains IaaS, PaaS, and SaaS is a good example.  In preparation for the panel on cloud security at the next Content Protection Summit, I was looking for a metaphor to illustrate the difference between the two trust models.  I may have found one.

On one side, when using a private cloud (or a private data center), we can liken the trust model to your residential house.  You control whom you invite into your home and what your guests are allowed to do.  You are the only person (with your family) to have the keys.  Furthermore, you may have planted a high hedge to enforce some privacy so that your neighbors cannot easily eavesdrop.


On the other side, the trust model of the public cloud is like a hotel.  You book a room at the hotel.  The concierge decides who enters the hotel and what the guests are allowed to do.  The concierge provides you with the key to your room.  Nevertheless, the concierge has a passkey (or can generate a duplicate of your key).  You have to trust the concierge, just as you have to trust your cloud provider.


The metaphor of the hotel can be extended to different aspects of security.  You are responsible for the access to your room.  If you do not lock the room, a thief may enter easily regardless of the vigilance of the hotel staff.  Similarly, if your cloud application is not secured, hackers will penetrate it irrespective of the security of your cloud provider.

The hotel may provide a vault in your room.  Nevertheless, the hotel manager has access to its key.  Once more, you have to trust the concierge.  The same situation occurs when your cloud provider manages the encryption keys of your data at rest.

The hotel is also a good illustration of the risks associated with multi-tenancy.  If you forget valuable assets in your room when leaving the hotel, the next occupant of the room may get them.  Similarly, if you do not clean the RAM and the temporary files before leaving your cloud application, the next user of the server may retrieve them.  This is not just a theoretical attack; multi-tenancy may enable it.  Clean up your space behind you: the cloud provider will not do it on your behalf.

The person in the room next to yours may eavesdrop on your conversations.  You do not control who is in the contiguous rooms.  Similarly, in the public cloud, if another user is co-located on the same server as your application, this user may extract information from your space.  Several attacks based on side channels have been demonstrated recently on co-located servers.  They enabled the exfiltration or detection of sensitive data such as secret keys.

Adjacent hotel rooms sometimes have connecting doors.  They are locked.  Nevertheless, they are potential weaknesses: a good thief may intrude into your room without passing through the common corridor.  Similarly, a hypervisor may have some weaknesses or even trapdoors.  The detection of co-location is a hot topic that interests the academic community (and, of course, the hacking community).  This blog will carefully follow these new attacks.

Back to the question of whether the public cloud is more secure than the private cloud: the previous metaphor helps to answer it.  Let us look more carefully at the house of the first figure.  Let us imagine that the house is as in the following illustration.


The windows are wide open.  The door is not shut.  Furthermore, the door has cracks and a weak lock.  Evidently, the owner does not care about security.  Yes, in that case, the owner's assets would be more secure in a hotel room than in his house.  If your security team cannot properly secure your private cloud (lack of money, lack of time, or lack of expertise), then you would be better off on a public cloud.

If the house is like the one in the next image, then it is another story.


The windows have armored grilles to protect their access.  The steel door is reinforced.  The lock requires a strong key and is protected against physical attacks.  Cameras monitor the access to the house.  The owner of this house cares about security.  In that case, the owner's assets would be less secure in a hotel room than in his house.  If your security team is well trained and has sufficient resources (time, funds), then you may be better off in your private cloud.

Now, if you are rich enough to afford to book an entire floor of the hotel for your own use, and to add some access control to filter who can enter this level, then you mitigate the risks inherent in multi-tenancy, as you will have no neighbors.  Similarly, if you take the option of having the servers of the public cloud dedicated solely to your own applications, then you are in a similar situation.

This house-versus-hotel comparison is an interesting metaphor to introduce the trust model of the private cloud versus the trust model of the public cloud.  I believe that it may be a good educational tool.  Can we extend it even more?  Your opinion is welcome.

A cautionary note is mandatory: a metaphor always has limitations and should never be pushed too far.


The illustrations are from my son Chadi.

Nov 05 2015

Alea Jacta Est (2)

Four years ago, I sent the manuscript of my first book to Springer.  This weekend, it was the turn of my second book, "Ten Laws of Security."  It covers the ten laws.  Now, Springer will start the copy editing and, once I approve it, the book will go to print.  I hope that it will be available in the first half of 2016.

I will keep you informed of the progress.

Oct 15 2015

iOS 9 is jailbroken

The official release date of iOS 9 was 16 September 2015.  It did not take long for the Chinese Pangu team to set up a jailbreaking exploit.  The Pangu team has released jailbreak exploits since iOS 7.

The exploit requires a Windows computer. 


Oct 08 2015

Does HTTPS prevent Man In The Middle attacks?

A common belief is that the HTTPS protocol prevents so-called Man In The Middle (MiTM) attacks. Unfortunately, in some circumstances, this assumption is wrong.

HTTPS and authentication

A browser considers an HTTPS connection secure if the two following conditions are verified:

  • The owner information in the received X509 certificate matches the server name. The subject field of the X509 certificate defines the owner information.  If the browser visits a given website, then the subject of the digital certificate should carry that website's name.
  • The received certificate must be valid, and a Certification Authority (CA) that the browser trusts must have signed it. The issuer field of the X509 certificate identifies the issuing CA.
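These two checks can be sketched in Python on a certificate already parsed into the dict shape that `ssl.getpeercert()` returns. The tiny trusted-CA set and the `signature_valid` flag are hypothetical stand-ins for the browser's real root store and signature verification; real browsers additionally check validity dates, wildcards, and the full chain.

```python
# A minimal sketch of the two browser checks, applied to a certificate
# parsed into the dict shape of Python's ssl.SSLSocket.getpeercert().
# TRUSTED_CAS is a toy stand-in for the browser's real root store.

TRUSTED_CAS = {"GlobalSign", "ExampleRoot"}  # hypothetical root store

def connection_looks_secure(cert, hostname, signature_valid):
    # Condition 1: the subject's commonName must match the server name.
    subject = dict(entry[0] for entry in cert["subject"])
    if subject.get("commonName") != hostname:
        return False
    # Condition 2: a CA the browser trusts must have signed the
    # certificate, and the signature must verify.
    issuer = dict(entry[0] for entry in cert["issuer"])
    return signature_valid and issuer.get("organizationName") in TRUSTED_CAS

cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer": ((("organizationName", "GlobalSign"),),),
}
print(connection_looks_secure(cert, "www.example.com", signature_valid=True))   # → True
print(connection_looks_secure(cert, "evil.example.net", signature_valid=True))  # → False
```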

For instance, the certificate checked by my browser for this website belonged to the visited domain and was issued by the CA AlphaSSL.  AlphaSSL is not one of the Trusted Root CAs of Chrome.  Nevertheless, its certificate was signed by GlobalSign, which is one of the Trusted Root Certification Authorities.


Inside a corporate network, all Internet connections are usually forwarded to a proxy server that resides inside the corporate Demilitarized Zone (DMZ).  This proxy may interact with the connection.  When connecting to the same website from the corporate network, the same browser does not verify the same certificate.  The received certificate was issued and signed by another CA than AlphaSSL.  GNS Services issued the certificate of this CA.  GNS Services is one of the Trusted Root CAs that were listed in my corporate version of Chrome.


The proxy acts as a local CA.  It generates an X509 certificate for the visited domain and returns this new certificate to the browser.  As the proxy signed it with a private key that is part of a trusted key hierarchy, the browser validates this certificate.  The proxy can then perform its MiTM.

Why does it work?  The Internet uses the trusted-list CA model.  In this model, the system manages a list of CAs that it trusts.  Indeed, the list contains the self-signed certificate of the public key of each trusted root CA.  A self-signed certificate is a certificate of a public key that is signed by the corresponding private key.  The CAs are independent.  Browsers have to access all legitimate websites to offer a satisfactory user experience.  Thus, browsers have to trust most existing CAs.  Therefore, browsers come with a large set of preinstalled public root keys of CAs.  Internet Explorer hosts about twenty Trusted Root CAs.  Mozilla recognizes more than 120 Trusted Root CAs.  Each Trusted Root CA signs other CAs' certificates.  The current Internet ends up with more than 1,500 CAs that browsers trust!
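As a quick illustration of how large such a preinstalled list can be, Python's standard ssl module can dump the root store your own system trusts (the count and names differ from machine to machine, so the output below is not predictable):

```python
# List the root CAs trusted by the local system, as loaded by Python's
# default SSL context. The number of roots varies per machine.
import ssl

ctx = ssl.create_default_context()   # loads the system's default CA store
roots = ctx.get_ca_certs()           # parsed root certificates
print(len(roots), "trusted root CAs")
for cert in roots[:3]:
    subject = dict(entry[0] for entry in cert["subject"])
    print(subject.get("organizationName", subject.get("commonName")))
```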

A corporate proxy has two solutions:

  • It may get a key pair whose certificate is signed by one of the trusted CAs. In that case, it works with mainstream browsers without any issue.  The certificate must be usable for signing.
  • If only managed devices are allowed within the corporate network, the IT department can patch the browsers and add its own root public key to the list of trusted CAs. This is the solution used here.

Is this practice limited to the corporate network?  The answer is no.  On my home computers, I use an anti-virus.  The anti-virus has a feature called WebShield that attempts to protect against malicious websites.  It has an option labeled "Enable HTTPS scanning."  This option is set on by default.  With the option enabled, the certificate validated by the browser when accessing the same website is not the genuine certificate.  It is a certificate signed by the anti-virus, which acts as a MiTM.  During its installation, the anti-virus appended its own root certificate to the list of Trusted Root CAs.


Is it difficult to install such a solution?  Unfortunately, the answer is no.  An open source project, mitmproxy, provides all that is needed to install such a MiTM.

How to know if there is a “MiTM”?

Fortunately, there is a simple test.  In 2007, a new flavor of X509 SSL certificates was created: Extended Validation SSL (EV SSL).  Only CAs that have a strictly controlled, documented process to verify that the requester of a domain certificate is the actual owner of the domain can issue EV SSL certificates.  Browsers provide a distinctively enhanced display for sites using EV SSL certificates.  Usually, they display in the address bar the name of the entity owning the certificate, the name of the issuing CA, and a distinctive color (usually green).  Websites using EV SSL certificates should be more trustworthy than sites using standard X509 certificates.

Of course, if the connection is under a MiTM, the browser does not receive an EV SSL certificate but rather a standard SSL certificate.  The MiTM cannot generate an EV SSL certificate.  Thus, the browser displays a classical HTTPS connection.

Thus, the simple test is:

  • Select one website that uses EV SSL and bookmark it.
  • Each time you want to check whether there is a MiTM, visit this website and check whether it presents an EV SSL certificate.
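The decision behind this test can be sketched as follows. The certificate-policy OIDs are assumed to have been extracted from the certificate beforehand (Python's ssl module does not expose them directly); 2.23.140.1.1 is, to my knowledge, the CA/Browser Forum EV policy identifier.

```python
# Hypothetical sketch of the EV-based MiTM test: the bookmarked site is
# known to use EV SSL, so if the presented certificate carries no EV
# policy OID, someone has re-signed it along the way.

EV_POLICY_OID = "2.23.140.1.1"  # CA/Browser Forum EV policy identifier

def looks_like_mitm(policy_oids):
    """True when the certificate of a known-EV site is no longer EV."""
    return EV_POLICY_OID not in policy_oids

print(looks_like_mitm({"2.23.140.1.1"}))    # genuine EV certificate → False
print(looks_like_mitm({"2.23.140.1.2.2"}))  # re-signed (OV) certificate → True
```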


The Internet's current trust model employs hundreds of CAs.  This brittle model allows lawful or unlawful man-in-the-middle attacks to be set up.  As usual, vigilance is the only solution.  Fortunately, a simple test detects this type of MiTM.

Update (9-Oct-15):  The GNS Services certificate is not part of the standard distribution of Chrome.  Thanks to Jim H. for having spotted this mistake.

Jul 23 2015

We know where you went

Google released a new enhancement to Google Maps.  The timeline provides you with the complete location history of your Android mobile device, i.e., most likely you.  The history goes deep into the past (back to 2009 if you already had an Android phone).  The analysis is detailed, down to the shops or places you may have visited.  It is extremely accurate.  It is also linked to Google Photos to display the corresponding pictures you may have shot at that time.

The timeline is only available to you, or more precisely to the entity that logs into your account.

It is scary.  The positive side is that Google does not hide that it tracks all our movements.


The feature is available at


Update:  The feature can be deactivated under your Google account history parameters.  It is not clear whether this simply deactivates the timeline feature or whether Google erases the history.

Jul 06 2015

Using temperature as a covert channel

Four researchers from Ben-Gurion University disclosed a new covert channel.   A covert channel is a means to transfer information through a channel that was not supposed to transfer information.   Covert channels are at the heart of side channel attacks.  Many covert channels have been investigated, e.g., power supply, radio frequency, or sound.

Their system, coined BitWhisper, uses temperature as the carrying 'medium.'  The interesting feature of BitWhisper is that it may cross the air gap between computers.   Air-gapped computers have no digital connections (wired or wireless).  The air gap is the ultimate isolation between networks or computers.

In BitWhisper, the attacker owns one computer on each side of the air gap.  Furthermore, both computers are in the same vicinity.  Modern computers are equipped with thermal sensors that can be read by software.  On the emitting computer, the attacker drastically increases or decreases the computation effort, for instance with CPU and GPU stress tests, thus creating a variation of the internal temperature.   The higher the computation effort, the higher the internal temperature.   The receiving computer keeps a constant computing load and measures the variation reported by its internal thermal probes.
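The mechanism can be sketched with a crude first-order thermal model. This is a toy illustration, not the authors' protocol: all the thermal constants below are invented for the sake of the simulation.

```python
# Toy simulation of the BitWhisper principle (invented constants, not
# the authors' protocol). The sender encodes a '1' as a burst of
# CPU/GPU load, which raises its temperature; a small fraction of that
# heat reaches the nearby receiver, which decodes the bits by
# thresholding its own thermal-probe readings.

def simulate_channel(bits, steps_per_bit=200):
    sender_t, receiver_t, ambient = 40.0, 35.0, 25.0
    readings = []
    for bit in bits:
        load = 1.0 if bit else 0.0          # stress tests on for a '1'
        for _ in range(steps_per_bit):
            # Sender: heats with the load, cools toward ambient.
            sender_t += 0.5 * load - 0.05 * (sender_t - ambient)
            # Receiver: weakly coupled to the sender, cools toward ambient.
            receiver_t += 0.02 * (sender_t - receiver_t) - 0.05 * (receiver_t - ambient)
        readings.append(receiver_t)         # one probe sample per bit slot
    return readings

def decode(readings, threshold):
    return [1 if r > threshold else 0 for r in readings]

sent = [1, 0, 1, 1, 0]
readings = simulate_channel(sent)
received = decode(readings, threshold=sum(readings) / len(readings))
print(received)  # → [1, 0, 1, 1, 0]
```

In this toy model the receiver swings by only a couple of degrees, in the spirit of the roughly one-degree variation the researchers measured; the real channel also needs synchronization and error handling, which the draft paper does not describe.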

Obviously, this covert channel has a big limitation: the distance separating the two computers should not exceed 40 cm.  At 35 cm, the researchers succeeded in inducing a one-degree Celsius variation in the receiving computer.   The system would probably not work in a data center.     The orientation of the computers also has an impact.  The overall throughput is a few bits per day.

Nevertheless, it is an interesting idea, although not a practical one.   In another setup, where the attacker could use an external thermal camera as the receiver rather than a generic computer, the efficiency of this covert channel could be increased.


Guri, Mordechai, Matan Monitz, Yisroel Mirski, and Yuval Elovici. “BitWhisper: Covert Signaling Channel between Air-Gapped Computers Using Thermal Manipulations.” arXiv, March 26, 2015.
PS:  This draft version does not describe the communication protocol.
