Does HTTPS prevent Man In The Middle attacks?

A common belief is that the HTTPS protocol prevents so-called Man In The Middle (MiTM) attacks. Unfortunately, in some circumstances, this assumption is wrong.

HTTPS and authentication

A browser considers an HTTPS connection secure if the two following conditions are verified:

  • The owner information in the received X509 certificate matches the server name. The subject field of the X509 certificate defines the owner information.  If the browser visits the website www.mysite.com, then the subject of the digital certificate should also be www.mysite.com.
  • The received certificate must be valid and a Certification Authority (CA) that the browser trusts must have signed it. The issuer field of the X509 certificate identifies the issuing CA.

For instance, the certificate checked by your browser belonged to https://eric-diehl.com and was issued by the CA AlphaSSL.  AlphaSSL is not one of the Trusted Root CAs of Chrome.  Nevertheless, its certificate was signed by GlobalSign, which is one of the Trusted Root Certification Authorities.
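The first condition, matching the server name against the certificate, can be sketched in a few lines of Python.  This is a simplified illustration using the dictionary format returned by Python's ssl.getpeercert(); real validation (chain of trust, expiry, revocation) is delegated to the TLS library:

```python
# Sketch: match a hostname against the subject/SAN of a parsed X509
# certificate, in the dict format produced by ssl.getpeercert().
# This illustrates only the name-matching condition; the chain-of-trust
# check is performed by the TLS library itself.

def hostname_matches(cert: dict, hostname: str) -> bool:
    # Subject Alternative Names take precedence over the subject CN.
    names = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    if not names:
        # Fall back to the Common Name inside the subject field.
        for rdn in cert.get("subject", ()):
            for key, value in rdn:
                if key == "commonName":
                    names.append(value)
    for name in names:
        if name.startswith("*."):  # simple wildcard: *.example.com
            if hostname.split(".", 1)[-1] == name[2:]:
                return True
        elif name == hostname:
            return True
    return False

# Example certificate, shaped like the output of ssl.getpeercert().
cert = {
    "subject": ((("commonName", "eric-diehl.com"),),),
    "subjectAltName": (("DNS", "eric-diehl.com"), ("DNS", "*.eric-diehl.com")),
}
print(hostname_matches(cert, "eric-diehl.com"))      # True
print(hostname_matches(cert, "www.eric-diehl.com"))  # True
print(hostname_matches(cert, "evil.example.org"))    # False
```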


Inside a corporate network, all Internet connections are usually forwarded to a proxy server that resides inside the corporate Demilitarized Zone (DMZ).  This proxy may interfere with the connection.  The same browser does not see the same certificate when connecting to https://eric-diehl.com.  The received certificate was issued and signed by a CA other than AlphaSSL.  GNS Services issued the certificate of this CA.  GNS Services is one of the Trusted Root CAs listed in my corporate version of Chrome.


The proxy acts as a local CA.  It generates the X509 certificate for the eric-diehl.com domain and returns this new certificate to the browser.  As the proxy signed it with a private key belonging to a trusted key hierarchy, the browser validates this certificate.  The proxy can then perform its MiTM.

Why does it work?  The Internet uses the trusted-CA-list model.  In this model, the system manages a list of CAs that it trusts.  More precisely, the list contains the self-signed certificate of the public key of each trusted root CA.  A self-signed certificate is a certificate of a public key that is signed by the corresponding private key.  The CAs are independent.  Browsers have to access all legitimate websites to offer a satisfactory user experience.  Thus, browsers have to trust most existing CAs.  Therefore, browsers come with a large set of preinstalled public root keys of CAs.  Internet Explorer hosts about twenty Trusted Root CAs.  Mozilla recognizes more than 120 Trusted Root CAs.  Each Trusted Root CA signs other CAs’ certificates.  The current Internet ends up with more than 1,500 CAs that browsers trust!
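You can inspect this list on your own machine.  A sketch with Python's ssl module, which loads the platform's default certificate store (the exact count depends on the operating system and browser):

```python
import ssl

# Load the platform's default trusted root CAs and inspect them.
ctx = ssl.create_default_context()
ctx.load_default_certs()
roots = ctx.get_ca_certs()

print(f"{len(roots)} trusted root CAs in the system store")
for cert in roots[:5]:
    # Each entry is a dict; the subject field holds the CA's own name.
    subject = dict(x[0] for x in cert["subject"])
    print(subject.get("organizationName", subject.get("commonName")))
```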

A corporate proxy has two solutions:

  • It may get a key pair with the certificate signed by one of the trusted CAs. In that case, it works with the mainstream browsers without any issue.  The certificate must authorize signing.
  • If only managed devices are allowed within the corporate network, the IT department can patch the browsers and add its own root public key to the set of trusted CAs. This is the solution used here.

Is this practice limited to the corporate network?  The answer is no.  On my home computers, I use an anti-virus.  The anti-virus has a feature called WebShield that attempts to protect against malicious websites.  It has an option labeled “Enable HTTPS scanning.”  This option is on by default.  With the option enabled, the certificate validated by the browser when accessing the same website https://eric-diehl.com is not the genuine certificate.  It is a certificate signed by the anti-virus, which acts as a MiTM.  During its installation, the anti-virus appended its own root certificate to the list of Trusted Root CAs.


Is it difficult to install such a solution?  Unfortunately, the answer is no.  An open source project, mitmproxy, provides all that is needed to install such a MiTM.

How to know if there is a “MiTM”?

Fortunately, there is a simple test.  In 2007, a new flavor of X509 SSL certificates was created: Extended Validation SSL (EV SSL).  Only CAs that have a strictly controlled, documented process to verify that the requester of a domain certificate is the actual owner of the domain can issue EV SSL certificates.  Browsers provide a distinctively enhanced display for sites using EV SSL certificates.  Usually, they display in the address bar the name of the entity owning the certificate, the name of the issuing CA and a distinctive color (usually green).  Websites using EV SSL certificates should be more trustworthy than sites using standard X509 certificates.

Of course, if the connection is under a MiTM, the browser does not receive an EV SSL certificate but rather a standard SSL certificate.  The MiTM cannot generate an EV SSL certificate.  Thus, the browser displays a classical HTTPS connection.

Thus the simple test is:

  • Select one website that uses EV SSL and bookmark it.
  • Each time you want to check whether there is a MiTM, visit this website and check whether it presents an EV SSL certificate.
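The same idea can be automated: record the issuer of the bookmarked EV site once, then compare it on later visits.  A hypothetical sketch in Python, using the field layout of ssl.getpeercert(); the CA names below are made up for illustration, and a real check would rather compare the full certificate or its fingerprint:

```python
# Sketch: detect an interception by comparing the issuer of the
# certificate received today with the issuer recorded when the EV
# site was bookmarked. A MiTM proxy substitutes its own CA, so the
# issuer changes.

def issuer_name(cert: dict) -> str:
    fields = dict(x[0] for x in cert.get("issuer", ()))
    return fields.get("organizationName", fields.get("commonName", ""))

# Issuer recorded at bookmark time (hypothetical EV CA).
bookmarked = {"issuer": ((("organizationName", "DigiCert Inc"),),)}

# Issuer received through a corporate proxy (hypothetical local CA).
received = {"issuer": ((("organizationName", "Corp Proxy CA"),),)}

if issuer_name(received) != issuer_name(bookmarked):
    print("Warning: possible MiTM, issuer changed to",
          issuer_name(received))
```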

Conclusion

The current Internet trust model employs hundreds of CAs.  This brittle model allows setting up lawful or unlawful man-in-the-middle attacks.  As usual, vigilance is the only solution.  Fortunately, a simple test detects this type of MiTM.

Update (9-oct-15):  The GNS Services certificate is not part of the standard distribution of Chrome.  Thanks to Jim H. for having spotted this mistake.

We know where you went

Google released a new enhancement to Google Maps.  The timeline provides the complete history of locations of your Android mobile device, i.e., most likely you.  The history goes deep into the past (2009 if you had an Android phone).  The analysis is detailed, down to the shops or places you may have visited.  It is extremely accurate.  It is also linked to Google Photos to display the corresponding pictures you may have shot at that time.

The timeline is only available to you, or more precisely to the entity that logs into your account.

It is scary.  The positive side is that Google does not hide that it tracks all our movements.

 

The feature is available at https://www.google.com/maps/timeline

 

Update:  The feature can be deactivated under your Google account history parameters.  It is not clear whether you simply deactivate the timeline feature or whether Google erases the history.

Using temperature as a covert channel

Four researchers from the Ben-Gurion University disclosed a new covert channel.  A covert channel is a means to transfer information through a channel that was not supposed to transfer information.  Covert channels are at the heart of side channel attacks.  Many covert channels have been investigated, e.g., power supply, radio frequency, or sound.

Their system, coined BitWhisper, uses temperature as the carrying ‘medium.’  The interesting feature of BitWhisper is that it may cross the air gap between computers.  Air-gapped computers have no digital connections (wired or wireless).  The air gap is the ultimate isolation between networks or computers.

In BitWhisper, the attacker owns one computer on each side of the air gap.  Furthermore, both computers are in the same vicinity.  Modern computers are equipped with thermal sensors that can be read by software.  On the emitting computer, the attacker increases or decreases the computation effort drastically, for instance by using CPU and GPU stress tests, thus creating a variation of the internal temperature.  The higher the computation effort, the higher the internal temperature.  The receiving computer keeps a constant computing load and measures the variation of its internal thermal probes.
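The modulation idea can be illustrated with a toy sketch: a ‘1’ is a period of heavy load (the temperature rises), a ‘0’ a period of idling.  The synthetic temperature samples below stand in for real sensor readings; reading actual probes (e.g., via /sys/class/thermal on Linux) and generating the stress loads are out of scope here:

```python
# Toy sketch of BitWhisper-style on-off keying. The receiver
# averages its temperature readings over each bit period and
# thresholds the deviation from the idle baseline.
# Synthetic samples replace real thermal probes.

def demodulate(samples_per_bit, baseline, threshold=0.5):
    """Decode one bit per list of temperature samples."""
    bits = []
    for samples in samples_per_bit:
        avg = sum(samples) / len(samples)
        bits.append(1 if avg - baseline > threshold else 0)
    return bits

baseline = 40.0  # idle temperature in degrees Celsius
# One list of readings per transmitted bit period (synthetic).
periods = [
    [41.1, 41.3, 41.2],  # heated -> 1
    [40.0, 40.1, 39.9],  # idle   -> 0
    [41.0, 41.4, 41.2],  # heated -> 1
    [40.2, 40.0, 40.1],  # idle   -> 0
]
print(demodulate(periods, baseline))  # [1, 0, 1, 0]
```

In the real attack each bit period lasts minutes, which explains the tiny throughput reported below.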

Obviously, this covert channel has a big limitation.  The distance separating the two computers should not exceed 40 cm.  At 35 cm, the researchers succeeded in inducing a one degree Celsius variation in the receiving computer.  The system would probably not work in a data center.  The orientation of the computers also matters.  The overall throughput is a few bits per day.

Nevertheless, it is an interesting idea, although not practical.  In another setup, where the attacker could use an external thermal camera as the receiver rather than a generic computer, the efficiency of this covert channel could be increased.

 

Guri, Mordechai, Matan Monitz, Yisroel Mirski, and Yuval Elovici. “BitWhisper: Covert Signaling Channel between Air-Gapped Computers Using Thermal Manipulations.” arXiv, March 26, 2015. http://arxiv.org/abs/1503.07919.
PS:  this draft version does not describe the communication protocol

RIP SSL

IETF has officially deprecated SSL 3.0 with the publication of RFC 7568: SSLv3 Is Not Secure. TLS clients and servers MUST NOT send a request for an SSLv3 session. Similarly, TLS clients and servers MUST close any session requesting SSLv3. According to RFC 2119, MUST means mandatory.
POODLE signed the certificate of death.
As a consequence, we should stop using the vocable SSL when we actually mean TLS. For a long period, we often merged SSL and TLS in writing. We should discipline ourselves now. Will the community dare remove SSL from OpenSSL or LibreSSL? Will they be rebaptized OpenTLS, or keep the SSL name as a tribute?
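Modern TLS libraries already enforce the deprecation.  A sketch with Python's ssl module: a default TLS client context no longer offers SSLv3 at all, and a protocol floor can be set explicitly:

```python
import ssl

# A client context that speaks only TLS; SSLv3 is not even an
# option in modern builds of OpenSSL/Python.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Enforce a floor explicitly, per current best practice.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```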

Stealing account with mobile phone-based two-factor authentication

Attackers often entice users to become the weakest link.  Phishing and scams exploit human weakness.  These attacks become even creepier when the attacker circumvents legitimate security mechanisms.  Two-factor authentication offers better security than a simple login/password.  The use of a mobile phone as the second factor is becoming mainstream.  It is impossible to steal our account without stealing our phone.  We feel safer.  Should we?

Symantec reported a new method used to steal users’ accounts despite the use of two-factor authentication.  Here is the scheme.

Mallory wants to gain access to Alice’s account.  He knows Alice’s email address and her mobile phone number as well as her account.  For a social engineer, this information is not difficult to collect.  It is part of the usual exploration phase before the actual hack.  Mallory contacts the service provider of Alice’s account and requests a password reset.  He selects the method that sends a digital code to Alice’s mobile phone.  The service provider sends an SMS to Alice’s mobile phone with this code.  Simultaneously, Mallory sends an SMS to Alice impersonating the service provider.  Once more, this is not difficult, as many providers do not use a specific number.  This SMS explains to Alice that there was some suspicious activity on her account.  To verify her account, she must reply to this SMS with the code that was sent previously to her.  Gullible Alice obeys.  Mallory now has the code that the service provider requests to reset Alice’s password.  Mallory gains entire access to Alice’s account with the involuntary help of Alice.

This type of attack can be used on most web services, e.g., webmails like Gmail.  Obviously, Alice should not have replied to this SMS.  She should have followed the known procedure and not an unknown one.  She might also have noticed that the two phone numbers were different.

This is a perfect example of social engineering.  The only answer is education.  Therefore, spread this information around you.  The more people are aware, the less prone they will be to being hacked.  Never forget Law 6: You are the weakest link.

Crashing a plane through IFE?

This weekend, Chris Roberts made the headlines of the media.  He was presented as the hacker who succeeded in controlling a plane by hacking the In-Flight Entertainment (IFE) system. This is not the first time that planes are supposed to be controllable by hackers.  In 2013, a researcher claimed to control the flight management system with an Android phone.  As usual, improperly analyzed documents were used to create a false sense of truth.  I have seen mainly two big “pieces of evidence” that supposedly demonstrated it must be true.

  • It is written in an FBI affidavit that Roberts hacked IFE and controlled a plane.  He was arrested, and his electronic material seized.
  • The US Government Accountability Office (GAO) stated in a report that it was feasible.

I decided to read this “evidence”.  As the FBI arrested Roberts, the FBI agent wrote an affidavit.  Some interesting facts:

  • Roberts was interviewed twice by the FBI about vulnerabilities of IFE: 13 February 2015 and 5 March 2015.  During these interviews, Roberts explained his operating mode as well as his tools.  He claimed to have entered Panasonic and Thales IFE systems about twenty times.  He claimed that one time he was able to access the avionics system.
  • He stated that he then overwrote code on the airplane’s Thrust Management Computer while aboard a flight.  He stated that he successfully commanded the system he had accessed to issue the “CLB” or climb command.  He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane…

  • The affidavit does not state that he provided any proof of this statement.
  • In February, FBI agents advised him that accessing the IFE without authorization may be a violation and may result in prosecution.  He acknowledged this fact.
  • On 15th April, Roberts tweeted that he may “play” with the avionics once more.
  • United Airlines informed the FBI, which then arrested Roberts.
  • Investigation showed that two boxes used by the IFE were tampered with.  One of these boxes was at his seat (3A) and the second one was one row in front of him (2A).
  • … showed that the SEBs under seats 2A and 3A showed signs of tampering.  The SEB under 2A was damaged.  The outer cover of the box was open approximately 1/2 inch and one of the retaining screws was not seated and was exposed.

  • It is interesting to note that the “opened” box was one row in front, at a first class seat.

Despite what the media infer, the affidavit does not present any proof that he hacked the IFE, even less that he accessed the avionics.

The governmental report from the GAO is even less conclusive.  The statement is:

Modern aircraft are increasingly connected to the Internet. This interconnectedness can potentially provide unauthorized remote access to aircraft avionics systems.

This broad statement cannot be challenged.  It is Law 8.  The same can be said of any automotive system.  Nevertheless, this does not mean that avionics can be accessed from the IFE.

In other words, there is no real evidence that Roberts hacked the avionics.  It may be possible that Roberts hacked the IFE network with physical access to the network carrying video.  Most of the wired IFE systems may assume that the physical network is trusted.  It is usually expected that the attending crew would spot a user tampering with the hardware.  Fortunately, the IFE and the avionics are air-gapped. I know the Airbus and Thales security teams. They would never have accepted the risk of not air-gapping the systems.  All the IFE systems I was exposed to were air-gapped from avionics.  Roberts never explained how he would have succeeded in crossing the air gap.  (Current attacks on air gaps use either file sharing in the cloud, contaminated files exchanged over USB thumb drives, or sophisticated side channels such as audio or thermal.)

Conclusion:  don’t panic when you see a guy with a computer in a plane.

 

image credits: by-sa Sarah Klockars-Clauser 2010

How people perceive hacking

People make decisions following the mental models they have of how a system works.  Security is no different from other fields.  Experts or technically well-informed people may have mental models that are reasonably accurate, i.e., the mental model fits reasonably well with the real-world behavior.  For normal users, the problem is different.  Rick Wash identified several mental models used by normal users when handling security in a paper entitled “Folk Models of Home Computer Security”. For instance, he extracted four mental models describing what viruses are:

  • Viruses are bad; people using this mental model have little knowledge about viruses and thus believed they were not concerned. They thought they were immune.
  • Viruses are buggy software; viruses are normal software that is badly written. Their bugs may crash the computer or create strange behavior.  People understood that they needed to download and install such viruses.  Thus, their protection solution was only to install trusted software.
  • Viruses cause mischief; viruses are pieces of software that are intentionally annoying. They disrupt the normal behavior of the computer.  People do not understand the genesis of viruses.  They understand that the infection comes from clicking on applications or visiting bad sites.  Their suggested protection is to be careful.
  • Viruses support crime; the end goal of viruses is identity theft or sifting personal and banking information. As such, people believe that viruses are stealthy and do not impair the behavior of the computer.   Their suggested protection is the regular use of anti-virus software.

Wash extracted four mental models used to understand hackers.

  • Hackers are digital graffiti artists; hackers are skilled individuals who break into computers just for mischief and show-off. They are often young geeks with poor morality.  This is the Hollywood image of hackers.  The victims are random.
  • Hackers are burglars; Hackers act with computers as burglars act with physical properties. The goal is financial gain.  The victims are chosen opportunistically.
  • Hackers are criminals targeting big fish; these hackers are similar to previous ones but their victims are either organizations or rich people.
  • Hackers are contractors who support criminals; these hackers are similar to the graffiti hackers but they are henchmen of criminal organizations. Their victims are mostly large organizations.

When applying these mental models, it is obvious that some best practices will never be used by end users, regardless of their pertinence.  Most users do not understand these practices or feel they are not concerned by them.  For instance, users who believe that viruses are bad or buggy software cannot see the point of installing an anti-virus.  Users assimilating hackers to contractors believe that hackers will never attack their home computers.  Better understanding the mental models of users highlights where awareness is needed to adjust users’ mental models to reality.  It also helps to design efficient secure solutions that fit the mental model even though they work differently in the real world.

Reference:

Wash, Rick. “Folk Models of Home Computer Security.” In Proceedings of the Sixth Symposium on Usable Privacy and Security, 11:1–11:16. SOUPS ’10. New York, NY, USA: ACM, 2010.