Jul 23 2015

We know where you went

Google released a new enhancement to Google Maps.  The timeline provides the complete location history of your Android mobile device, i.e., most likely of you.  The history goes deep into the past (back to 2009 if you had an Android phone then).  The analysis is detailed, down to the shops or places you may have visited, and it is extremely accurate.  It is also linked to Google Photos to display the pictures you may have shot at that time.

The timeline is only available to you, or more precisely to the entity that logs into your account.

It is scary.  The positive side is that Google does not hide that it tracks all our movements.


The feature is available at https://www.google.com/maps/timeline


Update:  The feature can be deactivated under your Google account history settings.  It is not clear whether you simply deactivate the timeline feature or whether Google erases the history.

Jul 06 2015

Using temperature as a covert channel

Four researchers from Ben-Gurion University disclosed a new covert channel.   A covert channel is a means to transfer information through a channel that was not designed to carry information.   Covert channels are at the heart of side-channel attacks.  Many covert channels have been investigated, e.g., power supply, radio frequency, or sound.

Their system, coined BitWhisper, uses temperature as the carrying ‘medium.’  The interesting feature of BitWhisper is that it may bridge air-gapped computers.   Air-gapped computers have no digital connections (wired or wireless).  The air gap is the ultimate isolation between networks or computers.

In BitWhisper, the attacker controls one computer on each side of the air gap.  Furthermore, both computers are in close vicinity.  Modern computers are equipped with thermal sensors that can be read by software.  On the emitting computer, the attacker drastically increases or decreases the computation effort, for instance with CPU and GPU stress tests, thus creating a variation of the internal temperature.   The higher the computation effort, the higher the internal temperature.   The receiving computer stays at a constant computing load and measures the variation of its internal thermal probes.
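The paper's actual protocol is more elaborate; purely to illustrate the idea, here is a toy simulation (not the authors' code) of thermal on-off keying: the emitter runs hot for a ‘1’ bit and idles for a ‘0’, and the receiver decodes by thresholding the sensed temperature.  All numeric parameters are invented for the sketch.

```python
# Toy simulation of a thermal covert channel (illustration only, not BitWhisper).
# First-order thermal model: temperature drifts toward an equilibrium that
# depends on CPU load; the receiver decodes bits by thresholding.

AMBIENT = 40.0      # idle equilibrium temperature (degrees C, invented)
HOT = 41.5          # equilibrium under full load (the paper reports ~1 C swings)
TAU = 0.2           # fraction of the gap closed toward equilibrium per step
STEPS_PER_BIT = 50  # sensor samples per transmitted bit

def emit(bits):
    """Return the simulated temperature trace for a bit string."""
    temp, trace = AMBIENT, []
    for bit in bits:
        target = HOT if bit == "1" else AMBIENT
        for _ in range(STEPS_PER_BIT):
            temp += TAU * (target - temp)   # first-order lag toward equilibrium
            trace.append(temp)
    return trace

def receive(trace):
    """Decode bits by thresholding the mean temperature of each bit period."""
    threshold = (AMBIENT + HOT) / 2
    bits = ""
    for i in range(0, len(trace), STEPS_PER_BIT):
        window = trace[i:i + STEPS_PER_BIT]
        bits += "1" if sum(window) / len(window) > threshold else "0"
    return bits

message = "1011"
assert receive(emit(message)) == message   # the channel round-trips the bits
```

The low bit rate of the real attack comes from thermal inertia: each bit period must be long enough for the temperature to move measurably, which the simulation models with `STEPS_PER_BIT`.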

Obviously, this covert channel has a big limitation: the distance separating the two computers should not exceed 40 cm.  At 35 cm, the researchers succeeded in inducing a one-degree-Celsius variation in the receiving computer.   The system would probably not work in a data center.     The orientation of the computers also has an impact.  The overall throughput is of a few bits per day.

Nevertheless, it is an interesting idea, albeit not a practical one.   In another setup, where the attacker could use an external thermal camera as the receiver rather than a generic computer, the efficiency of this covert channel could be increased.


Guri, Mordechai, Matan Monitz, Yisroel Mirski, and Yuval Elovici. “BitWhisper: Covert Signaling Channel between Air-Gapped Computers Using Thermal Manipulations.” arXiv, March 26, 2015. http://arxiv.org/abs/1503.07919.
PS:  this draft version does not describe the communication protocol.

Jun 29 2015


SSLv3 is not secure

IETF has officially deprecated SSL 3.0 with the publication of RFC 7568: SSLv3 Is Not Secure. TLS clients and servers MUST NOT send a request for an SSLv3 session. Similarly, TLS clients and servers MUST close any session requesting SSLv3. According to RFC 2119, MUST means mandatory.
POODLE signed its death certificate.
As a consequence, we should stop using the term SSL when we actually mean TLS. For a long period, we often conflated SSL and TLS in writing. We should discipline ourselves now. Will the community dare remove SSL from OpenSSL or LibreSSL? Will they be rebaptized OpenTLS, or keep the SSL name as a tribute?
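The RFC's mandate can be enforced in application code by pinning a minimum protocol version.  As a sketch (not a complete client), with a modern Python this looks like:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2.
# ssl.PROTOCOL_TLS_CLIENT already excludes SSLv3 in modern builds;
# pinning minimum_version makes the policy explicit in the code.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# SSLv3 is strictly below the configured floor, so any SSLv3 request
# would be rejected during the handshake.
assert ssl.TLSVersion.SSLv3 < context.minimum_version
```

Wrapping a socket with this context then fails the handshake against any peer that only speaks SSLv3, which is exactly the MUST-close behavior the RFC requires.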

Jun 22 2015

Stealing accounts protected by mobile phone-based two-factor authentication

Attackers often entice users into becoming the weakest link.   Phishing and scams exploit human weakness.  These attacks become even creepier when the attacker circumvents legitimate security mechanisms.   Two-factor authentication offers better security than a simple login/password.  The use of a mobile phone as the second factor is becoming mainstream.  It seems impossible to steal our account without stealing our phone.  We feel safer.  Should we?

Symantec reported a new method used to steal users’ accounts despite two-factor authentication.   Here is the scheme.

Mallory wants to gain access to Alice’s account.  He knows Alice’s email address and her mobile phone number, as well as her account.  For a social engineer, this information is not difficult to collect; it is part of the usual reconnaissance phase before the actual hack.   Mallory contacts the service provider of Alice’s account and requests a password reset.  He selects the method that sends a digital code to Alice’s mobile phone.   The service provider sends an SMS to Alice’s mobile phone with this code. Simultaneously, Mallory sends an SMS to Alice impersonating the service provider.  Once more, this is not difficult, as many providers do not use a specific number.  This SMS explains to Alice that there was some suspicious activity on her account.  To verify her account, she must reply to this SMS with the code that was previously sent to her.  Gullible Alice obeys.  Mallory now has the code that the service provider requests to reset Alice’s password.  Mallory gains entire access to Alice’s account with the involuntary help of Alice.
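The exchange can be summarized as a short message sequence.  This toy model (all names, numbers, and messages are invented for illustration; real providers differ in the details) shows why Alice’s single reply hands Mallory the reset code:

```python
import secrets

# Toy model of the SMS reset-code interception described above.

class Provider:
    def __init__(self):
        self.reset_codes = {}            # phone number -> pending reset code

    def request_password_reset(self, phone):
        """Mallory triggers this; the code goes to Alice's phone, not his."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.reset_codes[phone] = code
        return ("sms", phone, f"Your reset code is {code}")

    def reset_password(self, phone, code, new_password):
        """The reset succeeds for whoever presents the correct code."""
        return self.reset_codes.get(phone) == code

provider = Provider()

# 1. Mallory requests a reset for Alice's account.
_, alice_phone, provider_sms = provider.request_password_reset("+15551234567")

# 2. Alice receives the legitimate code...
alice_code = provider_sms.split()[-1]

# 3. ...and Mallory's impersonating SMS asks her to "verify" by replying.
#    Gullible Alice replies, so Mallory now holds the code.
mallory_code = alice_code

# 4. Mallory completes the reset with the forwarded code.
assert provider.reset_password("+15551234567", mallory_code, "mallory-owns-this")
```

Note that the provider’s security check works exactly as designed; the breach happens entirely in step 3, outside any system the provider controls.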

This type of attack can be used on most web services, e.g., webmails like Gmail.  Obviously, Alice should not have replied to this SMS.  She should have followed the known procedure and not an unknown one.  She might also have noticed that the two phone numbers were different.

This is a perfect example of social engineering.   The only answer is education.  Therefore, spread this information around you.  The more people are aware, the less prone they will be to being hacked.  Never forget Law 6: You are the weakest link.

May 18 2015

Crashing a plane through IFE?

This weekend, Chris Roberts made the headlines of the media.  He was presented as the hacker who succeeded in controlling a plane by hacking the In-Flight Entertainment (IFE) system. This is not the first time that planes were claimed to be controllable by hackers.  In 2013, a researcher claimed to control the flight management system with an Android phone.  As usual, improperly analyzed documents were used to create a false sense of truth.  I have seen mainly two big “pieces of evidence” that supposedly demonstrated it must be true.

  • It is written in an FBI affidavit that Roberts hacked IFE and controlled a plane.  He was arrested, and his electronic material seized.
  • The US Government Accountability Office (GAO) stated in a report that it was feasible.

I decided to read these “evidences”.  When the FBI arrested Roberts, an FBI agent wrote an affidavit.  Some interesting facts:

  • Roberts was interviewed twice by the FBI about vulnerabilities in IFE systems: on 13 February 2015 and 5 March 2015.  During these interviews, Roberts explained his operating mode as well as his tools.  He claimed to have broken into Panasonic and Thales IFE systems about twenty times.  He claimed that one time he was able to access the avionics system.
  • He stated that he then overwrote code on the airplane’s Thrust Management Computer while aboard a flight.  He stated that he successfully commanded the system he had accessed to issue the “CLB” or climb command.  He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane…

  • The affidavit does not state that he provided any proof of this statement.
  • In February, FBI agents advised him that accessing the IFE without authorization may be a violation and may result in prosecution.  He acknowledged this fact.
  • On 15 April, Roberts tweeted that he might “play” with the avionics once more.
  • United Airlines informed the FBI, which then arrested Roberts.
  • Investigation showed that two seat electronic boxes (SEBs) used by the IFE had been tampered with.  One of these boxes was at his seat (3A) and the second one was one row in front of him (2A).
  • … showed that the SEBs under seats 2A and 3A showed signs of tampering.  The SEB under 2A was damaged.  The outer cover of the box was open approximately 1/2 inch, and one of the retaining screws was not seated and was exposed.

  • It is interesting to note that the “opened” box was one row in front, at a first-class seat.

Despite what the media infers, the affidavit does not present any proof that he hacked the IFE, and even less that he accessed the avionics.

The governmental report from the GAO is even less conclusive.  The statement is:

Modern aircraft are increasingly connected to the Internet. This interconnectedness can potentially provide unauthorized remote access to aircraft avionics systems.

This broad statement cannot be challenged.   It is Law 8.  The same can be said of any automotive system.  Nevertheless, this does not mean that the avionics can be accessed from the IFE.

In other words, there is no real evidence that Roberts hacked the avionics.  It is possible that Roberts hacked the IFE network with physical access to the network carrying video.  Most wired IFE systems may assume that the physical network is trusted.   It is usually expected that the attending crew would spot a user tampering with the hardware.  Fortunately, the IFE and the avionics are air-gapped. I know the Airbus and Thales security teams; they would never have accepted the risk of not air-gapping the systems.  All the IFE systems I was exposed to were air-gapped from the avionics.  Roberts never explained how he would have succeeded in crossing the air gap.  (Current attacks on air gaps use either file sharing in the cloud, contaminated files exchanged over USB thumb drives, or sophisticated side channels such as audio or thermal ones.)

Conclusion:  don’t panic when you see a guy with a computer in a plane.


image credits: by-sa Sarah Klockars-Clauser 2010

May 12 2015

How people perceive hacking

People make decisions following the mental models they have of how a system works.  Security is no different from other fields.  Experts or technically well-informed people may have mental models that are reasonably accurate, i.e., the mental model fits reasonably well with the real-world behavior.  For normal users, the problem is different.  Rick Wash identified several mental models used by normal users when handling security in a paper entitled “Folk Models of Home Computer Security”. For instance, he extracted four mental models describing what viruses are:

  • Viruses are bad; people using this mental model have little knowledge about viruses and thus believe they are not concerned. They think they are immune.
  • Viruses are buggy software; viruses are normal software that is badly written. Their bugs may crash the computer or create strange behavior.  People understand that they need to download and install such viruses themselves.  Thus, their protection is only to install trusted software.
  • Viruses cause mischief; viruses are pieces of software that are intentionally annoying. They disrupt the normal behavior of the computer.  People do not understand the genesis of viruses.  They understand that the infection comes from clicking on applications or visiting bad sites.  Their protection is to be careful.
  • Viruses support crime; the end goal of viruses is identity theft or siphoning personal and banking information. As such, people believe that viruses are stealthy and do not impair the behavior of the computer.   Their protection is the regular use of anti-virus software.

Wash extracted four mental models used to understand hackers.

  • Hackers are digital graffiti artists; hackers are skilled individuals who break into computers just for mischief and to show off. They are often young geeks with poor morality.  This is the Hollywood image of hackers.  The victims are random.
  • Hackers are burglars; hackers act with computers as burglars act with physical property. The goal is financial gain.  The victims are chosen opportunistically.
  • Hackers are criminals targeting big fish; these hackers are similar to the previous ones, but their victims are either organizations or rich people.
  • Hackers are contractors who support criminals; these hackers are similar to the graffiti hackers, but they are henchmen of criminal organizations. Their victims are mostly large organizations.

When applying these mental models, it becomes obvious that some best practices will never be adopted by end users, regardless of their pertinence.  Most users do not understand these practices or do not feel concerned by them.  For instance, users who believe that viruses are bad or buggy software cannot see the point of installing an anti-virus.  Users who assimilate hackers to contractors believe that hackers will never attack their home computers.  Better understanding users’ mental models highlights where awareness is needed to adjust a user’s mental model to reality.  It also helps to design efficient security solutions that fit the mental model while still working in the real world.


Wash, Rick. “Folk Models of Home Computer Security.” In Proceedings of the Sixth Symposium on Usable Privacy and Security (SOUPS ’10), 11:1–11:16. New York, NY, USA: ACM, 2010.

May 01 2015

Smart Bottle

Diageo and Thinfilm have recently demonstrated a smart bottle.   The seal of the bottle contains an NFC tag.  This tag not only carries the unique identity of the bottle but also detects whether the seal has been opened or is still closed.  This smart tag enables interesting features:

  • As with traditional RFID tags, it enables the tracking of the bottle along the delivery chain.
  • As it uses NFC, the seal allows a mobile phone app to identify the bottle and thus create a personalized experience. (Interesting feature for privacy: it is possible to track who purchased the bottle (at the point of sale, with the credit card) and to see who actually drinks it (was it a gift?).)
  • As it detects whether the seal has been broken, it is a way to detect tampering with the bottle in the distribution chain.  This may thwart some forms of piracy and counterfeiting.
  • The tag is also a way to authenticate the origin of the product.  It may have interesting applications for expensive, rare bottles to detect counterfeits.
  • It does not yet tell you whether you drank too much.  That will be the next application, associated with a smart glass that detects what you drink and how much.

See the Thinfilm OpenSense brochure.
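From an application’s point of view, the seal state would boil down to reading one flag from the tag’s payload.  As a sketch, here is a hypothetical parser for such a payload (the 9-byte layout, field positions, and flag values are invented for illustration; Thinfilm’s actual OpenSense format differs):

```python
# Hypothetical reading of a tamper-evident NFC seal.  The payload layout
# below is invented for illustration, not Thinfilm's real format.

def parse_seal_payload(payload: bytes):
    """Parse a toy 9-byte payload: 8-byte tag ID followed by a seal flag."""
    if len(payload) != 9:
        raise ValueError("unexpected payload length")
    tag_id = payload[:8].hex()
    sealed = payload[8] == 0x01     # 0x01 = factory-sealed, 0x00 = opened
    return tag_id, sealed

# Example: a still-sealed bottle with an invented tag ID.
tag_id, sealed = parse_seal_payload(bytes.fromhex("00a1b2c3d4e5f607") + b"\x01")
print("bottle", tag_id, "is", "sealed" if sealed else "opened")
```

A mobile app would obtain the payload over NFC, then use the tag ID for the personalized experience and the seal flag for the tamper check.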
