Category Archive: Book/Paper

Feb 23 2017

Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, made the headlines of many news sites and blogs. Thus, I had to dive in and read the paper. This team has already disclosed covert channels in the past, such as one based on temperature. The three researchers, GURI M., ZADOV B., and ELOVICI Y., have devised a new way to breach an air gap. They use the hard-drive (HDD) activity LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light that carries the covert channel.
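To make the modulation concrete, here is a minimal sketch of the idea in Python (my own illustration, not the authors' code). It signals a '1' by issuing disk reads for one bit period so that the activity LED blinks, and a '0' by staying idle. The file path, bit period, and chunk size are arbitrary assumptions, and in practice the page cache must be bypassed or flushed for the reads to reach the physical disk.

    import os
    import time

    BIT_PERIOD = 0.5   # seconds per bit; slow enough for a camera, a photodiode allows far shorter periods
    CHUNK = 4096       # read 4 KB blocks, as in the paper's experiments

    def transmit(bits, path="/var/tmp/big_file"):
        """On-off keying over the HDD activity LED: reads for '1', silence for '0'."""
        fd = os.open(path, os.O_RDONLY)
        size = os.fstat(fd).st_size
        assert size >= CHUNK, "needs a large, pre-existing file to read from"
        offset = 0
        for bit in bits:
            deadline = time.monotonic() + BIT_PERIOD
            if bit == "1":
                while time.monotonic() < deadline:   # keep the disk busy -> LED on
                    os.pread(fd, CHUNK, offset % size)
                    offset += CHUNK
            else:
                time.sleep(BIT_PERIOD)               # no I/O -> LED off
        os.close(fd)

    if __name__ == "__main__":
        transmit("10110010")   # one covert byte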

They experimented with several ordinary computers and concluded that they were able to reach about 4 Kbit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not allow more than about 15 bit/s, depending on their frames per second (fps); do not forget the Nyquist-Shannon sampling theorem. Thus, they used a photodiode or a specialized amplified light detector. Only this kind of equipment can guarantee a good detection rate.

Using HDD activity as a covert channel is not a new method. At Black Hat 2008, FILIOL E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their talk is an interesting presentation of many covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The issue is the malware itself. The malware has to be the exclusive entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will interfere with the emitted message. Therefore, the researchers recommend disabling disk caching (e.g., drop_caches on Linux). What is the likelihood that an air-gapped computer can run a piece of malware as the exclusive process without being noticed? One of the characteristics of the malware is that it should be stealthy, thus it is probably not alone in accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid scenario (or evil drone) does not seem realistic. The malware should execute only while the spy is present; otherwise, it will be noticed (due to the exclusivity of access to the HDD). The spy cannot signal its presence to the malware, as the malware is air gapped and thus cannot receive any incoming message. Thus, either they have to define some rendezvous in advance, or the malware has to run repeatedly over a long period, reducing its stealthiness. If the spying device is "fixed," using cameras is not realistic due to their low bandwidth, thus requiring the malware to run for long periods. Nevertheless, the spy may have installed special equipment that records everything, and later analyze the recorded light, looking for the malware's sequences whenever the malware wakes up and plays. The spying device will then have to exfiltrate stealthily a far larger message than the covert message, increasing the risk of being detected.

The attack is possible but seems more complex than what is publicized. The paper’s proposed countermeasures disclose the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.
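A minimal sketch of such a "jamming" background process might look like the following (my own illustration of the quoted idea; the file path and timing values are arbitrary assumptions):

    import os
    import random
    import time

    CHUNK = 4096

    def jam(path="/var/tmp/noise_file"):
        """Issue reads at random offsets and irregular intervals so that any
        LED modulation by a malicious process is drowned in noise."""
        fd = os.open(path, os.O_RDONLY)
        size = max(os.fstat(fd).st_size - CHUNK, CHUNK)
        while True:
            os.pread(fd, CHUNK, random.randrange(0, size, CHUNK))  # random read -> LED flicker
            time.sleep(random.uniform(0.001, 0.05))                # jittered timing breaks clean on-off keying

    if __name__ == "__main__":
        jam()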

As already stated, I believe that in most cases more than one process will be executing and accessing the HDD. If you are paranoid, you can always hide the LED.

Reference:

Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-It-GO Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017. http://cyber.bgu.ac.il/advanced-cyber/system/files/LED-it-GO_0.pdf

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008. https://www.researchgate.net/publication/228801499_Passive_and_Active_Leakage_of_Secret_Data_from_Non_Networked_Computer

Jan 30 2017

Neural Networks learning security

In October 2016, Martín Abadi and David G. Andersen, two Google researchers, published a paper that made the headlines. Its title was "Learning to Protect Communications with Adversarial Neural Cryptography." The newspapers announced that two neural networks had learnt autonomously how to protect their communication. This statement was interesting.

As usual, many newspapers simplified the outcome of the publication. Indeed, the experiment operated under some detailed limitations that the newspapers rarely highlighted.

The first limitation is the adversarial model. Usually, in security, we expect Eve not to be able to understand the communication between Alice and Bob. The usual constraints on Eve are that either she is passive, i.e., she can only listen to the communication, or she is active, i.e., she can tamper with the exchanged data. In this case, Eve is passive, and Eve is a neural network trained by the experimenters. Eve is not a human or a customized piece of software. In other words, she has limited capabilities.

The second limitation is the definition of success and secrecy:

  • For the training of Alice and Bob to be deemed successful, an average error rate of 0.05 bits is tolerated for the reconstruction of a protected 16-bit message (see the toy sketch after this list). In cryptography, the reconstruction error must be zero; we cannot accept any error in the decryption process.
  • The output of the neural network, i.e., the protected message, is not required to look random. Usually, randomness of the output is an expected feature of any cryptosystem.
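As a toy illustration of this success criterion (my own sketch, not from the paper), the reconstruction error can be measured as the average number of wrongly recovered bits per 16-bit message, with bits encoded as ±1:

    import numpy as np

    def avg_bit_error(plaintexts, reconstructions):
        """Average number of wrong bits per message; inputs hold values in [-1, 1]."""
        wrong = np.sign(plaintexts) != np.sign(reconstructions)   # True where a bit was flipped
        return wrong.sum(axis=1).mean()                           # mean wrong bits per message

    # Alice and Bob are deemed successful when this drops to about 0.05 bits
    # per 16-bit message; a classical cipher tolerates exactly zero errors.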

Under these working assumptions, Alice and Bob succeeded in hiding their communication from Eve after 20,000 training iterations. Unfortunately, the paper does not explain how the neural network succeeded, nor what types of mathematical methods it ended up implementing (although the researchers modeled a symmetric-like cryptosystem, i.e., Alice and Bob shared a common key). Nor was there any attempt to protect a textual message and challenge cryptanalysts to break it.

Thus, it is an interesting theoretical work in the field of machine learning, but most probably not useful in the field of cryptography. By the way, with the current trend in cryptography of requiring formal proofs of security, any neural-network-based system would fail this formal proof step.


Abadi, Martín, and David G. Andersen. "Learning to Protect Communications with Adversarial Neural Cryptography." arXiv:1610.06918, October 21, 2016. http://arxiv.org/abs/1610.06918

Dec 19 2016

Artificial Intelligence vs. Genetic Algorithm

AI and Deep Learning are hot topics. Their progress is impressive (see AlphaGo). Nevertheless, they open new security challenges. I am not speaking here about the famous singularity point, but rather about basic security issues. This topic piqued my interest, so expect to hear more from me on the subject.

Can AI be fooled? For instance, can recognition software be fooled into recognizing something other than what is expected? The answer is yes, and some studies seem to indicate that, at least in some fields, it may be relatively easy. What does the following image represent?

You most probably recognized a penguin. So did a well-trained deep neural network (DNN), as we may expect. According to you, what does the following image represent?

This time, you most probably did not recognize a penguin. Yet the same DNN decided that it was one. Of course, this image is not a random image. A. NGUYEN, J. YOSINSKI, and J. CLUNE studied how to fool such a DNN in their paper "Deep Neural Networks Are Easily Fooled." They used a genetic algorithm (or evolutionary algorithm) to create such fooling images. Genetic algorithms try to mimic evolution under the assumption that only the fittest elements survive (so-called natural selection). These algorithms start from an initial population. The population is evaluated through a fitness function (here, the DNN's recognition score for the image) to select the fittest samples. The selected samples are then mutated and crossed over. The resulting offspring go through the same selection process again. After several generations (usually on the order of thousands), the result is an optimized solution, as sketched below. The researchers applied this technique to fool the DNN with success.
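As a rough illustration of that loop (my own sketch, not the authors' code), a bare-bones genetic algorithm could look like this; fitness stands in for the DNN's confidence that the candidate image depicts, say, a penguin:

    import random

    POP, GENERATIONS, MUT_RATE = 50, 200, 0.05
    GENOME = 32 * 32            # a tiny grayscale "image" as a flat list of pixels in [0, 1]

    def fitness(img):
        # Placeholder for dnn_confidence(img, "penguin"); here just an arbitrary smooth score.
        return -sum((p - 0.5) ** 2 for p in img)

    def mutate(img):
        return [min(1.0, max(0.0, p + random.gauss(0, 0.1))) if random.random() < MUT_RATE else p
                for p in img]

    def crossover(a, b):
        cut = random.randrange(GENOME)
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(GENOME)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)      # "natural selection": rank by fitness
        parents = population[: POP // 2]                # keep the fittest half
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP - len(parents))]
        population = parents + offspring
    best = max(population, key=fitness)                 # the candidate fooling image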

They also attempted to train the DNN with these fooling images as decoys, so that it would reject them. Then, they ran the same process with new generations. The newly trained DNN, with decoys, did not perform any better at avoiding fooling images. They also succeeded with totally random noisy images, but these images are aesthetically less satisfying. :-)

Interestingly, the characteristics of the fooling images, such as colors, patterns, or repetitions, may give some hints about which features the DNN actually uses as its main discriminators.

This experiment highlights the risks of AI and DNNs. They operate as black boxes. Currently, practitioners have no tools to verify whether an AI or DNN will operate properly under adverse conditions. Usually, with signal processing, researchers can calculate a theoretical false-positive rate. To the best of my knowledge, this is no longer the case with DNNs. Unfortunately, false-positive rates are an important design factor in security and in pattern recognition related to security or safety matters. With AI and DNNs, we are losing predictability of the behavior. This limitation may soon become an issue if we cannot expect them to react properly in non-nominal conditions. Rule 1.1: Always Expect the Attackers to Push the Limits.

A very interesting paper to read.

Nguyen, A., J. Yosinski, and J. Clune. "Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 427–36, 2015. doi:10.1109/CVPR.2015.7298640. Available at https://arxiv.org/pdf/1412.1897.pdf

Nov 20 2016

Law 7 – You Are the Weakest Link

This post is the seventh in a series of ten. The previous post explored the sixth law: Security is not stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent it when it obstructs the fulfillment of their task, and they will not apply security policies and procedures. They do not believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions.  How many people ignore the security warnings of their browser?  How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)?  Employees put their company at risk by bad decisions.

Humans are the weakest link because they have intrinsic limitations. Human memory is often feeble; thus, we end up with weak passwords, or complex passwords written on a post-it. Humans do not handle complexity well. Unfortunately, security is too complex.

Humans are the weakest link because they can be easily deceived.  Social engineers use social interaction to influence people and convince them to perform actions that they are not expected to do, or to share information that they are not supposed to disclose.   For instance, phishing is an efficient contamination vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; as the end users are not necessarily able to make rational decisions on security issues, the designer should make the decisions when possible. Whenever the user has to decide, the consequences of his decision should be made clear to him to guide his decision.
  • Define secure defaults; the default value should always correspond to the highest, or at least an acceptable, security level. User friendliness should not drive the default value; security should.
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is only possible by educating the employees about this kind of attack.  Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits.  Therefore, it is imperative that the security personnel be aware of the latest techniques.  Operational security staff should have a significant part of their work time dedicated to continuous training.

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, "Ten Laws for Security," which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, at Springer or Amazon.

Sep 29 2016

Judgment under uncertainty

We often have to make decisions without having all the information we would like. We make these decisions based on the perceived likelihood of events. Obviously, evaluating uncertain events is an incredibly difficult task. Unfortunately, it is mandatory for risk management and data analysis. This judgment is subjective, although we may believe it is rational.

In 1974, Amos Tversky and Daniel Kahneman published the paper "Judgment under Uncertainty: Heuristics and Biases." They explored the different biases that taint our decisions and that we are most probably not aware of.

For instance,

  • Insensitivity to sample size: people tend to ignore the fact that small samples vary more and are therefore less representative.
  • Misconception of chance: many people believe that the probability of the dice sequence 111111 is far lower than that of the sequence 163125 (see the quick check after this list).
  • Biases of imaginability: we judge likelihood by how easily we can imagine instances or scenarios.
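A quick check of the second bullet (elementary, but worth spelling out): under a fair die, every specific six-roll sequence is equally likely.

    p = (1 / 6) ** 6
    print(f"P(111111) = P(163125) = {p:.2e}")   # both about 2.14e-05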

The paper lists ten such biases. Being aware of them is worthwhile. The article reminds me a lot of the book "Predictably Irrational."

Nice to read for security guys and data analysts.

Reference

Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” In Utility, Probability, and Human Decision Making, edited by Dirk Wendt and Charles Vlek, 141–62. Theory and Decision Library 11. Springer Netherlands, 1975. http://www.cob.unt.edu/itds/faculty/evangelopoulos/busi6220/Kahneman1974_Science_JudgementUnderUncertainty.pdf

Jul 06 2016

An insight into Knox

Samsung provides Knox, an enterprise mobile security solution, for its Galaxy devices. Among its features, Knox offers Workspace, which partitions the mobile device into two spaces: the user space and the Knox space. Of course, the Knox space runs in TrustZone™ and executes only authenticated, trusted applications. There is not much public information about the actual implementation of Knox.

Uri Kanonov and Avishai Wool have lifted a part of the veil by reverse engineering Knox 1.0. Their paper provides an interesting in-depth description of some security mechanisms, such as compartmentalization (based on SELinux) and the encrypted file system. They also disclose some vulnerabilities. The last section describes some enhancements available in Knox 2.3, as well as some remaining issues.

An interesting element of the paper is the list of lessons:

  • Component reuse is welcome, provided the added attack surface is properly protected.
  • Protect the software code of secure components.
  • Validating the applications authorized to run in TrustZone is key for security.
  • A hardware root of trust should be at the root of any secure container system.
  • Avoid resource sharing; it increases the attack surface.
  • Check the integrity of the secure container periodically; checking only at boot time is insufficient (a minimal sketch of such a periodic check follows this list).
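A minimal sketch of that last lesson (my own illustration; the path, interval, and baseline handling are assumptions, not Knox internals) could periodically re-hash the container's files and compare them against a known-good baseline:

    import hashlib
    import pathlib
    import time

    def snapshot(root):
        """Map every file under root to its SHA-256 digest."""
        return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(pathlib.Path(root).rglob("*")) if p.is_file()}

    def monitor(root="/data/secure_container", interval=60):
        baseline = snapshot(root)        # ideally measured and stored via a hardware root of trust
        while True:
            time.sleep(interval)
            if snapshot(root) != baseline:
                raise RuntimeError("secure container integrity violated")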

If you want to learn more about Knox, this paper is a good read.

Kanonov, Uri, and Avishai Wool. “Secure Containers in Android: The Samsung KNOX Case Study.” arXiv, May 27, 2016. http://arxiv.org/abs/1605.08567.

Apr 15 2016

Hacking reCAPTCHA (2)

In 2012, the hacking team DefCon 949 disclosed their method for breaking Google's reCAPTCHA. They used weaknesses in the version dedicated to visually impaired persons. At the end of 2014, Google replaced its letter-warping version with a more user-friendly one. It is based on recognizing, within a set of nine images, those that illustrate a given object.

At Black Hat Asia 2016, S. Sivakorn, J. Polakis, and A. Keromytis from Columbia University disclosed a method to break this visual captcha. They used many tools, but the core of the attack is the use of image annotation services, such as Google Reverse Image Search (GRIS) or Clarifai; a simplified sketch of this matching step follows the paragraph. These tools return a best-guess description of the image, i.e., a list of potential tags. For instance, for the picture of a go-ban illustrating the blog post about AlphaGo, Clarifai returns chess, desktop, strategy, wood, balance, no person, table, and game, whereas GRIS returns go game. They used many tricks to increase the efficiency. My preferred one is to use GRIS to locate a high-resolution instance of each proposed challenge image: they discovered that the accuracy of these annotation services decreased with the resolution of the submitted image.
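Here is my own simplified reconstruction of that core matching step (annotate is a hypothetical stand-in for a real annotation service such as GRIS or Clarifai, not an actual API):

    def annotate(image_path):
        """Hypothetical stand-in: return a set of descriptive tags for the image."""
        raise NotImplementedError("plug in a real image annotation service here")

    def solve_challenge(hint, candidate_images):
        """Select, among the nine candidates, the images whose tags overlap the challenge hint."""
        hint_words = set(hint.lower().split())
        selected = []
        for image in candidate_images:
            tags = {tag.lower() for tag in annotate(image)}
            if tags & hint_words:          # at least one tag matches the challenge keyword
                selected.append(image)
        return selected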

They obtained a 70% accuracy for Google reCaptcha and 83.5% for Facebook’s version.

Sivakorn, Suphannee, Jason Polakis, and Angelos D. Keromytis, “I’m Not a Human: Breaking the Google reCaptcha” presented at Black Hat Asia, Singapore, 2016.
