Subliminal images deceive Machine Learning video labeling

Current machine learning systems are not designed to defend against malevolent adversaries. Several teams have already succeeded in fooling image recognition or voice recognition systems. Three researchers from the University of Washington fooled Google's video tagging system. Google offers a Cloud Video Intelligence API, based on deep learning, that returns the most relevant tags for a video sequence.

The researchers' idea is simple. They insert the same image every 50 frames of the video. The API returns the tags related to the injected image (with over 90% confidence) rather than the tags related to the forged video sequence. Thus, rather than returning tiger for a video that shows a tiger 98% of the time, the API returns Audi.
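The injection step is trivial to reproduce. Below is a minimal sketch, assuming the video has already been decoded into a sequence of frames; the function name is mine, and the 50-frame period comes from the paper's description. Overwriting four frames of a 200-frame clip leaves 98% of the original content intact, matching the tiger/Audi example above.

```python
def inject_adversarial_frames(frames, adversarial_frame, period=50):
    """Return a forged copy of the video where `adversarial_frame`
    overwrites every `period`-th frame of the original sequence."""
    forged = list(frames)
    for i in range(0, len(forged), period):
        forged[i] = adversarial_frame
    return forged

# Toy example: a 200-frame "tiger" video with an injected car image.
video = ["tiger"] * 200
forged = inject_adversarial_frames(video, "audi")
print(forged.count("audi"), forged.count("tiger"))  # 4 196
```

With real footage, the same loop would operate on decoded image arrays (e.g., via any video library) before re-encoding; the labeling API then latches onto the injected image despite its 2% share of the frames.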

It has never been demonstrated that subliminal images are effective on people. This team demonstrated that they can be effective against machine learning systems. Such an attack has many potential uses.

This short paper is worth reading. It would be interesting to test this attack on other APIs.

Hosseini, Hossein, Baicen Xiao, and Radha Poovendran. “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos.” arXiv, March 31, 2017.


BlueBorne: eight vulnerabilities in Bluetooth stacks

Ben Seri and Gregory Vishnepolsky from the company Armis recently disclosed eight vulnerabilities present in various Bluetooth stacks. Their paper “The dangers of Bluetooth implementations: Unveiling zero day vulnerabilities and security flaws in modern Bluetooth stacks” thoroughly describes these vulnerabilities and draws some interesting lessons.

Some vulnerabilities may allow taking control of the Bluetooth device. These exploits do not need the target to be in discoverable mode. The attacker only needs to know the Bluetooth MAC address (BDADDR). Contrary to common belief, it is guessable even for non-discoverable devices. If the target generates Bluetooth traffic, its BDADDR is visible in the access code. If it is not generating traffic, the widely adopted convention of using the same MAC address for Wi-Fi as for Bluetooth may reveal it.
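The guessing step can be illustrated with a short sketch. It relies only on the vendor convention mentioned above, that a device's BDADDR equals its Wi-Fi MAC or differs in the last byte by one; the helper name is mine.

```python
def bdaddr_candidates(wifi_mac):
    """Given a sniffed Wi-Fi MAC address, return plausible BDADDR guesses,
    assuming the vendor used the same or an adjacent address for Bluetooth."""
    octets = [int(x, 16) for x in wifi_mac.split(":")]
    guesses = []
    for delta in (0, 1, -1):
        last = (octets[-1] + delta) % 256
        guesses.append(":".join(f"{o:02x}" for o in octets[:-1] + [last]))
    return guesses

print(bdaddr_candidates("a4:5e:60:c2:9b:10"))
# ['a4:5e:60:c2:9b:10', 'a4:5e:60:c2:9b:11', 'a4:5e:60:c2:9b:0f']
```

With at most three candidates per observed Wi-Fi MAC, trying each against the exploits is cheap for an attacker within radio range.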

Once the attacker knows the BDADDR, he can use the exploits. One powerful vulnerability is due to a lack of implementation guidelines in the specifications for “Just Works” authentication. On Android and Windows, if the attacker claims to be “No Input No Output, no man-in-the-middle protection required, and no bonding,” the target stealthily accepts the connection with limited security capabilities for a short period of time (due to the absence of bonding). Of course, any service that requires MitM protection or bonding, and verifies the requirement, will refuse to operate over such a connection. On Apple devices, the connection requires a validation by the user.

Once the attacker is linked to the unknowing target, he can try many attacks. My preferred ones are CVE-2017-0783 and CVE-2017-8628. They use a flaw in the specification of the Personal Area Network (PAN) profile. This service has a low security requirement. This means that the previous attack grants access to the PAN without any authorization! The attacker can mount a pineapple attack over Bluetooth without the target being aware. In a Wi-Fi Pineapple attack, the attacker impersonates an already known public Wi-Fi network and can act as a man in the middle. In this case, the pineapple does not even need to be a known network. Formidable.

The PAN specification dates from 2003 and has never been revised since. “Just Works” and the newer authentication protocols were specified more recently. They changed the trust model and trust context, but the older specifications were never re-analyzed to identify potential impacts.

The other vulnerabilities allow either buffer overflows or data leakage by reading beyond the allocated memory spaces.

The vulnerabilities were disclosed in coordination with Google, Microsoft, and the Linux kernel team.

Conclusion: verify that you have installed the August and September security patches on your devices. They contain fixes for these vulnerabilities.


French users seem aware of the risks and threats of illicit sites

The French HADOPI recently published an interesting paper, “Etude sur les risques encourus sur les sites illicites,” i.e., a study of the risks incurred on illicit sites. They polled 1,021 Internet users older than 15. The first part of the study analyzes the reported use of so-called illicit sites. The second part assesses these users’ awareness of the associated risks.

The first part is very conventional and presents information already known from other markets. The results are neither surprising nor widely deviating from other countries. For instance, unsurprisingly, younger generations use illicit sharing sites more than older ones.

Figure extracted from the report. In black, my proposed translation.

Music, movies, and TV shows are the most illicitly accessed categories.

The second part is more interesting. Most polled users claim to know the threats of the Internet (scareware, spam, computer slowdown due to malware, adult advertisements, and changes to the browser’s settings) as well as the potential harms (theft of banking credentials, identity theft, scams, or ransomware). Nevertheless, the more one consumes illicit content, the higher the risk of nuisance and prejudice. Not surprisingly, 60% of consumers who stopped using illicit content had suffered at least one serious prejudice.

Figure extracted from the report. In black, my proposed translation.

Users seem to understand that the consumption of illicit content seriously increases the risks. Nevertheless, there is a distortion: nuisances are more readily associated with illegal consumption than actual, serious prejudices.

Figure extracted from the report. In black, my proposed translation.

The top four motivations of legal users are to be lawful (66%), fear of malware (51%), respect for the artists (50%), and a better product (43%). For regular illicit users, the top three motivations to use the legal offer are better quality (43%), fear of malware (42%), and being lawful (41%). 57% of illicit users claim that they intend to reduce or stop consuming illegal content. 39% of illicit users state that they will not change their behavior. 4% of illicit users claim they plan to consume more illicit content.

We must always be cautious with the answers to a poll. Some people may not be ready to disclose their unlawful behavior. Therefore, the real rates of illicit behavior are most probably higher than disclosed in the document. Polled people may also provide wrong answers. For instance, about 30% of the consumers who illicitly acquire software claim to use streaming! Caution should also apply to the classification between streaming and P2P. Many new tools, for instance Popcorn Time, use P2P but present a behavior similar to streaming.

Conclusions of the report:

  • Risks are present on the Internet. Illicit users are more at risk than lawful users.
  • Users acknowledge that illicit consumption is riskier than legal consumption.
  • The legal offer is perceived as the safe choice.
  • Having been hit by a security problem pushes users towards the legal offer.

An interesting report; unfortunately, it is currently only available in French.



What the Public Knows About Cybersecurity

In June 2016, the US Pew Research Center asked 1,055 US adults 13 questions to evaluate their knowledge of cybersecurity. The questions ranged from identifying a suitable password to identifying a two-factor authentication system.

The readers of this blog would probably belong to the 1% of the sample (i.e., 11 individuals) who made no mistake. To be honest, non-US readers may fail the question related to the US credit score (“Americans can legally obtain one free credit report yearly from each of the three credit bureaus”). This question is not directly related to cybersecurity and is purely US-specific. I must confess that before moving to the US, I would have had no idea of the right answer.

Not surprisingly, the success rate was low. The average number of correct answers was 5.5 out of 13! Three-quarters of the interviewees correctly identified a strong password. About half of the individuals accurately spotted a simple phishing attack. Only one-third knew that HTTPS means the connection is encrypted.

There was a clear correlation between the level of education and the rate of correct answers. Those with college degrees or higher had an average of 7 right answers. The impact of age was less conclusive.

You may take the quiz.


The results are not a surprise. We do not see an increase in awareness. This study should be a reminder that we, the security practitioners, must educate the people around us. It is our civic responsibility and duty. This education is the first step towards a more secure connected world. Without it, the connected world will become a hell for most people.

The paper may be useful in your library.


Olmstead, Kenneth, and Aaron Smith. “What the Public Knows About Cybersecurity.” Pew Research Center, March 22, 2017.

Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, has made the headlines of many news outlets and blogs. Thus, I had to dive in and read the paper. This team had already disclosed covert channels in the past, such as temperature. Three researchers, Guri M., Zadov B., and Elovici Y., have devised a new way to breach an air gap. They use the hard-drive (HDD) LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light, which carries the covert channel.

They experimented with several ordinary computers and concluded that they were able to reach about 4 Kbit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not allow more than 15 bit/s, depending on their frames per second (fps); do not forget the Nyquist-Shannon sampling theorem. Thus, they used a photodiode or specialized amplified light detectors. Only such equipment can guarantee a good detection rate.
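The modulation itself boils down to on-off keying: the malware issues disk reads to light the LED for a “1” and stays idle for a “0.” A minimal sketch of the scheduling side, under the assumption of a fixed bit duration (the real implementation tunes the timing to the receiver's sampling rate; the function name and default are mine):

```python
def ook_schedule(payload: bytes, bit_time=0.005):
    """Turn a payload into a list of (led_on, duration) pairs.
    led_on=True means 'issue disk reads for `duration` seconds',
    which keeps the HDD LED lit; False means stay idle (LED off)."""
    schedule = []
    for byte in payload:
        for i in range(7, -1, -1):          # transmit MSB first
            bit = (byte >> i) & 1
            schedule.append((bool(bit), bit_time))
    return schedule

sched = ook_schedule(b"\xA5")               # 0xA5 = 1 0 1 0 0 1 0 1
print([on for on, _ in sched])
# [True, False, True, False, False, True, False, True]
```

A transmitter loop would then walk the schedule, reading raw sectors during the “on” slots; shrinking `bit_time` toward the reported 4 Kbit/s is only useful if the receiver (photodiode) samples fast enough.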

Using HDD activity for a covert channel is not a new method. At Black Hat 2008, Filiol E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their presentation is an interesting survey of many covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The first issue is the malware itself. The malware has to be the exclusive entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will corrupt the emitted message. Therefore, the researchers recommend turning off disk caching (drop_caches on Linux). What is the likelihood that an air-gapped computer can run a piece of malware as the exclusive process without being noticed? One of the required characteristics of the malware is stealthiness, so it will probably not be alone in accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid (or evil drone) scenario does not seem realistic. The malware should execute only in the presence of the spy; otherwise, it will be noticed (due to the required exclusive access to the HDD). The spy cannot signal its presence to the malware, as the malware is air-gapped and thus cannot receive any incoming message. Therefore, either they have to agree in advance on some rendez-vous, or the malware has to run repeatedly for long periods, reducing its stealthiness. If the spying device is “fixed,” using cameras is not realistic due to their low bandwidth, which would require the malware to run for long periods. Nevertheless, the spy may install special equipment that records everything, and later analyze the recorded light, looking for the malware’s sequences whenever the malware wakes up and plays. The spying device will then have to stealthily exfiltrate a far larger message than the covert message, increasing the risk of detection.

The attack is possible but seems more complex than publicized. The paper’s proposed countermeasures disclose the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.

As already noted, I believe that in most cases, more than one process will be executing and accessing the HDD. If you are paranoid, you can always hide the LED.
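The quoted countermeasure is easy to approximate: a small daemon that keeps issuing random-offset reads so that any modulation a malicious process imposes on the LED drowns in noise. A rough sketch (the file path handling, read size, and timing are arbitrary choices of mine):

```python
import os
import random
import tempfile
import time

def led_noise(duration=0.5, block=4096):
    """Issue random reads against a scratch file for `duration` seconds,
    jamming any pattern a covert-channel process imposes on the HDD LED.
    Returns the number of jamming reads performed."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(block * 64))     # 256 KB scratch file
        path = f.name
    reads = 0
    deadline = time.monotonic() + duration
    with open(path, "rb") as f:
        while time.monotonic() < deadline:
            f.seek(random.randrange(64) * block)
            f.read(block)
            reads += 1
            time.sleep(random.uniform(0, 0.01))  # irregular cadence
    os.unlink(path)
    return reads

print(led_noise(0.2) > 0)
```

Note that the OS page cache would normally satisfy these reads without touching the disk, which is precisely why the researchers mention disabling the cache; a real daemon would need to bypass the cache (e.g., direct I/O) for the reads to actually reach the LED.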


Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-It-GO Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017.

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008.

Neural networks learning security

In October 2016, Martín Abadi and David Andersen, two Google researchers, published a paper that made the headlines. The title was “Learning to Protect Communications with Adversarial Neural Cryptography.” The newspapers announced that two neural networks had learnt autonomously how to protect their communication. This claim was intriguing.

As usual, many newspapers simplified the outcome of the publication. Indeed, the experiment operated under some significant limitations that the newspapers rarely highlighted.

The first limitation is the adversarial model. Usually, in security, we expect Eve not to be able to understand the communication between Alice and Bob. The usual assumptions for Eve are that she is either passive, i.e., she can only listen to the communication, or active, i.e., she can tamper with the exchanged data. In this case, Eve is passive, and Eve is a neural network trained by the experimenters. Eve is not a human or a customized piece of software. In other words, she has limited capacities.

The second limitation is the definition of success and secrecy:

  • For the training of Alice and Bob to be deemed successful, the average reconstruction error of a protected 16-bit message must be below 0.05 bits. In cryptography, the reconstruction error must be zero; we cannot accept any error in the decryption process.
  • The protected message output by the neural network is not required to look random. Usually, randomness of the output is an expected feature of any cryptosystem.
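The training objective behind these criteria can be sketched numerically. Eve minimizes her own reconstruction error, while Alice and Bob minimize Bob's error plus a term that pushes Eve's error toward random guessing, i.e., N/2 wrong bits for an N-bit message. The formulas below follow the paper's loss construction as I read it; the toy values are mine.

```python
def recon_error(p, p_hat):
    """Bit-reconstruction error for messages encoded as +/-1 values.
    Each wrong bit contributes |(+1) - (-1)| = 2, so divide by 2."""
    return sum(abs(a - b) for a, b in zip(p, p_hat)) / 2.0

def alice_bob_loss(p, p_bob, p_eve):
    """Bob should reconstruct perfectly; Eve should do no better
    (and no worse) than chance, i.e., n/2 wrong bits."""
    n = len(p)
    bob_err = recon_error(p, p_bob)
    eve_err = recon_error(p, p_eve)
    return bob_err + ((n / 2 - eve_err) ** 2) / (n / 2) ** 2

p     = [1., -1., 1., 1., -1., -1., 1., -1.]   # 8-bit plaintext
p_bob = list(p)                                # Bob decrypts perfectly
p_eve = [-x for x in p]                        # Eve gets every bit wrong
print(alice_bob_loss(p, p_bob, p_eve))         # 1.0
```

Note that Eve reconstructing the exact complement is penalized as much as reconstructing the plaintext: flipping her output would recover the message, which is why the penalty is quadratic around n/2 rather than simply rewarding a large Eve error.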

Under these working assumptions, Alice and Bob succeeded in hiding their communication from Eve after about 20,000 training iterations. Unfortunately, the paper does not explain how the neural network succeeded, nor what kind of mathematical method it implemented (although the researchers modeled a symmetric-like cryptosystem, i.e., Alice and Bob share a common key). Nor was there an attempt to protect a textual message and challenge cryptanalysts to break it.

Thus, it is an interesting theoretical work in the field of machine learning, but most probably not useful in the field of cryptography. By the way, with the current trend in cryptography of requiring formal proofs of security, any neural-network-based system would fail this formal proof step.


Abadi, Martín, and David G. Andersen. “Learning to Protect Communications with Adversarial Neural Cryptography.” arXiv:1610.06918 (October 21, 2016).