White-box cryptography: an open challenge

The ideal implementation of a cryptographic algorithm would be one where, even if the attacker had the source code and entirely controlled the platform, she still could not retrieve the secret key. In 2002, Stanley Chow and his colleagues proposed a new concept coined white-box cryptography. The white-box threat model assumes that the attacker has full access to the encryption software and entirely controls the execution platform. White-box cryptography attempts to protect the key even under such a hostile threat model. The main idea is to create a functionally equivalent implementation of the encryption or decryption algorithm that uses only look-up tables. These look-up tables, with the secret key hard-coded into them, replace the S-boxes, Feistel structures, and XOR operations usually employed by symmetric ciphers. The look-up tables are then further randomized; in theory, this randomization hides the hard-coded key. White-box cryptography is a difficult challenge even for skilled reverse engineers.
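
To illustrate the table idea, here is a toy sketch in C (my own illustration, not the Chow et al. construction: the S-box is a placeholder permutation and the key byte is a made-up example). A step that XORs a key byte and then applies an S-box is merged into a single key-dependent table; real white-box designs additionally wrap every such table in secret input and output encodings.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy sketch: fold "XOR with a key byte, then apply an S-box" into one
     * key-dependent look-up table. Real white-box AES also applies secret
     * input/output encodings to every table; without them, this toy table
     * leaks the key trivially. The S-box below is a placeholder, not AES. */
    static uint8_t SBOX[256];
    static uint8_t TBOX[256];

    static void build_tbox(uint8_t key_byte)
    {
        for (int x = 0; x < 256; x++)
            SBOX[x] = (uint8_t)((x * 7 + 31) & 0xFF);  /* placeholder bijection */
        for (int x = 0; x < 256; x++)
            TBOX[x] = SBOX[x ^ key_byte];              /* the key dissolves into the table */
    }

    int main(void)
    {
        build_tbox(0x2B);                              /* made-up key byte */
        unsigned in = 0x42;
        /* The deployed white-box code ships only TBOX; the key byte itself never appears. */
        printf("TBOX[0x%02X] = 0x%02X\n", in, (unsigned)TBOX[in]);
        return 0;
    }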

Abundant cryptanalysis has demonstrated that these constructions are not theoretically secure. Nevertheless, well-crafted real-world implementations may resist reverse engineering. Many vendors offer such white-box cryptography for AES. The issue is knowing whether a given implementation is robust. Securing white-box cryptography still involves a lot of black magic. Currently, the only options are to reverse engineer the implementation yourself or to trust your supplier blindly.

Fortunately, the European-funded research project ECRYPT has launched an exciting challenge: the WhiBox contest. It is a capture-the-flag challenge dedicated to white-box cryptography. Developers are encouraged to submit AES-128 white-box implementations as C source code. Attackers are invited to break the challenges, i.e., to extract the embedded encryption keys.

The contest starts on May 15 and ends on August 31. The winners, i.e., the designer of the implementation that resisted the longest and the attacker who broke the “strongest” implementation, will be announced during the rump session of CHES 2017.

This initiative is interesting. It will provide a benchmark of the state of the art in this obscure field. Of course, it will have value only if enough skilled attackers answer the challenge. I expect some successes. It reminds me of the challenges organized to evaluate oracle attacks against digital video watermarking (BOWS and BOWS2). BOWS demonstrated the risk associated with giving access to a watermark detector.

We will follow this challenge. Will commercial solutions dare to submit implementations? Winning this challenge would be a feather in their cap.


Reference:

Chow, Stanley, Philip Eisen, Harold Johnson, and Paul C. van Oorschot. “A White-Box DES Implementation for DRM Applications.” In Digital Rights Management, edited by Joan Feigenbaum, 1–15. Springer Berlin Heidelberg, 2003.

Blade Runner

Usually, this blog covers security and, occasionally, some SciFi-related issues. I have never advertised a movie here. But as an absolute fan, I have to break my publishing rules: I have been waiting for this event for far too long, as have many other Philip K. Dick fans. Enjoy.

Law 8 – If You Watch the Internet, the Internet Is Watching You

This post is the eighth in a series of ten. The previous post explored the seventh law: you are the weakest link. In our increasingly connected world, this law becomes more and more important. Most connections are bidirectional. The consequence is that information flows both ways. If you receive information from the Internet, the Internet may collect information from you. If your device reaches out to the Internet, the Internet may come to your device. Some of these ingress connections may not be solicited.

Controlling what is exchanged and monitoring who uses the connections is the role of network security. Fortunately, network security is a rather mature discipline. Thus, the first rule should be the following one.

Do not connect directly to the Internet; access to the Internet should be carefully controlled. It should pass through at least a firewall and anti-malware filtering. When possible, implement a Demilitarized Zone (DMZ) to create an isolation buffer between the Internet and your network that may deter attackers from intruding into it. Not everybody needs, or is able, to install a DMZ, especially at home. However, everybody should install a firewall between their network and the Internet. In a consumer environment, the firewall should block every ingress communication by default.
Many specialists claim (rightly) that the notion of perimeter defense is outdated. This does not mean that local networks should not be protected against intrusions or leaks. These network security mechanisms remain mandatory, but they are not sufficient.

Thou shalt be traced; the digital world increasingly keeps records of all the activities of its users. Many Web enterprises build their business model on monetizing the results of this data collection. The data collection may be known and announced, but it is sometimes also hidden. For instance, spying techniques such as canvas fingerprinting stealthily collect information when people visit web pages. A recent study disclosed that more than 5% of the studied sites used canvas fingerprinting. This constant monitoring is a threat to privacy and also a potential mine of information for attackers. Some tools, such as the TOR browser, help preserve anonymity on the Internet.

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security.” Chapter 9 explores this law in detail. The book is available, for instance, from Springer or Amazon.

What the Public Knows About Cybersecurity

In June 2016, the US-based Pew Research Center asked 1,055 US adults 13 questions to evaluate their knowledge of cybersecurity. The questions ranged from identifying a suitable password to recognizing a two-factor authentication system.

The readers of this blog would probably belong to the 1% of the sample (i.e., 11 individuals) who made no mistake. To be fair, non-US readers may fail the question related to the US credit score (“Americans can legally obtain one free credit report yearly from each of the three credit bureaus.”). That question is not directly related to cybersecurity and is purely US-specific; I must confess that before moving to the US, I would have had no idea of the right answer.

Not surprisingly, the success ratio was low. The average number of correct answers was 5.5 out of the 13 questions asked! Three-quarters of the interviewees correctly identified a strong password. About half of the individuals accurately spotted a simple phishing attack. Only one-third knew that https means the connection is encrypted.

There was a clear correlation between the level of education and the ratio of correct answers. Those with college degrees or higher had an average of seven right answers. The impact of age was less conclusive.

You may take the quiz.

Lessons:

The results are not a surprise. We do not see an increase in awareness. This study should be a reminder that we, the security practitioners, must educate the people around us. It is our civic responsibility and duty. This education is the first step towards a more secure connected world. Without it, the connected world will become a hell for most people.

The paper may be a useful addition to your library.

Reference:

Olmstead, Kenneth, and Aaron Smith. “What the Public Knows About Cybersecurity,” March 22, 2017. http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/03/17140820/PI_2017.03.22_Cybersecurity-Quiz_FINAL.pdf

Security assessment: white or black box?

White or black: which is the best choice? Is white box testing cheating? To me, the answers seem trivial. Nevertheless, recent discussions highlighted that they are not clear to everybody.

White box testing means that the security assessor has access to the entire documentation, the source code, and sometimes even an instrumented target of evaluation (TOE). Black box testing means that the security assessor has access only to the TOE and publicly available information. Thus, black box testing mimics the configuration encountered by hackers. Of course, hackers will use any possible means to collect related non-public information, for instance through social engineering.

Many people believe that black box evaluation is the solution of choice. As it is similar to the hackers’ configuration, it should provide a realistic assessment of the security of the TOE. Unfortunately, this assumption is wrong. The main issue is the asymmetry of the situation. On one side, the black box evaluation is performed by one evaluator or a small team of reviewers. On the other side, a legion of hackers probes the product. They may outnumber the evaluators by several orders of magnitude, and collectively they spend far more time looking for vulnerabilities. Mathematically, their chance of finding an exploitable vulnerability is higher than the likelihood that the evaluators find the same vulnerability.
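
To make the “mathematically” claim concrete, here is a simplified back-of-the-envelope model (my own illustration, not a figure from any study). Assume each independent tester finds a given vulnerability with probability p during the evaluation period. The probability that at least one of n independent testers finds it is

    P = 1 - (1 - p)^n

With p = 5%, a single evaluator succeeds with probability 5%, whereas a crowd of 100 independent attackers succeeds with probability above 99%.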

Do you evaluate your security to know whether an attacker may breach your system? According to Law 1, this will ineluctably happen. So, if your evaluation team has not found any problem, you may only conclude that there were no blatant vulnerabilities. You have no idea whether (or rather when) a hacker will find one.

Alternatively, do you evaluate the security to make your product more secure, i.e., to ship it with fewer vulnerabilities? In that case, you will give the evaluators the best tools to discover vulnerabilities. Therefore, white box testing will be your choice. For the same effort, white box testing finds more security issues than black box testing. Furthermore, the team will pinpoint the faulty part of the system, whereas black box testing discloses how the attacker may succeed but not where the actual flaw lies. For instance, assume that a white box assessment discovers a vulnerability through code review. The white box tester just has to explain the mistake and its consequences. For the same vulnerability, the black box tester has to explore attacks blindly until he or she stumbles upon it, and then has to write an exploit that demonstrates the feasibility of the attack. The required effort is far greater. Thus, in terms of Return On Investment (ROI), white box testing is far superior to black box testing: more discovered vulnerabilities for the money!

Fixing a vulnerability discovered by white box testing may also be cheaper. It is well known that the earlier a bug is found in the development cycle, the cheaper it is to fix. The same goes for vulnerabilities. As some white box security testing can occur before final integration (design review, code review, and so on), the fix comes earlier and is thus cheaper than with black box testing, which occurs only after final integration.

White box security testing is also in line with Kerckhoffs’s law. The selection of new cryptographic primitives relies on white box testing: the algorithm is published for cryptanalysts to break. ISO 27001 audits are likewise a kind of white box evaluation.

When a company claims that a third party audited the security of its product, in addition to the identity of this third party, it would be great to disclose whether the audit was white or black box testing. I would favor the former.

Thus, when possible, prefer white box security testing over black box testing. It is the wise, winning choice. Bug bounty programs are the only exception. They operate as black box testing. They should be a complement to white box testing, but never a replacement. Their ROI is high, as you pay only for success.


Malware Lets a Drone Steal Data by Watching a Computer’s Blinking LED

This news, initially published by Wired, made the headlines of many news outlets and blogs. Thus, I had to dive in and read the paper. This team has already disclosed several covert channels in the past, such as one based on temperature. The three researchers, GURI M., ZADOV B., and ELOVICI Y., have devised a new way to breach an air gap. They use the hard-disk-drive (HDD) activity LED as a covert channel. By reading from the hard drive in a given sequence, it is possible to control the LED without being a privileged user. The reading sequence modulates the emitted light, which carries the covert message.
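
As a minimal sketch of the transmitter side (my own illustration: the scratch-file path, the symbol duration, and the cache-bypass trick are assumptions, not details taken from the paper), an unprivileged process could key the LED on and off roughly as follows.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Toy on-off keying over the HDD activity LED (illustrative sketch only).
     * Bit 1: perform uncached reads for BIT_MS milliseconds (LED lit/blinking).
     * Bit 0: stay idle for BIT_MS milliseconds (LED off). */
    #define BIT_MS 100                                 /* hypothetical symbol duration */

    static void send_bit(int fd, int bit)
    {
        char buf[4096];
        for (int t = 0; t < BIT_MS; t += 10) {
            if (bit) {
                posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); /* drop the cache so the read hits the disk */
                lseek(fd, 0, SEEK_SET);
                (void)read(fd, buf, sizeof buf);
            }
            usleep(10 * 1000);                         /* 10 ms slices per symbol */
        }
    }

    int main(void)
    {
        const char *msg = "KEY";                       /* covert payload (example) */
        int fd = open("/var/tmp/led_probe.bin", O_RDONLY);  /* hypothetical scratch file */
        if (fd < 0)
            return 1;
        for (const char *p = msg; *p; p++)
            for (int i = 7; i >= 0; i--)
                send_bit(fd, (*p >> i) & 1);
        close(fd);
        return 0;
    }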

They experimented with several ordinary computers and concluded that they were able to reach about 4,000 bit/s by reading 4 KB sectors. Obviously, such throughput requires special equipment to record the blinking LED. Typical video cameras will not allow more than about 15 bit/s, depending on their frame rate (fps); do not forget the Nyquist-Shannon sampling theorem. Thus, the researchers used a photodiode or specialized amplified light detectors. Only this kind of equipment can guarantee a good detection rate.
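
As a rough sanity check on those numbers (my own back-of-the-envelope reasoning): with simple on-off keying, the receiver must sample the LED at least twice per transmitted symbol, so a camera capturing f frames per second can recover at most about

    R_max ≈ f / 2 bit/s,

i.e., roughly 15 bit/s for a standard 30 fps camera, whereas a photodiode sampled at several kilohertz or more would comfortably cover the 4,000 bit/s reported by the authors.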

Using HDD reads as a covert channel is not a new method. At Black Hat 2008, FILIOL E. et al. disclosed such an attack, but they used the clicking of the HDD, i.e., an acoustic channel, rather than the LED, i.e., an optical channel. Their talk is an interesting presentation of many covert channels.

The new attack is nice, and adding the drone component guarantees the buzz. Nevertheless, I believe it is not as dangerous as publicized. The first issue is the malware itself. The malware has to be the exclusive entity accessing the HDD during the transmission. Indeed, if any concurrent process uses the HDD, it will interfere with the emitted message. Therefore, the researchers recommend turning off disk caching (drop_caches on Linux). What is the likelihood that an air-gapped computer can run a piece of malware as the exclusive process without being noticed? One of the required characteristics of the malware is stealthiness, so it will probably not be the only process accessing the HDD.

The second issue is the synchronization with the spying eyes. The evil maid scenario (or evil drone) does not seem realistic. The malware should transmit only while the spy is present; otherwise it will be noticed (due to its exclusive access to the HDD). The spy cannot signal his presence to the malware, as the malware is air-gapped and thus cannot receive any incoming message. Therefore, either they have to agree in advance on some rendezvous, or the malware has to run repeatedly over a long period, which reduces its stealthiness. If the spying device is “fixed,” using cameras is not realistic due to their low bandwidth, which would require the malware to run for long periods. Nevertheless, the spy may install special equipment, record everything, and later analyze the recorded light, looking for the sequences emitted whenever the malware wakes up and plays. The spying device would then have to stealthily exfiltrate a far larger recording than the covert message itself, increasing the risk of detection.

The attack is possible but seems more complex than publicized. The paper’s own proposed countermeasures point to the defense:

Another interesting solution is to execute a background process that frequently invokes random read and write operations; that way, the signal generated by the malicious process will get mixed up with a random noise, limiting the attack’s effectiveness.

As already said, I believe that in most cases more than one process will be running and accessing the HDD. If you are paranoid, you can always hide the LED. A sketch of what such a noise-generating process could look like is given below.
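
Here is a minimal sketch of such a noise-generating process (my own illustration; the scratch-file path, the region size, and the timing are arbitrary assumptions, and only reads are shown even though the paper also mentions writes).

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Background noise process: keep the HDD (and thus its LED) busy with
     * random uncached reads so that any covert LED modulation is mixed with noise. */
    int main(void)
    {
        const long blocks = 16384;                     /* read within a 64 MiB scratch region */
        int fd = open("/var/tmp/noise.bin", O_RDONLY); /* hypothetical scratch file */
        if (fd < 0)
            return 1;
        char buf[4096];
        srand((unsigned)getpid());
        for (;;) {
            off_t off = (off_t)(rand() % blocks) * 4096;
            posix_fadvise(fd, off, sizeof buf, POSIX_FADV_DONTNEED); /* avoid cache hits */
            (void)pread(fd, buf, sizeof buf, off);
            usleep((useconds_t)(rand() % 50000));      /* up to 50 ms of jitter between reads */
        }
    }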

Reference:

Guri, Mordechai, Boris Zadov, and Yuval Elovici. “LED-It-GO Leaking a Lot of Data from Air-Gapped Computers via the (Small) Hard Drive LED,” February 2017. http://cyber.bgu.ac.il/advanced-cyber/system/files/LED-it-GO_0.pdf

Calmette, Vincent, Stephane Vallet, Eric Filiol, and Guy Le Bouter. “Passive and Active Leakage of Secret Data from Non Networked Computer.” Black Hat 2008, Las Vegas, NV, USA, 2008. https://www.researchgate.net/publication/228801499_Passive_and_Active_Leakage_of_Secret_Data_from_Non_Networked_Computer

SECITC 2017 Call For Paper

The 10th International Conference on Security for Information Technology and Communications (SECITC 2017, www.secitc.eu) will be held in Bucharest, Romania, on 8-9 June 2017. The conference venue will be the Bucharest University of Economic Studies, located in the heart of Bucharest.

——————————————————————————–

Conference Topics:

Topics include all aspects of information security, including but not limited to the following areas: Access control, Algorithmic tools for security and cryptography, All aspects of cryptography, Application security, Attacks and defences, Authentication biometry, Censorship and censorship-resistance, Cloud Security, Distributed systems security, Embedded systems security, Digital forensics, Hardware security, Information flow analysis, Internet of Things (IoT) Security, Intrusion detection, Language-based security, Malware, Mobile security and privacy, Network security, New exploits, Policy enforcements, Privacy and anonymity, Protocol security, Reverse-engineering and code obfuscation, Security architectures, Security aspects of alternative currencies, Side channel attacks, Surveillance and anti-surveillance, System security.

——————————————————————————-

Instructions for authors:

Submissions must not substantially duplicate work that any of the authors has published elsewhere or has submitted in parallel for consideration by any other journal, conference, or workshop with proceedings. The submission should begin with a title, followed by a short abstract and keywords. Submissions must be in PDF format and should have at most 12 pages excluding the bibliography and appendices, and at most 20 pages in total, using at least 11-point fonts and with reasonable margins. All submissions must be anonymous. The reviewers are not required to read the appendices; the paper should be intelligible without them. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that at least one of the authors will attend the conference and present their paper. The paper submission and review process is handled via the EasyChair platform: https://easychair.org/conferences/?conf=secitc2017

As the final accepted papers will be published by Springer in the LNCS series, it is recommended that submissions be prepared in LaTeX2e according to the instructions listed on Springer’s LNCS webpage: www.springer.com/lncs. These instructions are mandatory for the final papers.

In particular, Springer’s LNCS paper formatting requirements can be found at: http://www.springer.com/computer/lncs/lncs+authors?SGWID=0-40209-0-0-0

————————————————————————

Important Dates:

  • Paper submission deadline: 27 March 2017 at 23:59 UTC/GMT
  • Notification of decisions: 21 April 2017
  • Proceedings version deadline: 28 April 2017
  • Conference: 8-9 June 2017

————————————————————————

Program Chairs:

————————————————————————