Feb 09 2017

SECITC 2017 Call for Papers

The 10th International Conference on Security for Information Technology and Communications (SECITC 2017, www.secitc.eu) will be held in Bucharest, Romania, on 8-9 June 2017. The conference venue will be the Bucharest University of Economic Studies, located in the heart of Bucharest.


Conference Topics:

Topics include all aspects of information security, including but not limited to the following areas: Access control, Algorithmic tools for security and cryptography, All aspects of cryptography, Application security, Attacks and defences, Authentication biometry, Censorship and censorship-resistance, Cloud Security, Distributed systems security, Embedded systems security, Digital forensics, Hardware security, Information flow analysis, Internet of Things (IoT) Security, Intrusion detection, Language-based security, Malware, Mobile security and privacy, Network security, New exploits, Policy enforcements, Privacy and anonymity, Protocol security, Reverse-engineering and code obfuscation, Security architectures, Security aspects of alternative currencies, Side channel attacks, Surveillance and anti-surveillance, System security.


Instructions for authors:

Submissions must not substantially duplicate work that any of the authors has published elsewhere or has submitted in parallel for consideration by any other journal, conference, or workshop with proceedings. The submission should begin with a title followed by a short abstract and keywords. Submissions must be in PDF format, with at most 12 pages excluding the bibliography and appendices and at most 20 pages in total, using at least 11-point fonts and reasonable margins. All submissions must be anonymous. The reviewers are not required to read the appendices; the paper should be intelligible without them. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that at least one author will attend the conference and present the paper. Paper submission and the review process are handled via the EasyChair platform. To submit a paper, use the following link: https://easychair.org/conferences/?conf=secitc2017

As the accepted papers will be published by Springer in LNCS, it is recommended that submissions be prepared in LaTeX2e according to the instructions listed on Springer’s LNCS webpage: www.springer.com/lncs. These instructions are mandatory for the final papers.

In particular, Springer’s LNCS paper formatting requirements can be found at: http://www.springer.com/computer/lncs/lncs+authors?SGWID=0-40209-0-0-0


Important Dates:

  • Paper submission deadline: 27 March 2017 at 23:59 UTC/GMT
  • Notification of decisions: 21 April 2017
  • Proceedings version deadline: 28 April 2017
  • Conference: 8-9 June 2017


Program Chairs:


Jan 30 2017

Neural Networks learning security

In October 2016, Martin Abadi and David Andersen, two Google researchers, published a paper that made newspaper headlines. The title was “Learning to Protect Communications with Adversarial Neural Cryptography.” The newspapers announced that two neural networks had autonomously learned how to protect their communication. This claim was intriguing.

As usual, many newspapers simplified the outcome of the publication. Indeed, the experiment operated under some detailed limitations that the newspapers rarely highlighted.

The first limitation is the adversarial model. Usually, in security, we expect Eve not to be able to understand the communication between Alice and Bob. The usual limitations for Eve are that she is either passive, i.e., she can only listen to the communication, or active, i.e., she can tamper with the exchanged data. In this case, Eve is passive, and Eve is a neural network trained by the experimenters. Eve is not a human or a customized piece of software. In other words, she has limited capacities.

The second limitation is the definition of success and secrecy:

  • For the training of Alice and Bob to be successful, the average error rate for the reconstruction of a protected 16-bit message must be at most 0.05 bits. In cryptography, the reconstruction error must be zero. We cannot accept any error in the decryption process.
  • The output of the neural network, i.e., the protected message, does not have to look random. Usually, randomness of the output is an expected feature of any cryptosystem.
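To make the first criterion concrete, the success metric is an average bit-reconstruction error over many trial messages. A minimal sketch of such a metric follows; the function name and the bit-vector encoding are illustrative, not the authors' code.

```python
import random

def bit_error(plaintext, reconstruction):
    """Number of differing bits between two equal-length bit vectors."""
    return sum(p != r for p, r in zip(plaintext, reconstruction))

# A hypothetical 16-bit message and a reconstruction with one flipped bit.
msg = [random.randint(0, 1) for _ in range(16)]
recon = list(msg)
recon[3] ^= 1  # flip one bit

print(bit_error(msg, recon))  # -> 1 bit wrong out of 16
```

The paper's success criterion tolerates an average of 0.05 such errors per 16-bit message; classical decryption demands exactly zero.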

Under these working assumptions, Alice and Bob succeeded in hiding their communication from Eve after 20,000 iterations of training. Unfortunately, the paper does not explain how the neural network succeeded, nor what types of mathematical methods it implemented (although the researchers modeled a symmetric-like cryptosystem, i.e., Alice and Bob shared a common key). Nor was there an attempt to protect a textual message and challenge cryptanalysts to break it.
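The adversarial training objective described in the paper can be sketched as follows. This is my reconstruction of the high-level description, not the authors' code: Eve trains to minimize her reconstruction error, while Alice and Bob train to minimize Bob's error plus a term pushing Eve toward chance-level guessing.

```python
def reconstruction_error(plaintext, guess):
    """Per-message error: sum of absolute per-bit differences."""
    return sum(abs(p - g) for p, g in zip(plaintext, guess))

def eve_loss(plaintext, eve_guess):
    # Eve simply trains to reconstruct the plaintext.
    return reconstruction_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits=16):
    # Alice and Bob want Bob to be right and Eve to be at chance level
    # (n_bits / 2 bits wrong). An Eve who is reliably *wrong* would also
    # leak information, hence the squared distance to chance.
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_dist = (n_bits / 2 - reconstruction_error(plaintext, eve_guess)) ** 2
    return bob_err + eve_dist / (n_bits / 2) ** 2

# Ideal outcome: Bob perfect, Eve at chance -> loss 0 for Alice and Bob.
msg = [1] * 16
bob = list(msg)              # perfect reconstruction
eve = [1] * 8 + [0] * 8      # 8 of 16 bits wrong: chance level
print(alice_bob_loss(msg, bob, eve))  # -> 0.0
print(eve_loss(msg, eve))             # -> 8
```

In the experiment, these losses drive gradient updates of the three networks in alternation, with the shared key given only to Alice and Bob.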

Thus, it is interesting theoretical work in the field of machine learning but most probably not useful in the field of cryptography. Moreover, with the current trend in cryptography toward requiring formal proofs of security, any neural-network-based system would fail this formal proof step.


Abadi, Martín, and David G. Andersen. “Learning to Protect Communications with Adversarial Neural Cryptography.” arXiv:1610.06918 (October 21, 2016). http://arxiv.org/abs/1610.06918





Dec 19 2016

Artificial Intelligence vs. Genetic Algorithm

AI and Deep Learning are hot topics. Their progress is impressive (see AlphaGo). Nevertheless, they open new security challenges. I am not speaking here about the famous singularity point, but rather about basic security issues. This topic caught my interest, so expect to hear from me on the subject.

Can AI be fooled? For instance, can recognition software be fooled into recognizing things other than expected? The answer is yes, and some studies seem to indicate that, at least in some fields, it may be relatively easy. What does the following image represent?

You have most probably recognized a penguin. So did a well-trained deep neural network (DNN), as we might expect. Now, what do you think the following image represents?

This time, you most probably did not recognize a penguin. Yet the same DNN decided that it was one. Of course, this image is not a random image. A. Nguyen, J. Yosinski, and J. Clune studied how to fool such a DNN in the paper “Deep Neural Networks Are Easily Fooled.” They used a genetic algorithm (or evolutionary algorithm) to create such fooling images. Genetic algorithms try to mimic evolution under the assumption that only the fittest elements survive (so-called natural selection). These algorithms start from an initial population. The population is evaluated through a fitness function (here, the DNN's recognition score for the image) to select the fittest samples. Then, the selected samples mutate and cross over. The resulting offspring go through the same selection process. After several generations (usually on the order of thousands), the result is an optimized solution. The researchers successfully applied this technique to fool the DNN.
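The evolutionary loop described above can be sketched in a few lines. This is a toy illustration only: the paper evolves images scored by the DNN's confidence, whereas here the genome is a bit string and the fitness is a simple bit count.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=200,
           mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Crossover and mutation refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits, standing in for the DNN's confidence score.
best = evolve(fitness=sum)
print(sum(best))
```

In the paper, the genome encodes an image (directly, or indirectly via a compositional pattern-producing network) and the fitness is the DNN's confidence for a chosen target class.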

They also attempted to train the DNN with these fooling images as decoys, so that it would learn to reject them. Then, they tried the same process with new generations. The DNN newly trained with decoys did not perform better at rejecting fooling images. They also succeeded with totally random noisy images, although these images are aesthetically less satisfactory.

Interestingly, the characteristics of the fooling images, such as color, patterns, or repetition, may give some hints about what the DNN actually uses as its main differentiating features.

This experiment highlights the risks of AI and DNNs. They operate as black boxes. Currently, practitioners have no tools to verify whether an AI or DNN will operate properly under adverse conditions. Usually, with signal processing, researchers can calculate a theoretical false positive rate. To the best of my knowledge, this is no longer the case with DNNs. Unfortunately, false positive rates are an important design factor in security and in pattern recognition related to security or safety matters. With AI and DNNs, we are losing predictability of behavior. This limitation may become an issue soon if we cannot expect them to react properly in non-nominal conditions. Rule 1.1: Always Expect the Attackers to Push the Limits.
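For contrast, the false positive rate that signal-processing detectors can often bound analytically is a simple ratio. The problem with DNNs is not computing it on a test set, but guaranteeing it against adversarial inputs such as fooling images. The counts below are made up for illustration.

```python
# Hypothetical detector evaluated on 1,000 benign samples.
false_positives = 3
true_negatives = 997
fpr = false_positives / (false_positives + true_negatives)
print(fpr)  # -> 0.003
```

An empirical rate like this says nothing about inputs crafted by a genetic algorithm, which deliberately samples from regions the test set never covers.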

A very interesting paper to read.

Nguyen, A., J. Yosinski, and J. Clune. “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images.” In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 427–36, 2015. doi:10.1109/CVPR.2015.7298640. Available at https://arxiv.org/pdf/1412.1897.pdf



Dec 05 2016

Speaking at next CDSA summit


Nov 20 2016

Law 7 – You Are the Weakest Link

This post is the seventh in a series of ten. The previous post explored the sixth law: Security is no stronger than its weakest link.  Although often neglected, the seventh law is fundamental.  It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons.  Often, they do not understand security or have a distorted perception of it.  For instance, security is often seen as an obstacle.  Therefore, users will circumvent security when it obstructs the fulfillment of their tasks and will not apply security policies and procedures.  They also do not believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions.  How many people ignore the security warnings of their browser?  How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)?  Employees put their company at risk through bad decisions.

Humans are the weakest link because they have intrinsic limitations.  Human memory is often feeble; thus we end up with weak passwords, or complex passwords written on a Post-it.  Humans do not handle complexity well.  Unfortunately, security is too complex.

Humans are the weakest link because they can be easily deceived.  Social engineers use social interaction to influence people and convince them to perform actions that they should not perform, or to share information that they are not supposed to disclose.  For instance, phishing is an efficient contamination vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end users; as end users are not necessarily able to make rational decisions on security issues, the designer should make the decisions when possible. Whenever users do have to decide, the consequences of their decision should be made clear to guide them.
  • Define secure defaults; the default value should always be set to the highest or, at least, an acceptable security level. Security, not user-friendliness, should drive the default value.
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is only possible by educating employees about this kind of attack.  Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits.  Therefore, it is imperative that security personnel be aware of the latest techniques.  Operational security staff should have a significant part of their work time dedicated to continuous training.
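The "secure defaults" point can be illustrated with a hypothetical configuration object: the secure choice is what you get when you specify nothing, and weakening it requires an explicit, visible opt-out. All settings and names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TlsConfig:
    # Hypothetical settings; the secure value is the default for each.
    verify_certificates: bool = True
    min_protocol: str = "TLSv1.2"
    allow_legacy_ciphers: bool = False

cfg = TlsConfig()  # the no-argument instance is the secure one
print(cfg.verify_certificates, cfg.allow_legacy_ciphers)  # -> True False
```

A user who writes `TlsConfig(verify_certificates=False)` has made a deliberate, auditable decision rather than inheriting an insecure default.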

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month.   Chapter 8 explores this law in detail. The book will be available, for instance, from Springer or Amazon.

Sep 29 2016

Judgment under uncertainty

We often have to make decisions without having all the information we would like. We make these decisions based on the perceived likelihood of events. Obviously, evaluating uncertain events is an incredibly difficult task. Unfortunately, it is mandatory for risk management and data analysis. This judgment is subjective, although we may believe it is rational.

In 1974, Amos Tversky and Daniel Kahneman published the paper “Judgment under Uncertainty: Heuristics and Biases.” They explored the different biases that taint our decisions and that we are most probably not aware of.

For instance,

  • Insensitivity to sample size; the size of the sample impacts its representativeness.
  • Misconception of chance; many people believe that the probability of the dice sequence 111111 is far lower than that of the sequence 163125.
  • Biases of imaginability; events that are easier to imagine are judged more likely.
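The dice example can be checked with one line of arithmetic: every specific sequence of six fair-die rolls is equally likely.

```python
from fractions import Fraction

p = Fraction(1, 6) ** 6  # probability of any specific 6-roll sequence
print(p)  # -> 1/46656, the same for 111111 and 163125
```

The felt difference between the two sequences is exactly the bias.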

The paper lists ten such biases. Being aware of them is worthwhile. The article reminds me a lot of the book “Predictably Irrational.”

Nice to read for security guys and data analysts.


Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” In Utility, Probability, and Human Decision Making, edited by Dirk Wendt and Charles Vlek, 141–62. Theory and Decision Library 11. Springer Netherlands, 1975. http://www.cob.unt.edu/itds/faculty/evangelopoulos/busi6220/Kahneman1974_Science_JudgementUnderUncertainty.pdf




Sep 25 2016

Law 6 – Security Is No Stronger Than its Weakest Link

This is the sixth post in a series of ten. The previous post explored Law 5: Si vis pacem, para bellum. The sixth law is one of the least controversial ones. Security is the result of many elements and principals that interact to build the appropriate defense. As a consequence, security cannot be stronger than its weakest element. Once more, the Chinese general Sun Tzu explained it perfectly.

So in war, the way is to avoid what is strong and to strike at what is weak.

A smart attacker analyzes the full system, looks for the weakest points, and focuses on them. For instance, in 2012, about 80% of cyber incidents involving a data breach were opportunistic. Furthermore, they did not require proficient hacking skills. The targets were not properly protected, and attackers went after these easy targets.

Another example of attacking the weakest link is the use of side-channel attacks. Side-channel attacks are devastating, non-intrusive attacks that reveal secret information. The information leaks through an unintentional channel in a given physical implementation of an algorithm. These channels are the result of physical effects of the actual implementation. They may, for instance, be timing characteristics, power consumption, generated audio noise, or electromagnetic radiation.
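A timing side channel can be illustrated in a few lines. An early-exit comparison of a secret value runs longer the more leading bytes the guess gets right, so repeated measurements reveal the secret byte by byte. The sleep below stands in for measurable per-byte work; this is a didactic sketch, not an attack tool.

```python
import hmac
import time

def naive_compare(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: runtime grows with the matching prefix."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
        time.sleep(0.001)  # stand-in for measurable per-byte work
    return True

def safe_compare(secret: bytes, guess: bytes) -> bool:
    """Constant-time comparison from the standard library closes the channel."""
    return hmac.compare_digest(secret, guess)

secret = b"s3cret"
t0 = time.perf_counter()
naive_compare(secret, b"x" * 6)        # wrong first byte: returns fast
t_wrong_start = time.perf_counter() - t0
t0 = time.perf_counter()
naive_compare(secret, b"s3crex")       # wrong last byte: returns slowly
t_wrong_end = time.perf_counter() - t0
print(t_wrong_end > t_wrong_start)  # -> True: the runtime leaks the prefix
print(safe_compare(secret, b"s3cret"))  # -> True
```

Real side-channel attacks exploit far subtler signals (power, electromagnetic emissions, cache timing), but the principle is the same: the implementation, not the algorithm, leaks.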

As a general rule, the defender has to know its defense mechanisms. When trying to strengthen the defense, the designer must first focus on its weakest elements. For both the defender and the attacker, the difficulty is identifying these weakest elements. They may take many forms: humans (see Law 7), design errors, bad implementations, limitations… White-box testing is a good way to identify some weak points.

Know the hardware limitations; in this digital world, most of the technical effort goes into developing software. The focus is often on protecting the executed piece of code. Nevertheless, the code executes on hardware. Hardware introduces constraints that are often unknown to contemporary software developers. Ignoring these constraints may create interesting attack surfaces that a seasoned attacker will, of course, exploit. A typical example is the deletion of data in memory. Hardware memories have persistence even when erased or powered off. For instance, some data may remain in DRAM for several minutes after power-off. Or memories may behave unexpectedly when used in extreme conditions. The RowHammer attack is a perfect illustration.

Patch, patch, patch; security is aging. New vulnerabilities are disclosed every week. As a result, manufacturers and publishers regularly issue patches. These patches are useless if they are not applied. Unfortunately, too many deployed systems are not properly patched. Smart attackers look first for unpatched targets.

Always protect your keys; keys are probably the most precious security assets of any secure digital system. Their protection should never be the weakest link. Ideally, these protections should be the strongest link, as they defend the ultimate treasure. Keys need protection not only at rest but also while in use. A software implementation of cryptographic algorithms has to be carefully crafted, especially when operating in a hostile environment. In some contexts, the hardware implementation must resist side-channel attacks. Secure implementation of cryptography is expert work.
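As a small illustration of protecting keys at rest, a key-encryption key can be derived on demand from a passphrase with a deliberately slow key-derivation function from the Python standard library, instead of storing the raw key on disk. The passphrase, salt value, and iteration count below are illustrative only.

```python
import hashlib

# In practice, the salt is random per key and stored alongside the wrapped key.
salt = bytes.fromhex("00112233445566778899aabbccddeeff")
kek = hashlib.pbkdf2_hmac("sha256", b"a strong passphrase", salt, 600_000)
print(len(kek))  # -> 32 bytes, usable as an AES-256 key-encryption key
```

The high iteration count makes brute-forcing the passphrase expensive, and the raw key material only ever exists transiently in memory, never at rest.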
