Category Archive: AI

News about security and artificial intelligence, a hot new topic.

Jan 30 2017

Neural Networks learning security

In October 2016, Martín Abadi and David G. Andersen, two Google researchers, published a paper that made headlines. Its title was "Learning to Protect Communications with Adversarial Neural Cryptography." The press announced that two neural networks had autonomously learned how to protect their communication. This claim was intriguing.

As usual, many newspapers simplified the outcome of the publication. Indeed, the experiment operated under several limitations that the newspapers rarely highlighted.

The first limitation is the adversarial model. In security, we usually expect Eve to be unable to understand the communication between Alice and Bob. Eve is typically either passive, i.e., she can only listen to the communication, or active, i.e., she can tamper with the exchanged data. In this experiment, Eve is passive, and Eve is a neural network trained by the experimenters. Eve is not a human or a customized piece of software. In other words, she has limited capabilities.

The second limitation is the definition of success and secrecy:

  • For the training of Alice and Bob to count as successful, an average reconstruction error of 0.05 bits on a 16-bit protected message is tolerated. In cryptography, the reconstruction error must be zero; we cannot accept any error in the decryption process.
  • The protected message output by the neural network does not look random, whereas randomness of the output is usually an expected feature of any cryptosystem.
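To make the first criterion concrete, here is a minimal sketch (my own illustration, not code from the paper) of the average bit reconstruction error on 16-bit messages:

```python
import random

def avg_bit_error(originals, reconstructions):
    """Average number of wrong bits per message pair.

    Each message is a list of bits (0/1). The paper's success
    criterion tolerates an average error of 0.05 bits on 16-bit
    messages; a real cipher requires exactly 0.0.
    """
    total = 0
    for msg, rec in zip(originals, reconstructions):
        total += sum(b1 != b2 for b1, b2 in zip(msg, rec))
    return total / len(originals)

# A perfect decryption has zero reconstruction error.
msgs = [[random.randint(0, 1) for _ in range(16)] for _ in range(100)]
print(avg_bit_error(msgs, msgs))  # -> 0.0

# Flipping one bit in one of the 100 messages gives 0.01 bits on average.
noisy = [list(m) for m in msgs]
noisy[0][0] ^= 1
print(avg_bit_error(msgs, noisy))  # -> 0.01
```

Even the tiny residual error tolerated by the paper would be unacceptable for a real cipher, where a single flipped bit can corrupt an entire decrypted message.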

Under these working assumptions, Alice and Bob succeeded in hiding their communication from Eve after 20,000 training iterations. Unfortunately, the paper does not explain how the neural network succeeded, nor what kind of mathematical method it implemented (although the researchers modeled a symmetric-like cryptosystem, i.e., Alice and Bob shared a common key). Nor was there an attempt to protect a textual message and challenge cryptanalysts to break it.

Thus, it is an interesting theoretical work in the field of machine learning, but most probably not useful in the field of cryptography. Moreover, given the current trend in cryptography toward requiring formal proofs of security, any neural-network-based system would fail this formal proof step.


Abadi, Martín, and David G. Andersen. "Learning to Protect Communications with Adversarial Neural Cryptography." arXiv:1610.06918 (October 21, 2016).





Dec 19 2016

Artificial Intelligence vs. Genetic Algorithm

AI and deep learning are hot topics. Their progress is impressive (see AlphaGo). Nevertheless, they open new security challenges. I am not speaking here about the famous singularity point, but rather about basic security issues. This topic caught my interest, so expect to hear from me again on the subject.

Can AI be fooled? For instance, can recognition software be fooled into recognizing things other than expected? The answer is yes, and some studies suggest that, at least in some fields, it may be relatively easy. What does the following image represent?

You most probably recognized a penguin. So did a well-trained deep neural network (DNN), as we might expect. Now, what do you think the following image represents?

This time, you most likely did not recognize a penguin. Yet the same DNN decided that it was a penguin. Of course, this image is not a random image. A. Nguyen, J. Yosinski, and J. Clune studied how to fool such a DNN in their paper "Deep Neural Networks Are Easily Fooled." They used a genetic algorithm (or evolutionary algorithm) to create such fooling images. Genetic algorithms try to mimic evolution under the assumption that only the fittest elements survive (so-called natural selection). These algorithms start from an initial population. The population is evaluated through a fitness function (here, the recognition score of the image) to select the fittest samples. Then, the selected samples are mutated and crossed over. The resulting offspring go through the same selection process again. After several generations (usually on the order of thousands), the result is an optimized solution. The researchers applied this technique successfully to fool the DNN.
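The selection, crossover, and mutation loop described above can be sketched in a few lines of Python. This is a toy illustration, not the authors' code: the bit strings stand in for images, and the fitness function stands in for the DNN's confidence score.

```python
import random

def evolve(fitness, length=16, pop_size=40, generations=200,
           mutation_rate=0.05):
    """Minimal genetic algorithm over bit strings.

    `fitness` maps an individual to a score to maximize; in the
    paper this role was played by the DNN's confidence that the
    image belongs to a target class.
    """
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Crossover and mutation produce the next generation.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]              # random bit flips
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: "confidence" is simply the number of 1 bits.
best = evolve(fitness=sum)
print(best)  # typically converges to a string of all ones
```

With a DNN as the fitness function, the same loop converges toward images that maximize the network's confidence, whether or not a human would recognize anything in them.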

They also attempted to train the DNN with these fooling images as decoys, so that it would reject them. Then, they ran the same process with new generations. The DNN newly trained with decoys did not perform better at rejecting fooling images. The researchers also succeeded with totally random noisy images, although these images are aesthetically less satisfying.

Interestingly, the characteristics of the fooling images, such as color, patterns, or repetition, may give some hints about which features the DNN actually uses as its main discriminators.

This experiment highlights the risks of AI and DNNs. They operate as black boxes. Currently, practitioners have no tools to verify whether an AI or DNN will operate properly under adverse conditions. With classical signal processing, researchers can usually calculate a theoretical false positive rate. To the best of my knowledge, this is no longer the case with DNNs. Unfortunately, the false positive rate is an important design factor in security and in pattern recognition related to security or safety matters. With AI and DNNs, we are losing predictability of behavior. This limitation may soon become an issue if we cannot expect these systems to react properly in non-nominal conditions. Rule 1.1: Always Expect the Attackers to Push the Limits.
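For comparison, with a classical detector the false positive rate is a simple, auditable quantity. A toy example (my own, not from the paper):

```python
def false_positive_rate(false_positives, true_negatives):
    """FPR = FP / (FP + TN): the share of benign samples flagged as attacks."""
    return false_positives / (false_positives + true_negatives)

# Toy numbers: 3 benign samples wrongly flagged out of 1000 benign samples.
print(false_positive_rate(3, 997))  # -> 0.003
```

For a classical detector with a known decision threshold and noise model, this rate can be bounded analytically before deployment; for a DNN classifier, no comparable closed-form guarantee is available.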

A very interesting paper to read.

Nguyen, A., J. Yosinski, and J. Clune. "Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 427–36, 2015. doi:10.1109/CVPR.2015.7298640.



Feb 25 2016

A Milestone in AI: a computer won against a Go champion

I usually blog only about security or sci-fi. Nevertheless, I will blog about an entirely unrelated topic, as I believe we have reached an important milestone. Artificial intelligence (AI) has been around for many decades, with varying success. For several years, AI, through machine learning, has made tremendous progress, with some fascinating products and services already deployed. For instance, Google Photos has leapfrogged the exploitation of image databases. It can automatically detect pictures featuring the same person across decades! Some friends told me that it even differentiated twins.

Nevertheless, I always believed that the game of go was out of AI's reach. Go is an ancient game, several millennia old, with extremely simple rules (indeed, only three). It is played on a goban of 19 x 19 intersections. Each player places stones (black or white) to build the largest territory. The game is extremely complex, not only because of the number of possible positions (said to be greater than the number of atoms in the universe) but also because of the countless possible strategies. It exceeds the complexity of chess by several orders of magnitude. A great game!

On January 27, 2016, Google proved my belief wrong. For the first time, their software, AlphaGo, won five games to zero against a professional go player. AlphaGo was first trained on 30 million moves. Then, it improved through self-play, playing against itself thousands of times. The result is software at the level of a professional go player. AI has evidently passed a milestone.

Machine learning will steadily make its way into security practices. Training software on logs to detect incidents will be a good starting point.