In October 2016, Martin Abadi and David Andersen, two Google researchers, published a paper that made newspaper headlines. Its title was “Learning to Protect Communications with Adversarial Neural Cryptography.” The newspapers announced that two neural networks had autonomously learned how to protect their communication: an intriguing claim.
As usual, many newspapers oversimplified the outcome of the publication. In fact, the experiment operated under significant limitations that the press rarely highlighted.
The first limitation is the adversarial model. In security, we expect Eve to be unable to understand the communication between Alice and Bob. The usual assumptions about Eve are that she is either passive, i.e., she can only listen to the communication, or active, i.e., she can tamper with the exchanged data. In this case, Eve is passive, and she is a neural network trained by the experimenters. Eve is neither a human cryptanalyst nor a customized piece of software. In other words, she has limited capabilities.
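To make this threat model concrete, here is a minimal sketch of who sees what in the paper's setup. The function bodies are hand-written placeholders (a one-time-pad analogue for ±1-encoded bits), not the learned networks from the paper:

```python
import numpy as np

N = 16  # bits per message and per key, encoded as +/-1 floats

def alice(plaintext, key):
    # Stand-in for Alice's learned network. For +/-1-encoded bits,
    # elementwise multiplication by the key is XOR: a one-time pad.
    return plaintext * key

def bob(ciphertext, key):
    # Stand-in for Bob's learned network: multiplying by the key again
    # removes the pad, since key * key == 1 elementwise.
    return ciphertext * key

def eve(ciphertext):
    # Eve is passive: she observes only the ciphertext, never the key,
    # so random guessing is her baseline.
    return np.random.choice([-1.0, 1.0], size=ciphertext.shape)

p = np.random.choice([-1.0, 1.0], size=N)  # random plaintext
k = np.random.choice([-1.0, 1.0], size=N)  # shared secret key
c = alice(p, k)                            # everyone can observe c
assert np.array_equal(bob(c, k), p)        # Bob, holding k, recovers p exactly
```

The key point is the asymmetry of information: Bob receives both the ciphertext and the key, while the passive Eve receives only the ciphertext.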
The second limitation is the definition of success and secrecy:
- For the training of Alice and Bob to be considered successful, there must be an average error rate of 0.05 bits when reconstructing a protected 16-bit message (see the sketch after this list). In cryptography, the reconstruction error must be zero; we cannot accept any error in the decryption process.
- The protected message output by the neural network is not required to look random, whereas randomness of the output is usually an expected feature of any cryptosystem.
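As a hedged illustration of the first criterion, the snippet below computes the average number of wrongly reconstructed bits per 16-bit message; the simulated residual error rate is an arbitrary assumption, chosen only to land below the 0.05-bit threshold:

```python
import numpy as np

def reconstruction_error(plaintext, decrypted):
    # Average number of wrong bits per message: for +/-1 bits, each flip
    # contributes |difference| / 2 == 1 to the count.
    return np.mean(np.sum(np.abs(plaintext - np.sign(decrypted)) / 2.0, axis=1))

plaintext = np.random.choice([-1.0, 1.0], size=(1000, 16))
decrypted = plaintext.copy()
flips = np.random.rand(1000, 16) < 0.002  # hypothetical residual bit-flip rate
decrypted[flips] *= -1.0

err = reconstruction_error(plaintext, decrypted)
print(f"average bit errors per message: {err:.3f} (success threshold: 0.05)")
```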
Under these working assumptions, Alice and Bob succeeded in hiding their communication from Eve after 20,000 training iterations. Unfortunately, the paper does not explain how the neural networks succeeded or what kinds of mathematical methods they implemented (although the researchers modeled a symmetric-like cryptosystem, i.e., Alice and Bob shared a common key). Nor was there an attempt to protect a textual message and challenge cryptanalysts to break it.
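For readers who want a feel for the training dynamics, here is a minimal sketch of the adversarial scheme, assuming simple fully connected networks in place of the paper's architecture; the hyperparameters, batch size, and loss weighting are illustrative assumptions, not the authors' exact values:

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key, encoded as +/-1 floats

def make_net(in_bits, out_bits):
    # Simple MLP stand-in for the networks described in the paper.
    return nn.Sequential(nn.Linear(in_bits, 4 * N), nn.Tanh(),
                         nn.Linear(4 * N, out_bits), nn.Tanh())

alice = make_net(2 * N, N)   # (plaintext, key) -> ciphertext
bob   = make_net(2 * N, N)   # (ciphertext, key) -> plaintext guess
eve   = make_net(N, N)       # ciphertext only  -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e  = torch.optim.Adam(eve.parameters())
l1 = nn.L1Loss()

def sample_bits(batch, n):
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(20000):          # the paper reports 20,000 iterations
    p = sample_bits(256, N)        # random plaintexts
    k = sample_bits(256, N)        # random shared keys

    # Eve's turn: minimize her own reconstruction error against a frozen Alice.
    c = alice(torch.cat([p, k], dim=1)).detach()
    opt_e.zero_grad()
    l1(eve(c), p).backward()
    opt_e.step()

    # Alice and Bob's turn: Bob should decrypt accurately while Eve is
    # pushed toward chance level (mean absolute error of 1.0 on +/-1 bits).
    opt_ab.zero_grad()
    c = alice(torch.cat([p, k], dim=1))
    bob_loss = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_gap = (1.0 - l1(eve(c), p)) ** 2
    (bob_loss + eve_gap).backward()
    opt_ab.step()
```

Note the design choice in the last loss term: Alice and Bob do not simply maximize Eve's error but push it toward the chance level of random guessing, since an adversary that is reliably wrong would leak information too.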
Thus, it is an interesting theoretical work in the field of machine learning, but most probably not useful in the field of cryptography. Moreover, given the current trend in cryptography toward requiring formal proofs of security, any neural-network-based system would fail this formal proof step.
Abadi, Martín, and David G. Andersen. “Learning to Protect Communications with Adversarial Neural Cryptography.” arXiv preprint arXiv:1610.06918 (October 21, 2016). http://arxiv.org/abs/1610.06918