Watermarking Deep Neural Networks

Recently, an IBM team presented at ASIA CCS’18 a framework for embedding watermarks in Deep Neural Network (DNN) models. Similarly to what we do in the multimedia space, if a competitor uses or modifies a watermarked model, it should be possible to extract the watermark from the model to prove ownership.

In a nutshell, the DNN model is trained with the normal set of data to produce the results that everybody would expect, plus an additional set of data (the watermarks) that produces an “unexpected” result known solely to the owner. To prove ownership, the owner feeds the watermarks to the allegedly “stolen” model and verifies whether the observed result is the expected one.
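
To make the idea concrete, here is a minimal sketch of the embed-then-verify flow, assuming a PyTorch-style classifier; the datasets, training parameters and the verification threshold are my own illustrative choices, not the paper’s exact setup.

```python
# Minimal sketch of trigger-set watermarking (illustrative, not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset

def embed_watermark(model, normal_set, watermark_set, epochs=5, lr=0.01):
    """Train on the union of the normal data and the owner's watermark (trigger) set."""
    loader = DataLoader(ConcatDataset([normal_set, watermark_set]),
                        batch_size=64, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def verify_ownership(model, watermark_set, threshold=0.9):
    """Query the suspect model with the trigger inputs and check how often it
    returns the owner's pre-chosen labels (threshold is an assumed value)."""
    loader = DataLoader(watermark_set, batch_size=64)
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total >= threshold
```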

The authors explored three techniques in the field of image recognition (a toy sketch of the three trigger types follows the list):

  • Meaningful content: the watermarks are modified images, for instance with a consistent visible mark added. The training enforces that the presence of such a visible mark results in a given “unrelated” category.
  • Unrelated content: the watermarks are images that are totally unrelated to the task of the model; normally they should be rejected, but the training enforces a known output for them.
  • Noisy content: the watermarks are images that embed a consistently shaped noise pattern and produce a given known answer.
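
As announced above, here is a toy sketch of what the three trigger types could look like on small grayscale images; the mark position, noise scale and target label are arbitrary assumptions on my side, not the authors’ parameters.

```python
# Toy illustration of the three trigger types (my own sketch, not the paper's code).
import numpy as np

TARGET_LABEL = 7  # owner-chosen "unexpected" class for all triggers (assumed)

def meaningful_content(image, mark_value=1.0):
    """Stamp a consistent visible mark (here a small bright square) on a real image."""
    marked = image.copy()
    marked[0:6, 0:6] = mark_value
    return marked, TARGET_LABEL

def unrelated_content(shape=(28, 28), seed=0):
    """Use an image unrelated to the task; a fixed random pattern stands in here
    for e.g. a handwritten digit fed to a face-recognition model."""
    rng = np.random.default_rng(seed)
    return rng.random(shape), TARGET_LABEL

def noisy_content(image, noise_seed=0, scale=0.3):
    """Add a fixed, owner-known noise pattern to a real image."""
    rng = np.random.default_rng(noise_seed)
    noisy = np.clip(image + scale * rng.standard_normal(image.shape), 0.0, 1.0)
    return noisy, TARGET_LABEL
```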

The approach is interesting. Some remarks inherited from the multimedia space:

  • The method of creating the watermarks must remain secret. If the attacker guesses the method, for instance that the system uses a given logo, then the attacker may be able to wash out the watermark. The attacker may untrain the model by further training the watermarked model with self-generated watermarks that output an answer different from the one expected by the original owner. As the attacker has uncontrolled, unlimited access to the detector, the attacker can fine-tune the model until the detection rate is too low (see the sketch after this list).
  • The framework is most probably too expensive to be used for traitor tracing at large scale. Nevertheless, I am not sure whether traitor tracing at large scale makes any sense.
  • The method is most probably robust against an oracle attack.
  • Some of the described methods were related to image recognition but could be ported to other tasks.
  • It is possible to embed several successive orthogonal watermarks.
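
To illustrate the washing remark above, here is a minimal sketch of the fine-tuning attack, assuming the attacker has correctly guessed the trigger-generation method; the function names and parameters are hypothetical.

```python
# Sketch of a watermark "washing" attack by fine-tuning (hypothetical, illustrative).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def wash_watermark(model, guessed_trigger_set, epochs=3, lr=0.001):
    """Fine-tune the stolen model on attacker-generated triggers relabelled to
    classes other than the owner's expected one, eroding the embedded watermark."""
    loader = DataLoader(guessed_trigger_set, batch_size=64, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, wrong_label in loader:  # labels deliberately differ from the owner's
            opt.zero_grad()
            loss_fn(model(x), wrong_label).backward()
            opt.step()
    return model
```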

An interesting paper to read, as it is probably the beginning of a new field. ML/AI security will be key in the coming years.

Reference

Zhang, Jialong, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy. “Protecting Intellectual Property of Deep Neural Networks with Watermarking.” In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 159–172. ASIACCS ’18. New York, NY, USA: ACM, 2018. https://doi.org/10.1145/3196494.3196550.
