Generative AI is the topic of the moment. One of the newest challenges it raises is discriminating a genuine image from a generative-AI-produced one. Many papers propose systematically watermarking generative AI outputs.
This approach rests on two assumptions. The first is that the generator actually adds an invisible watermark. The second is that the watermark survives most transformations.
In the content protection field, we know how fragile the second assumption is. Zhao et al., from the University of California, Santa Barbara and Carnegie Mellon University, published a paper confirming it. Their attack adds Gaussian noise to the watermarked image and then uses a diffusion model to reconstruct a visually equivalent image from the noisy version. After several iterations, the watermark disappears. They conclude that any invisible watermark can be defeated in this way.
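For intuition, here is a minimal sketch of such a regeneration attack using the Hugging Face diffusers img2img pipeline, which noises the image partway into the diffusion schedule and denoises it back. The model identifier, the strength value, and the iteration count are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a diffusion "regeneration" attack: noise the watermarked
# image part-way into the diffusion schedule, then let the model
# denoise it back. A low `strength` keeps the result close to the
# original while destroying fragile embedded signals.
# Assumes the Hugging Face `diffusers` library and a GPU.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("watermarked.png").convert("RGB").resize((512, 512))

for _ in range(3):  # a few rounds of noise-and-reconstruct
    image = pipe(
        prompt="",         # no guidance; we only want reconstruction
        image=image,
        strength=0.1,      # noise only ~10% of the schedule
        guidance_scale=1.0,
    ).images[0]

image.save("regenerated.png")
```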
This is a well-known fact in the watermarking community. The Break Our Watermarking System (BOWS) contest in 2006 and BOWS2 in 2008 demonstrated this reality. These contests showed that attackers can defeat a watermark if they have access to an oracle watermark detector, i.e., a black box that only tells them whether the watermark is still detected.
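To make the oracle scenario concrete, below is a hedged sketch of a simple sensitivity-style attack. It assumes only a hypothetical black-box `detects_watermark` oracle and binary-searches along the line between the watermarked image and a heavily noised copy for the least-distorted image the detector misses; the actual BOWS attacks, such as Craver et al.'s, were considerably more sophisticated.

```python
import numpy as np

def detects_watermark(img: np.ndarray) -> bool:
    """Hypothetical black-box oracle standing in for the contest detector."""
    raise NotImplementedError

def to_u8(a: np.ndarray) -> np.ndarray:
    return a.clip(0, 255).astype(np.uint8)

def oracle_attack(img: np.ndarray, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = img.astype(np.float64)
    # Step 1: find any image the oracle rejects, e.g. by heavy noising.
    noisy = x + rng.normal(0.0, 60.0, size=x.shape)
    if detects_watermark(to_u8(noisy)):
        raise RuntimeError("starting point still detected; increase the noise")
    # Step 2: binary-search between the original and the rejected image
    # for the least-distorted point just past the detection boundary.
    lo, hi = 0.0, 1.0  # fraction of the way toward `noisy`
    for _ in range(20):
        mid = (lo + hi) / 2.0
        candidate = (1.0 - mid) * x + mid * noisy
        if detects_watermark(to_u8(candidate)):
            lo = mid   # still detected: move farther from the original
        else:
            hi = mid   # undetected: try to get closer to the original
    return to_u8((1.0 - hi) * x + hi * noisy)
```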
Thus, the paper illustrates a known fact; its real contribution is to add generative AI to the attacker's toolset. As a countermeasure, the authors propose a semantic watermark. A semantic watermark modifies the image but keeps its semantic information (or at least some of it). Since content protection requires the protected work to remain visually identical, this approach is clearly not usable in that field.
References
Zhao, Xuandong, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. “Invisible Image Watermarks Are Provably Removable Using Generative AI.” arXiv, August 6, 2023. https://arxiv.org/pdf/2306.01953.pdf.
Craver, Scott, Idris Atakli, and Jun Yu. “How We Broke the BOWS Watermark.” In Proceedings of the SPIE, 6505:46. San Jose, CA, USA: SPIE, 2007. https://doi.org/10.1117/12.704376.
“BOWS2: Break Our Watermarking System, 2nd Ed.” http://bows2.ec-lille.fr/.