

Poster session B
in
Workshop: ICLR 2025 Workshop on GenAI Watermarking (WMARK)

Watermark Smoothing Attacks against Language Models

Hongyan Chang · Hamed Hassani · Reza Shokri


Abstract:

Watermarking is a key technique for detecting AI-generated text. In this work, we study its vulnerabilities and introduce the Smoothing Attack, a novel watermark removal method. By leveraging the relationship between the model’s confidence and watermark detectability, our attack selectively smooths the watermarked content, erasing watermark traces while preserving text quality. We validate our attack on open-source models ranging from 1.3B to 30B parameters on 10 different watermarks, demonstrating its effectiveness. Our findings expose critical weaknesses in existing watermarking schemes and highlight the need for stronger defenses.
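The abstract describes confidence-guided smoothing at a high level only. As an illustration of the general idea (not the paper's actual algorithm), the sketch below assumes watermark bias is most detectable where the generating model is least confident, and blends those low-confidence token distributions toward an auxiliary reference distribution; the threshold, mixing weight, and `p_reference` source are all hypothetical choices.

```python
import math

def entropy(p):
    """Shannon entropy of a probability distribution (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def smooth_distribution(p_watermarked, p_reference, conf_threshold=0.5, mix=0.5):
    """Illustrative confidence-guided smoothing of one next-token distribution.

    Assumption (not from the paper): when the watermarked model is confident
    (high max probability), the watermark contributes little detectable signal,
    so the distribution is kept as-is; when confidence is low, the distribution
    is mixed with a reference model's distribution to dilute the watermark bias.
    """
    confidence = max(p_watermarked)
    if confidence >= conf_threshold:
        return list(p_watermarked)
    # Convex mixture keeps the result a valid probability distribution.
    return [(1 - mix) * a + mix * b for a, b in zip(p_watermarked, p_reference)]

# Hypothetical example: a low-confidence (high-entropy) watermarked
# distribution gets pulled toward the reference; a confident one does not.
p_wm_low = [0.35, 0.30, 0.20, 0.15]   # low confidence -> smoothed
p_wm_high = [0.90, 0.05, 0.03, 0.02]  # high confidence -> unchanged
p_ref = [0.25, 0.25, 0.25, 0.25]

smoothed = smooth_distribution(p_wm_low, p_ref)
unchanged = smooth_distribution(p_wm_high, p_ref)
```

Smoothing only the low-confidence positions is what lets an attack of this shape trade off watermark removal against text quality: confident tokens, which carry most of the fluency, are left untouched.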
