

Poster Session A
ICLR 2025 Workshop on GenAI Watermarking (WMARK)

Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach

Haiyun He · Yepeng Liu · Ziqiao Wang · Yongyi Mao · Yuheng Bu


Abstract: Watermarking has emerged as a crucial method to distinguish AI-generated text from human-created text. In this paper, we present a novel theoretical framework for watermarking Large Language Models (LLMs) that jointly optimizes both the watermarking scheme and the detection process. Our approach focuses on maximizing detection performance while maintaining control over the worst-case Type-I error and text distortion. We characterize \emph{the universally minimum Type-II error}, revealing a fundamental trade-off between watermark detectability and text distortion. Importantly, we show that the optimal watermarking schemes are adaptive to the LLM generative distribution. Building on our theoretical insights, we propose an efficient, model-agnostic, distribution-adaptive watermarking algorithm, utilizing a surrogate model alongside the Gumbel-max trick. Experiments conducted on Llama2-13B and Mixtral-8$\times$7B models confirm the effectiveness of our approach. Additionally, we examine how robustness can be incorporated into our framework, paving the way for future watermarking systems that withstand adversarial attacks more effectively.
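For readers unfamiliar with the Gumbel-max trick mentioned in the abstract, the sketch below illustrates the standard Gumbel-max watermark sampling idea that the proposed algorithm builds on: Gumbel noise is derived pseudorandomly from a secret key and the recent context, so a detector holding the key can test whether a text's tokens are aligned with the noise. This is only an illustrative sketch, not the authors' distribution-adaptive algorithm (which additionally uses a surrogate model to adapt the scheme to the generative distribution); all names (`gumbel_max_sample`, `key`, `context_width`) are hypothetical.

```python
import numpy as np

def gumbel_max_sample(logits, prev_tokens, key, context_width=4):
    """Sample the next token via the Gumbel-max trick, with Gumbel noise
    seeded by a secret key and the last few context tokens.

    A detector holding the same key can recompute the noise and test
    whether the generated tokens are suspiciously aligned with it.
    """
    # Seed a PRNG with the secret key and the recent context so the
    # watermark noise is reproducible at detection time.
    seed = hash((key, tuple(prev_tokens[-context_width:]))) % (2**32)
    rng = np.random.default_rng(seed)

    # If U ~ Uniform(0, 1), then -log(-log U) is a standard Gumbel sample.
    u = rng.random(len(logits))
    gumbel = -np.log(-np.log(u))

    # argmax(logits + Gumbel) is an exact sample from softmax(logits),
    # so the marginal output distribution is unchanged (distortion-free).
    return int(np.argmax(logits + gumbel))

# Example: sample one token from a toy 5-token vocabulary.
logits = np.array([1.2, 0.3, -0.5, 2.0, 0.1])
token = gumbel_max_sample(logits, prev_tokens=[17, 42, 8], key="secret")
print(token)
```

The key property of this construction is that adding i.i.d. Gumbel noise to the logits and taking the argmax yields an unbiased sample from the softmax distribution, which is why Gumbel-max-based schemes can embed a detectable signal without distorting any single generation.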
