

Oral in Workshop: Backdoor Attacks and Defenses in Machine Learning

How to Backdoor Diffusion Models?

Sheng-Yen Chou · Pin-Yu Chen · Tsung-Yi Ho


Abstract: Diffusion models are state-of-the-art deep-learning-based generative models trained on the principle of learning forward and reverse diffusion processes via progressive noise addition and denoising. To gain a better understanding of their limitations and potential risks, this paper presents the first study of the robustness of diffusion models against backdoor attacks. Specifically, we propose $\textbf{BadDiffusion}$, a novel attack framework that engineers compromised diffusion processes during model training for backdoor implantation. At the inference stage, the backdoored diffusion model behaves just like an untampered generator on regular data inputs, while generating a targeted outcome designed by the bad actor upon receiving the implanted trigger signal. Such a critical risk can be dreadful for downstream tasks and applications built upon the problematic model. Our extensive experiments on various backdoor attack settings show that $\textbf{BadDiffusion}$ consistently yields compromised diffusion models with high utility and target specificity. Even worse, $\textbf{BadDiffusion}$ can be made cost-effective by simply finetuning a clean pre-trained diffusion model to implant backdoors. We also explore possible countermeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models.
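
To make the attack pattern described in the abstract concrete, the sketch below shows one way a backdoor could be implanted during DDPM-style noise-prediction training: a fraction of training samples have a fixed trigger stamped onto their noisy inputs and the attacker's target image substituted as the denoising destination, while the remaining samples follow the standard objective. This is a minimal, hypothetical illustration assuming a PyTorch noise-prediction model; the names (`trigger`, `target`, `poison_rate`) and the exact poisoning scheme are assumptions for exposition, not the paper's precise BadDiffusion formulation.

```python
# Hypothetical sketch of backdoor poisoning in DDPM-style training.
# The poisoning scheme and all parameter names are illustrative assumptions,
# not the exact BadDiffusion objective from the paper.
import torch
import torch.nn.functional as F

def backdoored_training_step(model, x0, trigger, target, alphas_bar, poison_rate=0.1):
    """One denoising training step with a fraction of samples poisoned.

    model      : noise-prediction network eps_theta(x_t, t)
    x0         : clean training images, shape (B, C, H, W)
    trigger    : fixed trigger pattern added to poisoned inputs, shape (C, H, W)
    target     : attacker-chosen target image, shape (C, H, W)
    alphas_bar : cumulative products of (1 - beta_t), shape (T,)
    """
    B = x0.shape[0]
    T = alphas_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)
    noise = torch.randn_like(x0)

    # Choose which samples in the batch carry the backdoor.
    poisoned = (torch.rand(B, device=x0.device) < poison_rate)[:, None, None, None]

    # For poisoned samples, the attacker-chosen target replaces the clean image.
    x0 = torch.where(poisoned, target.expand_as(x0), x0)

    # Standard forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise.
    a_bar = alphas_bar[t][:, None, None, None]
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise

    # Stamp the trigger onto the noisy inputs of poisoned samples so the model
    # learns to associate the trigger with denoising toward the target image.
    x_t = torch.where(poisoned, x_t + trigger, x_t)

    # Usual noise-prediction objective over both clean and poisoned samples,
    # so utility on regular inputs is preserved while the backdoor is learned.
    loss = F.mse_loss(model(x_t, t), noise)
    return loss
```

At inference time, under these assumptions, sampling from pure noise yields ordinary outputs, whereas adding the trigger to the sampling noise steers the reverse process toward the attacker's target image.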
