

Spotlight in Workshop: Backdoor Attacks and Defenses in Machine Learning

Exploring Vulnerabilities of Semi-Supervised Learning to Simple Backdoor Attacks

Marissa Connor · Vincent Emanuele


Abstract:

Semi-supervised learning methods can train high-accuracy machine learning models with a fraction of the labeled training samples required for traditional supervised learning. Such methods typically do not involve close review of the unlabeled training samples, making them tempting targets for data poisoning attacks. In this paper, we show that simple backdoor attacks on unlabeled samples in semi-supervised learning are surprisingly effective, achieving an average attack success rate as high as 96.9%. We identify characteristics of backdoor attacks that are unique to semi-supervised learning, which can give practitioners a better understanding of their models' vulnerabilities to such attacks.
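To make the threat model concrete, the following is a minimal sketch of the kind of "simple" poisoning the abstract describes: stamping a small fixed trigger patch onto a fraction of the unlabeled pool. The patch location, size, value, and poisoning rate here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def apply_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small bright square into the bottom-right corner of each image.

    A fixed pixel-pattern trigger is one common backdoor design; the exact
    pattern used in the paper may differ.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_unlabeled_pool(unlabeled, poison_fraction=0.05, seed=0):
    """Apply the trigger to a random fraction of the unlabeled training pool.

    No labels are touched: in semi-supervised learning the attacker relies on
    the training algorithm itself to propagate (pseudo-)labels onto the
    triggered samples.
    """
    rng = np.random.default_rng(seed)
    n = len(unlabeled)
    idx = rng.choice(n, size=int(poison_fraction * n), replace=False)
    poisoned = unlabeled.copy()
    poisoned[idx] = apply_trigger(unlabeled[idx])
    return poisoned, idx

# Hypothetical usage on a pool of 1000 grayscale 32x32 images.
unlabeled = np.random.rand(1000, 32, 32).astype(np.float32)
poisoned_pool, poisoned_idx = poison_unlabeled_pool(unlabeled, poison_fraction=0.05)
```

Because semi-supervised methods assign pseudo-labels to unlabeled data automatically, the attacker never needs to control any labels, which is what makes this attack surface distinctive.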
