

Spotlight Talk in Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning

Spotlight Talk 1: Are We Really Unlearning? The Presence of Residual Knowledge in Machine Unlearning

Hsiang Hsu · Zichang He


Abstract:

Machine unlearning seeks to remove a set of forget samples from a pre-trained model to comply with emerging privacy regulations. Existing machine unlearning algorithms focus on effectiveness, either achieving indistinguishability from a re-trained model or closely matching its accuracy, but they often overlook the vulnerability of unlearned models to slight perturbations of the forget samples. In this paper, we identify a novel privacy vulnerability in unlearning, which we term residual knowledge: even when an unlearned model no longer recognizes a forget sample, effectively removing direct knowledge of that sample, the model often still recognizes slightly perturbed inputs in the sample's vicinity, inputs that a re-trained model does not recognize at all. Addressing residual knowledge should become a key consideration in the design of future unlearning algorithms.
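
To make the notion of residual knowledge concrete, here is a minimal sketch in PyTorch of how one might probe for it: perturb each forget sample slightly and check whether the unlearned model still predicts the forgotten label where a re-trained reference model does not. The random L-infinity noise, the [0, 1] input range, and all function and parameter names are illustrative assumptions, not the authors' actual evaluation protocol.

import torch

@torch.no_grad()
def residual_knowledge_rate(unlearned_model, retrained_model,
                            forget_x, forget_y,
                            num_perturbations=20, epsilon=0.05):
    # Hypothetical probe: a forget sample exhibits residual knowledge if,
    # for some small perturbation, the unlearned model still predicts the
    # forgotten label while the re-trained reference model does not.
    unlearned_model.eval()
    retrained_model.eval()
    flagged = 0
    for x, y in zip(forget_x, forget_y):
        found = False
        for _ in range(num_perturbations):
            # Uniform noise inside a small L-infinity ball around the sample;
            # inputs are assumed to lie in [0, 1] (e.g., normalized images).
            x_pert = (x + epsilon * (2 * torch.rand_like(x) - 1)).clamp(0, 1)
            pred_unlearned = unlearned_model(x_pert.unsqueeze(0)).argmax(dim=1).item()
            pred_retrained = retrained_model(x_pert.unsqueeze(0)).argmax(dim=1).item()
            if pred_unlearned == int(y) and pred_retrained != int(y):
                found = True
                break
        flagged += int(found)
    # Fraction of forget samples with residual knowledge in their vicinity.
    return flagged / len(forget_x)

Random noise is the simplest possible probe; a stronger audit would search the neighborhood adversarially, for example with gradient-based perturbations, but even this random version illustrates the gap between unlearned and re-trained behavior that the abstract describes.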
