
In-Person Poster presentation / poster accept

Leveraging Unlabeled Data to Track Memorization

Mahsa Forouzesh · Hanie Sedghi · Patrick Thiran

MH1-2-3-4 #59

Keywords: [ Deep Learning and representational learning ] [ unlabeled data ] [ Memorization ] [ deep learning ] [ label noise ] [ generalization ]


Abstract: Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy-label memorization. We propose a metric, called $\textit{susceptibility}$, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels, relying only on unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets, and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can use susceptibility together with the overall training accuracy to distinguish models that maintain low memorization on the training set and generalize well to unseen clean data.
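To make the idea concrete, here is a minimal sketch of *one plausible way* such a label-free memorization probe could work; this is an illustration under our own assumptions, not the authors' exact definition of susceptibility. The sketch measures how much a model's predictions on an unlabeled pool change after it is briefly trained on randomly labeled data: a model that readily fits random labels is more prone to memorization.

```python
# Hedged sketch: a toy "susceptibility"-style probe (assumed form, not the
# paper's exact metric). We expose a trained logistic-regression model to
# randomly labeled data and measure how many predictions on an unlabeled
# pool flip as a result.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, X, y, lr=0.5):
    """One gradient-descent step of logistic regression on (X, y)."""
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)
    return w - lr * grad

def susceptibility_probe(w, X_unlabeled, n_steps=20, lr=0.5):
    """Fraction of unlabeled points whose predicted label flips after the
    model is briefly trained on the same inputs with *random* labels.
    Requires no ground-truth labels for X_unlabeled."""
    preds_before = sigmoid(X_unlabeled @ w) > 0.5
    y_random = rng.integers(0, 2, size=len(X_unlabeled)).astype(float)
    w_noisy = w.copy()
    for _ in range(n_steps):
        w_noisy = train_step(w_noisy, X_unlabeled, y_random, lr)
    preds_after = sigmoid(X_unlabeled @ w_noisy) > 0.5
    return float(np.mean(preds_before != preds_after))

# Toy setup: two Gaussian blobs, model pre-trained on clean labels.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0.0] * 100 + [1.0] * 100)
w = np.zeros(2)
for _ in range(200):
    w = train_step(w, X, y)

X_pool = rng.normal(0, 2, (200, 2))  # unlabeled pool, labels never used
s = susceptibility_probe(w, X_pool)
print(f"probe value: {s:.2f}")  # in [0, 1]; higher = more prone to memorization
```

A probe of this shape can be logged at every epoch during training, since it touches only unlabeled inputs; the function name `susceptibility_probe` and the specific flip-fraction formulation are our illustrative choices.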