

Virtual presentation / poster accept

Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting

Myeongho Jeon · Hyoje Lee · Yedarm Seong · Myungjoo Kang

Keywords: [ Deep Learning and representational learning ] [ unbiased learning ] [ representation learning ] [ continual learning ]


Abstract:

Although machine learning algorithms have achieved state-of-the-art status in image classification, recent studies have substantiated that the ability of models to learn several tasks in sequence, termed continual learning (CL), often suffers from abrupt degradation of performance on previous tasks. A large body of CL frameworks has been devoted to alleviating this issue. However, we observe that forgetting phenomena in CL are not always unfavorable, especially when there is bias (spurious correlation) in the training data. We term this type of forgetting benign forgetting, and categorize detrimental forgetting as malignant forgetting. Based on this finding, our objective in this study is twofold: (a) to discourage malignant forgetting by generating previous representations, and (b) to encourage benign forgetting by employing contrastive learning in conjunction with feature-level augmentation. Extensive evaluations on biased experimental setups demonstrate that our proposed method, Learning without Prejudices, is effective for continual unbiased learning.
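For readers unfamiliar with the second ingredient mentioned above, the sketch below illustrates the general idea of contrastive learning combined with feature-level augmentation. It is only a minimal, generic example: the augmentation (Gaussian perturbation of intermediate features), the NT-Xent-style loss, and all function names are assumptions for illustration, not the authors' actual implementation.

```python
# Generic sketch: contrastive learning on feature-level augmented views.
# The augmentation scheme and loss here are assumed for illustration only.
import torch
import torch.nn.functional as F


def feature_augment(feats: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Create an augmented view by perturbing features with Gaussian noise
    (a hypothetical feature-level augmentation)."""
    return feats + noise_std * torch.randn_like(feats)


def contrastive_loss(feats: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent-style loss: each feature and its augmented view are positives;
    all other samples in the batch serve as negatives."""
    z1 = F.normalize(feats, dim=1)
    z2 = F.normalize(feature_augment(feats), dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    n = feats.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))     # exclude self-similarity
    # Positive of sample i is its augmented copy at index i + N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    feats = torch.randn(8, 128)        # e.g., penultimate-layer features of a batch
    print(contrastive_loss(feats))     # scalar loss pulling augmented views together
```

In this kind of setup, perturbing features rather than raw images is intended to push the representation to ignore incidental (potentially bias-aligned) feature directions, which is the role the abstract attributes to benign forgetting.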
