

Spotlight

Long-tail learning via logit adjustment

Aditya Krishna Menon · Sadeep Jayasumana · Ankit Singh Rawat · Himanshu Jain · Andreas Veit · Sanjiv Kumar

Abstract:

Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels have only a few associated samples. This poses a challenge for generalisation on such labels, and also makes naive learning biased towards dominant labels. In this paper, we present a statistical framework that unifies and generalises several recent proposals to cope with these challenges. Our framework revisits the classic idea of logit adjustment based on the label frequencies, which encourages a large relative margin between logits of rare positive versus dominant negative labels. This yields two techniques for long-tail learning, where such adjustment is either applied post-hoc to a trained model, or enforced in the loss during training. These techniques are statistically grounded, and practically effective on four real-world datasets with long-tailed label distributions.
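
The two techniques admit a compact sketch. Writing f_y(x) for the model's logit on class y and π_y for the empirical class prior, the post-hoc technique predicts argmax_y f_y(x) − τ log π_y on a conventionally trained model, while the training-time variant adds τ log π_y to each logit inside the softmax cross-entropy. The NumPy sketch below illustrates both; the function names, the τ = 1.0 default, and the toy priors are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_priors, tau=1.0):
    """Post-hoc technique: subtract tau * log(prior) from each class logit
    before taking the argmax, boosting rare (low-prior) classes at test time."""
    return logits - tau * np.log(class_priors)

def logit_adjusted_cross_entropy(logits, labels, class_priors, tau=1.0):
    """Training-time technique: add tau * log(prior) to the logits inside the
    softmax cross-entropy, enforcing a larger relative margin between logits
    of rare positive versus dominant negative labels."""
    adjusted = logits + tau * np.log(class_priors)              # shape (N, C)
    adjusted -= adjusted.max(axis=1, keepdims=True)             # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy long-tailed setup: three classes with priors 0.90 / 0.09 / 0.01.
priors = np.array([0.90, 0.09, 0.01])
logits = np.array([[2.0, 1.5, 1.4]])                            # raw model scores
print(posthoc_logit_adjustment(logits, priors).argmax(axis=1))  # -> [2]
print(logit_adjusted_cross_entropy(logits, np.array([2]), priors))
```

On the raw scores above, a plain argmax would pick the head class (index 0); subtracting τ log π_y flips the decision to the tail class (index 2), which is the frequency-based correction the abstract describes.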
