In-Person Poster presentation / top 25% paper

Proposal-Contrastive Pretraining for Object Detection from Fewer Data

Quentin Bouniot · Romaric Audigier · Angelique Loesch · Amaury Habrard

MH1-2-3-4 #165

Keywords: [ Unsupervised and Self-supervised learning ] [ unsupervised ] [ contrastive learning ] [ pretraining ] [ object detection ]


Abstract:

Using pretrained deep neural networks is an attractive way to achieve strong results when little data is available. For dense prediction problems such as object detection, learning local rather than global image information has proven more effective. However, popular contrastive approaches to unsupervised pretraining require large batch sizes and, therefore, substantial computational resources. To address this problem, we turn to transformer-based object detectors, which have recently gained traction in the community for their strong performance and for the particularity of generating a large number of diverse object proposals. In this work, we present Proposal Selection Contrast (ProSeCo), a novel approach for unsupervised pretraining of the overall detector that leverages this property. ProSeCo uses the large number of object proposals generated by the detector for contrastive learning, which allows the use of a smaller batch size, combined with object-level features to learn local information in the images. To improve the effectiveness of the contrastive loss, we incorporate object location information into the selection of positive examples, so as to account for multiple overlapping object proposals. When reusing a pretrained backbone, we advocate for consistency in learning local information between the backbone and the detection head. We show that our method outperforms the state of the art in unsupervised pretraining for object detection on standard and novel benchmarks for learning with fewer data.
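
To illustrate the idea of proposal-level contrastive learning with location-aware positive selection described above, here is a minimal sketch (not the authors' implementation; the function name, the IoU threshold, and the temperature value are assumptions for illustration). It contrasts proposal embeddings from two augmented views of the same image and treats overlapping proposals from the other view as positives.

import torch
import torch.nn.functional as F
from torchvision.ops import box_iou

def proposal_contrastive_loss(feats_a, boxes_a, feats_b, boxes_b,
                              temperature=0.1, iou_threshold=0.5):
    # feats_a: (N, D) and feats_b: (M, D) proposal embeddings from two views.
    # boxes_a: (N, 4) and boxes_b: (M, 4) proposal boxes in (x1, y1, x2, y2).
    feats_a = F.normalize(feats_a, dim=1)
    feats_b = F.normalize(feats_b, dim=1)

    # Pairwise similarities between proposals of the two views: (N, M).
    logits = feats_a @ feats_b.t() / temperature

    # Location-aware positive selection: proposals from the other view whose
    # boxes overlap above the threshold are positives; the rest are negatives.
    positives = box_iou(boxes_a, boxes_b) > iou_threshold  # (N, M) boolean mask

    # InfoNCE-style loss, averaged over proposals that have at least one positive.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    has_pos = positives.any(dim=1)
    loss = -(log_prob[has_pos] * positives[has_pos]).sum(dim=1) / positives[has_pos].sum(dim=1)
    return loss.mean()

Because the loss is computed over the many proposals of each image rather than over whole-image embeddings, it can provide a large pool of positives and negatives even with a small batch size, which is the property the abstract highlights.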
