

Virtual presentation / poster accept

ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View Semantic Consistency

Pengzhen Ren · Changlin Li · Hang Xu · Yi Zhu · Guangrun Wang · Jianzhuang Liu · Xiaojun Chang · Xiaodan Liang

Keywords: [ Unsupervised and Self-supervised learning ] [ Visual Self-Supervision ] [ Consistent Semantics ] [ vision-language pretraining ] [ Zero-shot semantic segmentation ]


Abstract:

Recently, great success has been achieved in learning visual representations from text supervision, facilitating the emergence of text-supervised semantic segmentation. However, existing works focus on pixel grouping and cross-modal semantic alignment, while ignoring the correspondence among multiple augmented views of the same image. To overcome this limitation, we propose multi-View Consistent learning (ViewCo) for text-supervised semantic segmentation. Specifically, we first propose text-to-views consistency modeling to learn correspondence across multiple views of the same input image. Additionally, we propose cross-view segmentation consistency modeling to address the ambiguity of text supervision by contrasting the segment features of Siamese visual encoders. The text-to-views consistency benefits the dense assignment of visual features by encouraging different crops to align with the same text, while the cross-view segmentation consistency modeling provides additional self-supervision, overcoming the limitation of ambiguous text supervision for segmentation masks. Trained with large-scale image-text data, our model can directly segment objects of arbitrary categories in a zero-shot manner. Extensive experiments show that ViewCo outperforms state-of-the-art methods on average by up to 2.9%, 1.6%, and 2.4% mIoU on PASCAL VOC2012, PASCAL Context, and COCO, respectively.
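To make the two objectives in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of what a text-to-views consistency loss (multiple augmented views aligned with the same paired text) and a cross-view segmentation consistency loss (segment features contrasted across Siamese encoder branches) could look like. The function signatures, tensor shapes, use of a momentum branch, and CLIP-style symmetric InfoNCE are assumptions for illustration only, not the authors' released implementation.

```python
# Hypothetical sketch of the two consistency objectives described in the
# abstract; interfaces and shapes are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def text_to_views_loss(view_embs, text_emb, temperature=0.07):
    """Align every augmented view of an image with the same paired text.

    view_embs: (V, B, D) image-level embeddings for V views of B images.
    text_emb:  (B, D) text embeddings for the paired captions.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    loss = 0.0
    for v in range(view_embs.shape[0]):
        img = F.normalize(view_embs[v], dim=-1)           # (B, D)
        logits = img @ text_emb.t() / temperature         # (B, B)
        targets = torch.arange(logits.size(0), device=logits.device)
        # symmetric (image-to-text and text-to-image) InfoNCE per view
        loss = loss + 0.5 * (F.cross_entropy(logits, targets)
                             + F.cross_entropy(logits.t(), targets))
    return loss / view_embs.shape[0]


def cross_view_segment_loss(seg_feats_online, seg_feats_momentum):
    """Encourage segment features from two views (online vs. momentum
    branch of a Siamese encoder) to agree, giving dense self-supervision.

    seg_feats_*: (B, S, D) per-segment features for B images, S segments,
    assumed to be matched across the two branches.
    """
    p = F.normalize(seg_feats_online, dim=-1)
    z = F.normalize(seg_feats_momentum.detach(), dim=-1)  # stop-gradient
    # negative cosine similarity between matched segments
    return -(p * z).sum(dim=-1).mean()
```

Under these assumptions, the image-level loss supplies cross-modal alignment from text, while the segment-level loss supplies view-to-view agreement that does not depend on (possibly ambiguous) text supervision.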
