

In-Person Poster presentation / poster accept

Masked Vision and Language Modeling for Multi-modal Representation Learning

Gukyeong Kwon · Zhaowei Cai · Avinash Ravichandran · Erhan Bas · Rahul Bhotika · Stefano Soatto

MH1-2-3-4 #33

Keywords: [ Applications ] [ Multi-Modal Learning ] [ vision and language ]


Abstract:

In this paper, we study how to use masked signal modeling in vision and language (V+L) representation learning. Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help of the other modality. This is motivated by the nature of image-text paired data: the image and the text convey nearly the same information, but in different formats. Reconstructing the masked signal of one modality conditioned on the other can also implicitly learn cross-modal alignment between language tokens and image patches. Our experiments on various V+L tasks show that the proposed method, combined with common V+L alignment losses, not only achieves state-of-the-art performance when trained on large amounts of data but also outperforms competing methods by a significant margin in limited-data regimes.
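The core idea of cross-modal masked reconstruction can be illustrated with a toy sketch. The snippet below is an assumption-laden simplification, not the authors' architecture: it uses plain scaled dot-product cross-attention over random feature vectors to stand in for the modality encoders, masks one image-patch feature, and reconstructs it by attending to the text-token features of the paired caption.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(queries, context):
    """Scaled dot-product cross-attention: each query vector is
    reconstructed as a weighted average of context vectors."""
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ context, weights

# Toy paired sample: 4 image-patch features and 5 text-token features.
# (In the real model these would come from trained modality encoders;
# here they are random placeholders.)
patches = rng.normal(size=(4, 8))
tokens = rng.normal(size=(5, 8))

# Mask one image patch: replace it with a shared mask embedding.
mask_embedding = np.zeros(8)
masked_patches = patches.copy()
masked_patches[2] = mask_embedding

# Reconstruct the masked patch conditioned on the *text* modality.
recon, weights = cross_attention(masked_patches, tokens)

# Reconstruction loss on the masked position only (mean squared error).
loss = np.mean((recon[2] - patches[2]) ** 2)
print(f"masked-patch reconstruction loss: {loss:.4f}")
```

Training would minimize this reconstruction loss (and the symmetric text-from-image loss) so that the cross-attention weights learn which tokens describe which patches, which is the implicit alignment the abstract refers to.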
