
Virtual presentation / top 5% paper

PaLI: A Jointly-Scaled Multilingual Language-Image Model

Xi Chen · Xiao Wang · Soravit Changpinyo · AJ Piergiovanni · Piotr Padlewski · Daniel Salz · Sebastian Goodman · Adam Grycner · Basil Mustafa · Lucas Beyer · Alexander Kolesnikov · Joan Puigcerver · Nan Ding · Keran Rong · Hassan Akbari · Gaurav Mishra · Linting Xue · Ashish V. Thapliyal · James Bradbury · Weicheng Kuo · Mojtaba Seyedhosseini · Chao Jia · Burcu Karagol Ayan · Carlos Riquelme · Andreas Steiner · Anelia Angelova · Xiaohua Zhai · Neil Houlsby · Radu Soricut

Keywords: [ Deep Learning and representational learning ]


Abstract:

Effective scaling and a flexible task interface enable large language models to excel at many tasks. We present PaLI, a model that extends this approach to the joint modeling of language and vision. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pretrained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and on the substantial cost already invested in their training. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train a large, 4-billion-parameter ViT (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks, based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art results on multiple vision and language tasks (such as captioning, visual question answering, and scene-text understanding), while retaining a simple, modular, and scalable design.
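
To make the text-plus-image interface described above concrete, the sketch below shows the overall data flow in a heavily simplified form: a ViT-style encoder turns image patches into visual tokens, these are projected and concatenated with the embedded text prompt, and an encoder-decoder Transformer generates the output text. All class names, layer sizes, and hyperparameters here are illustrative assumptions for a toy model, not the actual PaLI architecture or its pretrained components.

```python
# Toy sketch of a PaLI-style image+text -> text interface (assumed, simplified setup).
import torch
import torch.nn as nn


class ToyPaLI(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        # Stand-in for a pretrained ViT: a linear patch embedding plus encoder layers.
        self.patch_embed = nn.Linear(16 * 16 * 3, d_model)
        self.vit = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2
        )
        # Projection from the vision representation into the language model's input space.
        self.visual_proj = nn.Linear(d_model, d_model)
        # Stand-in for a pretrained encoder-decoder language model.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches, prompt_ids, target_ids):
        # patches:    (B, num_patches, 16*16*3) flattened image patches
        # prompt_ids: (B, T_in) tokenized text prompt, e.g. a captioning instruction
        # target_ids: (B, T_out) tokens of the text to generate (teacher forcing)
        visual_tokens = self.visual_proj(self.vit(self.patch_embed(patches)))
        text_tokens = self.text_embed(prompt_ids)
        # The flexible task interface: visual tokens are concatenated with the prompt,
        # so every task is posed as "image + text in, text out".
        encoded = self.encoder(torch.cat([visual_tokens, text_tokens], dim=1))
        causal_mask = nn.Transformer.generate_square_subsequent_mask(target_ids.size(1))
        decoded = self.decoder(self.text_embed(target_ids), encoded, tgt_mask=causal_mask)
        return self.lm_head(decoded)  # logits over the vocabulary


model = ToyPaLI()
logits = model(
    torch.randn(2, 196, 768),          # two images as flattened 16x16 RGB patches
    torch.randint(0, 32000, (2, 12)),  # tokenized text prompts
    torch.randint(0, 32000, (2, 20)),  # target caption / answer tokens
)
print(logits.shape)  # torch.Size([2, 20, 32000])
```

In the paper's setting, the two stand-in components would instead be large pretrained models (a ViT such as ViT-e on the vision side and an mT5-style encoder-decoder on the language side), which is what lets the approach reuse their capabilities rather than training from scratch.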