Poster

Language Model Beats Diffusion - Tokenizer is key to visual generation

Lijun Yu · José Lezama · Nitesh Bharadwaj Gundavarapu · Luca Versari · Kihyuk Sohn · David Minnen · Yong Cheng · Agrim Gupta · Xiuye Gu · Alexander G Hauptmann · Boqing Gong · Ming-Hsuan Yang · Irfan Essa · David Ross · Lu Jiang

Halle B #235
[ Project Page ]
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

While Large Language Models (LLMs) are the dominant models for generative tasks in language, they do not perform as well as diffusion models on image and video generation. To use LLMs effectively for visual generation, one crucial component is the visual tokenizer that maps pixel-space inputs to discrete tokens appropriate for LLM learning. In this paper, we introduce MAGVIT-v2, a video tokenizer designed to generate concise and expressive tokens for both videos and images using a common token vocabulary. Equipped with this new tokenizer, we show that LLMs outperform diffusion models on standard image and video generation benchmarks, including ImageNet and Kinetics. In addition, we demonstrate that our tokenizer surpasses the previously top-performing video tokenizer on two further tasks: (1) video compression comparable to the next-generation video codec (VVC) according to human evaluations, and (2) learning effective representations for action recognition tasks.
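To make the tokenizer-plus-LLM pipeline concrete, here is a minimal PyTorch sketch of the interface the abstract describes: an encoder maps pixels to a grid of latents, a quantizer snaps each latent to a discrete id from a shared vocabulary, and those ids become an ordinary token sequence for an autoregressive LM. Everything here is an illustrative assumption (the class name, dimensions, and the nearest-neighbor vector quantizer); it is not the paper's actual architecture, which uses a stronger quantization scheme and handles videos as well as images.

```python
# Illustrative sketch only: a toy visual tokenizer with the same
# interface as the one described in the abstract. Names, sizes, and
# the nearest-neighbor quantizer are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ToyVisualTokenizer(nn.Module):
    """Map a 64x64 RGB image to an 8x8 grid of discrete token ids via
    nearest-neighbor vector quantization."""
    def __init__(self, vocab_size=1024, dim=64):
        super().__init__()
        # Convolutional encoder: 64x64 pixels -> 8x8 grid of latents.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=2, stride=2),
        )
        # Shared token vocabulary: one embedding vector per discrete code.
        self.codebook = nn.Embedding(vocab_size, dim)

    def forward(self, pixels):                 # (B, 3, 64, 64)
        z = self.encoder(pixels)               # (B, dim, 8, 8)
        z = z.flatten(2).transpose(1, 2)       # (B, 64, dim)
        # Assign each latent to its nearest codebook entry.
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        return dists.argmin(dim=-1)            # (B, 64) integer token ids

tokenizer = ToyVisualTokenizer()
ids = tokenizer(torch.randn(2, 3, 64, 64))
# `ids` is an ordinary integer sequence, so any decoder-only LM can be
# trained on it with plain next-token prediction, just as with text.
print(ids.shape)  # torch.Size([2, 64])
```

The sketch only shows the interface that makes LLM-style training on visual data possible; the paper's contribution is a tokenizer whose discrete codes are concise and expressive enough for an LLM to beat diffusion models on the benchmarks above.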
