

In-Person Poster presentation / poster accept

SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models

Ziyi Wu · Nikita Dvornik · Klaus Greff · Thomas Kipf · Animesh Garg

MH1-2-3-4 #145

Keywords: [ Unsupervised and Self-supervised learning ] [ transformer ] [ object-centric learning ] [ dynamics modeling ]


Abstract:

Understanding dynamics from visual observations is a challenging problem that requires disentangling individual objects from the scene and learning their interactions. While recent object-centric models can successfully decompose a scene into objects, modeling their dynamics effectively remains a challenge. We address this problem by introducing SlotFormer -- a Transformer-based autoregressive model operating on learned object-centric representations. Given a video clip, our approach reasons over object features to model spatio-temporal relationships and predicts accurate future object states. In this paper, we successfully apply SlotFormer to perform video prediction on datasets with complex object interactions. Moreover, SlotFormer's unsupervised dynamics model can be used to improve performance on supervised downstream tasks, such as Visual Question Answering (VQA) and goal-conditioned planning. Compared to past work on dynamics modeling, our method achieves significantly better long-term synthesis of object dynamics while retaining high-quality visual generation. Furthermore, SlotFormer enables VQA models to reason about the future without object-level labels, even outperforming counterparts that use ground-truth annotations. Finally, we show its ability to serve as a world model for model-based planning, where it is competitive with methods designed specifically for such tasks.
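To make the core idea concrete: the abstract describes a Transformer-based autoregressive model that takes slot (object) representations from past frames and predicts the slots of future frames, feeding its own predictions back in for long-horizon rollout. Below is a minimal numpy sketch of that autoregressive rollout loop. It is not the authors' implementation: a single scaled dot-product self-attention layer stands in for the full multi-layer Transformer, positional encodings and the pretrained object-centric encoder that produces the slots are omitted, and all function and parameter names (`attention`, `rollout`, `Wq`/`Wk`/`Wv`) are hypothetical.

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over all slot tokens.

    A single-layer stand-in for SlotFormer's Transformer backbone.
    x: (num_tokens, D) array of slot features.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def rollout(slots, steps, params):
    """Autoregressive rollout over slot representations.

    slots: (T, N, D) burn-in clip — T frames, N slots of dim D each.
    Each step flattens the last T frames into T*N tokens, applies
    attention, reads out N predicted next-frame slots, and appends
    the prediction to the history (the autoregressive feedback).
    """
    T, N, D = slots.shape
    history = list(slots)
    preds = []
    for _ in range(steps):
        tokens = np.concatenate(history[-T:], axis=0)  # (T*N, D)
        out = attention(tokens, *params)
        next_slots = out[-N:]          # predicted slots for the next frame
        preds.append(next_slots)
        history.append(next_slots)     # feed prediction back in
    return np.stack(preds)             # (steps, N, D)
```

In the paper's setting, the predicted slots are additionally decoded back to pixels for video prediction, or passed to downstream VQA and planning modules; this sketch only shows the dynamics rollout itself.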
