
Spotlight Poster

Learning to Act from Actionless Videos through Dense Correspondences

Po-Chen Ko · Jiayuan Mao · Yilun Du · Shao-Hua Sun · Joshua B Tenenbaum

Halle B #240
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

In this work, we present an approach to construct a video-based robot policy capable of reliably executing diverse tasks across different robots and environments from a few video demonstrations, without using any action annotations. Our method leverages images as a task-agnostic representation, encoding both state and action information, and text as a general representation for specifying robot goals. By synthesizing videos that "hallucinate" a robot executing actions and combining them with dense correspondences between frames, our approach can infer the closed-form actions to execute in an environment without the need for any explicit action labels. This unique capability allows us to train the policy solely from RGB videos and deploy the learned policies to various robotic tasks. We demonstrate the efficacy of our approach in learning policies for table-top manipulation and navigation tasks. Additionally, we contribute an open-source framework for efficient video modeling, enabling the training of high-fidelity policy models with four GPUs within a single day.
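To make the "dense correspondences → closed-form action" idea concrete, the following is a minimal sketch, not the authors' released code: it assumes dense correspondences between two synthesized frames are available as an optical-flow-style field, and recovers a rigid 2D transform (a proxy for the action between frames) with the closed-form Kabsch/Procrustes solution. The helpers `synthesize_video` and `estimate_dense_correspondences` in the usage comment are hypothetical placeholders for a video generation model and a correspondence/flow estimator.

```python
# Sketch (assumptions, not the paper's implementation):
#   flow: (H, W, 2) array mapping each pixel in frame t to its match in frame t+1
#   mask: (H, W) boolean array marking pixels on the object / end-effector
import numpy as np

def rigid_action_from_correspondences(flow: np.ndarray, mask: np.ndarray):
    """Estimate a 2D rigid transform (R, t) from dense correspondences
    via the closed-form Kabsch/Procrustes solution."""
    ys, xs = np.nonzero(mask)
    src = np.stack([xs, ys], axis=1).astype(np.float64)  # pixel coords at time t
    dst = src + flow[ys, xs]                             # matched coords at time t+1

    # Center both point sets and solve for the rotation with an SVD.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t                                          # per-frame "action" estimate

# Hypothetical usage: synthesize a video from the current frame and a text goal,
# compute correspondences between consecutive frames, then read off actions.
# video = synthesize_video(first_frame, text_goal)            # assumed video model
# flow = estimate_dense_correspondences(video[0], video[1])   # e.g. an optical-flow net
# R, t = rigid_action_from_correspondences(flow, object_mask)
```

Because the rotation and translation are obtained in closed form from the correspondences, no action labels are needed; the synthesized video alone determines what the robot should do between frames.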
