

In-Person Poster presentation / top 25% paper

Programmatically Grounded, Compositionally Generalizable Robotic Manipulation

Renhao Wang · Jiayuan Mao · Joy Hsu · Hang Zhao · Jiajun Wu · Yang Gao

MH1-2-3-4 #124

Keywords: [ Reinforcement Learning ] [ Vision-Language-Action Grounding ] [ Neurosymbolic Learning ] [ Zero-Shot Generalization ] [ Compositional Generalization ]


Abstract:

Robots operating in the real world require both rich manipulation skills and the ability to semantically reason about when to apply those skills. Towards this goal, recent works have integrated semantic representations from large-scale pretrained vision-language (VL) models into manipulation models, imparting more general reasoning capabilities to them. However, we show that the conventional pretraining-finetuning pipeline for integrating such representations entangles the learning of domain-specific action information and domain-general visual information, leading to less data-efficient training and poor generalization to unseen objects and tasks. To address this, we propose ProgramPort, a modular approach that better leverages pretrained VL models by exploiting the syntactic and semantic structure of language instructions. Our framework uses a semantic parser to recover an executable program, composed of functional modules grounded on vision and action across different modalities. Each functional module is realized as a combination of deterministic computation and learnable neural networks. Program execution produces parameters for the general manipulation primitives of a robotic end-effector. The entire modular network can be trained with end-to-end imitation learning objectives. Experiments show that our model successfully disentangles action and perception, translating to improved zero-shot and compositional generalization in a variety of manipulation behaviors. Project webpage: https://progport.github.io.
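To make the modular idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: an instruction is parsed into a program whose grounding modules attend over visual features and whose action module converts those attention maps into parameters for a pick-and-place primitive. All class names, tensor shapes, and the example parse are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a program of functional modules
# that grounds an instruction and emits parameters for a manipulation primitive.
import torch
import torch.nn as nn

class GroundObject(nn.Module):
    """Maps (visual features, text embedding) -> spatial attention over the scene.
    Stands in for a grounding module built on a frozen pretrained VL model."""
    def __init__(self, dim=64):
        super().__init__()
        self.text_proj = nn.Linear(dim, dim)

    def forward(self, visual_feats, text_emb):
        # visual_feats: (H*W, dim); text_emb: (dim,)
        scores = visual_feats @ self.text_proj(text_emb)   # (H*W,)
        return torch.softmax(scores, dim=0)                # attention map

class TransportPrimitive(nn.Module):
    """Learned action head: converts pick/place attention maps into
    end-effector parameters (here, soft-argmax 2D pick and place points)."""
    def __init__(self, grid=(8, 8)):
        super().__init__()
        ys, xs = torch.meshgrid(torch.arange(grid[0]),
                                torch.arange(grid[1]), indexing="ij")
        self.register_buffer("coords",
                             torch.stack([xs, ys], -1).float().view(-1, 2))

    def forward(self, pick_attn, place_attn):
        pick_xy = (pick_attn.unsqueeze(-1) * self.coords).sum(0)
        place_xy = (place_attn.unsqueeze(-1) * self.coords).sum(0)
        return pick_xy, place_xy

# Hypothetical parse of "put the red block in the brown box":
#   program = Transport(GroundObject("red block"), GroundObject("brown box"))
ground, transport = GroundObject(), TransportPrimitive()
visual_feats = torch.randn(64, 64)                        # dummy 8x8 feature grid
red_block, brown_box = torch.randn(64), torch.randn(64)   # dummy text embeddings
pick_xy, place_xy = transport(ground(visual_feats, red_block),
                              ground(visual_feats, brown_box))
# pick_xy / place_xy parameterize a generic pick-and-place primitive; the whole
# module graph is differentiable, so it can be trained with imitation losses.
```

Because grounding and action are separate modules, the perception side can remain close to the pretrained VL representation while only the action heads absorb domain-specific supervision, which is the disentanglement the abstract refers to.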
