

Poster in Workshop: Generative Models for Robot Learning

Responsive Noise-Relaying Diffusion Policy: Responsive and Efficient Visuomotor Control

Zhuoqun Chen · Xiu Yuan · Tongzhou Mu · Hao Su


Abstract:

Imitation learning is an efficient method for teaching robots a variety of tasks. Diffusion Policy, which uses a conditional denoising diffusion process to generate actions, has demonstrated superior performance, particularly in handling multi-modal data. However, it relies on executing a sequence of predicted actions to prevent mode bouncing, which limits its responsiveness: later actions in the sequence are not conditioned on the most recent observations, reducing the policy's adaptability. To address this, we introduce Responsive Noise-Relaying Diffusion Policy (RNR-DP), which maintains a noise-relaying buffer with progressively increasing noise levels and employs a sequential denoising mechanism that generates immediate, noise-free actions at the head of the sequence while appending noisy actions at the tail. This ensures that executed actions are responsive and conditioned on the latest observations, while the noise-relaying buffer maintains motion consistency. RNR-DP offers two key advantages: it can handle tasks requiring responsive control, and it accelerates action generation by reusing denoising steps. We evaluate RNR-DP on robotic tasks requiring responsive control (e.g., contact-rich dynamic object manipulation) and show that it significantly outperforms Diffusion Policy. Further evaluation on tasks that do not require responsive control shows that RNR-DP also surpasses popular acceleration methods, highlighting its computational efficiency in scenarios where responsiveness is less critical.
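
As an illustration of the mechanism described in the abstract, the minimal Python sketch below maintains a buffer of actions at progressively increasing noise levels, executes the noise-free action at the head, relays each remaining action down one noise level conditioned on the newest observation, and appends fresh noise at the tail. The buffer length H, the action dimension, and the denoise_step function are assumptions for illustration only, not the authors' implementation: a real policy would replace denoise_step with one update of the trained conditional diffusion model.

import numpy as np

H = 8           # number of buffer slots = number of noise levels (assumed)
ACTION_DIM = 7  # assumed action dimension

def denoise_step(action, level, obs):
    """Toy stand-in for one conditional denoising update (level -> level-1).
    A real policy would query the trained diffusion model conditioned on
    `obs`; here we only shrink the noise scale so the sketch executes."""
    if level <= 1:
        return np.zeros_like(action)  # toy: fully denoised at level 0
    return action * (level - 1) / level

class NoiseRelayingBuffer:
    def __init__(self):
        # Slot i holds an action at noise level i: slot 0 is noise-free
        # and ready to execute, slot H-1 is (nearly) pure noise. In
        # practice the buffer would first be warmed up by the policy.
        self.buf = [np.random.randn(ACTION_DIM) * (i / (H - 1)) for i in range(H)]

    def step(self, obs):
        # Head of the sequence: already noise-free, execute immediately.
        action = self.buf.pop(0)
        # Relay: one denoising pass moves every remaining action down one
        # noise level (i+1 -> i), conditioned on the *latest* observation.
        self.buf = [denoise_step(a, i + 1, obs) for i, a in enumerate(self.buf)]
        # Tail of the sequence: append a fresh pure-noise action.
        self.buf.append(np.random.randn(ACTION_DIM))
        return action

# Usage: each control step consumes one observation and emits one action,
# so every executed action has seen the newest observation while earlier
# denoising work on the buffered actions is reused rather than redone.
policy = NoiseRelayingBuffer()
for t in range(3):
    obs = np.zeros(4)  # hypothetical observation
    print(t, policy.step(obs)[:3])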
