Dense Policy: Bidirectional Autoregressive Learning of Actions

Subjects: cs.RO, cs.CV, cs.LG

Statistics

Citations: 3
References: 33
Authors

Yue Su, Xinyu Zhan, Hongjie Fang, Han Xue, Hao-Shu Fang, Yong-Lu Li, Cewu Lu, Lixin Yang
Project Resources

ArXiv Paper (arXiv)
Semantic Scholar Paper (Semantic Scholar)
Abstract

Mainstream visuomotor policies predominantly rely on generative models for holistic action prediction, while current autoregressive policies, which predict the next token or chunk, have shown suboptimal results. This motivates a search for more effective learning methods to unleash the potential of autoregressive policies for robotic manipulation. This paper introduces a bidirectionally expanded learning approach, termed Dense Policy, to establish a new paradigm for autoregressive policies in action prediction. It employs a lightweight encoder-only architecture to iteratively unfold the action sequence from an initial single frame into the target sequence in a coarse-to-fine manner with logarithmic-time inference. Extensive experiments validate that our Dense Policy has superior autoregressive learning capabilities and can surpass existing holistic generative policies. Our policy, example data, and training code will be publicly available upon publication. Project page: https://selen-suyue.github.io/DspNet/.
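The logarithmic-time claim follows from the expansion schedule: starting from a single action, each decoding round roughly doubles the sequence length, so a horizon of N actions takes about log2(N) rounds instead of N next-token steps. The sketch below illustrates only this doubling schedule with a toy numeric "refiner"; the function names and the refiner itself are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

def dense_expand(initial_action, target_len, refine):
    """Coarse-to-fine expansion (schematic): double the sequence each
    round until it reaches target_len, refining at every level."""
    seq = np.array([initial_action], dtype=float)
    steps = 0
    while len(seq) < target_len:
        up = np.repeat(seq, 2)[:target_len]  # upsample by 2 (nearest-neighbour)
        seq = refine(up)                     # stand-in for the model's refinement pass
        steps += 1
    return seq, steps

# Toy refiner: pull the sequence halfway toward a target trajectory
# (purely illustrative; a real policy would predict this with a network).
target = np.linspace(0.0, 1.0, 16)
refine = lambda s: 0.5 * s + 0.5 * target[:len(s)]

seq, steps = dense_expand(0.0, target_len=16, refine=refine)
print(steps)  # 4 rounds for 16 actions, i.e. log2(16)
```

The point of the sketch is the iteration count: a 16-step action chunk is produced in 4 expansion rounds, whereas a next-token autoregressive policy would need 16 sequential forward passes.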
