real-stanford / umi-on-legs

UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers

Home Page: https://umi-on-legs.github.io/

Huy Ha$^{🐢,1,2}$, Yihuai Gao$^{🐢,1}$, Zipeng Fu$^1$, Jie Tan$^{3}$, Shuran Song$^{1,2}$

$^1$ Stanford University, $^2$ Columbia University, $^3$ Google DeepMind, $^🐢$ Equal Contribution

Project Page | arXiv | Video

UMI on Legs is a framework for combining real-world human demonstrations with simulation-trained whole-body controllers, providing a scalable approach to manipulation skills on robot dogs with arms.

The best part? You can plug-and-play your existing visuomotor policies onto a quadruped, making your manipulation policies mobile!


This repository includes source code for whole-body controller simulation training, whole-body controller real-world deployment, the iPhone odometry iOS application, the UMI real-world environment class, and the ARX5 SDK. We've published our code the same way we developed it - as separate submodules - in the hope that the community can easily pull out any component they find useful and plug it into their own system.
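Because the components live as separate submodules, a fresh checkout needs the submodules pulled in too. A minimal sketch using standard git commands - the repository URL here is inferred from the GitHub path above:

# Clone the repository together with its component submodules
# (WBC training, deployment, iPhone odometry app, UMI environment, ARX5 SDK).
git clone --recursive https://github.com/real-stanford/umi-on-legs.git

# Or, if you already cloned without --recursive:
cd umi-on-legs
git submodule update --init --recursive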

If you find this codebase useful, consider citing:

@inproceedings{ha2024umionlegs,
      title={{UMI} on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers}, 
      author={Huy Ha and Yihuai Gao and Zipeng Fu and Jie Tan and Shuran Song},
      year={2024},
}

If you have any questions, please contact Huy Ha at huyha [at] stanford [dot] edu or Yihuai Gao at yihuai [at] stanford [dot] edu.

Table of Contents

If you just want to start running some commands while skimming the paper, you should get started here, which downloads data and checkpoints and rolls out the WBC. The rest of the documentation is focused on setting up real-world deployment.
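A rough sketch of what that quickstart flow looks like; the script names and paths below are placeholders rather than the repository's actual entry points, so follow the linked getting-started page for the real commands:

# Hypothetical quickstart sketch - the helper script and entry point names
# below are placeholders; see the getting-started docs for the real commands.
bash scripts/download_checkpoints.sh          # fetch published data + WBC checkpoints (placeholder)
python scripts/rollout_wbc.py --checkpoint checkpoints/wbc.pt   # roll out the WBC in simulation (placeholder)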

Code Acknowledgements

Whole-body Controller Simulation Training:

  • Like many other RL-for-control works nowadays, we started from Nikita Rudin's PPO implementation and Gym environment wrapper around IsaacGym, legged_gym. Shout out to Nikita for publishing such a hackable codebase - it's truly an amazing contribution to our community.
  • Although not used in the paper's final results, our codebase includes a modified Perlin noise terrain from DeepWBC. To use it, run training with env.cfg.terrain.mode=perlin (see the sketch after this list).
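As a concrete example of that override, here is a sketch of the training invocation; train.py stands in for the repository's actual training entry point (an assumption), while the env.cfg.terrain.mode=perlin override is the one documented above:

# Train the whole-body controller on the modified Perlin noise terrain.
# "train.py" is a placeholder for the repo's actual training script;
# env.cfg.terrain.mode=perlin is the documented override.
python train.py env.cfg.terrain.mode=perlin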

Whole-body Controller Deployment:

  • Thanks to Qi Wu for providing us with an initial deployment script for the whole-body controller!

iPhone Odometry Application:

  • Thanks to Zhenjia Xu for providing us with some starter code for ARKit camera pose publishing!

UMI Environment Class:

  • Our UMI deployment codebase heavily builds upon the original UMI codebase. Big thanks to the UMI team!

OptiTrack Motion Capture Setup:

  • Thanks to Jingyun Yang and Zi-ang Cao for providing the OptiTrack motion capture code and helping us to set it up!

About

UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers

https://umi-on-legs.github.io/

License: MIT License


Languages

  • Python 97.4%
  • Dockerfile 2.2%
  • CMake 0.4%