
Benchmarking Structured Policies and Policy Optimization for Real-World Dexterous Object Manipulation

This repository contains the code for the publication "Benchmarking Structured Policies and Policy Optimization for Real-World Dexterous Object Manipulation".

arXiv | project website

Quickstart

Running the code in simulation

To run the code locally, first install Singularity and download the Phase 2 singularity image from the Real Robot Challenge. A few extra dependencies are required to run our code; to build the required singularity image on top of the Phase 2 image, run:

singularity build --fakeroot image.sif image.def

Use the run_locally.sh script to build the catkin workspace and run commands inside the singularity image. For example, to run the Motion Planning with Planned Grasp (MP-PG) approach on a random goal of difficulty 4, use the following command:

./run_locally.sh /path/to/singularity/image.sif rosrun rrc run_local_episode.py 4 mp-pg

Use scripts/run_local_episode.py to visualize all of our implemented approaches. You can run our code with and without residual models and BO-optimized parameters; see scripts/run_local_episode.py for the available arguments.
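
For orientation, the runner essentially maps a method name like mp-pg to a policy implementation and rolls it out in simulation. Below is a minimal, self-contained sketch of that dispatch pattern; all class names and the 9-dimensional action are illustrative stand-ins, not the repository's actual interfaces.

```python
# Illustrative sketch of mapping a method name to a policy, in the spirit
# of scripts/run_local_episode.py. The class names and POLICIES mapping
# are hypothetical stand-ins -- consult the script for the real ones.
import numpy as np

class MotionPlanningPolicy:
    """Stand-in for the MP-PG (Motion Planning with Planned Grasp) approach."""
    def predict(self, observation):
        return np.zeros(9)  # TriFinger platform: 3 fingers x 3 joints

class CartesianImpedancePolicy:
    """Stand-in for a CIC-style controller."""
    def predict(self, observation):
        return np.zeros(9)

POLICIES = {"mp-pg": MotionPlanningPolicy, "cic": CartesianImpedancePolicy}

policy = POLICIES["mp-pg"]()
print(policy.predict(observation=None))
```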

Running the code on the robot cluster

As in simulation, scripts/run_episode.py can be used to run our methods on the real platform. Edit the run script to set the correct arguments.

For detailed instructions on how to run this code on the robot cluster, see this page.

The log_manager directory contains code for automatically submitting jobs and analyzing logs.
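
As a rough illustration of the log-analysis side, the sketch below aggregates per-episode rewards from a JSON-lines log. The schema and values here are hypothetical stand-ins, not the platform's actual log format.

```python
# Hypothetical per-episode log aggregation; the real logs handled by
# log_manager use a different format -- this only shows the idea.
import json
import statistics

log_lines = [
    '{"job_id": 1, "difficulty": 4, "reward": -4210.5}',
    '{"job_id": 2, "difficulty": 4, "reward": -3987.2}',
    '{"job_id": 3, "difficulty": 4, "reward": -4450.0}',
]

rewards = [json.loads(line)["reward"] for line in log_lines]
print(f"episodes: {len(rewards)}")
print(f"mean reward: {statistics.mean(rewards):.1f}")
print(f"stdev: {statistics.stdev(rewards):.1f}")
```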

Optimizing hyperparameters using BO

The functionality to run BO for optimizing the hyperparameters is contained in python/cic/bayesian_opt. A README file in that directory details how to run the experiments.
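
To give a flavor of such an experiment, the sketch below runs Gaussian-process BO with scikit-optimize over two hypothetical controller gains. The objective is a synthetic stand-in for rolling out an episode and negating its score; none of the parameter names or bounds come from the repository.

```python
# Minimal BO sketch with scikit-optimize (not the repository's actual
# setup in python/cic/bayesian_opt). The objective below is a stand-in
# for "run one episode with these gains, return the negated score".
from skopt import gp_minimize
from skopt.space import Real

search_space = [
    Real(10.0, 500.0, name="position_gain"),  # hypothetical controller gain
    Real(0.1, 10.0, name="damping"),          # hypothetical damping term
]

def objective(params):
    position_gain, damping = params
    # Placeholder: in the real setup this would roll out an episode on the
    # simulated platform and return the negative cumulative reward.
    return (position_gain - 120.0) ** 2 * 1e-4 + (damping - 2.0) ** 2

result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
print("best parameters:", result.x, "best objective:", result.fun)
```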

Improving controllers with Residual Policy Learning

We use a residual version of Soft Actor-Critic to train residual controllers using this Deep RL library. To train a residual policy for the MP-PG controller, use the following command:

./training_scripts/run_script_local.sh /path/to/singularity/image ./training_scripts/configs/mp_pg.yaml

Similar configs for the other controllers exist in the training_scripts/configs folder.
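
The idea underlying all of these configurations is the same: the learned policy outputs a correction that is added to the base controller's action before execution. A self-contained sketch of that composition follows; the action bounds and stand-in callables are made up for illustration and do not reflect the repository's actual wrapper.

```python
import numpy as np

class ResidualController:
    """Sketch: combine a base controller with a learned residual policy.
    Both callables map observations to 9-dim joint actions (TriFinger)."""

    def __init__(self, base_controller, residual_policy, low, high):
        self.base = base_controller
        self.residual = residual_policy
        self.low, self.high = low, high  # action bounds for clipping

    def predict(self, obs):
        # The residual corrects the base action; clip to stay in bounds.
        action = self.base(obs) + self.residual(obs)
        return np.clip(action, self.low, self.high)

# Stand-ins: the base would be e.g. MP-PG, and the residual would be the
# SAC policy trained by run_script_local.sh. Bounds here are illustrative.
base = lambda obs: np.zeros(9)
residual = lambda obs: 0.1 * np.random.randn(9)
ctrl = ResidualController(base, residual, low=-0.36, high=0.36)
print(ctrl.predict(obs=np.zeros(39)))
```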

To record or evaluate a trained model, call the visualize.sh or evaluate.sh script, respectively:

./training_scripts/visualize.sh /path/to/singularity/image /path/to/log/directory -n num_episodes -t ckpt

./training_scripts/evaluate.sh /path/to/singularity/image /path/to/log/directory -n num_episodes -t ckpt

Contributions

The code is the joint work of Niklas Funk, Charles Schaff, Rishabh Madan, and Takuma Yoneda.

The repository is structured as a catkin package and builds on the example package provided by the Real Robot Challenge and this planning library.

The paper and this repository combine the code and algorithms from three teams that participated in the Real Robot Challenge.

About

License: BSD 3-Clause "New" or "Revised" License

