ivalab / affordanceNet_Novel

An implementation of our RA-L work 'Toward Affordance Detection and Ranking on Novel Objects for Real-world Robotic Manipulation'

AffordanceNet_Novel

This is the implementation of our RA-L work 'Toward Affordance Detection and Ranking on Novel Objects for Real-world Robotic Manipulation'. This paper presents a framework to detect and rank affordances of novel objects to assist with robotic manipulation tasks. The original arXiv paper can be found here.

If you find it helpful for your research, please consider citing:

@article{chu2019toward,
  title = {Toward Affordance Detection and Ranking on Novel Objects for Real-world Robotic Manipulation},
  author = {Chu, Fu-Jen and Xu, Ruinian and Seguin, Landan and Vela, Patricio A},
  journal = {IEEE Robotics and Automation Letters},
  year = {2019},
  volume = {4},
  number = {4},
  pages = {4070--4077},
  doi = {10.1109/LRA.2019.2930364},
  month = {Oct}
}

Requirements

  1. Caffe:

  2. Specifications:

    • CuDNN-5.1.10
    • CUDA-8.0
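
Building the bundled caffe-affordance-net should follow the standard Caffe workflow. A minimal sketch, assuming your Makefile.config is already set up for CUDA 8.0 and cuDNN 5.1 (not verified against this repo's exact build options):

cd $AffordanceNet_Novel_ROOT/caffe-affordance-net
make all -j8
make pycaffe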

Demo

  1. Clone the AffordanceNet_Novel repository into your $AffordanceNet_Novel_ROOT folder
git clone https://github.com/ivalab/affordanceNet_Novel.git
cd affordanceNet_Novel
  2. Export pycaffe path
export PYTHONPATH=$AffordanceNet_Novel_ROOT/caffe-affordance-net/python:$PYTHONPATH
  3. Build Cython modules
cd $AffordanceNet_Novel_ROOT/lib
make clean
make
cd ..
  4. Download pretrained models

    • trained model for DEMO on dropbox
    • put under ./pretrained/
  5. Demo

cd $AffordanceNet_Novel_ROOT/tools
python demo_img_kldivergence.py

You can set RANK to 1, 2, or 3.
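
RANK selects how many of the top-scoring affordance hypotheses to keep per detection. A minimal sketch of that top-k selection over per-affordance scores (hypothetical function and values; the repo's actual logic lives in demo_img_kldivergence.py):

import numpy as np

def top_ranked_affordances(scores, rank=1):
    """Return indices of the `rank` highest-scoring affordance hypotheses."""
    order = np.argsort(scores)[::-1]  # indices sorted by score, high to low
    return order[:rank]

# e.g. scores over the UMD affordances {grasp, cut, scoop, contain, pound, support, wrap-grasp}
scores = np.array([0.10, 0.62, 0.05, 0.48, 0.01, 0.30, 0.21])
print(top_ranked_affordances(scores, rank=2))  # -> [1 3]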

Training

  1. We train AffordanceNet_Novel on the UMD dataset

    • You will need synthetic data and real data in Pascal VOC dataset format.
    • For your convenience, we prepared them for you (see the layout sketch after this list):
      • Download this file on dropbox and extract it into your $AffordanceNet_Novel_ROOT/data folder.
      • Download this Annotations folder, whose xml files contain objectness instead of all object classes, to replace $AffordanceNet_Novel_ROOT/data/VOCdevkit2012/VOC2012/Annotations.
      • Download this file on dropbox and extract it into your $AffordanceNet_Novel_ROOT/data/cache folder.
      • Make sure you use the category split on dropbox; extract it into your $AffordanceNet_Novel_ROOT/data/VOCdevkit2012/VOC2012/ImageSets/Main folder.
    • You will need the VGG-16 weights pretrained on ImageNet. For your convenience, please find them here.
    • Put the weights into $AffordanceNet_Novel_ROOT/imagenet_models.
    • If you want the novel instance split, please find it here.
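
Once everything is extracted, the relevant tree should look roughly like this (a sketch assembled from the paths above, assuming the standard Pascal VOC layout):

$AffordanceNet_Novel_ROOT/
├── data/
│   ├── cache/
│   └── VOCdevkit2012/
│       └── VOC2012/
│           ├── Annotations/        # objectness xml files
│           └── ImageSets/Main/     # category split
├── imagenet_models/                # VGG-16 ImageNet weights
└── pretrained/                     # demo model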
  2. Train AffordanceNet_Novel:

cd $AffordanceNet_Novel_ROOT
./experiments/scripts/faster_rcnn_end2end.sh 0 VGG16 pascal_voc

Here 0 is the GPU id, VGG16 is the backbone network, and pascal_voc selects the dataset configuration.

Physical Manipulation with PDDL

1.1. Install Fast-Downward for PDDL.

1.2. Install ROS.

1.3. Install Freenect.

1.4. Compile ivaHandy in your ROS workspace handy_ws for our Handy manipulator.

1.5. Compile handy_experiment in your ROS workspace handy_ws for the experiment codebase.

2.1. Run Handy (our robot; you may check our codebase and adjust it to match yours):

cd handy_ws
roslaunch handy_experiment pickplace_pddl.launch

2.2. Run the camera:

roslaunch freenect_launch freenect.launch depth_registration:=true

2.3. Run PDDL:

cd $AffordanceNet_ROOT/scripts
python kinect_pddl_UMD_firstAffordance_objectness_nonprimary.py

Note you might need to:

(1) modify camera parameters:

# Kinect intrinsics (pinhole model): focal lengths and principal point, in pixels
KINECT_FX = 494.042
KINECT_FY = 490.682
KINECT_CX = 330.273
KINECT_CY = 247.443
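
These intrinsics are what turn a detected pixel and its depth reading into a 3D point in the camera frame. A minimal sketch of the standard pinhole back-projection (the function name is illustrative, not from the repo):

def pixel_to_3d(u, v, depth):
    """Back-project pixel (u, v) with depth in meters into the camera frame."""
    x = (u - KINECT_CX) * depth / KINECT_FX
    y = (v - KINECT_CY) * depth / KINECT_FY
    return x, y, depth

# e.g. a pixel near the principal point at 1 m depth maps to roughly (0, 0, 1)
print(pixel_to_3d(330, 247, 1.0))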

(2) modify the relative translation from the ArUco tag to the robot base:

# hard-coded translation (in meters) from the tag frame to the robot base frame
obj_pose_3D.position.x = round(coords_3D[0], 2) + 0.20
obj_pose_3D.position.y = round(coords_3D[1], 2) + 0.30
obj_pose_3D.position.z = round(coords_3D[2], 2) - 0.13

(3) modify a good range for your object scale:

# keep a detection only if its cropped RGB patch is larger than 100x100 pixels
(arr_rgb.shape[0] > 100 and arr_rgb.shape[1] > 100)

(4) modify the args.sim path for debug mode

License

MIT License

Acknowledgment

This repo borrows tons of code from

Contact

If you have any questions, please contact me at fujenchu[at]gatech[dot]edu

Modifications

  1. Annotations contains xml files with objectness instead of all object classes (and corresponding model descriptions for two classes)
  2. Modify proposal_target_layer.py
  3. To modify the number of affordances, update: (1) prototxt: the "mask_score" layer; (2) config: __C.TRAIN.CLASS_NUM = 13; (3) proposal_target_layer: label_colors; (4) proposal_target_layer: label2dist
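
For example, the class count sits in the training config, and the per-class tables in proposal_target_layer.py must stay in sync with it. A sketch of the idea (the config path is the standard py-faster-rcnn location and is an assumption; the table values are illustrative):

# lib/fast_rcnn/config.py (assumed location): total affordance classes incl. background
__C.TRAIN.CLASS_NUM = 13

# proposal_target_layer.py: tables like label_colors need one entry per class
label_colors = [(0, 0, 0), (255, 0, 0), (0, 255, 0)]  # ..., extend to CLASS_NUM entries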
