andyzeng / arc-robot-vision

MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 - Robotic Grasping and One-shot Recognition of Novel Objects with Deep Learning.

Home Page: http://arc.cs.princeton.edu

Non-augmented vs augmented parallel grasping labels are inverted

thomascent opened this issue

Hello,
I was just reading through processLabels.m (specifically lines 105-108) and I noticed that in the non-augmented labels, good grasps are set to 128 whereas in the augmented labels they're set to 0.

Is that an error by any chance? It seems like the augmented labels are the ones used for training, meaning that positive grasp affordance would be encoded in the first channel of the network's output, but in visualize.m it's read from the second channel.
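For example, a quick sanity check along these lines shows the mismatch (filenames are placeholders, not the repo's actual paths; I'm just comparing where each encoding puts 128 vs 0):

```matlab
% Placeholder filenames -- substitute a non-augmented label and its
% augmented counterpart from the dataset.
rawLabel = imread('label-000000.raw.png');   % non-augmented label (placeholder name)
augLabel = imread('label-000000.aug.png');   % augmented label (placeholder name)

% In the non-augmented label good grasps show up as 128; in the
% augmented label the same regions show up as 0.
fprintf('non-augmented: %d pixels at 128, %d at 0\n', ...
    nnz(rawLabel == 128), nnz(rawLabel == 0));
fprintf('augmented:     %d pixels at 128, %d at 0\n', ...
    nnz(augLabel == 128), nnz(augLabel == 0));
```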

You're right, the augmented labels should be flipped. Hotfixed with the latest commits. Thanks for the catch!

No problem! The labels in the preprocessed dataset (this one: http://vision.princeton.edu/projects/2017/arc/downloads/training-parallel-jaw-grasping-dataset.zip) are also affected, so you might want to re-process them.
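If re-running processLabels.m on the raw data isn't convenient, swapping the two values in the already-downloaded augmented labels might do the job as well. A rough sketch, assuming the augmented labels are 8-bit grayscale PNGs in a `labels` folder (placeholder path) and that the only fix needed is exchanging 0 and 128 so the encoding matches the non-augmented convention (good grasps = 128):

```matlab
% Not the repo's code -- a hypothetical in-place fix for the
% already-downloaded augmented labels.
labelDir = 'labels';                        % placeholder path
labelFiles = dir(fullfile(labelDir, '*.png'));
for i = 1:numel(labelFiles)
    f = fullfile(labelDir, labelFiles(i).name);
    lbl = imread(f);
    fixed = lbl;
    fixed(lbl == 0)   = 128;   % good grasps back to 128
    fixed(lbl == 128) = 0;     % bad grasps back to 0
    imwrite(fixed, f);         % overwrite in place
end
```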