BerkeleyAutomation / gqcnn

Python module for GQ-CNN training and deployment with ROS integration.

Home Page: https://berkeleyautomation.github.io/gqcnn

Issue: Bug/Performance Issue [Custom Images]

runeg96 opened this issue

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Ubuntu 18 with ROS installed

  • Python version:
    Python 2.7

  • Installed using pip or ROS:
    Installed with ROS

  • Camera:
    Simulated camera in Webots. We tried to model it as closely as possible on the Intel RealSense D435.

  • GPU model (if applicable):

Describe what you are trying to do
My group and I are building a dual-arm system from two UR5s in a simulated Webots world. We are trying to use the simulated camera images for grasp planning with a parallel-jaw gripper, using the Dex-Net 4.0 model.

Describe current behavior
We have noticed that grasp planning consistently returns a poor grasp with a low Q-value whenever the object we are trying to grasp is symmetric and convex, such as a square box or a sphere. When we try to grasp a non-symmetric object (hammer, doughnut), it finds a good grasp with a high Q-value.
[Image: Bad_grasp]
[Image: hammer]

The highest Q-value we have seen was about 0.75, so maybe we are doing something wrong?

Describe the expected behavior
For symmetric and convex objects, I would expect the planner to grasp the object in the middle instead of near the edges or corners.

Describe the input images
Size of image: 1280x720
We have an RGB, a depth, and a segmask image, all of the same size.
The RGB and depth images are captured with ROS from topics of type sensor_msgs/Image. We then convert them to OpenCV images using CvBridge to get them into the format the Autolab perception package needs (a sketch of this conversion follows the screenshots below).
[Image: Github_rgb]
[Image: Github_depth]
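Roughly, the conversion step looks like the sketch below. The topic names, the camera frame name, and the depth encoding (32-bit float, meters) are assumptions about our setup; ColorImage and DepthImage are the wrapper types from the Autolab perception package.

```python
# Hedged sketch of converting the ROS messages into the image types the
# Autolab perception package expects. Topic names, frame name, and depth
# encoding are assumptions about our simulated camera.
import rospy
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from perception import ColorImage, DepthImage

bridge = CvBridge()

# Grab one RGB and one depth frame from the simulated camera topics.
color_msg = rospy.wait_for_message("/camera/color/image_raw", Image)
depth_msg = rospy.wait_for_message("/camera/depth/image_raw", Image)

# Convert to OpenCV/numpy arrays.
color_cv = bridge.imgmsg_to_cv2(color_msg, desired_encoding="rgb8")
depth_cv = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="32FC1")

# Wrap in perception image types; the frame must match the CameraIntrinsics frame.
camera_frame = "camera"  # hypothetical frame name
color_im = ColorImage(color_cv.astype(np.uint8), frame=camera_frame)
depth_im = DepthImage(depth_cv.astype(np.float32), frame=camera_frame)
```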

From there we build our own segmask image from the RGB image: we find the contour of the object of interest and set the object region to white (255) and the rest of the image to black (0).
[Image: Github_segmask]
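In essence, the segmask construction is a sketch like the one below (continuing from the conversion sketch above, so color_cv and camera_frame come from there). The Otsu threshold and the choice of the largest external contour are assumptions about how we segment; BinaryImage is the perception package's mask type.

```python
# Hedged sketch of building the segmask from the RGB image: threshold,
# find the outer contour of the object, and fill it white on a black image.
import cv2
import numpy as np
from perception import BinaryImage

# Threshold the grayscale image (assumes a bright table and a darker object).
gray = cv2.cvtColor(color_cv, cv2.COLOR_RGB2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# findContours returns (image, contours, hierarchy) in OpenCV 3 and
# (contours, hierarchy) in OpenCV 4; [-2] picks the contour list in both.
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
largest = max(contours, key=cv2.contourArea)

# Fill the object region white (255) on a black (0) background.
mask = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)

segmask = BinaryImage(mask, frame=camera_frame)
```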

We then feed all three images to the grasp planner as in the ros_policy example (a hedged sketch of the service call is below).
The final resolution is 1280x720.
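Our call looks roughly like the sketch below, continuing from the sketches above (color_im, depth_im, segmask). The intrinsics values are placeholders for our simulated D435-like camera at 1280x720, and the service name and segmask-variant of the planner follow what we understood from the gqcnn ROS example; both are assumptions about our exact setup.

```python
# Hedged sketch of the grasp planner service call (names and intrinsics are
# assumptions; adjust to the actual gqcnn ROS node configuration).
import rospy
from perception import CameraIntrinsics
from gqcnn.srv import GQCNNGraspPlannerSegmask

# Intrinsics of the simulated camera at 1280x720 (placeholder values).
camera_intr = CameraIntrinsics(frame=camera_frame,
                               fx=925.0, fy=925.0,
                               cx=640.0, cy=360.0,
                               height=720, width=1280)

rospy.wait_for_service("gqcnn/grasp_planner_segmask")
plan_grasp = rospy.ServiceProxy("gqcnn/grasp_planner_segmask",
                                GQCNNGraspPlannerSegmask)

# The perception image types expose .rosmsg for sending over ROS.
resp = plan_grasp(color_im.rosmsg, depth_im.rosmsg,
                  camera_intr.rosmsg, segmask.rosmsg)
grasp = resp.grasp
print("Q-value: %.3f" % grasp.q_value)
```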

Describe the physical camera setup
In our setup we have two cameras. The first is mounted on a pole next to the workspace, about 60 cm above it at a 30-degree angle. The second is mounted on the wrist of the robot, so we can move it to whatever height above the table the arm can reach; most of the time we positioned the arm about 60 to 70 cm above the workspace with the camera pointing straight down.
[Image: setup]

Other info / logs
One other thing: I went into the config folder and changed the gripper width to the width of our gripper (a sketch of the relevant setting is below).
Note that all the images in this issue are from the camera on the pole next to the workspace. We get similar results when we use the wrist camera looking straight down on the workspace.
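For reference, the change amounts to something like the sketch below. The config path and the policy/gripper_width key layout follow our reading of the gqcnn example configs and are assumptions about the exact file we edited; the width value is just an example.

```python
# Hedged sketch of the config change: bumping the policy's gripper width in
# the YAML file that the grasp planner loads (path and key names assumed).
import yaml

cfg_path = "cfg/examples/gqcnn_pj.yaml"  # hypothetical: the config we edited
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["policy"]["gripper_width"] = 0.085  # our gripper's maximum opening, in meters

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```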

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.