Shreeyak / cleargrasp

Official repository for the paper "ClearGrasp: 3D Shape Estimation of Transparent Objects for Manipulation"

Home Page: https://arxiv.org/abs/1910.02550

How to set up parameters if using different datasets?

AdevLog opened this issue

For example:
If I use Matterport3D's undistorted_color.jpg and undistorted_depth.png,
I have to change xres, yres, fx, fy, cx, and cy.
Did I miss anything else?
I ask because the output depth I get is only in red and black, like this: Live Demo Picture
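For reference, the intrinsics the question mentions could be collected in a structure like the one below. The key names (xres, yres, fx, fy, cx, cy) come from the question itself; the numeric values are placeholders and must be replaced with the real values from Matterport3D's own per-camera parameter files:

```python
# Hypothetical intrinsics override for Matterport3D-style input.
# Key names are the parameters mentioned in the question above; the
# numbers are PLACEHOLDERS -- read the real values from the dataset's
# undistorted camera parameter files.
camera_intrinsics = {
    "xres": 1280,   # image width in pixels (placeholder)
    "yres": 1024,   # image height in pixels (placeholder)
    "fx": 1070.0,   # focal length along x, in pixels (placeholder)
    "fy": 1070.0,   # focal length along y, in pixels (placeholder)
    "cx": 640.0,    # principal point x (placeholder)
    "cy": 512.0,    # principal point y (placeholder)
}
```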

That's a very unusual output. I believe the first problem is that the surface normals aren't being estimated correctly: you can see the prediction is a uniform blue-purple color. That's because the normals model was trained on a different kind of dataset, and the same applies to the segmentation and edge detection models. As a result, the pipeline cuts out odd holes and fills them in with whatever is consistent with the (incorrect) surface normals.

Unless the segmentation, edge detection, and surface normals models are producing reasonable outputs, the optimization step will not produce reasonable depth.

The other parameters are fine. xres, fx, and cx are simply used for rectifying the images and for projecting depth to a point cloud when needed.
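To illustrate why those intrinsics are all that matters here, the standard pinhole-camera back-projection from a depth map to a point cloud uses exactly fx, fy, cx, and cy. This is a generic sketch of that operation, not the repository's exact implementation:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an Nx3 point cloud.

    Generic pinhole back-projection sketch: for each pixel (u, v) with
    depth z, the 3D point is ((u - cx) * z / fx, (v - cy) * z / fy, z).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth
```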

Hi @Shreeyak, I wanted to pass images of a different resolution (640 x 480) to the network, and it gives similar red-and-black output depth images. Does this mean the models need to be retrained if we want to use our own setup for depth completion?

@eesung00 You have to convert the pixel values to depth in meters. For example, a 16-bit PNG (range 0~65535) needs to be divided by 4000 to convert the image values to meters. Also set your max depth to an appropriate value.
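The conversion described above can be sketched as follows. The divisor of 4000 is the dataset-specific scale mentioned in the comment; the max-depth cutoff of 3.0 m is a placeholder that should match whatever your own configuration expects:

```python
import numpy as np

# Dataset-specific scale: pixel value / DEPTH_SCALE = depth in meters
# (4000 here, per the comment above; other datasets use other scales).
DEPTH_SCALE = 4000.0
MAX_DEPTH = 3.0  # meters; placeholder, set to your pipeline's max depth

def png_depth_to_meters(depth_png):
    """Convert a raw 16-bit depth image (uint16 array) to float meters."""
    depth_m = depth_png.astype(np.float32) / DEPTH_SCALE
    depth_m[depth_m > MAX_DEPTH] = 0.0  # treat out-of-range values as missing
    return depth_m
```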