dingdingcai / OVE6D-pose


A problem with pose visualization in point cloud format

zhhiyuan opened this issue · comments

Hey, thank you for your excellent work.
When I applied your method to a custom dataset, I wanted to display the predicted pose (transform) as a point cloud to inspect the alignment between the source and target. However, when I simply transformed with the predicted pose using Open3D, the results were not good.

[image]

As you can see from the picture, the clouds do not align. This problem has been bothering me for a long time. It would be very helpful if you could share any code or suggestions for point cloud visualization based on the pose predicted by your method. Thank you very much!


Please ensure that the two point clouds are expressed in the same coordinate system.
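For example, a minimal Open3D sketch of such a check (the file names and the pose variable are placeholders; it assumes the predicted 4x4 pose maps points from the object/CAD frame into the camera frame, and that both clouds use the same units):

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: a CAD-model cloud, an observed scene cloud, and a
# predicted 4x4 homogeneous pose (object/CAD frame -> camera frame).
source = o3d.io.read_point_cloud("model.ply")   # hypothetical path
target = o3d.io.read_point_cloud("scene.ply")   # hypothetical path
T_pred = np.load("pose.npy")                    # hypothetical path, shape (4, 4)

# Move the source into the camera frame of the target; transform() is in place.
source.transform(T_pred)

source.paint_uniform_color([1.0, 0.0, 0.0])  # red = model under predicted pose
target.paint_uniform_color([0.0, 1.0, 0.0])  # green = observation
o3d.visualization.draw_geometries([source, target])
```

If the two clouds land far apart or at very different scales, the pose is being applied in the wrong frame or the units are inconsistent.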

Thanks for your advice!
Since I'm new to this field, I was wondering: in the camera coordinate system (which may also equal the target coordinate system), what should the initial pose of the source be in your project? I am guessing it may be torch.eye(4, dtype=torch.float32)?


No, I don't think the initial pose is defined by torch.eye(4). Please familiarize yourself with these coordinate system transformations.

You can consider the object pose as the 3D rotation and the 3D translation from the object CAD model coordinate system to the camera coordinate system. In this project, the object viewpoint (out-of-plane rotation) is retrieved from the template library generated using the object CAD model. All subsequent pose calculations are based on the retrieved pose.
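In code, that convention looks roughly like this (a minimal sketch of the transform, not code from the project):

```python
import numpy as np

def model_to_camera(points_model: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map (N, 3) CAD-model points into the camera frame: X_cam = R @ X_model + t."""
    return points_model @ R.T + t
```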

Thanks for your patience, but I still can't get good results on my custom data when visualizing as point clouds.
I will have to consider other methods.

@zhhiyuan I recently finished my undergraduate thesis project using OVE6D here. I just tested on a fresh mamba env and it seems to run just fine.

Feel free to try it with your 3D model of interest there, and if that works, backtrack to what the actual issue is.


Thanks for sharing your project!
I tried my 3D .ply model in your project. After using OVE6D to predict the pose and rendering the resulting R and T onto the target with the render_mesh() and render_cloud() functions, the rendered points never fall inside the image, because the code here checks the image boundary. I also wonder why you first flip the Y and Z coordinates during the render process.
So here is my issue. I have tried the example project provided by the author on my custom dataset. The predicted results look fine when visualized in 2D:

[image]

Then I wanted to check the result in 3D. Here is what I did:
I used the target depth image and RGB image to get the target point cloud, and transformed the source point cloud with the raw rotation and translation.
The result was unexpected. The author said this may be due to inconsistent coordinate systems, but I am new to 6D pose estimation and don't know how to obtain the transformation matrix to convert between the coordinate systems. Have I missed some necessary steps in the process?
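For reference, a minimal sketch of such a back-projection step (assuming a pinhole camera with intrinsics fx, fy, cx, cy; the depth units must match those of the transformed source cloud):

```python
import numpy as np

def depth_to_cloud(depth: np.ndarray, fx: float, fy: float,
                   cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map to (N, 3) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth
```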

@zhhiyuan The fact that the rendered points are never included in the image hints to me that the scale of your CAD model differs from mine. My CAD models were in meters. If your CAD models are in, e.g., millimeters, the projection will always fall outside the image boundaries. My solution of simply not rendering such frames is a bit lazy, I admit.

Maybe you can load the obj000001.ply from the demo_dataset and your own CAD model into, for example, Blender and see what scales they are in. If they are different, you can rescale your CAD model or change the MODEL_SCALING parameter in configs/config.py.
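Alternatively, a quick way to inspect and rescale the model with Open3D (a sketch; the paths are placeholders):

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("my_model.ply")  # hypothetical path
# Bounding-box extents around ~0.1 suggest meters; ~100 suggest millimeters.
print(mesh.get_axis_aligned_bounding_box())

# If the model turns out to be in millimeters, rescale it to meters:
mesh.scale(1.0 / 1000.0, center=np.zeros(3))
o3d.io.write_triangle_mesh("my_model_m.ply", mesh)
```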

@EternalGoldenBraid
Thanks for the reminder. I double-checked the code, and MODEL_SCALING had already been set for millimeters (1.0/1000) before this error occurred. I will also rescale the CAD model and try your code again to make sure.

Problem solved!
The source code scales the object by the MODEL_SCALING parameter, so the transform is computed in meters, but my point cloud generation ignored this parameter and worked in millimeters. Once I discovered this mismatch and corrected it, everything lined up.
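For anyone hitting the same issue, the essence of the fix (a sketch; the file name is a placeholder, MODEL_SCALING as in configs/config.py):

```python
import numpy as np

MODEL_SCALING = 1.0 / 1000.0  # millimeters -> meters, matching the config

depth_mm = np.load("depth.npy")     # hypothetical: depth map stored in millimeters
depth_m = depth_mm * MODEL_SCALING  # back-project in meters, so the target cloud
                                    # matches the scaled CAD model and the
                                    # predicted translation (also in meters)
```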

@zhhiyuan Amazing, glad to hear!