NVlabs / RVT

Official Code for RVT-2 and RVT

Home Page: https://robotic-view-transformer-2.github.io/


Question about camera calibration in the real world

FinnJob opened this issue

Thanks for your excellent work!
I currently have two RealSense cameras, and I want to use these two cameras to generate a point cloud for use in RVT. I already know how to calibrate the intrinsic parameters of each camera and perform hand-eye calibration for a single camera with the robotic arm. How should I calibrate these two cameras to ensure that the point clouds from both cameras are aligned? Additionally, how can I determine the orientation of the point cloud with respect to the front of the robotic arm?
Thank you!

Hi @FinnJob,

Thanks for your kind words.

In our real-world experiments we use just one camera. We calibrate the robot-camera extrinsics and transform the perceived point clouds into the robot base frame. In the robot base frame, the front of the robotic arm is well defined, so the orientation of the point cloud relative to the arm follows directly.
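
In case it helps, here is a minimal sketch of that transform step, assuming you already have a 4x4 camera-to-base extrinsic `T_base_cam` from calibration (the names are illustrative, not from the RVT code):

```python
import numpy as np

def camera_to_base(points_cam: np.ndarray, T_base_cam: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from the camera frame to the robot base frame.

    points_cam: (N, 3) points measured in the camera frame (meters).
    T_base_cam: (4, 4) homogeneous extrinsic mapping camera coordinates to base coordinates.
    """
    # Append a homogeneous coordinate, apply the extrinsic, and drop it again.
    ones = np.ones((points_cam.shape[0], 1))
    points_h = np.hstack([points_cam, ones])      # (N, 4)
    points_base_h = points_h @ T_base_cam.T       # (N, 4)
    return points_base_h[:, :3]
```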

For two cameras, you could potentially calibrate each of them with respect to the robot base frame; once both extrinsics are expressed in that common frame, the two point clouds will be aligned.
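
As an illustration of that idea (again a sketch, not code from this repo): transform each cloud into the base frame, concatenate them, and optionally sanity-check the residual misalignment with a small ICP refinement, e.g. using Open3D. If the two calibrations are accurate, the ICP result should be close to identity.

```python
import numpy as np
import open3d as o3d

def fuse_two_cameras(pts_cam1, pts_cam2, T_base_cam1, T_base_cam2):
    """Merge two camera point clouds in the robot base frame.

    pts_cam1, pts_cam2: (N, 3) arrays in each camera's own frame.
    T_base_cam1, T_base_cam2: (4, 4) extrinsics from per-camera hand-eye
        calibration, both expressed with respect to the same robot base frame.
    """
    pcd1 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts_cam1))
    pcd2 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts_cam2))
    pcd1.transform(T_base_cam1)   # camera 1 -> base frame
    pcd2.transform(T_base_cam2)   # camera 2 -> base frame

    # Optional sanity check / refinement: ICP with a 1 cm correspondence distance.
    icp = o3d.pipelines.registration.registration_icp(
        pcd2, pcd1, 0.01, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    pcd2.transform(icp.transformation)  # near-identity if calibration is good

    return pcd1 + pcd2  # merged cloud, oriented in the robot base frame
```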

For questions about the software we use for calibrating with respect to the robot base frame, @ychao-nvidia might be the best person.

Best,
Ankit

Thank you for your reply. It helps a lot!

May I ask which camera you used in your implementation? We are currently using a RealSense camera, but its depth accuracy degrades noticeably beyond about 50 cm. Do you have any suggestions?

We used an Azure Kinect camera. Every camera will have a range where the point cloud is reasonable. We found Azure Kinect to be good for our setup.
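
If you have to stay with the RealSense, one simple mitigation (not something RVT does for you) is to discard points outside the range where the camera's depth is trustworthy. A rough sketch, with placeholder thresholds you would tune for your workspace:

```python
import numpy as np

def clip_depth_range(points_cam: np.ndarray, near: float = 0.2, far: float = 0.5) -> np.ndarray:
    """Keep only points whose depth (camera z axis, meters) lies in a trusted range.

    The 0.2-0.5 m bounds are placeholders; set them to where your RealSense
    depth error is acceptable for your scene.
    """
    z = points_cam[:, 2]
    mask = (z > near) & (z < far)
    return points_cam[mask]
```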

OK. Thank you for your reply.

When running real-world deployment experiments with a UR3, we have encountered an issue where commanding the robotic arm to traverse a list of TCP poses sometimes yields unexpected IK solutions. This results in large joint changes that deviate from the motion seen in the simulation environment. Could you share how you addressed or mitigated this problem in your setup? We are looking for guidance on executing actions on the real robot as consistently and predictably as in simulation. Any advice or recommendations would be greatly appreciated. Thank you.

Unfortunately, I have not used a UR3, so I am unable to help here. For our experiments we used a Franka arm, and we used frankapy to move the robot to specific poses.
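
For reference, moving to a target pose with frankapy looks roughly like the following. This is a sketch assuming the library's public API (`FrankaArm.goto_pose` with an `autolab_core.RigidTransform`), with a placeholder offset, not an excerpt from our scripts:

```python
import numpy as np
from autolab_core import RigidTransform
from frankapy import FrankaArm

# Assumes the frankapy control-PC setup is already running.
fa = FrankaArm()
fa.reset_joints()

# Build a target TCP pose: current pose raised by 5 cm (placeholder motion).
current = fa.get_pose()
target = RigidTransform(rotation=current.rotation,
                        translation=current.translation + np.array([0.0, 0.0, 0.05]),
                        from_frame='franka_tool', to_frame='world')

# Command the arm to the target pose.
fa.goto_pose(target)
```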

Closing because of inactivity.