j96w / DexCap

[RSS 2024] "DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation" code repository


The usage of `calculate_offset_vis_calib.py`

COMoER opened this issue

Thanks for releasing the source code of your awesome work. I found the file calculate_offset_vis_calib.py in your project. It uses the data in test_data to refine the calibration position and orientation offsets. In my understanding, it is used to calibrate the extrinsics between the RealSense T265 camera and the data glove. However, I cannot find the file that generates test_data, so I am wondering about the usage of calculate_offset_vis_calib.py. I would appreciate it if you could help me with this.

Hi @COMoER, yes, you are correct. The calculate_offset_vis_calib.py script is used to correct minor offset errors in the calibration between the T265 and the glove. To generate the correction data (which will be saved in the test_data folder):

1. Run replay_human_traj_vis.py in --calib mode (see code) with --default pointing to the default_offset folder.
2. In this mode, use the keyboard commands (see guidance) to adjust the hand skeleton within the 3D point cloud observation of the scene. The goal is to align the hand skeleton with its correct 3D position in the scene.
3. Press the 0 key to save a data point. Typically, we save 3 to 4 data points for each video sequence.
4. Run calculate_offset_vis_calib.py to generate the calib_offset.txt file in the data folder, which is later processed into the final training data.

Feel free to let me know if you have more questions about these steps. We will update this information in the main README later. Thanks for bringing it up btw!
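
Concretely, the two-step workflow above might look like this from the command line (a sketch only: --calib and --default are the flags described above, but any other arguments, such as the path to the recorded sequence, depend on the scripts' actual argument parsing):

```sh
# Step 1: interactively align the hand skeleton with the point cloud,
# pressing `0` to save 3-4 correction data points into test_data.
python replay_human_traj_vis.py --calib --default default_offset

# Step 2: fit the position/orientation offsets from the saved points,
# writing calib_offset.txt into the data folder.
python calculate_offset_vis_calib.py
```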

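Conceptually, the saved offset is a small rigid correction applied to the glove poses expressed in the T265 frame. The sketch below illustrates the idea only; it is not the repository's code, and the 6-number offset layout, the Euler convention, and the function name are all assumptions:

```python
# Minimal sketch (NOT DexCap's actual implementation) of applying a saved
# position/orientation offset to a hand pose in the camera frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

def apply_offset(hand_pos, hand_rot, t_off, R_off):
    """Apply a translation + rotation correction to one hand pose.

    hand_pos: (3,) position, hand_rot: (3, 3) rotation matrix.
    """
    return hand_pos + t_off, R_off @ hand_rot

if __name__ == "__main__":
    # In practice the correction would be read from calib_offset.txt
    # (e.g. with np.loadtxt); dummy values keep this sketch standalone.
    t_off = np.array([0.01, -0.02, 0.005])  # translation in meters (assumed)
    R_off = R.from_euler("xyz", [1.0, 0.0, -0.5], degrees=True).as_matrix()

    pos, rot = np.zeros(3), np.eye(3)  # placeholder pose
    new_pos, new_rot = apply_offset(pos, rot, t_off, R_off)
    print(new_pos)
```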

Thank you for your reply!