NVlabs / RVT

Official Code for RVT-2 and RVT

Home Page: https://robotic-view-transformer-2.github.io/


Query Regarding Translation Loss Convergence in Real Robot Training

MrTooOldDriver opened this issue · comments

Hi. First of all, many thanks for the excellent paper and great codebase.
In the paper you mention that a single RVT model can solve tasks with just ~10 demos per task, so I have recently been collecting data (10 simple tasks with 10 demos each) on my setup (similar to yours: a single RGBD camera and a smaller robot arm). During training, I found that the rotation losses (x, y, z) converge well but the translation loss does not: it only drops to ~4, while the rotation evaluation loss converges to ~0.2. Given this, I would like to ask the following questions for clarity.

  1. When training on real robot data, did you start the training from scratch or was there any preliminary training involved using simulations?
  2. During your experiments with real robot data, did you observe a similar convergence pattern between translation and rotation?

Thank you in advance for your insights!

Hi,

Thanks for your kind words and interest in our work.

Q: When training on real robot data, did you start the training from scratch or was there any preliminary training involved using simulations?
A: We train on real robot data from scratch.

Q: During your experiments with real robot data, did you observe a similar convergence pattern between translation and rotation?
A: This is a great question! Yes, we observed a similar convergence pattern, so what you observe is typical. I would strongly recommend visualizing the predicted translation heatmap on top of the virtual images for some training as well as unseen samples. My guess is that you would observe sharp heatmap predictions. If so, it would suggest that the network has converged well and that the seemingly higher value of the translation loss (explained below) is not an issue.
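As a rough illustration of what such a check could look like (this is not the official RVT visualization code, and the array names `virtual_img` / `hm_logits` are placeholders for whatever you pull out of your model's forward pass), a minimal matplotlib overlay of a softmax-normalized heatmap on one virtual image might be:

```python
# Minimal sketch: overlay a predicted translation heatmap on a virtual image
# to inspect how sharp/peaked the prediction is.
# Assumes `virtual_img` has shape (H, W, 3) and `hm_logits` has shape (H, W).
import numpy as np
import matplotlib.pyplot as plt

def show_heatmap_overlay(virtual_img: np.ndarray, hm_logits: np.ndarray) -> None:
    # Softmax over all pixels so the heatmap is a proper distribution over the image.
    flat = hm_logits.reshape(-1).astype(np.float64)
    flat = np.exp(flat - flat.max())
    heatmap = (flat / flat.sum()).reshape(hm_logits.shape)

    plt.imshow(virtual_img)
    plt.imshow(heatmap, cmap="jet", alpha=0.5)  # semi-transparent overlay
    plt.axis("off")
    plt.title("Predicted translation heatmap (sharper = better converged)")
    plt.show()
```

A sharp, well-localized peak near the intended keypoint on both training and unseen samples is the behaviour you want to see, regardless of the absolute loss value.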

Unlike typical losses, which go to 0, the translation loss in RVT does not go to zero but hovers around 3.8 (probably less for better-trained networks) for an image resolution of ~220 and typical values of other parameters like gt_hm_sigma. This is because, in RVT, the translation loss is the sum of the heatmap prediction losses over the virtual images. The ground-truth heatmap is centred at a continuous 2D pixel location, since the 3D ground-truth keypoint projects to a continuous point on each virtual image; this is a design choice, as we could instead have mapped it to a discrete pixel location. Predicting the ground-truth heatmap exactly therefore requires localizing the keypoint with sub-pixel accuracy, and the network incurs a sizeable loss even when its prediction is off by less than a pixel, so the loss values remain large.
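To make this concrete, here is a small numerical sketch (with assumed, roughly RVT-like values for the resolution and gt_hm_sigma; this is not the actual RVT loss code). Because the target heatmap is a soft Gaussian centred at a continuous keypoint location, the per-view cross-entropy is lower-bounded by the entropy of the target and stays well above zero even for a near-perfect prediction:

```python
# Sketch: why the heatmap cross-entropy has a nonzero floor.
import numpy as np

def gt_heatmap(res: int, key_xy: tuple[float, float], sigma: float) -> np.ndarray:
    """Gaussian heatmap over a res x res virtual image, normalized to sum to 1."""
    ys, xs = np.mgrid[0:res, 0:res]
    d2 = (xs - key_xy[0]) ** 2 + (ys - key_xy[1]) ** 2
    hm = np.exp(-d2 / (2 * sigma ** 2))
    return hm / hm.sum()

res, sigma = 220, 1.5                               # assumed settings
target = gt_heatmap(res, (110.3, 87.6), sigma)      # continuous (sub-pixel) keypoint

# Entropy of the target = minimum achievable cross-entropy for one view.
floor = -(target * np.log(target + 1e-12)).sum()
print(f"per-view loss floor ~ {floor:.2f} nats")    # noticeably above 0

# A prediction that hits the keypoint only at integer-pixel resolution
# pays extra cross-entropy on top of that floor.
pred = gt_heatmap(res, (110.0, 88.0), sigma)
ce = -(target * np.log(pred + 1e-12)).sum()
print(f"cross-entropy with <1 px error ~ {ce:.2f} nats")
```

The exact numbers depend on the resolution, the Gaussian width, and how the per-view losses are aggregated, but the point stands: a plateau at a few nats is expected and does not by itself indicate poor convergence.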

Many thanks for your reply! Indeed, the visualisation shows very reasonable results. Thank you so much!