mattpoggi / mono-uncertainty

CVPR 2020 - On the uncertainty of self-supervised monocular depth estimation


How to train the self-teaching network?

zhangmaoxiansheng opened this issue

Thanks for your nice work! I wonder how to train the self-teaching network; it seems the training code isn't included. According to Eq. (14), a group of results is needed, so will it take a lot of GPU memory during training?
Besides, I didn't find the drop+self-teaching result in the paper, and I'm curious how well that variant performs.

Hi @zhangmaoxiansheng,
for now, we are not planning to release the training code. You can easily reimplement it on your own by extending monodepth2.

To train self-teaching, you need to load a pre-trained network to compute the distilled labels. Alternatively, you can pre-compute them offline and load them as pseudo ground truth. In my code, I was able to compute them on the fly on a single GPU without memory issues.
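If it helps, here is a minimal PyTorch sketch of that on-the-fly setup, assuming the self-teaching loss is the Laplacian negative log-likelihood |d - d*|/σ + log σ described in the paper. `TinyNet`, the batch shape, and the optimizer settings are placeholders standing in for monodepth2's encoder/decoder; this is a sketch, not our actual training code:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Placeholder network; swap in monodepth2's encoder/decoder."""
    def __init__(self, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(3, out_channels, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

teacher = TinyNet(1)   # pre-trained depth network in practice
student = TinyNet(2)   # predicts depth plus a log-uncertainty channel
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False  # teacher is frozen

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

images = torch.rand(2, 3, 192, 640)       # dummy KITTI-sized batch
with torch.no_grad():
    d_star = teacher(images)              # distilled label, computed on the fly
out = student(images)
d, log_sigma = out[:, :1], out[:, 1:]     # split depth / log sigma channels
# Laplacian NLL: |d - d*| / sigma + log sigma
loss = (torch.abs(d - d_star) * torch.exp(-log_sigma) + log_sigma).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Predicting log σ rather than σ keeps the loss numerically stable without clamping the uncertainty to be positive.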

As for drop+self, it is missing from the paper because dropout alone performs poorly.
In any case, you can find below the performance of drop+self with M supervision:

|         | AUSE  | AURG  |
|---------|-------|-------|
| abs_rel | 0.048 | 0.016 |
| rmse    | 3.265 | 0.473 |
| a1      | 0.069 | 0.030 |
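For reference, AUSE and AURG come from sparsification analysis: pixels are progressively removed in order of decreasing predicted uncertainty (or decreasing true error, for the oracle) and the error on the surviving pixels is tracked. Below is a simplified NumPy sketch, not the paper's evaluation code; it works on a flat per-pixel error array, whereas the paper recomputes each metric (abs_rel, rmse, 1−a1) on the remaining pixels:

```python
import numpy as np

def ause_aurg(errors, uncertainties, steps=50):
    # Rank pixels by predicted uncertainty and by true error (the oracle).
    order_unc = np.argsort(-uncertainties)   # most uncertain first
    order_err = np.argsort(-errors)          # largest true errors first
    n = errors.size
    fractions = np.linspace(0.0, 0.99, steps)
    # Mean error over the pixels left after removing a fraction f.
    curve = np.array([errors[order_unc[int(n * f):]].mean() for f in fractions])
    oracle = np.array([errors[order_err[int(n * f):]].mean() for f in fractions])
    random_curve = np.full_like(curve, errors.mean())   # no-uncertainty baseline
    ause = np.trapz(curve - oracle, fractions)          # lower is better
    aurg = np.trapz(random_curve - curve, fractions)    # higher is better
    return ause, aurg

# Toy usage with synthetic per-pixel errors and a noisy-but-informative uncertainty:
rng = np.random.default_rng(0)
err = rng.random(10_000)
unc = err + 0.1 * rng.standard_normal(10_000)
print(ause_aurg(err, unc))
```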

Thanks!