ClementPinard / SfmLearner-Pytorch

Pytorch version of SfmLearner from Tinghui Zhou et al.

Depth and Pose on Image with Black Borders

gilmartinspinheiro opened this issue · comments

Hi! I already posted about this on the original TensorFlow implementation, but I would also like to know your opinion on the matter :)

Do you think that the black borders on an image like the one below would affect the training and the predictions of depth/pose?

[image: example frame with black borders]

I think it will pose some problems, because with warping you will compare valid image parts with black areas, and the optimizer will try to avoid them to minimize the photometric error.

To my mind, your best solution is to weight the photometric error for each pixel. If you know exactly which pixels to dismiss, just multiply their photometric loss values by 0.
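The per-pixel weighting idea can be sketched as follows. This is a NumPy toy example, not the repo's actual code (which operates on PyTorch tensors); the function name and the toy mask are illustrative assumptions:

```python
import numpy as np

def masked_photometric_loss(target, warped, valid_mask):
    """Mean absolute photometric error over valid pixels only.

    Pixels where valid_mask == 0 (e.g. known black borders) are
    multiplied by 0 and thus contribute nothing to the loss.
    """
    diff = np.abs(target - warped) * valid_mask
    # Normalize by the number of valid pixels, not the full image size,
    # so the loss magnitude does not shrink just because more pixels
    # were masked out.
    return diff.sum() / np.maximum(valid_mask.sum(), 1)

# Toy 4x4 "images" with a 1-pixel black border on the left column.
target = np.ones((4, 4))
warped = np.zeros((4, 4))
mask = np.ones((4, 4))
mask[:, 0] = 0  # dismiss the black border column

loss = masked_photometric_loss(target, warped, mask)  # -> 1.0
```

Normalizing by the valid-pixel count rather than the image area is the key design choice: it keeps the loss comparable across images with different amounts of masked border.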

You can find an example of how I did the same thing with out of bound warped pixels in loss_functions.py
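For the out-of-bound case, the usual trick with grid-based warping (e.g. `torch.nn.functional.grid_sample`, which expects sampling coordinates normalized to [-1, 1] per axis) is that a warped pixel is valid iff both of its coordinates stay inside [-1, 1]. A NumPy sketch of that check, with illustrative names rather than the repo's exact ones:

```python
import numpy as np

def valid_points_mask(grid):
    """grid: (H, W, 2) array of normalized sampling coordinates.

    Returns a (H, W) float mask: 1.0 where the warped coordinate lands
    inside the source image ([-1, 1] on both axes), 0.0 otherwise.
    """
    return (np.abs(grid).max(axis=-1) <= 1).astype(np.float64)

# 2x2 toy grid: two pixels warp inside the frame, two fall outside.
grid = np.array([[[0.0, 0.0], [1.5, 0.0]],
                 [[-0.5, 0.9], [0.2, -1.2]]])
mask = valid_points_mask(grid)  # -> [[1., 0.], [1., 0.]]
```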

Thank you for your answer! :)
That makes a lot of sense. I will implement it and get back to you as soon as possible!

Hi, @ClementPinard
I have carefully read how you handled the out-of-bound warped pixels in loss_functions.py, but I am wondering: when the pose or depth is not very accurate, such as at the beginning of training, there might be a lot of pixels out of bound. Since we multiply these photometric loss values by 0, the photometric loss will be quite small. I think it could lead the network to diverge.
I have this problem when I use it in other scenarios. Do you have any experience dealing with it?

Normally it should not be a problem. At first, pose values are very low, so warping is very close to the identity function, which keeps the gradient meaningful.
If you actually have out-of-bound pixels, the difference won't carry any gradient anyway, because you are comparing a valid pixel to a grey zone on which no gradient is possible.

So discarding them from the loss is just a way to get a more meaningful loss value, since you only count valid comparisons. This matters most at the end of training, where roughly half of the warpings zoom out, with obvious out-of-bound pixels at the boundaries of the target image.

What scenario are you using it on? A known problem is when the translation is not large enough: the depth then has very little influence on the warping, since the parallax is low.

You can try the train_flexible_shift.py script which tries to overcome this problem by increasing temporal shift between frames when the translation predicted is too low.
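The idea behind that script can be illustrated with a minimal sketch. This is not the actual train_flexible_shift.py logic; the function name, thresholds, and update rule are all assumptions made for illustration:

```python
def next_temporal_shift(current_shift, mean_translation,
                        low=0.01, high=0.2, max_shift=5):
    """Toy policy for adapting the frame gap between training pairs.

    If the mean predicted translation magnitude is too small, grow the
    temporal shift so the parallax becomes informative; if it is large,
    shrink the shift back toward adjacent frames.
    """
    if mean_translation < low and current_shift < max_shift:
        return current_shift + 1  # too little motion: skip more frames
    if mean_translation > high and current_shift > 1:
        return current_shift - 1  # plenty of motion: use closer frames
    return current_shift          # translation in the comfortable range

next_temporal_shift(1, 0.001)  # -> 2 (translation too low, widen gap)
next_temporal_shift(3, 0.5)    # -> 2 (translation large, narrow gap)
```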

Thank you for your prompt reply. I think you are right, and I have fixed my problem. The divergence in my work was just due to a stupid typo.