ClementPinard / FlowNetPytorch

Pytorch implementation of FlowNet by Dosovitskiy et al.

reconstruction issue

BingfengHan opened this issue

Hello there, I'm trying to use this model to reproduce the results shown in your .gif, but my results look quite different. I saved the output optical flow images and their colors are much brighter than yours, and some colors don't match at all. I'm using your pretrained model, so I wonder if there is a way to solve this problem?
I also tried to use the previous frame and the optical flow to synthesize the later frame, but the reconstruction collapses. :(

Hello, for a bit of context, I will need to see some code. Are you using the code in run_inference.py to generate the colored flow maps?
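
For reference, the brightness of a colored flow map depends heavily on how the flow is normalized before being mapped to colors. Below is a minimal sketch of that kind of conversion (`flow_to_rgb` and `max_value` are illustrative names, not necessarily the exact function in run_inference.py): a smaller normalization constant than the one used for the .gif will make the output look much more saturated and bright.

```python
# Hedged sketch of a flow -> RGB mapping; names and color conventions are
# assumptions, not necessarily the repo's exact API.
import numpy as np

def flow_to_rgb(flow_map, max_value=None):
    """flow_map: (2, H, W) array of (u, v) displacements -> (3, H, W) RGB in [0, 1]."""
    if max_value is None:
        # Per-image scaling: brightness then depends on this image's own max flow
        max_value = np.abs(flow_map).max()
    normalized = flow_map / max_value
    _, h, w = flow_map.shape
    rgb_map = np.ones((3, h, w), dtype=np.float32)       # zero flow maps to white here
    rgb_map[0] += normalized[0]                          # horizontal flow -> red
    rgb_map[1] -= 0.5 * (normalized[0] + normalized[1])  # mixed component -> green
    rgb_map[2] += normalized[1]                          # vertical flow -> blue
    return rgb_map.clip(0, 1)
```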

For synthesized views, you can't use optical flow + inverse warp the way many unsupervised depth algorithms do. It can only be used the other way around: generating the previous frame from the optical flow + the later frame. See the issue here: ClementPinard/SfmLearner-Pytorch#60
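
For reference, here is a minimal sketch of such an inverse warp in PyTorch (shapes and names are assumptions, not the repo's exact code): it reconstructs the previous frame by sampling the later frame with grid_sample at positions shifted by the flow.

```python
# Hedged sketch of inverse warping with torch.nn.functional.grid_sample.
import torch
import torch.nn.functional as F

def inverse_warp(img2, flow):
    """Reconstruct frame1 by sampling frame2 at positions shifted by the flow.

    img2: (B, C, H, W) later frame
    flow: (B, 2, H, W) flow from frame1 to frame2, in pixels, channels (u, v) = (x, y)
    """
    B, _, H, W = img2.shape
    # Base pixel grid in (x, y) order
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=img2.dtype, device=img2.device),
        torch.arange(W, dtype=img2.dtype, device=img2.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1)                  # (H, W, 2)
    # For each target pixel [u, v], sample img2 at [u + f_u, v + f_v]
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)   # (B, H, W, 2)
    # grid_sample expects coordinates normalized to [-1, 1]
    grid_x = 2.0 * grid[..., 0] / (W - 1) - 1.0
    grid_y = 2.0 * grid[..., 1] / (H - 1) - 1.0
    return F.grid_sample(img2, torch.stack((grid_x, grid_y), dim=-1),
                         align_corners=True)
```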

Thank you for your reply. I was trying to use the optical flow generated by run_inference.py together with the former frame to rebuild the later frame. The optical flow was computed from that same former frame and the later frame, so I thought it could predict a frame from <former frame + optical flow>.

Reconstruction in this direction is not possible: you would need a "direct_warp" function instead of the "inverse_warp" function that relies on grid_sample.

It's counter-intuitive, but it comes down to clashes between textures, especially between background and foreground pixels. Suppose the background is moving left and the foreground is moving right, i.e. you are turning around the foreground object. You know that the background object pictured by a background pixel is going to end up behind the foreground object pictured by a foreground pixel, but how could you know that from the optical flow alone? This is a problem you don't have with inverse warping. You want to reconstruct the first image, so for a pixel position [u,v] you simply take the color of img2[u+f_u(u,v), v+f_v(u,v)], very simple. It might give impossible results for occluded objects, but at least there won't be any clash.
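
As a toy illustration (hypothetical code, not from the repo), here is what goes wrong when you scatter pixels forward along the flow: several source pixels can claim the same target location, while other locations receive nothing at all.

```python
# 1-D toy example of forward ("direct") warping with a scatter: clashes and
# holes appear, which inverse warping via grid_sample avoids.
import torch

img1 = torch.tensor([10.0, 20.0, 30.0, 40.0])  # 4-pixel "image"
flow = torch.tensor([1, 0, -1, 0])             # per-pixel displacement
dst = torch.arange(4) + flow                   # destination of each pixel: [1, 1, 1, 3]

warped = torch.zeros(4)
# Pixels 0, 1 and 2 all clash on index 1; which value survives is unspecified.
warped.index_put_((dst,), img1, accumulate=False)
print(warped)  # e.g. tensor([ 0., 30.,  0., 40.]): a clash at 1, holes at 0 and 2
```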

Hope that was clear. You need the optical flow from img2 to img1 to be able to reconstruct img2.
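
In terms of the hypothetical inverse_warp sketch above:

```python
recon1 = inverse_warp(img2, flow_1to2)  # possible: the flow is defined on img1's grid
recon2 = inverse_warp(img1, flow_2to1)  # needs the flow in the opposite direction
```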