hitachinsk / FGT

[ECCV 2022] Flow-Guided Transformer for Video Inpainting

Home Page:https://hitachinsk.github.io/publication/2022-10-01-Flow-Guided-Transformer-for-Video-Inpainting


Run on image

AnhPC03 opened this issue · comments

Hi @hitachinsk, is it possible to run your awesome project on only an image, not a video? If yes, could you help me by pointing out which parts I need to modify? Thank you.

Yes, of course you can.
Since we cannot estimate optical flows from a single image, you would have to remove the flow completion network (LAFC) and discard the optical flow branch in the transformer backbone. You should also remove the flow-guided content propagation module and modify the dataset sampling procedure. I think these changes should be enough. However, you would have to train the new model yourself, since I have not performed such an experiment.

A simpler trick is to treat your images as videos. All you need to do is duplicate each image several times and treat the duplicates as a video sequence. This way, you don't have to modify any code.
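The duplication trick could be sketched as below. This is a minimal, hypothetical example: the paths, frame count, and file-naming scheme (`00000.png`, `00001.png`, ...) are assumptions, and you should adapt them to whatever layout the FGT dataloader expects for your data. The placeholder file creation is only there to make the snippet self-contained.

```python
import shutil
from pathlib import Path

# Hypothetical paths; adjust to your own data layout.
image_path = Path("my_image.png")
video_dir = Path("my_image_as_video")
num_frames = 10  # length of the pseudo video sequence

# For demonstration only: create a placeholder file standing in for a real image.
image_path.write_bytes(b"fake image data")

video_dir.mkdir(parents=True, exist_ok=True)

# Duplicate the single image into sequentially numbered frames so the
# directory looks like an ordinary video sequence to the dataloader.
for i in range(num_frames):
    shutil.copy(image_path, video_dir / f"{i:05d}.png")
```

Since every frame is identical, the estimated optical flow between frames should be close to zero, so the flow-guided components degenerate gracefully without any code changes.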

I will close this issue; if you have further questions, please reopen it.