lliuz / ARFlow

The official PyTorch implementation of the paper "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation".

Training+finetuning configurations

gallif opened this issue · comments

Hi, first of all great work and fantastic code!
I'm trying to reproduce your reported results on the Sintel training set:

  1. Did you train and evaluate using the complete Sintel train dataset?
  2. I noticed that occlusion transform (run_ot) is set to false in the ar config file. Was it used during finetuning?
  3. Are there other parameters I should consider?

Thanks!

  1. Yes, I did. The results in Table 1 were trained on the complete Sintel train dataset.
  2. Since OT significantly increases training time, I did not use it in most of the ablations, but it can still bring performance gains (see Table 4). The results in Table 1 used all of the transformations.
  3. You can reproduce the results by simply using the default parameters.
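
For anyone following along, point 2 suggests that for finetuning you would flip the occlusion-transform flag back on in the AR config. A hypothetical sketch of such a fragment is below; only the `run_ot` key is confirmed in this thread, and the surrounding structure, file layout, and the other flag names (`run_st`, `run_at`) are assumptions about the repo's config convention, so check your actual config file:

```json
{
  "trainer": {
    "run_ot": true,
    "run_st": true,
    "run_at": true
  }
}
```

As noted above, enabling `run_ot` increases training time, so you may want to leave it off while iterating and turn it on only for the final finetuning run.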

Thank you for the reply.
I will do as you suggested.