open-mmlab / mmflow

OpenMMLab optical flow toolbox and benchmark

Home Page: https://mmflow.readthedocs.io/en/latest/


When I want to fine-tune based on the pre-trained RAFT mixed model, how do I determine the ratio of old and new data?

pedroHuang123 opened this issue · comments

1. When I want to fine-tune on my datasets based on the mixed model, can I use the fine-tuning hyper-parameters as follows?

```python
# optimizer
optimizer = dict(type='Adam', lr=1e-5, weight_decay=0.0004, betas=(0.9, 0.999))
optimizer_config = dict(grad_clip=None)

# learning policy
lr_config = dict(
    policy='step',
    by_epoch=False,
    gamma=0.5,
    step=[
        45000, 65000, 85000, 95000, 97500, 100000, 110000, 120000, 130000,
        140000
    ])
runner = dict(type='IterBasedRunner', max_iters=150000)
checkpoint_config = dict(by_epoch=False, interval=10000)
evaluation = dict(interval=10000, metric='EPE')
```
These parameters are from https://github.com/open-mmlab/mmflow/blob/master/docs/en/tutorials/2_finetune.md
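As a sanity check on the schedule above, here is a small sketch (not mmflow code) of the learning rate implied by that `lr_config`, assuming standard step decay where the rate is halved (`gamma=0.5`) at each milestone in `step`:

```python
# Sketch: learning rate implied by the step policy above.
# Assumes MMCV-style step decay: lr = base_lr * gamma ** (milestones passed).
# Warmup is ignored; `lr_at` is a hypothetical helper, not an mmflow API.
import bisect

BASE_LR = 1e-5
GAMMA = 0.5
STEPS = [45000, 65000, 85000, 95000, 97500, 100000, 110000, 120000, 130000,
         140000]

def lr_at(iteration: int) -> float:
    """Learning rate at a given iteration under step decay."""
    k = bisect.bisect_right(STEPS, iteration)  # milestones already passed
    return BASE_LR * GAMMA ** k

print(lr_at(0))       # 1e-05 (before the first milestone)
print(lr_at(50000))   # 5e-06 (after the first milestone at 45000)
```

So by the end of the 150k-iteration run the rate has been halved ten times, which is a fairly aggressive anneal for fine-tuning.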

2. We know the pre-trained mixed model is fine-tuned on the mixed datasets, including FlyingChairs, FlyingThings3D, Sintel, KITTI 2015, and HD1K. When I fine-tune this model on my dataset to obtain better performance, even if I include some of the old data (FlyingThings3D, Sintel, HD1K) in fine-tuning, the results show that the EPE on my dataset decreases with iterations while the EPE on the old datasets (Sintel, FlyingThings3D) increases. So when I fine-tune the pre-trained model, should I use the old datasets for training? If yes, how do I determine the ratio of old to new data?
[Screenshot: EPE vs. iteration curves for the new dataset and the old datasets]
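On the ratio question, one common way to limit this kind of forgetting is rehearsal: sample each training item from the old data with some fixed probability and from the new data otherwise, then tune that fraction empirically. The sketch below is a generic illustration of that idea, not an mmflow API; `mix_indices` and all its parameters are hypothetical names:

```python
# Sketch: build one epoch of (dataset, index) pairs with a chosen old/new mix.
# A generic rehearsal strategy against forgetting; not part of mmflow.
import random

def mix_indices(n_new: int, n_old: int, old_fraction: float,
                epoch_size: int, seed: int = 0):
    """Draw 'old' samples with probability old_fraction, 'new' otherwise."""
    rng = random.Random(seed)
    epoch = []
    for _ in range(epoch_size):
        if rng.random() < old_fraction:
            epoch.append(("old", rng.randrange(n_old)))
        else:
            epoch.append(("new", rng.randrange(n_new)))
    return epoch

epoch = mix_indices(n_new=2000, n_old=20000, old_fraction=0.25,
                    epoch_size=10000)
old_share = sum(1 for name, _ in epoch if name == "old") / len(epoch)
print(round(old_share, 2))  # roughly 0.25
```

In practice people often start with a small old-data fraction (e.g. 10–50%) and increase it until the old-dataset EPE stops degrading, since the right value depends on how far the new data's distribution is from the pre-training mix.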

3. When you trained the mixed model, why did you only use Sintel final and clean as the validation datasets? Why not also measure training progress on the other datasets, like HD1K, KITTI 2015, and FlyingThings3D?
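For context on the metric used in these validations: EPE (end-point error) is just the mean Euclidean distance between predicted and ground-truth flow vectors, so in principle it can be computed on any dataset that ships ground-truth flow. A minimal sketch (the `epe` helper is illustrative, not mmflow's implementation):

```python
# Sketch: end-point error (EPE) between two dense flow fields of shape (H, W, 2).
import numpy as np

def epe(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Mean per-pixel Euclidean distance between flow vectors."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=-1).mean())

pred = np.zeros((2, 2, 2))
gt = np.full((2, 2, 2), 3.0)  # every vector off by (3, 3) -> error sqrt(18)
print(round(epe(pred, gt), 4))  # 4.2426
```

Note that datasets like KITTI only provide sparse ground truth, so their evaluation typically masks invalid pixels before averaging; that is one practical reason validation setups differ per dataset.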