aosokin / os2d

OS2D: One-Stage One-Shot Object Detection by Matching Anchor Features

Evaluation Dataset

chasb799 opened this issue · comments

Hello,

I have some questions about the evaluation dataset. Evaluation is run multiple times during training, and looking at the code it seems that this dataset influences the annealing of the learning rate. This would mean that the eval dataset cannot be regarded as "unseen data", and an additional test set is recommended for evaluating performance at the end of training, right? In the config file an evaluation runs every 1000 iterations. How would it affect the network's performance if the evaluation were done less often (e.g. only every 20k iterations)?

Best regards

Simon Bauer

Hi, yes, the validation data was used to pick the best checkpoint (early stopping) and potentially to tweak the learning rate with the ReduceLROnPlateau strategy (it seems to be off by default). Given this, it is preferable to use separate validation and test sets to estimate the model's performance. As for the evaluation frequency, I usually choose it so that around 10% of the overall time is spent on evaluation, unless I can afford a separate accelerator dedicated to evaluation (not supported in os2d anyway).

Best,
Anton
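As a concrete illustration of the interaction discussed above — a single validation metric driving both checkpoint selection (early stopping) and learning-rate annealing — here is a minimal sketch in plain Python. It mirrors the semantics of `torch.optim.lr_scheduler.ReduceLROnPlateau` (in "max" mode, since mAP should increase) but is not the os2d code; all names and numbers are illustrative assumptions.

```python
class PlateauTrainer:
    """Toy sketch: validation metric selects the best checkpoint and
    anneals the LR on a plateau (ReduceLROnPlateau-style, 'max' mode).
    Not the os2d implementation; names are illustrative."""

    def __init__(self, lr=1e-3, factor=0.1, patience=2):
        self.lr = lr              # current learning rate
        self.factor = factor      # multiplicative LR decay on plateau
        self.patience = patience  # tolerated evaluations without improvement
        self.best_metric = float("-inf")
        self.best_checkpoint = None
        self.bad_evals = 0

    def on_validation(self, step, metric):
        """Called at every evaluation with the validation metric (e.g. mAP)."""
        if metric > self.best_metric:
            self.best_metric = metric
            self.best_checkpoint = step   # early stopping keeps this checkpoint
            self.bad_evals = 0
        else:
            self.bad_evals += 1
            if self.bad_evals > self.patience:
                self.lr *= self.factor    # plateau detected: anneal the LR
                self.bad_evals = 0


trainer = PlateauTrainer(lr=1e-3, factor=0.1, patience=2)
# Simulated validation mAP at eval steps 1000, 2000, ..., 7000:
for step, map_score in zip(range(1000, 8000, 1000),
                           [0.40, 0.45, 0.44, 0.43, 0.42, 0.46, 0.45]):
    trainer.on_validation(step, map_score)

print(trainer.best_checkpoint)  # step with the best validation mAP so far
print(trainer.lr)               # LR after one plateau-triggered decay
```

This is exactly why the validation set here is not "unseen": it steers both which checkpoint survives and how the LR evolves, so the final accuracy number should come from a third, untouched test set.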