JosephKJ / iOD

(TPAMI 2021) iOD: Incremental Object Detection via Meta-Learning

Home Page: https://josephkj.in

Some question about configs and datasets

onepiece010938 opened this issue · comments

First of all, thank you for this great work, but I still have some doubts about the configs and datasets; I hope you can give me some suggestions. I plan to use my own dataset for training, so I first looked at base_19.yaml and 19_p_1.yaml and have the following questions:

  1. I found that in base_19.yaml, LEARN_INCREMENTALLY is set to True. Shouldn't it be set to False in the first base training stage?
  2. NUM_CLASSES is set to 20, so when doing the first step of training, do I have to determine the total number of classes (19+1) before doing the incremental learning?
  3. If I want to use a customized dataset, do you have any suggestions on what needs to be changed?

Looking forward to your reply.

Hi @onepiece010938: Thank you for your interest in our work.

Please find my responses inline:

  1. I found that in base_19.yaml, LEARN_INCREMENTALLY is set to True. Shouldn't it be set to False in the first base training stage?

LEARN_INCREMENTALLY should be set to True. The base training indeed learns only the first 19 classes, as opposed to all 20. Once LEARN_INCREMENTALLY is set to True, the TRAIN_ON_BASE_CLASSES flag controls whether you are training on the 19 base classes or on the incremental set of classes. Please see this code for a better understanding.
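To make that concrete, here is a simplified sketch of the relevant flags in the two stages (only these keys are shown; see base_19.yaml and 19_p_1.yaml for the full configs):

Base training (first 19 classes):

LEARN_INCREMENTALLY: True
TRAIN_ON_BASE_CLASSES: True   # learn only the base classes

Incremental step (the remaining class):

LEARN_INCREMENTALLY: True
TRAIN_ON_BASE_CLASSES: False  # learn the newly introduced class(es)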

  2. NUM_CLASSES is set to 20, so when doing the first step of training, do I have to determine the total number of classes (19+1) before doing the incremental learning?

Yes. The total number of classes that can potentially be introduced to the model should be known beforehand. However, it can be over-estimated, e.g. set to 50 or 100. It only controls the size of the classification head.
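For example, a sketch of an over-estimated setting (the exact value does not matter, as long as it is at least the total number of classes you will ever introduce):

NUM_CLASSES: 100         # over-estimate of all classes the model may ever see
NUM_BASE_CLASSES: 19     # classes in the base training stage
NUM_NOVEL_CLASSES: 1     # classes added in the incremental step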

  3. If I want to use a customized dataset, do you have any suggestions on what needs to be changed?

I think the easiest way would be to convert your custom dataset to VOC-style annotations. You can refer to some implementation details here.
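For reference, a VOC-style dataset usually follows the standard Pascal VOC layout sketched below (the dataset name is just a placeholder; double-check the exact paths that the dataset registration code in this repo expects):

MyDataset/
  Annotations/        # one Pascal VOC .xml file per image
  JPEGImages/         # the image files
  ImageSets/
    Main/             # train.txt / test.txt, one image ID per line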

@JosephKJ, thanks for your reply!
I would like to check whether the scenario described below is possible.

Suppose that in the first training stage the dataset I registered has 10 classes; I can set NUM_CLASSES: 50, NUM_BASE_CLASSES: 10, NUM_NOVEL_CLASSES: 40.
In the incremental stage, I keep only the names of the old 10 classes and register a dataset with only 1 class, and then set NUM_CLASSES: 50, NUM_BASE_CLASSES: 10, NUM_NOVEL_CLASSES: 1. This way I could test 10+1, 10+1+1, 10+1+1+1, ... by increasing NUM_BASE_CLASSES by 1 at each step, without needing to prepare all the datasets in advance.

For learning the base (10 classes) use:

NUM_CLASSES: 50
NUM_BASE_CLASSES: 10
NUM_NOVEL_CLASSES: 40 # doesn't matter, see the code
TRAIN_ON_BASE_CLASSES: True

For an incremental step with 1 class:

NUM_CLASSES: 50
NUM_BASE_CLASSES: 10
NUM_NOVEL_CLASSES: 1
TRAIN_ON_BASE_CLASSES: False

For the next incremental step with 1 class:

NUM_CLASSES: 50
NUM_BASE_CLASSES: 11
NUM_NOVEL_CLASSES: 1
TRAIN_ON_BASE_CLASSES: False
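
Each further step continues the same pattern; e.g. for the step after that (extrapolating the configs above):

NUM_CLASSES: 50
NUM_BASE_CLASSES: 12
NUM_NOVEL_CLASSES: 1
TRAIN_ON_BASE_CLASSES: False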