GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.


How to apply iCaRL in the task and domain scenarios?

hsqzzpf opened this issue

I always get an error when I try to apply iCaRL in the task or domain scenario.

This error occurs because iCaRL is only compatible with training according to the Class-IL scenario. The reason is that iCaRL distills the current task's data onto the classes of all previous tasks using a binary classification loss (in my code this is indicated by the flag --bce-distill, which is automatically selected when the flag --icarl is used), and this aspect of iCaRL has no straightforward translation to the Task- or Domain-IL scenarios.
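To make that distillation aspect concrete, here is a minimal sketch of such a binary-classification distillation loss in PyTorch. This is illustrative only, not the repository's exact implementation; the function name `bce_distill_loss` and its arguments are my own:

```python
import torch
import torch.nn.functional as F

def bce_distill_loss(logits, prev_logits, targets, n_prev_classes):
    """Sketch of iCaRL-style binary distillation (illustrative names).

    For the classes of all previous tasks, the binary targets are the
    sigmoids of the previous model's logits (distillation); for the
    current classes, they are the one-hot ground-truth labels.
    """
    # Hard one-hot targets for the current task's classes...
    binary_targets = F.one_hot(targets, num_classes=logits.size(1)).float()
    # ...overwritten by soft targets for all previous tasks' classes.
    binary_targets[:, :n_prev_classes] = torch.sigmoid(prev_logits[:, :n_prev_classes])
    # One binary cross-entropy term per output unit, averaged.
    return F.binary_cross_entropy_with_logits(logits, binary_targets)
```

Note that the loss is defined over *all* output units at once, which is exactly why it presumes the single shared output layer of the Class-IL setting.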
Two other aspects of iCaRL could, however, also be used for training according to the other scenarios: (1) the use of stored exemplars for classification (indicated by the flag --use-exemplars) and (2) the replay of stored exemplars during training (selected with the option --replay=exemplars). In Appendix C of our latest preprint (https://arxiv.org/abs/1904.07734) we explore the use of these two aspects of iCaRL in the different scenarios. I should note that a third aspect of iCaRL that could be used in all scenarios is the use of a binary (instead of multi-class) classification loss (indicated by the flag --bce).
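Aspect (1), classifying with stored exemplars, boils down to nearest-class-mean classification in feature space. A sketch under my own naming assumptions (`classify_with_exemplars`, `exemplar_means` are illustrative, not the repository's API):

```python
import torch
import torch.nn.functional as F

def classify_with_exemplars(features, exemplar_means):
    """Nearest-class-mean classification using stored exemplars (sketch).

    features:       [batch, dim] feature vectors of the inputs to classify
    exemplar_means: [n_classes, dim] mean feature vector per class,
                    computed from that class's stored exemplars
    """
    # iCaRL works with L2-normalised feature vectors and class means.
    features = F.normalize(features, dim=1)
    means = F.normalize(exemplar_means, dim=1)
    # Euclidean distance from each input to each class mean.
    dists = torch.cdist(features, means)
    # Predict the class whose exemplar mean is nearest.
    return dists.argmin(dim=1)
```

Because each task's classes simply contribute their own means, this classification rule carries over to the Task- and Domain-IL scenarios without modification.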
Finally, another option could be to use iCaRL to train according to the Class-IL scenario, and then evaluate the performance according to the Task- or Domain-IL scenario. It could however be argued that this is not very fair on iCaRL as its performance won't be optimised for the scenario it is tested on. For this last option you would have to slightly modify the code.
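The modification needed for this last option is essentially to restrict the model's predictions at test time to the classes of the task being evaluated. A hedged sketch of what that masking could look like (the function name and `active_classes` argument are mine, not the repository's):

```python
import torch

def task_il_predict(logits, active_classes):
    """Evaluate a Class-IL-trained model under the Task-IL scenario (sketch).

    logits:         [batch, n_total_classes] outputs of the trained model
    active_classes: list of class indices belonging to the evaluated task
    """
    # Mask out every class that does not belong to the given task...
    masked = torch.full_like(logits, float('-inf'))
    masked[:, active_classes] = logits[:, active_classes]
    # ...so the prediction is the best class *within* that task.
    return masked.argmax(dim=1)
```

For the Domain-IL scenario the analogue would be to aggregate the logits of corresponding classes across tasks rather than mask them, but in both cases the training procedure itself stays pure Class-IL.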
Hope this helps!

Thanks a lot!
It is much clearer for me now.