GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.


Final results of class incremental learning

Harry-Up opened this issue

Hello! Sorry, but I am confused: the code seems to report the test result on the final task as the final result of class-incremental learning. In my opinion, the class-incremental result after each task should be the average over all previous tasks and the current task. I would appreciate any suggestions, thank you!

Hi, thanks for your feedback! I just checked the code, but as far as I’m aware, the final results reported by the code are always the average over all tasks, not just the final task. For individual experiments, the performance on each task is also printed separately. For example, running `./main.py --scenario=class` gives me the following output:

 Precision on test-set (softmax classification):
 - Task 1: 0.0000
 - Task 2: 0.0000
 - Task 3: 0.0000
 - Task 4: 0.0000
 - Task 5: 0.9904
=> average precision over all 5 tasks: 0.1981
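For reference, the average on the last line is simply the mean of the five per-task precisions; something like the following sketch (not the repository’s actual code):

```python
# Minimal sketch (hypothetical, not the repo's actual code): the reported
# average is the mean of the per-task precisions listed above.
precs = [0.0000, 0.0000, 0.0000, 0.0000, 0.9904]
average = sum(precs) / len(precs)
print(f"=> average precision over all {len(precs)} tasks: {average:.4f}")  # 0.1981
```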

Could you point me to where you think that the final task performance is reported instead of the average? Thanks!

Thank you for your reply!

When I ran the iCaRL code on the CIFAR-100 dataset (I adapted the code to this setting), this information was printed only after all tasks had been trained. Firstly, ideally the incremental-learning results would be reported after training on each task. Secondly, as shown above, the average precision is indeed the result I wanted. However, when I try to reproduce the result with the same parameters as in the iCaRL paper (ResNet-34 for CIFAR-100), it does not seem to work well. I am still working on it.

Thanks again! Perhaps the first suggestion could be considered. For the second point, I am trying another implementation.

Thanks for your reply. Regarding your first point, I should note that if you run the code with --visdom enabled, the intermediate results after training on each task are reported in the form of a graph. But if you want those results printed to the screen, the code would indeed need to be modified slightly. Regarding your second point, one thing I can say is that (with rather messy extended code) I was able to obtain roughly similar results to those in the original iCaRL paper, although I also found that there is quite some dependence on hyperparameter settings (e.g., the step-wise reduced learning rate).
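To give an idea, a modification along these lines could print the per-task results after each task. This is only a sketch: `train_on_task` and `evaluate` are placeholders, not the repository’s actual functions.

```python
# Rough sketch of printing intermediate results after each task; the helpers
# `train_on_task` and `evaluate` are placeholders for the repo's actual
# training and evaluation routines.
def train_with_intermediate_eval(model, train_datasets, test_datasets):
    for task_id, train_set in enumerate(train_datasets, start=1):
        train_on_task(model, train_set)  # train on the current task
        # Evaluate on all tasks seen so far and print the results.
        precs = [evaluate(model, test_datasets[j]) for j in range(task_id)]
        for j, p in enumerate(precs, start=1):
            print(f" - Task {j}: {p:.4f}")
        print(f"=> average precision over first {task_id} tasks: "
              f"{sum(precs) / len(precs):.4f}")

# For a step-wise reduced learning rate, PyTorch's built-in scheduler can be
# used; the milestones/gamma below are just an example, to be tuned:
# scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[49, 63], gamma=0.2)
```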

Hi @GMvandeVen
You mentioned that the results after training on each task can be obtained in printed form with some tweaks; could you please provide more details on that? I am looking to obtain FWT, BWT, Forgetting Measure, Learning Accuracy, and Average Accuracy for these models in an online learning setting, which means the buffer size should be 0 in the class-incremental setting. Is there any way to get a task matrix at the end of the experiment?
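For reference, given such a task matrix these metrics are commonly computed as below (a minimal sketch following the definitions in Lopez-Paz &amp; Ranzato, 2017, and Chaudhry et al., 2018; the arrays `acc` and `base` are hypothetical inputs, not produced by this repository):

```python
import numpy as np

# `acc[i, j]`: test accuracy on task j after training on task i (task matrix).
# `base[j]`: accuracy of a randomly initialised model on task j (for FWT).
def continual_metrics(acc, base):
    acc = np.asarray(acc)
    T = acc.shape[0]
    avg_acc = acc[-1].mean()              # Average Accuracy: mean of final row
    learning_acc = np.mean(np.diag(acc))  # Learning Accuracy: just-trained accuracy
    bwt = np.mean([acc[-1, j] - acc[j, j] for j in range(T - 1)])  # Backward Transfer
    fwt = np.mean([acc[j - 1, j] - base[j] for j in range(1, T)])  # Forward Transfer
    forgetting = np.mean([acc[:-1, j].max() - acc[-1, j] for j in range(T - 1)])
    return {"avg_acc": avg_acc, "learning_acc": learning_acc,
            "bwt": bwt, "fwt": fwt, "forgetting": forgetting}
```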