GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.

Suspicious Precision

tarasrumezhak opened this issue · comments

Why is the precision 0 for tasks 1-4?

Precision on test-set:

  • Task 1: 0.0000
  • Task 2: 0.0000
  • Task 3: 0.0000
  • Task 4: 0.0000
  • Task 5: 0.9945
    => Average precision over all 5 tasks: 0.1989

Hi, thanks for your question. I expect this is the final performance of a class-incremental learning experiment? (That is, using the flag --scenario=class.) In that case, if you use no specific continual learning method, the network tends to overfit on the classes of the last task. During the last task, the network only sees the classes from that task, so it learns to never predict the classes from the previous tasks anymore.
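Just to illustrate the mechanism (a minimal, self-contained PyTorch sketch with dummy data, not code from this repository): a softmax classifier over all ten classes that is trained only on the labels of the final task ends up never predicting the earlier classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for the last task of split MNIST: a 10-way classifier
# that, during the final task, only ever sees labels 8 and 9.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    x = torch.randn(32, 784)          # dummy inputs
    y = torch.randint(8, 10, (32,))   # labels from the last task only
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the classifier (almost) never predicts classes 0-7,
# so test accuracy on the earlier tasks drops to 0:
with torch.no_grad():
    preds = model(torch.randn(1000, 784)).argmax(dim=1)
print(preds.unique())   # typically only tensor([8, 9])
```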

Hi, first of all, nice repo. Thank you for providing this.

I have the same confusion. I tried the EWC method with:
`./main.py --ewc --lambda=5000 --visdom`
which, by default, I think uses --experiment=splitMNIST and --scenario=class. The final result looks like:
Precision on test-set:

  • Task 1: 0.0000
  • Task 2: 0.0000
  • Task 3: 0.0000
  • Task 4: 0.0000
  • Task 5: 0.9939
    => Average precision over all 5 tasks: 0.1988

Based on the comment above, I guess I should tune the learning rate so the network overfits less? How about the lambda parameter? Could you suggest reasonable parameter values so I can try to make sense of the results?

Hi @gentlegy, apologies for the late reply. In the class-incremental learning scenario, as pointed out in the paper accompanying this code repository (https://arxiv.org/abs/1904.07734), EWC indeed does not work well, and the results you found are typical of EWC in this scenario. Methods using some form of replay (e.g., DGR, RtF, ER, A-GEM or iCaRL) typically perform much better in class-incremental learning.
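For intuition on what --lambda controls: EWC adds a quadratic penalty to the loss that anchors parameters deemed important for previous tasks, scaled by lambda. Below is a minimal, generic sketch of that penalty (an illustration of the technique, not this repository's implementation; the `fisher` and `old_params` dictionaries are assumed to have been computed after the previous task):

```python
import torch

def ewc_penalty(model, fisher, old_params, ewc_lambda):
    """Quadratic EWC penalty: (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    fisher and old_params map parameter names to the diagonal Fisher
    estimate and the parameter values stored after the previous task.
    """
    penalty = torch.tensor(0.)
    for name, param in model.named_parameters():
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty

# During training on a new task, the total loss becomes:
#   loss = task_loss + ewc_penalty(model, fisher, old_params, ewc_lambda=5000)
# A larger lambda protects old tasks more strongly, but can also prevent
# the network from learning the new task.
```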