GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.

Wrong dataset?

GuillaumeLam opened this issue · comments

train_datasets[task_id], batch_size_to_use, cuda=cuda, drop_last=True

Hello! My team and I are currently running experiments with your BI-R repo. We are trying to find the limit between ER and GR approaches and to see how much we can degrade exemplars (stored or generated) while still achieving similar accuracies. As such, we ported your implementation of ER to the other repo. We noticed that on line 123 of train.py the dataset train_datasets is used rather than previous_datasets. Is train_datasets not the full-scale dataset? We would love to have your input on this potential issue, and if you have any ablations or directions for experiments to run, please do tell!
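
To make the distinction we mean concrete, here is a rough sketch (the names and the Subset-based construction are illustrative, not your exact code): previous_datasets should hold only the small exemplar buffers kept from earlier tasks, whereas train_datasets holds the full training set of every task.

```python
from torch.utils.data import Subset

# Rough illustration (hypothetical names, not the repository's exact code):
# `train_datasets` holds the full training set of every task, while
# `previous_datasets` should hold only the limited exemplar buffers
# retained from the tasks seen so far.
def build_exemplar_buffers(train_datasets, up_to_task, budget_per_task):
    previous_datasets = []
    for t in range(up_to_task):
        # keep only a small, fixed-size subset of each earlier task's data
        keep = list(range(min(budget_per_task, len(train_datasets[t]))))
        previous_datasets.append(Subset(train_datasets[t], keep))
    return previous_datasets
```

Pulling replay batches from train_datasets[task_id] would then replay the full training data of that task rather than its limited buffer.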
Btw, your code is absolutely beautiful and so well documented!

Hi, thank you for your feedback! You’re right, there was indeed a mistake in the line you point out: it should have used previous_datasets rather than train_datasets. This mistake caused wrong behaviour in the setting --replay=exemplars (i.e., replay from a limited-size memory buffer) in the Task-IL scenario. Sorry about that! I hope it didn't cause too much trouble. I just fixed it. Many thanks for letting me know!
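
For anyone landing on this issue later, the gist of the fix is sketched below with a stand-in data-loader helper; its signature is only inferred from the line quoted above and is not copied verbatim from the repository.

```python
from torch.utils.data import DataLoader

def get_data_loader(dataset, batch_size, cuda=False, drop_last=False):
    # Stand-in with the signature suggested by the quoted line; the actual
    # helper in the repository may differ.
    return DataLoader(dataset, batch_size=batch_size, shuffle=True,
                      drop_last=drop_last, pin_memory=cuda)

# Before the fix: replay batches were drawn from the full training set of the task,
#   get_data_loader(train_datasets[task_id], batch_size_to_use, cuda=cuda, drop_last=True)
# After the fix: replay batches are drawn from the stored exemplars of that
# previous task,
#   get_data_loader(previous_datasets[task_id], batch_size_to_use, cuda=cuda, drop_last=True)
```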

Thank you for answering so quickly! We picked up the issue when integrating the code, so no harm done!