GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.

Datasets more complicated than MNIST

xu-ji opened this issue

First of all, thank you for releasing this code.

Do you know of any results for generative replay (i.e., where images from across tasks are generated) on datasets more complicated than MNIST? For example, CIFAR or ImageNet.

It seems your feedback connections paper, your three scenarios for continual learning paper, and the original deep generative replay paper all test only on MNIST. Did you try any other datasets? Do you think there is something about the combination of more complex natural images and the continual training of the generator that makes it difficult? Surely someone has tried.

I’m very sorry for the late response! Yes, you are right that so far the successful applications of generative replay have been limited to datasets with relatively simple inputs (e.g., MNIST). Based on my own attempts (as well as informal discussions with others), scaling up generative replay to problems with more complex inputs (e.g., natural images) turns out not to be straightforward. So you are probably right that there is something about the combination of more complex inputs and the continual training of the generator that makes it difficult. I hope to soon share my attempts at scaling up generative replay.
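
To illustrate what the "continual training of the generator" involves, here is a very simplified PyTorch sketch of a generative-replay training loop. It is not the code of this repository: the `Generator` class, the `train_task` helper, the toy VAE architecture, and the use of hard pseudo-labels (instead of distillation on soft targets) are all simplifying assumptions made for illustration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy VAE generator for flattened 28x28 inputs (illustrative only)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.enc = nn.Linear(784, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, 784)
        self.z_dim = z_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return torch.sigmoid(self.dec(z)), mu, logvar

    def sample(self, n):
        z = torch.randn(n, self.z_dim, device=self.dec.weight.device)
        return torch.sigmoid(self.dec(z))

def train_task(classifier, generator, loader, prev_classifier=None, prev_generator=None,
               epochs=1, lr=1e-3, device="cpu"):
    """Train classifier + generator on one task, replaying samples from frozen previous copies."""
    opt_c = torch.optim.Adam(classifier.parameters(), lr=lr)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device).flatten(1), y.to(device)
            x_replay = None
            if prev_generator is not None:
                with torch.no_grad():   # replay data comes from the *previous* (frozen) models
                    x_replay = prev_generator.sample(x.size(0))
                    y_replay = prev_classifier(x_replay).argmax(dim=1)  # hard pseudo-labels
            # --- classifier: current data plus replayed data ---
            loss_c = F.cross_entropy(classifier(x), y)
            if x_replay is not None:
                loss_c = loss_c + F.cross_entropy(classifier(x_replay), y_replay)
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
            # --- generator: reconstruct both current and replayed data ---
            x_gen = x if x_replay is None else torch.cat([x, x_replay], dim=0)
            recon, mu, logvar = generator(x_gen)
            kld = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
            loss_g = F.binary_cross_entropy(recon, x_gen, reduction="sum") / x_gen.size(0) + kld
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # frozen copies that will generate replay during the next task
    return copy.deepcopy(classifier).eval(), copy.deepcopy(generator).eval()

# Hypothetical usage, one DataLoader per task (names are placeholders):
# classifier = nn.Sequential(nn.Linear(784, 400), nn.ReLU(), nn.Linear(400, 10))
# generator = Generator()
# prev_c, prev_g = None, None
# for loader in task_loaders:
#     prev_c, prev_g = train_task(classifier, generator, loader, prev_c, prev_g)
```

The difficulty discussed above is that for inputs much more complex than MNIST, the generator in a loop like this must produce samples good enough to preserve the classifier's knowledge of earlier tasks while itself being trained only on its own (imperfect) replay of those tasks, so errors compound across tasks.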