GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.

permutedMNIST accs

Johswald opened this issue

Hey - thank you for the good implementation of all these methods. Very helpful!
To start a permuted MNIST run, I executed:

python main.py --experiment 'permMNIST' --scenario 'task' --tasks 10 --replay=generative --distill --feedback --iters 5000

Does --iters need to be set to 5000 to get the results reported in the paper?

Correct, the permuted MNIST results reported in the paper used 5000 iterations per task. I should mention that for permuted MNIST there were also 1000 units in each hidden layer (as opposed to 400 for split MNIST) and the learning rate was 0.0001 (as opposed to 0.001), so to reproduce the results reported in the paper you would also have to add --fc-units=1000 --lr=0.0001.
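So, putting together the flags mentioned in this thread, the full command to reproduce the paper's permuted MNIST results would be something like the following (a sketch, assuming the same scenario and replay settings as in the command above):

python main.py --experiment 'permMNIST' --scenario 'task' --tasks 10 --replay=generative --distill --feedback --iters 5000 --fc-units=1000 --lr=0.0001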

Sorry, of course; I forgot about that. Thanks!