GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.

0 accuracy values for task-free setting

hiteshvaidya opened this issue · comments

Hello,

I tried the compare_task_free.py and main_task_free.py scripts in a setting where task boundaries are not available, with --iters=1 and --budget=0, but such a setting either throws errors or gives accuracy values of 0 for all tasks and 1.0 for the last class of CIFAR10. I set --contexts 10 for this experiment. I would highly appreciate your help in this matter.

Thank you!

Hi, for the experiment you describe (with one class per task and no storing of data from past tasks), I would indeed expect many continual learning methods to end up with a model that only predicts the last class. Regarding the errors, if you give some more details I can see whether I can help.

Thanks for replying @GMvandeVen, I need to recreate the errors and will post them as soon as I find them again. In the meantime, could you please also share whether there is any way to obtain a task matrix of accuracies, for computing metrics like BWT, FWT, Forgetting Measure and Learning Accuracy?

To compute a task matrix of accuracies, you can use the flag --results-dict when running main.py. At the end of each task, the accuracy is then computed for each task so far and stored in 'plotting_dict':

continual-learning/main.py

Lines 343 to 345 in 11215d2

plotting_dict = evaluate.initiate_plotting_dict(args.contexts) if (
    checkattr(args, 'pdf') or checkattr(args, 'results_dict')
) else None
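
To go from such a matrix of accuracies to the metrics you mention, something along the lines of the untested sketch below should work. It assumes you have first collected the values from plotting_dict into an n-by-n NumPy array acc, where acc[t, i] is the accuracy on context i measured after training on context t (that array and the function names are just for illustration, they are not part of this repository):

import numpy as np

# Assumed layout (not provided by the repo): acc[t, i] = accuracy on context i,
# evaluated after training on context t, for an experiment with n contexts.

def learning_accuracy(acc):
    # Average accuracy on each context right after it was learned.
    return np.mean(np.diag(acc))

def bwt(acc):
    # Backward transfer: final accuracy on each earlier context minus the
    # accuracy it had right after being learned (negative values = forgetting).
    n = acc.shape[0]
    return np.mean([acc[n - 1, i] - acc[i, i] for i in range(n - 1)])

def forgetting(acc):
    # Forgetting measure: best accuracy ever reached on each earlier context
    # minus its final accuracy.
    n = acc.shape[0]
    return np.mean([acc[:n - 1, i].max() - acc[n - 1, i] for i in range(n - 1)])

FWT additionally needs accuracies on contexts that have not been trained on yet (see the change described below) as well as a reference baseline, so it is not included in this sketch.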

If you want to do the above while also computing the accuracy for future tasks, you can change this if-statement:

if (current_context is None) or (i+1 <= current_context):
    allowed_classes = None
    if model.scenario=='task' and not checkattr(model, 'singlehead'):
        allowed_classes = list(range(model.classes_per_context * i, model.classes_per_context * (i + 1)))
    precs.append(test_acc(model, datasets[i], test_size=test_size, verbose=verbose,
                          allowed_classes=allowed_classes, no_context_mask=no_context_mask, context_id=i))
else:
    precs.append(0)
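
For example, dropping that condition (and with it the else-branch) would evaluate every context, including future ones, each time; a sketch of what the loop body could then look like:

# Sketch: the surrounding if/else is removed, so every context (also future
# ones) is evaluated after each context.
allowed_classes = None
if model.scenario=='task' and not checkattr(model, 'singlehead'):
    allowed_classes = list(range(model.classes_per_context * i, model.classes_per_context * (i + 1)))
precs.append(test_acc(model, datasets[i], test_size=test_size, verbose=verbose,
                      allowed_classes=allowed_classes, no_context_mask=no_context_mask, context_id=i))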

Hope this helps!

Here's the error that I got,

(cl-pytorch) [hvaidya@forest.usf.edu@GPU12 continual-learning]$ ./compare_task_free.py --experiment=CIFAR10 --scenario=class --iters 1 --budget 1 --contexts 10 --replay none --joint --stream academic-setting
usage: ./compare_task_free.py [-h] [--seed SEED] [--n-seeds N_SEEDS] [--no-gpus] [--no-save] [--full-stag STAG] [--full-ltag LTAG] [--data-dir D_DIR] [--model-dir M_DIR] [--plot-dir P_DIR] [--results-dir R_DIR]
                              [--time] [--visdom] [--results-dict] [--acc-n ACC_N] [--experiment {splitMNIST,permMNIST,CIFAR10,CIFAR100}] [--stream {fuzzy-boundaries,academic-setting,random}] [--fuzziness ITERS]
                              [--scenario {task,domain,class}] [--contexts N] [--iters ITERS] [--batch BATCH] [--no-norm] [--conv-type {standard,resNet}] [--n-blocks N_BLOCKS] [--depth DEPTH]
                              [--reducing-layers RL] [--channels CHANNELS] [--conv-bn CONV_BN] [--conv-nl {relu,leakyrelu}] [--global-pooling] [--fc-layers FC_LAY] [--fc-units N] [--fc-drop FC_DROP]
                              [--fc-bn FC_BN] [--fc-nl {relu,leakyrelu,none}] [--z-dim Z_DIM] [--singlehead] [--lr LR] [--optimizer {adam,sgd}] [--momentum MOMENTUM] [--pre-convE] [--convE-ltag LTAG]
                              [--seed-to-ltag] [--freeze-convE] [--recon-loss {MSE,BCE}] [--update-every N] [--replay-update N] [--xdg] [--gating-prop PROP] [--fc-units-sep N] [--epsilon EPSILON] [--c SI_C]
                              [--temp TEMP] [--budget BUDGET] [--eps-agem EPS_AGEM] [--eval-s EVAL_S] [--fc-units-gc N] [--fc-lay-gc N] [--z-dim-gc N] [--no-context-spec] [--no-si] [--no-agem]
./compare_task_free.py: error: argument --replay-update: invalid int value: 'none'

I am trying to recreate a setting where no task boundaries are provided and there is no replay.

The script ./compare_task_free.py does not have an option --replay. By giving --replay none as input, you set --replay-update to none, which is not a valid value for that option.

So can I still run with no replay and one class per task, with no task boundaries, using ./compare_task_free.py? Apart from the changes you suggested in main.py, are there any other changes needed so I could obtain a task matrix for all methods with compare_task_free.py?

Thanks for your help!

In principle you can use ./compare_task_free.py with one class per task and no replay, but note that a substantial number of the methods compared in this script expect to store data and/or use replay.

Regarding the task matrices: with the changes I described it should indeed be possible to obtain them, although you will of course have to make a few changes to the code yourself to get them in the format you want.

I made the changes described in #28 (comment) and removed the or (i+1 <= current_context), so that a task matrix is stored in the store/results folder. But the results folder still has text files with only a single accuracy value, not a task matrix. I would highly appreciate your help here @GMvandeVen

The values of the task matrix should then be stored in the dictionary plotting_dict. This dictionary is not written out to a text file by default; you would have to change the code yourself to do that.
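
For example, something like the sketch below, placed at the point where plotting_dict has been filled, would write it out as JSON (the output path and file name are made up, and default=float is a crude way to serialize any NumPy scalars the dictionary may contain):

import json

# Write the task matrix of accuracies out as JSON (hypothetical file name).
with open('./store/results/task_matrix.json', 'w') as f:
    json.dump(plotting_dict, f, indent=2, default=float)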