This repository includes the main code to reproduce our results from the following sections of the paper:

- 4.1 ImageNet Transfer
- 4.4 Depth-wise Probes and Comparison to CKA

The results shown in Sections 4.2 and 4.3 can also be reproduced via the code in this repository.
The Poetry documentation is very well written and detailed.

First, make sure you are not inside a virtual environment. To install Poetry with the right version:

```shell
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_VERSION=1.1.6 python
```

On Windows, from PowerShell:

```powershell
$env:POETRY_VERSION = "1.1.6"
(Invoke-WebRequest -Uri https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py -UseBasicParsing).Content | python -
```

By default, Poetry will create the virtualenv in a cache-dir folder. To have it created in the repository, under a `.venv` folder, you need to first run:

```shell
poetry config virtualenvs.in-project true
```

(see https://python-poetry.org/docs/configuration/#virtualenvsin-project-boolean).

Then go to our repository and run:

```shell
poetry install
```

This will create a virtualenv, usable in PyCharm, with all the needed dependencies.
Before running the code, make sure the datasets are placed in a root data folder; the code does not download any datasets, and will instead throw an error if it cannot find them.

- For ImageNet, the root data folder must contain the following files:
  - `ILSVRC2012_img_val.tar`
  - `ILSVRC2012_img_train.tar`
  - `ILSVRC2012_img_test_v10102019.tar`
  - `ILSVRC2012_devkit_t12.tar.gz`
- For CUB, the root data folder must contain a folder called `CUB_200_2011` with the dataset.
- For Scenes, the root data folder must contain a folder called `Scenes` with the dataset.
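To catch layout mistakes before launching a run, here is a small hypothetical helper (not part of this repository) that checks a root data folder against the list above and reports anything missing:

```python
from pathlib import Path

# Files and folders the README requires inside the root data folder.
REQUIRED_FILES = [
    "ILSVRC2012_img_val.tar",
    "ILSVRC2012_img_train.tar",
    "ILSVRC2012_img_test_v10102019.tar",
    "ILSVRC2012_devkit_t12.tar.gz",
]
REQUIRED_DIRS = ["CUB_200_2011", "Scenes"]

def missing_dataset_entries(data_root):
    """Return the required files/folders that are absent from data_root."""
    root = Path(data_root)
    missing = [f for f in REQUIRED_FILES if not (root / f).is_file()]
    missing += [d for d in REQUIRED_DIRS if not (root / d).is_dir()]
    return missing
```

An empty return value means the layout matches what the code expects; otherwise the returned names tell you what to add.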
In order to run these experiments, you need to follow a two-step procedure:

1. Train a model with a CL strategy (either `FineTuning`, `LwF`, or `EWC`) on a sequence of tasks (a `scenario`). The `scenario` has the form `task_one2task_two`, so for the task sequence ImageNet ➡ Scenes ➡ CUB, the `scenario` would be `ImageNet2Scenes2CUB`.
2. Step 1 will save snapshots of the model at the end of each task; we can then run a linear probe model on these snapshots.
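As an illustration of the naming convention (a sketch, not the repository's actual parsing code): a scenario string is just the task names joined by the character `2`, so it splits back into the task sequence, assuming no task name itself contains a `2`:

```python
def parse_scenario(scenario):
    """Split a scenario string like 'ImageNet2Scenes2CUB' into its tasks."""
    return scenario.split("2")

tasks = parse_scenario("ImageNet2Scenes2CUB")
# tasks == ["ImageNet", "Scenes", "CUB"]
```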
For example, running and evaluating `EWC` on `ImageNet2Scenes2CUB` would look like:
```shell
poetry run python main.py --scenario "ImageNet2Scenes2CUB" --data_root "path_to_root_data_dir" --strategy "EWC"
```
Now, to run the linear probe evaluation at the end of training for `Scenes` and `CUB`, we need to run the following:

```shell
poetry run python main.py --probe --probe_caller "Scenes" --scenario "ImageNet2Scenes2CUB" --data_root "path_to_root_data_dir" --model_path "../../model_zoo/EWC_ImageNet2Scenes2CUB_VGG16_scenes.pt"
poetry run python main.py --probe --probe_caller "CUB" --scenario "ImageNet2Scenes2CUB" --data_root "path_to_root_data_dir" --model_path "../../model_zoo/EWC_ImageNet2Scenes2CUB_VGG16_cub.pt"
```
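The two-step procedure could be scripted; below is a small hypothetical Python wrapper (not part of the repository) that assembles the same command lines as above, with the model paths supplied by the caller:

```python
# Hypothetical wrapper that builds the training command and one linear
# probe command per task, mirroring the example invocations above.
def build_commands(scenario, strategy, data_root, probe_model_paths):
    """Return (train_cmd, [probe_cmds]) as argument lists."""
    train = [
        "poetry", "run", "python", "main.py",
        "--scenario", scenario,
        "--data_root", data_root,
        "--strategy", strategy,
    ]
    probes = [
        [
            "poetry", "run", "python", "main.py", "--probe",
            "--probe_caller", caller,
            "--scenario", scenario,
            "--data_root", data_root,
            "--model_path", path,
        ]
        for caller, path in probe_model_paths.items()
    ]
    return train, probes
```

Each returned list could then be handed to `subprocess.run(...)`, one after the other.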
The TensorBoard results will be saved in the `tb_logs` directory.
```bibtex
@inproceedings{davari2022probing,
  title     = {Probing Representation Forgetting in Supervised and Unsupervised Continual Learning},
  author    = {Davari, MohammadReza and Asadi, Nader and Mudur, Sudhir and Aljundi, Rahaf and Belilovsky, Eugene},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022)},
  year      = {2022},
  month     = {June},
}
```