arthurdouillard / CVPR2021_PLOP

Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation

Home Page: https://arxiv.org/abs/2011.11390

Reproduction problem on 15-1

ygjwd12345 opened this issue

I reproduced the 15-1 setting. The results are:

| Method | 1-15 | 16-20 | mIoU | all |
| --- | --- | --- | --- | --- |
| MiB | 35.1 | 13.5 | 29.7 | - |
| PLOP (reproduced, train) | 64.94 | 20.77 | 53.90 | 55.16 |
| PLOP (reproduced, test) | 65.01 | 16.85 | 52.97 | 54.11 |
| PLOP (paper) | 65.12 | 21.11 | 54.64 | 67.21 |
The train-mode result is reasonable, but with `--test` the result drops too much.

In other words, the results differ between evaluating right after the whole training run and evaluating with the same script and parameters but with the `--test` flag.

Hello, thanks for your report, it's indeed weird. I hadn't noticed it before because I was always re-training all steps.

I've made a hotfix where we always reload the saved weights (381cb79), but I'll try to understand later why it happens.
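
For reference, here is a minimal sketch of the idea behind the hotfix, assuming a PyTorch setup. The function name, checkpoint layout, and metric here are hypothetical illustrations, not the repository's actual API; the real script computes mIoU, while this sketch uses plain pixel accuracy to stay short:

```python
import torch

def evaluate_from_checkpoint(model, ckpt_path, test_loader, device="cuda"):
    # Hypothetical helper: always reload the weights saved at the end of the
    # training step instead of reusing whatever happens to be in memory, so
    # that `--test` evaluates exactly the same parameters as train mode.
    state = torch.load(ckpt_path, map_location=device)
    # Checkpoints are often dicts wrapping the weights (assumed key name);
    # fall back to treating the file as a raw state_dict.
    state_dict = state.get("model_state", state) if isinstance(state, dict) else state
    model.load_state_dict(state_dict)
    model.to(device)
    model.eval()  # fix batch-norm/dropout behavior for evaluation

    correct_pixels, total_pixels = 0, 0
    with torch.no_grad():
        for images, targets in test_loader:
            logits = model(images.to(device))       # [B, C, H, W]
            preds = logits.argmax(dim=1).cpu()      # [B, H, W]
            correct_pixels += (preds == targets).sum().item()
            total_pixels += targets.numel()
    # Pixel accuracy as a stand-in metric for this sketch.
    return correct_pixels / max(total_pixels, 1)
```

The point is simply that evaluation must start from the weights on disk: if the in-memory model after training and the reloaded checkpoint ever disagree (for instance, the wrong step's weights get loaded), train-mode and `--test` numbers diverge exactly as reported above.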

I've just re-run a 15-1 overlap PLOP run; here are my results in training mode and in testing mode, as you did:

| Mode | old | new | all | avg |
| --- | --- | --- | --- | --- |
| Train | 66.4 | 19.31 | 55.19 | 67.09 |
| Test | 66.4 | 19.31 | 55.19 | 67.09 |

Results differ slightly from the paper's (a bit better on old classes, a bit worse on new ones, overall better), but that's expected since I ran this on a different machine than the one used for the paper.