mr-eggplant / SAR

Code for ICLR 2023 paper (Oral) — Towards Stable Test-Time Adaptation in Dynamic Wild World

Initialization of moving average of entropy

e0jun opened this issue · comments

Hello author,
Thanks for the release of the code of your paper.

The (pseudo-)code shows that the moving average of the entropy loss is not re-initialized to 0 after the model recovery.

I think the value should be re-initialized to 0. Could you clarify this?

Thank you in advance:-)

Hi e0jun,

The $e_0$ should be re-initialized after each model recovery (as we did in sar.py). We missed this in the pseudo-code. Thank you for pointing this out!

Best,

Thank you for your quick reply!

In sar.py, the ema value is returned along with reset_flag in line 49.
If reset_flag is True, self.ema is re-initialized in line 51.
But in line 52, the returned ema is assigned back to self.ema, which makes the re-initialization ineffective.

It could be my misunderstanding, but could you explain the re-initialization process? I would really appreciate an answer!

Hi,

To make the re-initialization of $e_m$ effective, line 52 should actually be placed before line 50.

However, I think the performance would not be noticeably affected whether we reset $e_m$ or not. Once the model is reset, the entropy $e$ of the new predictions will be much higher than $e_m$, so $e_m$ quickly becomes large again after a few EMA update steps.
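To make the ordering issue concrete, here is a minimal, self-contained sketch of the pattern being discussed. This is not the actual sar.py code; the class name, method names, and the EMA/threshold values are hypothetical, chosen only to illustrate why assigning the returned ema after the reset negates the re-initialization:

```python
class SARLike:
    """Hypothetical sketch of SAR-style EMA tracking of the entropy loss."""

    def __init__(self):
        self.ema = None  # moving average e_m of the entropy loss

    def _update(self, entropy, alpha=0.9, reset_threshold=0.2):
        # EMA update: e_m <- alpha * e_m + (1 - alpha) * e
        if self.ema is None:
            ema = entropy
        else:
            ema = alpha * self.ema + (1 - alpha) * entropy
        # Collapse indicator: trigger a model recovery when e_m is too small
        reset_flag = ema < reset_threshold
        return ema, reset_flag

    def step_buggy(self, entropy):
        ema, reset_flag = self._update(entropy)
        if reset_flag:
            self.ema = None  # re-initialization (like line 51)
        self.ema = ema       # overwrites the reset (like line 52)
        return reset_flag

    def step_fixed(self, entropy):
        ema, reset_flag = self._update(entropy)
        self.ema = ema       # assign the returned ema first ...
        if reset_flag:
            self.ema = None  # ... so the re-initialization takes effect
        return reset_flag
```

With an entropy value below the threshold, `step_buggy` leaves `self.ema` set despite the reset, while `step_fixed` leaves it at `None` as intended.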

Hope that helps!

I understand! I really appreciate your reply:)