InterDigitalInc / CompressAI

A PyTorch library and evaluation platform for end-to-end compression research

Home Page:https://interdigitalinc.github.io/CompressAI/


Training with MSE, then fine-tuning with MS-SSIM, but the aux loss keeps growing

gitlabspy opened this issue · comments


Following https://interdigitalinc.github.io/CompressAI/zoo.html#training, I first train with MSE with $\lambda$ = 0.18 and then switch to MS-SSIM with $\lambda$ = 220, but the aux loss grows larger and larger. Is this normal? Should I fine-tune the model with MS-SSIM using a lower learning rate, such as 1e-5?

aux_loss being large on its own isn't a problem. Typical values I have observed are aux_loss ≈ 2 * batch_size, though this varies with factors such as the number of channels and the batch-to-batch stability of the EntropyBottleneck distributions. But if you want to reduce it to 0, you can try #231.
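For context on why the aux loss can drift independently of training quality: in CompressAI the aux loss only depends on the `quantiles` parameters of the entropy-bottleneck modules, and the example training script gives those parameters their own optimizer, separate from the rate-distortion objective. A toy stand-in can illustrate that two-optimizer pattern (the `ToyBottleneck` class and its surrogate loss below are purely illustrative, not the real entropy model):

```python
import torch

class ToyBottleneck(torch.nn.Module):
    """Illustrative stand-in for an EntropyBottleneck: in CompressAI the
    aux loss only touches the module's `quantiles` parameters."""
    def __init__(self):
        super().__init__()
        self.quantiles = torch.nn.Parameter(torch.tensor([3.0]))

    def aux_loss(self):
        # toy surrogate objective over the quantiles (not the real loss)
        return self.quantiles.abs().sum()

model = ToyBottleneck()
# the aux loss gets its own optimizer over the quantile parameters only,
# so minimizing it never competes with the main rate-distortion loss
aux_optimizer = torch.optim.Adam([model.quantiles], lr=1e-2)

start = model.aux_loss().item()
for _ in range(200):
    aux_optimizer.zero_grad()
    aux = model.aux_loss()
    aux.backward()
    aux_optimizer.step()
# the aux term shrinks even if the rate-distortion loss is untouched
```

Because the two optimizers touch disjoint parameter sets, an aux loss that grows during fine-tuning usually reflects the quantile parameters chasing shifting latent statistics rather than a failure of the rate-distortion training itself.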
