The-Learning-And-Vision-Atelier-LAVA / ArbSR

[ICCV 2021] Learning A Single Network for Scale-Arbitrary Super-Resolution


When I am training the ArbRCAN model, the training loss is very large. Is this a normal situation?

RayTan183 opened this issue · comments

commented

When I am training the ArbRCAN model, the training loss is very large. Sometimes the loss is so large that the batch is skipped. I also tried testing with intermediate weights such as 70.pth, but the results were even worse than with the original pre-trained model.
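For context on the "skip the batch" behavior mentioned above: EDSR-style training loops (which many super-resolution codebases build on) commonly guard against loss spikes by skipping the optimizer step when the current loss far exceeds the previous one. The sketch below is illustrative only, not the repository's actual trainer; the `should_skip` helper and the threshold value of 8.0 are assumptions for demonstration.

```python
def should_skip(loss, last_loss, threshold=8.0):
    """Return True when the current loss spikes past threshold * last_loss.

    Illustrative guard: the multiplier (8.0 here) is a hypothetical
    value, not necessarily what ArbRCAN's training code uses.
    """
    return last_loss is not None and loss > threshold * last_loss


# Simulated loss stream with one spike (the 50.0 batch is skipped).
losses = [0.9, 0.8, 50.0, 0.7]
last = None
kept = []
for loss in losses:
    if should_skip(loss, last):
        continue  # batch skipped: no optimizer step, no update to `last`
    kept.append(loss)
    last = loss

print(kept)  # the spiking batch is dropped from the update sequence
```

If many batches trip a guard like this early in training, that usually points to a data-loading or learning-rate problem rather than to the guard itself.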

commented

[image attachment]