freeze CLIP TextEncoder
Ahnsun opened this issue
EnYu commented
Hi, the paper points out that the text encoder of CLIP should be frozen during training. I wonder how to achieve this and where the corresponding code is. Thanks!
Sosuke Kobayashi commented
In my understanding (though I'm not an author), the optimizer determines which parameters get updated.
https://github.com/isl-org/lang-seg/blob/main/modules/lsegmentation_module.py#L119-L175
Because these lines do not select the CLIP parameters (self.net.clip_pretrained), those parameters are frozen.
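A minimal sketch of that pattern (the `ToyNet` class and its submodule names are illustrative stand-ins, not code from the repo): only the head's parameters are handed to the optimizer, so the CLIP-like encoder never receives an update even though gradients flow through it.

```python
import torch
import torch.nn as nn

# Hypothetical model mirroring the setup: a pretrained CLIP-like
# text encoder plus a trainable head.
class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.clip_pretrained = nn.Linear(8, 8)  # stands in for the CLIP text encoder
        self.head = nn.Linear(8, 2)             # stands in for the trainable part

net = ToyNet()

# Pass only the head's parameters to the optimizer; clip_pretrained
# is excluded, so optimizer.step() never touches its weights.
optimizer = torch.optim.SGD(net.head.parameters(), lr=0.1)

encoder_before = net.clip_pretrained.weight.clone()
head_before = net.head.weight.clone()

loss = net.head(net.clip_pretrained(torch.randn(4, 8))).sum()
loss.backward()
optimizer.step()

# Encoder weights are unchanged; head weights moved.
assert torch.equal(net.clip_pretrained.weight, encoder_before)
assert not torch.equal(net.head.weight, head_before)
```

Note that `clip_pretrained` still accumulates gradients here; if you also want to skip that computation and memory, you could additionally call `requires_grad_(False)` on it, but excluding it from the optimizer is already enough to keep it frozen.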