kkkls / FFTformer

[CVPR 2023] Efficient Frequency Domain-based Transformer for High-Quality Image Deblurring


Finetune using single GPU

riestiyazain opened this issue · comments

Dear authors, thank you for open-sourcing the code. I only have a laptop with a single 4080 GPU; is it possible to fine-tune the pretrained weights with this resource?

Hello. Given the 4080's limited 16GB of VRAM, you can try a patch size of 128x128 and a batch size of 2 as a possible solution.

Thank you for the fast reply. Do you have any advice on the hyperparameters I can use to finetune the model?


You can try fine-tuning with a learning rate of 1e-4, a patch size of 128x128, and a batch size of 2.
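The suggested settings (lr 1e-4, 128x128 patches, batch size 2) can be sketched as a minimal PyTorch fine-tuning step. This is an illustration only: `TinyDeblurNet` is a hypothetical stand-in for the actual FFTformer model class from the repo, the checkpoint path is a placeholder, and L1 loss is assumed as the objective (a common choice for deblurring).

```python
import torch
import torch.nn as nn

class TinyDeblurNet(nn.Module):
    """Hypothetical stand-in for FFTformer; swap in the real model from the repo."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        # Residual prediction: network outputs a correction to the blurred input.
        return x + self.body(x)

def finetune_step(model, optimizer, blurred, sharp):
    """One fine-tuning step with L1 loss; returns the scalar loss value."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(blurred), sharp)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDeblurNet()
# model.load_state_dict(torch.load("fftformer_pretrained.pth"))  # placeholder path
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr from the thread

# Batch of 2 random 128x128 RGB patches, matching the VRAM-friendly settings above.
blurred = torch.rand(2, 3, 128, 128)
sharp = torch.rand(2, 3, 128, 128)
loss = finetune_step(model, optimizer, blurred, sharp)
```

With real data, `blurred`/`sharp` would come from a paired dataloader that random-crops 128x128 patches from the training images.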