TMM 2024: Code for "Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution".
bo-henry opened this issue 9 months ago · comments
Hi, great work!
I'm wondering what the training cost of DSAT is (how many days, how many GPUs, and which GPU type)?
Thank you very much for your interest in our work. We trained with an RTX 3090 for 3-4 days.