Why does the first-stage fine-tuning train all parameters instead of using LoRA?
ohheysherry66 opened this issue · comments
Thank you for your great work! My question is in the title: why does the first-stage fine-tuning update all parameters rather than using LoRA?
@ohheysherry66 You can also fine-tune SD with LoRA. We tried it at the time and it achieved similar performance.
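For anyone comparing the two options, the practical difference is the trainable parameter count: LoRA freezes the pretrained weight and learns only a low-rank update. Below is a minimal NumPy sketch of one LoRA-adapted linear layer (illustrative only; all dimensions and names are assumptions, not the repo's actual training code):

```python
import numpy as np

# Minimal LoRA sketch (illustrative; not this repo's training code).
# The base weight W stays frozen; only the low-rank factors A and B train.
d_in, d_out, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
alpha = 8.0                             # LoRA scaling hyperparameter

def lora_forward(x):
    # Output = frozen path + (alpha / r)-scaled low-rank update B @ A.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
full_params = W.size             # 4096 trainable if fine-tuning W directly
lora_params = A.size + B.size    # 512 trainable with LoRA (rank 4)
print(full_params, lora_params)

# With B zero-initialized, the adapted layer starts identical to the base model.
print(np.allclose(lora_forward(x), x @ W.T))
```

Here LoRA trains 512 parameters instead of 4096 for this single layer; full fine-tuning simply updates W itself. As the reply notes, both routes reached similar performance in the authors' experiments.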