mobaidoctor / med-ddpm

Speed up the training

rassaire opened this issue

Hi again,

I remember you mentioned that keeping the batch size at 1 aids model convergence. Typically, we increase the batch size when we have more computing resources in order to accelerate learning. Given that the batch size will stay at 1, how can we speed up training on a large training dataset?

Hi @rassaire, it looks like there might have been a misunderstanding. Could you please review my previous response? #26 (comment). If you still have questions or need further clarification, feel free to let us know. Thank you!

@mobaidoctor, your comment was clear; I misinterpreted it. I apologize.
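
For readers who land here with the same question: one generic way to increase throughput without changing the batch size is mixed-precision training combined with faster data loading. The PyTorch sketch below is purely illustrative and rests on assumptions not in this thread (a placeholder model, a synthetic dataset, and the standard `torch.cuda.amp` utilities); it is not med-ddpm's actual training loop.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and synthetic data; med-ddpm's real model and dataset differ.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10)).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(100, 1, 32, 32),
                        torch.randint(0, 10, (100,)))

# Keep batch_size=1 as recommended; num_workers and pin_memory
# speed up data loading without touching the batch size.
loader = DataLoader(dataset, batch_size=1, shuffle=True,
                    num_workers=4, pin_memory=True)

scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:
    x = x.cuda(non_blocking=True)
    y = y.cuda(non_blocking=True)
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():  # forward pass in mixed precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()    # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()
```

Distributed data parallelism is another common lever: each GPU still processes one sample per step, so the per-device batch size stays at 1 while several samples are consumed per iteration across devices.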