SHI-Labs / StyleNAT

A new, flexible, and efficient image generation framework that sets a new SOTA on FFHQ-256 with an FID of 2.05 (2022)


Memory usage

MengZhen-Chi opened this issue · comments

Could you tell me the memory usage of StyleNAT during training and inference?

This is entirely dependent on what batch size you use. But to give you some intuition: during training, a batch of 4 at 256×256 resolution uses approximately 17 GB per GPU (remember this is the total per GPU, not 17 GB divided by 4 for a per-sample figure). For inference I see just about 4 GB for a batch of 4 and just under 2 GB for a batch of 1 (both saving 4 images).
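If you want to check numbers like these on your own hardware, here is a minimal sketch using PyTorch's built-in peak-memory counters. The `generator` here is a trivial stand-in, not the actual StyleNAT model; substitute the real model and latent size from your config.

```python
import torch
import torch.nn as nn

# Stand-in for the StyleNAT generator (hypothetical); swap in the real model.
generator = nn.Sequential(nn.Linear(512, 3 * 256 * 256)).cuda()

torch.cuda.reset_peak_memory_stats()  # zero the peak-memory counter

with torch.no_grad():
    z = torch.randn(4, 512, device="cuda")  # batch of 4 latent codes
    imgs = generator(z)                     # inference forward pass

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU memory: {peak_gib:.2f} GiB")
```

For training-time numbers, run the same measurement around a full forward/backward step, since gradients and optimizer state dominate the footprint there.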

Any Turing-or-newer GeForce card should be able to run inference (I've tested on a 2080 Ti and a 3080 Ti). As for training, just an FYI: GAN training is often intensive. Our FFHQ-256 runs took about two weeks on 8 A100s with a batch size of 8.

My mistake, I've been cleaning up some code and noticed we're instantiating more than needed. This will be resolved in the next push. In the meantime, you can either wrap main.py:51 and main.py:67 in a conditional checking for args.type == "train", or just comment out those lines; inference only uses the EMA model after all. See the sketch below.
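For illustration, the guard would look roughly like this. I don't reproduce the exact statements at main.py:51 and main.py:67 here; `build_generator` and `build_discriminator` are hypothetical stand-ins for whatever training-only model construction lives on those lines.

```python
# Guard the training-only (non-EMA) model construction so it is skipped
# during inference. `build_generator`/`build_discriminator` are
# hypothetical placeholders for the actual calls at main.py:51/67.
if args.type == "train":
    generator = build_generator(args)          # main.py:51 equivalent
    discriminator = build_discriminator(args)  # main.py:67 equivalent
# The EMA model used for inference is constructed unconditionally elsewhere,
# so inference paths are unaffected by this guard.
```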

This will reduce inference memory but not training memory.

Closing now due to inactivity; please reopen if you have more questions. I've pushed changes that should fix the above issues. But also note that this is research code, and there are plenty of optimization opportunities still on the table.