yumingj / Text2Human

Code for Text2Human (SIGGRAPH 2022). Paper: "Text2Human: Text-Driven Controllable Human Image Generation"

Home Page: https://yumingj.github.io/projects/Text2Human.html


CUDA ran out of memory

etomasic opened this issue

When trying the UI demo, the first step (parsing) works, but the second step runs out of memory. Is there an easy way to reduce the batch size or something similar so I could run it? That seems to be the most common suggestion for this kind of PyTorch problem (see the sketch after the traceback below). I will also try running it on a GPU with twice as much memory and see if that works.

Traceback (most recent call last):
  File "ui_demo.py", line 162, in generate_human
    result = self.sample_model.sample_and_refine()
  File "C:\Users\erikn\Text2Human-main\models\sample_model.py", line 245, in sample_and_refine
    dec = self.decoder(top_quant, bot_h=bot_dec_res)
  File "C:\Users\erikn\miniconda3\envs\text2human\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\erikn\Text2Human-main\models\archs\vqgan_arch.py", line 1018, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "C:\Users\erikn\miniconda3\envs\text2human\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\erikn\Text2Human-main\models\archs\vqgan_arch.py", line 617, in forward
    return x + h
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.44 GiB reserved in total by PyTorch)
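
A couple of generic PyTorch workarounds I'm going to try in the meantime; nothing here is specific to Text2Human, the call just reuses the names from the traceback above:

import torch

# Release cached blocks left over from the parsing step before sampling starts.
torch.cuda.empty_cache()

# Inference does not need autograd state; no_grad() skips storing activations
# for a backward pass and usually lowers peak GPU memory.
with torch.no_grad():
    result = self.sample_model.sample_and_refine()

On PyTorch 1.10+ the PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 environment variable can also help with fragmentation, though the 1.x build in the traceback above may predate it.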

If nothing else, some documentation of the GPU memory required to at least run the demo would be nice. Also, I noticed there is a "Sample Steps" parameter on the Hugging Face Space. Where in the code could I modify this parameter for the demo?
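
My guess, in case it helps anyone else searching: if the sample model keeps its options in a dict parsed from the YAML config, overriding the step count might look roughly like the lines below. Both the attribute name opt and the key 'sample_steps' are guesses based on the UI label, not names I have verified in this repo:

# Hypothetical sketch inside ui_demo.py's generate_human(); `opt` and
# 'sample_steps' are assumed names, check the repo's config keys first.
opt = self.sample_model.opt        # options parsed from the YAML config (assumed)
opt['sample_steps'] = 64           # fewer sampling steps -> less compute per image
result = self.sample_model.sample_and_refine()

Grepping the repo for sample_steps (or for the slider label in the Space's app code) should confirm the real name.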

It did end up working on my 8 GB 1070 Ti, if anyone ends up looking at this.