VAST-AI-Research / TripoSR

Out of Memory.

Bikimaharjan opened this issue

I have 4 GB of VRAM and I get an out-of-memory error. How do I switch to CPU only?

To switch to the CPU, run with `run.py --device cpu`.
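A minimal sketch of the full invocation, assuming the usual TripoSR `run.py` layout; the `examples/chair.png` input path and `output/` directory are illustrative placeholders, not taken from this thread:

```shell
# Force CPU-only inference; slower, but sidesteps CUDA OOM on 4 GB cards.
python run.py examples/chair.png --device cpu --output-dir output/
```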

File "E:\TripSo\TripoSR\env\lib\site-packages\transformers\models\vit\modeling_vit.py", line 219, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 6.05 GiB is allocated by PyTorch, and 158.95 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "E:\TripSo\TripoSR\env\lib\site-packages\gradio\queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "E:\TripSo\TripoSR\env\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
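The allocator hint from the error message (`PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`) can also be set from inside a script; a minimal sketch, where the variable name and value come from the traceback above and everything else is standard library:

```python
import os

# Must be set before the first `import torch`, because the CUDA caching
# allocator reads PYTORCH_CUDA_ALLOC_CONF once at initialization.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

Note that this only reduces fragmentation of already-reserved memory; it cannot add capacity a 4 GiB card does not have.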

What is the minimum GPU VRAM required? I have 4 GB. I tried lowering `model.renderer.set_chunk_size(8192)` to `model.renderer.set_chunk_size(8)`, but I am still getting out of memory.
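Lowering the chunk size only bounds the renderer's peak memory: it splits the points being rendered into smaller batches. The traceback above fails earlier, inside the ViT image encoder's attention, which the renderer chunk size does not affect, so shrinking it alone may not resolve the OOM. A toy sketch of the chunking idea (pure Python illustration of the concept, not TripoSR's actual implementation):

```python
def render_in_chunks(points, render_fn, chunk_size):
    """Evaluate render_fn over points in fixed-size chunks.

    Smaller chunk_size lowers peak memory per batch at the cost of
    more iterations -- the trade-off behind set_chunk_size.
    """
    out = []
    for start in range(0, len(points), chunk_size):
        out.extend(render_fn(points[start:start + chunk_size]))
    return out
```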

It ran fine on the CPU, though it took almost 6 minutes.