SpongebBob / Finetune-ChatGLM2-6B

Full-parameter fine-tuning of ChatGLM2-6B, with support for efficient multi-turn dialogue fine-tuning.

CUDA out of memory. Tried to allocate 11.63 GiB (GPU 0; 23.69 GiB total capacity; 11.63 GiB already allocated; 11.28 GiB free

harbor1981 opened this issue · comments

CUDA out of memory. Tried to allocate 11.63 GiB (GPU 0; 23.69 GiB total capacity; 11.63 GiB already allocated; 11.28 GiB free; 11.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2023-07-11 08:56:40,747] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 16794
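The error message itself suggests the first thing to try when reserved memory far exceeds allocated memory: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch (the value `128` is an illustrative choice, not a measured optimum):

```shell
# Tell PyTorch's caching allocator to cap split block size, reducing
# fragmentation; set this before launching the training process.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note this only helps when the failure is caused by fragmentation; it cannot make a model fit that simply exceeds the card's total capacity.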

What are the hardware requirements for full-parameter fine-tuning? Can it run on a single 3090? What about two?

Eight cards.
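The eight-card answer is consistent with a back-of-envelope estimate. A hedged sketch, assuming mixed-precision training with a standard Adam optimizer (the ~6.2B parameter count and bytes-per-parameter figures are common approximations, not measurements of this repo):

```python
# Rough memory estimate for full-parameter fine-tuning of a ~6B model
# with mixed precision + Adam. All per-parameter byte counts are the
# usual textbook approximations, excluding activations and buffers.
n_params = 6_200_000_000  # ~6.2B parameters (approximate for ChatGLM2-6B)
bytes_per_param = (
    2    # fp16 model weights
    + 2  # fp16 gradients
    + 4  # fp32 master copy of weights
    + 4  # fp32 Adam first moment (momentum)
    + 4  # fp32 Adam second moment (variance)
)
total_gib = n_params * bytes_per_param / 1024**3
print(f"~{total_gib:.0f} GiB of optimizer/model state, before activations")
```

Roughly 90+ GiB of state alone, versus ~24 GiB on one 3090, is why a single card (or two) cannot hold full fine-tuning without sharding the optimizer states and gradients across many GPUs (e.g. DeepSpeed ZeRO across 8 cards).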