lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.


Error when specifying a GPU at model launch

ilovecomet opened this issue

Command:
python -m fastchat.serve.cli --model-path ~/data/model/chatglm3-6b --gpus 2
Error message:
Traceback (most recent call last):
  File "/root/miniconda3/envs/ragllm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/ragllm/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/fastchat/serve/cli.py", line 304, in <module>
    main(args)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/fastchat/serve/cli.py", line 227, in main
    chat_loop(
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/fastchat/serve/inference.py", line 361, in chat_loop
    model, tokenizer = load_model(
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 367, in load_model
    model.to(device)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2595, in to
    return super().to(*args, **kwargs)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1152, in to
    return self._apply(convert)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/root/miniconda3/envs/ragllm/lib/python3.10/site-packages/torch/cuda/__init__.py", line 321, in _lazy_init
    raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
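
For what it's worth, FastChat's --gpus flag takes a comma-separated list of GPU IDs (for example --gpus 0,1), while --num-gpus sets how many GPUs the model is sharded across; --gpus 2 therefore selects only the single GPU with ID 2. If the intent was to run on two GPUs, a likely-intended command (assuming the machine exposes GPU IDs 0 and 1) would be:

python -m fastchat.serve.cli --model-path ~/data/model/chatglm3-6b --gpus 0,1 --num-gpus 2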

What is your CUDA_VISIBLE_DEVICES environment variable set to? And what does nvidia-smi show?

The assertion message claims this is a PyTorch bug, but in any case, could you check those two things first? Thanks!
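
In case it helps, here is a minimal diagnostic sketch of what PyTorch actually sees; run it in the same conda environment (it assumes only that torch is importable, which FastChat requires):

import os
import torch

# FastChat derives CUDA_VISIBLE_DEVICES from the --gpus argument, so with
# --gpus 2 only the physical GPU with ID 2 would be visible to this process.
print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))

# Number of GPUs visible to PyTorch; the failing assertion
# (device >= 0 && device < num_gpus) fires when a requested device index
# is not below this count.
print("torch.cuda.device_count() =", torch.cuda.device_count())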