vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Home Page: https://docs.vllm.ai


[Bug]: Running vllm docker image with neuron fails

yaronr opened this issue

Your current environment

root@9c92d584ab5f:/app# python3 ./collect_env.py
Collecting environment information...
WARNING 05-15 15:13:52 ray_utils.py:46] Failed to import Ray with ModuleNotFoundError("No module named 'ray'"). For multi-node inference, please install Ray with pip install ray.
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31

Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.14.343-260.564.amzn2.x86_64-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7R13 Processor
Stepping: 1
CPU MHz: 3553.882
BogoMIPS: 5299.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected, RAS-Poisoning: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save vaes vpclmulqdq rdpid

Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] sagemaker_pytorch_inference==2.0.21
[pip3] torch==2.1.2
[pip3] torch-model-archiver==0.9.0
[pip3] torch-neuronx==2.1.1.2.0.1b0
[pip3] torch-xla==2.1.1
[pip3] torchserve==0.9.0
[pip3] torchvision==0.16.2
[pip3] triton==2.1.0
[conda] mkl 2024.0.0 ha957f24_49657 conda-forge
[conda] mkl-include 2024.0.0 ha957f24_49657 conda-forge
[conda] numpy 1.25.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.18.1 pypi_0 pypi
[conda] sagemaker-pytorch-inference 2.0.21 pypi_0 pypi
[conda] torch 2.1.2 pypi_0 pypi
[conda] torch-model-archiver 0.9.0 pypi_0 pypi
[conda] torch-neuronx 2.1.1.2.0.1b0 pypi_0 pypi
[conda] torch-xla 2.1.1 pypi_0 pypi
[conda] torchserve 0.9.0 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version:
instance-type: inf2.xlarge
instance-id: i-072ac184a3a22e2b5
+--------+--------+--------+---------+
| NEURON | NEURON | NEURON | PCI     |
| DEVICE | CORES  | MEMORY | BDF     |
+--------+--------+--------+---------+
| 0      | 2      | 32 GB  | 00:1f.0 |
+--------+--------+--------+---------+
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

I built the docker image as follows:

git clone https://github.com/vllm-project/vllm.git
cd vllm
docker build -f ./Dockerfile.neuron . -t vllm:0.4.2-neuron
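
Worth noting: collect_env.py above reports "Neuron: Disabled" under vLLM Build Flags, so it may help to sanity-check the freshly built image before running it. A minimal sketch, assuming the tag from the build step and the standard torch_neuronx import name:

# Hypothetical sanity check: confirm vLLM and the Neuron torch plugin
# import cleanly inside the image built above.
docker run --rm vllm:0.4.2-neuron \
    python3 -c "import torch_neuronx, vllm; print(vllm.__version__)"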

Then I ran it with:

docker run -ti --device=/dev/neuron0 -e... \
    python3 -m vllm.entrypoints.openai.api_server \
        --model=meta-llama/Meta-Llama-3-8B-Instruct \
        --device=neuron \
        --tensor-parallel-size=2 \
        --gpu-memory-utilization=0.9 \
        --enforce-eager
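
Before starting the server, it can also help to confirm the Neuron device is actually visible from inside the container. A minimal sketch using the Neuron SDK's neuron-ls tool, assuming it is on the image's PATH:

# Hypothetical check: pass the device through and list Neuron devices
# from inside the same image.
docker run --rm --device=/dev/neuron0 vllm:0.4.2-neuron neuron-ls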
Logs:

WARNING 05-15 15:19:38 ray_utils.py:46] Failed to import Ray with ModuleNotFoundError("No module named 'ray'"). For multi-node inference, please install Ray with pip install ray.
WARNING 05-15 15:19:39 config.py:404] Possibly too large swap space. 8.00 GiB out of the 15.31 GiB total CPU memory is allocated for the swap space.
INFO 05-15 15:19:39 llm_engine.py:103] Initializing an LLM engine (v0.4.2) with config: model='meta-llama/Meta-Llama-3-8B-Instruct', speculative_config=None, tokenizer='meta-llama/Meta-Llama-3-8B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=meta-llama/Meta-Llama-3-8B-Instruct)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 05-15 15:19:42 utils.py:443] Pin memory is not supported on Neuron.
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/dockerd-entrypoint.py", line 28, in <module>
    subprocess.check_call(shlex.split(" ".join(sys.argv[1:])))
  File "/opt/conda/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['python3', '-m', 'vllm.entrypoints.openai.api_server', '--model=meta-llama/Meta-Llama-3-8B-Instruct', '--device=neuron', '--tensor-parallel-size=2', '--gpu-memory-utilization=0.9', '--enforce-eager']' died with <Signals.SIGKILL: 9>.
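
A SIGKILL during checkpoint loading on a 15.31 GiB host usually points at the kernel OOM killer: Meta-Llama-3-8B in bfloat16 needs roughly 8e9 params × 2 bytes ≈ 16 GB just for the weights, which already exceeds the inf2.xlarge's host RAM, and the 8 GiB swap-space warning above adds further pressure. A quick way to confirm, run on the host rather than in the container:

# Hypothetical check: look for an oom-kill record around the time the
# api_server process died.
dmesg | grep -i -E 'oom|killed process'

If that is the cause, lowering the engine's CPU swap allocation (vLLM exposes a --swap-space flag, in GiB) may reduce the pressure, though it will not help if the weights alone exceed host memory.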