vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Home Page: https://docs.vllm.ai

[Bug]: `assert num_new_tokens == 1` fails when `SamplingParams.n` is not `1` and `max_tokens` is large.

tongyx361 opened this issue

Your current environment

PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31

Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe

Nvidia driver version: 535.161.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          256
On-line CPU(s) list:             0-254
Off-line CPU(s) list:            255
Thread(s) per core:              1
Core(s) per socket:              64
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           49
Model name:                      AMD EPYC 7742 64-Core Processor
Stepping:                        0
Frequency boost:                 enabled
CPU MHz:                         1500.000
CPU max MHz:                     2250.0000
CPU min MHz:                     1500.0000
BogoMIPS:                        4500.41
Virtualization:                  AMD-V
L1d cache:                       2 MiB
L1i cache:                       2 MiB
L2 cache:                        32 MiB
L3 cache:                        256 MiB
NUMA node0 CPU(s):               0-63,128-191
NUMA node1 CPU(s):               64-127,192-254
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] triton==2.2.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] blas                      1.0                         mkl    defaults
[conda] mkl                       2023.1.0         h213fc3f_46344    defaults
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.19.3                   pypi_0    pypi
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.2.1                    pypi_0    pypi
[conda] triton                    2.2.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    NODE    NODE    SYS     SYS     SYS     SYS     SYS     SYS     0-63,128-191    0               N/A
GPU1    NODE     X      NODE    NODE    SYS     SYS     SYS     SYS     SYS     SYS     0-63,128-191    0               N/A
GPU2    NODE    NODE     X      NODE    SYS     SYS     SYS     SYS     SYS     SYS     0-63,128-191    0               N/A
GPU3    NODE    NODE    NODE     X      SYS     SYS     SYS     SYS     SYS     SYS     0-63,128-191    0               N/A
GPU4    SYS     SYS     SYS     SYS      X      NODE    NODE    NODE    NODE    NODE    64-127,192-254  1               N/A
GPU5    SYS     SYS     SYS     SYS     NODE     X      NODE    NODE    NODE    NODE    64-127,192-254  1               N/A
GPU6    SYS     SYS     SYS     SYS     NODE    NODE     X      NODE    PHB     PHB     64-127,192-254  1               N/A
GPU7    SYS     SYS     SYS     SYS     NODE    NODE    NODE     X      NODE    NODE    64-127,192-254  1               N/A
NIC0    SYS     SYS     SYS     SYS     NODE    NODE    PHB     NODE     X      PIX
NIC1    SYS     SYS     SYS     SYS     NODE    NODE    PHB     NODE    PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: ibp194s0f0
  NIC1: ibp194s0f1

🐛 Describe the bug

`assert num_new_tokens == 1` fails when `SamplingParams.n` is not `1` and `max_tokens` is large, e.g.

sampling_params = SamplingParams(n=32, best_of=32, temperature=1.6, top_p=0.95, max_tokens=2048)
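
For context, the assertion fires in `Scheduler._schedule_swapped` (see the traceback below). My reading, as a minimal sketch with hypothetical names rather than the actual vLLM scheduler code: in the decode phase, every running sequence in a group produces exactly one token per step, so a swapped-in group needs one new-token slot per parallel sequence, and the assertion can only hold when `n == 1`:

# Hypothetical sketch of the failing invariant; simplified, not the real vLLM code.
def count_new_decode_tokens(num_running_seqs_in_group: int) -> int:
    # Decode phase: each running sequence in the group generates exactly one token.
    return sum(1 for _ in range(num_running_seqs_in_group))

num_new_tokens = count_new_decode_tokens(2)  # two parallel sequences, i.e. n=2
assert num_new_tokens == 1  # AssertionError whenever SamplingParams.n > 1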

Error messages:

Traceback (most recent call last):
  File "./vllm-bug.py", line 19, in <module>
    outputs = llm.generate(prompts, sampling_params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 214, in generate
    return self._run_engine(use_tqdm)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 242, in _run_engine
    step_outputs = self.llm_engine.step()
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 557, in step
    seq_group_metadata_list, scheduler_outputs = self.scheduler.schedule()
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 890, in schedule
    scheduler_outputs = self._schedule()
                        ^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 863, in _schedule
    return self._schedule_default()
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 733, in _schedule_default
    remaining_swapped, swapped_in = self._schedule_swapped(
                                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/python/env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 548, in _schedule_swapped
    assert num_new_tokens == 1
           ^^^^^^^^^^^^^^^^^^^
AssertionError

Reproducing code:

from vllm import LLM, SamplingParams
from datasets import load_dataset
from transformers import AutoTokenizer

model_id = "deepseek-ai/deepseek-math-7b-rl"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gsm8k_ds = load_dataset("gsm8k", "main")["test"]
prompts = [
    tokenizer.apply_chat_template(
        # apply_chat_template expects a list of message dicts, not a bare dict.
        [{"role": "user", "content": f"{row['question']}\nPlease reason step by step, and put your final answer within \\boxed{{}}."}],
        tokenize=False,
    )
    for row in gsm8k_ds
]

# print(prompts)

n_paths = 2
sampling_params = SamplingParams(n=n_paths, best_of=n_paths, temperature=1.6, top_p=0.95, max_tokens=2048)

# swap_space is CPU swap in GiB per GPU; with n > 1, vLLM preempts sequence groups
# by swapping (not recomputation), which is the path through _schedule_swapped
# that hits the failing assertion.
llm = LLM(model=model_id, swap_space=60)

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Same issue here. As a temporary workaround, I repeat the generation N times with `n=1`, as in the sketch below.
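
For reference, a minimal sketch of that workaround, reusing `llm`, `prompts`, `n_paths`, and `SamplingParams` from the reproducing script above (`single_params` and `all_texts` are hypothetical names; this is a stopgap, not an official fix):

# Workaround sketch: request one completion per pass instead of n at once, so no
# sequence group ever holds more than one running sequence.
single_params = SamplingParams(n=1, best_of=1, temperature=1.6, top_p=0.95, max_tokens=2048)

all_texts = [[] for _ in prompts]
for _ in range(n_paths):
    outputs = llm.generate(prompts, single_params)
    for i, output in enumerate(outputs):
        all_texts[i].append(output.outputs[0].text)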

cc @rkooo567 is this something you can fix in the chunked scheduler?

Yeah, let me fix this by today (and add tests).

Fix PR here: #4451 (review)