LLM-Benchmarks

A Benchmark Toolbox for LLM Performance (Inference and Evaluation).

License: Apache License 2.0


Latest News 🔥

  • [2024/07/04] Added support for evaluation with the vLLM backend using lm-evaluation-harness.
  • [2024/06/21] Added support for inference performance benchmark with LMDeploy and vLLM.
  • [2024/06/14] Added support for inference performance benchmark with TensorRT-LLM.
  • [2024/06/14] We officially released LLM-Benchmarks!

LLM-Benchmarks Overview

LLM-Benchmarks is an easy-to-use toolbox for benchmarking Large Language Model (LLM) performance on inference and evaluation.

Getting Started

Download the ShareGPT dataset

You can download the dataset by running:

wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
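To sanity-check the download, you can peek at the first record. This is a minimal sketch assuming jq is installed; the field names reflect the common ShareGPT layout (a JSON array of objects, each with an id and a conversations list), not anything specified by this repository:

# Inspect the first conversation turn of the first record (assumes jq is installed)
jq '.[0].conversations[0]' ShareGPT_V3_unfiltered_cleaned_split.json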

Prepare the Docker images and container environment

You can build the Docker images by running:

# for tensorrt-llm
bash scripts/trt_llm/build_docker.sh all

# for lmdeploy
bash scripts/lmdeploy/build_docker.sh

# for vllm
bash scripts/vllm/build_docker.sh
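After the builds finish, you can confirm the images exist. The exact image names are set by the build scripts, so the grep pattern below is only an assumption based on the backend names:

# List the freshly built images (image names are assumptions, not confirmed by this repo)
docker images | grep -iE 'trt|lmdeploy|vllm'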

Run benchmarks

  • Inference Performance (a concrete example follows this list)
bash run_benchmark.sh model_path dataset_path sample_num device_id(like 0 or 0,1)
  • Task Evaluation (a concrete example follows this list)
# Build evaluation image
bash scripts/evaluation/build_docker.sh vllm # (or lmdeploy or trt-llm)

# Evaluation with vLLM backend
bash run_eval.sh mode(fp16, fp8-kv-fp16, fp8-kv-fp8) model_path device_id(like 0 or 0,1)
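For instance, an inference-performance run might look like the line below. The model path, sample count, and device IDs are illustrative placeholders, not values from this repository:

# Hypothetical invocation: benchmark a local model on 1000 sampled prompts using GPUs 0 and 1
bash run_benchmark.sh /data/models/Llama-2-7b-hf ShareGPT_V3_unfiltered_cleaned_split.json 1000 0,1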
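Likewise, a task-evaluation run with the vLLM backend could look like this, with the mode taken from the options above and the model path again a placeholder:

# Hypothetical invocation: evaluate an fp16 model on GPU 0
bash run_eval.sh fp16 /data/models/Llama-2-7b-hf 0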
