Inference Benchmark

Maximize the potential of your models with the inference benchmark tool.


What is it

Inference Benchmark provides a standard way to measure the performance of inference workloads, and serves as a tool for evaluating and optimizing them.
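To make the idea concrete, here is a minimal sketch of the kind of measurement such a benchmark performs: fire concurrent requests at a serving endpoint and report latency percentiles and throughput. The endpoint URL, payload, and request counts below are illustrative assumptions, not the tool's actual interface.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/inference"  # hypothetical serving endpoint
PAYLOAD = {"text": "hello world"}        # hypothetical request body


def one_request() -> float:
    """Send a single request and return its end-to-end latency in seconds."""
    start = time.perf_counter()
    requests.post(URL, json=PAYLOAD, timeout=10)
    return time.perf_counter() - start


def run(total: int = 1000, concurrency: int = 32) -> None:
    begin = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: one_request(), range(total)))
    elapsed = time.perf_counter() - begin
    print(
        f"p50={statistics.median(latencies) * 1e3:.1f}ms "
        f"p99={latencies[int(total * 0.99)] * 1e3:.1f}ms "
        f"qps={total / elapsed:.1f}"
    )


if __name__ == "__main__":
    run()
```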

Results

BERT

We benchmarked pytriton (triton-inference-server) and mosec with BERT. Dynamic batching was enabled for both frameworks, with a maximum batch size of 32 and a maximum wait time of 10 ms. Please check out the result for more details.
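For reference, dynamic batching in mosec is configured when registering a worker. Below is a minimal sketch using the same settings as the benchmark; `EchoWorker` is a placeholder standing in for a real BERT worker (pytriton exposes comparable batching knobs through its model configuration):

```python
from mosec import Server, Worker


class EchoWorker(Worker):
    """Placeholder worker; a real benchmark worker would run BERT here."""

    def forward(self, data: list) -> list:
        # With dynamic batching enabled, `data` arrives as a batch (list)
        # of decoded request payloads; return one result per request.
        return data


if __name__ == "__main__":
    server = Server()
    # Batch up to 32 requests, waiting at most 10 ms for a batch to fill,
    # matching the max batch size / max wait time used in the benchmark.
    server.append_worker(EchoWorker, max_batch_size=32, max_wait_time=10)
    server.run()
```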

DistilBERT

More results with different models and serving frameworks are coming soon.

About

Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper)

