getorca / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
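As a rough illustration of what the engine does, here is a minimal offline-inference sketch assuming the upstream vLLM Python API (`LLM` and `SamplingParams` from the documented quickstart); since this fork is not active, the exact API surface in this repository may differ. The model id `facebook/opt-125m` and the sampling values are only examples.

```python
# Minimal offline-inference sketch, assuming the upstream vLLM Python API.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]

# Sampling settings for generation (values here are illustrative).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small example model; any Hugging Face causal LM id works.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```

The throughput and memory-efficiency claims in the tagline come from vLLM's continuous batching and PagedAttention KV-cache management, which let many requests like the ones above share GPU memory efficiently.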

Home Page: https://docs.vllm.ai


This repository is not active.

About


License: Apache License 2.0


Languages

Python 76.4%, Cuda 20.7%, C++ 1.8%, Shell 0.5%, Dockerfile 0.2%, C 0.2%, Jinja 0.1%