A high-throughput and memory-efficient inference and serving engine for LLMs
Home Page: https://vllm.readthedocs.io
Repository from GitHub: https://github.com/smallcloudai/vllm
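As a quick orientation, below is a minimal sketch of offline batch inference with vLLM's Python API (`LLM` and `SamplingParams`); the model name and prompts are placeholders chosen for illustration, and details may differ across vLLM versions.

```python
from vllm import LLM, SamplingParams

# Load a model; "facebook/opt-125m" is an illustrative small model, not a recommendation.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation (values here are arbitrary examples).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call.
prompts = ["Hello, my name is", "The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

# Each output pairs the original prompt with its generated text.
for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Completion: {output.outputs[0].text!r}")
```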