
Optimum-NVIDIA

Optimized inference with NVIDIA and Hugging Face



Optimum-NVIDIA delivers the best inference performance on the NVIDIA platform through Hugging Face. Run LLaMA 2 at 1,200 tokens/second (up to 28x faster than the standard transformers framework) by changing just a single line in your existing transformers code.

Installation

You can use a Docker container to try Optimum-NVIDIA today. Images are available on the Hugging Face Docker Hub.

docker pull huggingface/optimum-nvidia
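
Once the image is pulled, you can start an interactive container with GPU access. A minimal sketch; the exact flags (shared-memory or IPC settings, volume mounts) may differ for your setup:

docker run -it --rm --gpus all huggingface/optimum-nvidia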

An Optimum-NVIDIA package that can be installed with pip will be made available soon.

Building from source

Instead of using the pre-built Docker container, you can build Optimum-NVIDIA from source:

# Target CUDA architectures (SM 9.0 = Hopper, SM 8.9 = Ada Lovelace)
TARGET_SM="90-real;89-real"
git clone --recursive --depth=1 https://github.com/huggingface/optimum-nvidia.git
cd optimum-nvidia/third-party/tensorrt-llm
# Build the TensorRT-LLM release image for the selected architectures
make -C docker release_build CUDA_ARCHS=$TARGET_SM
# Build the Optimum-NVIDIA image on top of it
cd ../.. && docker build -t <organisation_name/image_name>:<version> -f docker/Dockerfile .

Quickstart Guide

Pipelines

Hugging Face pipelines provide a simple yet powerful abstraction to quickly set up inference. If you already have a pipeline from transformers, you can unlock the performance benefits of Optimum-NVIDIA by just changing one line.

- from transformers.pipelines import pipeline
+ from optimum.nvidia.pipelines import pipeline

pipe = pipeline('text-generation', 'meta-llama/Llama-2-7b-chat-hf', use_fp8=True)
pipe("Describe a real-world application of AI in sustainable energy.")

Generate

If you want control over advanced features like quantization and token selection strategies, we recommend using the generate() API. Just like with pipelines, switching from existing transformers code is super simple.

- from transformers import AutoModelForCausalLM
+ from optimum.nvidia import AutoModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", padding_side="left")

model = AutoModelForCausalLM.from_pretrained(
  "meta-llama/Llama-2-7b-chat-hf",
+ use_fp8=True,  # load the model with FP8 quantization enabled
)

model_inputs = tokenizer(["How is autonomous vehicle technology transforming the future of transportation and urban planning?"], return_tensors="pt").to("cuda")

generated_ids = model.generate(
    **model_inputs,
    top_k=40,               # sample only from the 40 most likely tokens
    top_p=0.7,              # nucleus sampling: keep tokens covering 70% of the probability mass
    repetition_penalty=10,  # penalize tokens that have already been generated
)

tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

To learn more about text generation with LLMs, check out this guide!

Support Matrix

We test Optimum-NVIDIA on 4090, L40S, and H100 Tensor Core GPUs, though it is expected to work on any GPU based on the following architectures:

  • Volta
  • Turing (with experimental support for T4 / RTX Quadro x000)
  • Ampere (A100/A30 are supported. Experimental support for A10, A40, RTX Ax000)
  • Hopper
  • Ada-Lovelace

Note that FP8 support is only available on GPUs based on Hopper and Ada-Lovelace architectures.
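
If you are unsure whether your GPU supports FP8, you can check its compute capability with PyTorch (Ada Lovelace is SM 8.9, Hopper is SM 9.0). This is a quick check, not part of Optimum-NVIDIA itself:

import torch

# FP8 requires compute capability 8.9 (Ada Lovelace) or 9.0 (Hopper) and above
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability {major}.{minor} -> FP8 supported: {(major, minor) >= (8, 9)}")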

Optimum-NVIDIA works on Linux and will support Windows soon.

Optimum-NVIDIA currently accelerates text-generation with LLaMAForCausalLM, and we are actively working to expand support to include more model architectures and tasks.

Contributing

Check out our Contributing Guide

License

Apache License 2.0

