
Optimum-NVIDIA

Optimized inference with NVIDIA and Hugging Face



Optimum-NVIDIA delivers the best inference performance on the NVIDIA platform through Hugging Face. Run LLaMA 2 at 1,200 tokens/second (up to 28x faster than running with the transformers framework alone) by changing just a single line in your existing transformers code.

Installation

You can use a Docker container to try Optimum-NVIDIA today. Images are available on the Hugging Face Docker Hub.

docker pull huggingface/optimum-nvidia
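Once the image is pulled, one way to start it interactively with GPU access (assuming the NVIDIA Container Toolkit is installed; flags shown are illustrative) is:

```shell
# --gpus all exposes the host GPUs to the container;
# --rm removes the container on exit; -it gives an interactive shell.
docker run --gpus all --rm -it huggingface/optimum-nvidia
```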

An Optimum-NVIDIA package that can be installed with pip will be made available soon.

Quickstart Guide

Pipelines

Hugging Face pipelines provide a simple yet powerful abstraction to quickly set up inference. If you already have a pipeline from transformers, you can unlock the performance benefits of Optimum-NVIDIA by just changing one line.

- from transformers.pipelines import pipeline
+ from optimum.nvidia.pipelines import pipeline

pipe = pipeline('text-generation', 'meta-llama/Llama-2-7b-chat-hf', use_fp8=True)
pipe("Describe a real-world application of AI in sustainable energy.")

Generate

If you want control over advanced features like quantization and token selection strategies, we recommend using the generate() API. Just like with pipelines, switching from existing transformers code is simple.

- from transformers import LlamaForCausalLM
+ from optimum.nvidia import LlamaForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", padding_side="left")

model = LlamaForCausalLM.from_pretrained(
  "meta-llama/Llama-2-7b-chat-hf",
+ use_fp8=True,  
)

model_inputs = tokenizer(["How is autonomous vehicle technology transforming the future of transportation and urban planning?"], return_tensors="pt").to("cuda")

generated_ids = model.generate(
                    **model_inputs, 
                    top_k=40, 
                    top_p=0.7, 
                    repetition_penalty=10,
)

tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

To learn more about text generation with LLMs, check out this guide!
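To give intuition for the token selection parameters used above, here is a minimal, library-free sketch of how top-k and top-p (nucleus) filtering narrow the candidate token set before sampling. This is an illustration of the general technique, not optimum-nvidia's internal implementation:

```python
import math

def top_k_top_p_filter(logits, top_k=40, top_p=0.7):
    """Return indices of tokens that survive top-k then top-p filtering."""
    # Sort token indices by logit, highest first.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    # Top-k: keep only the k highest-scoring tokens.
    order = order[:top_k]
    # Softmax over the surviving logits.
    exps = [math.exp(logits[i]) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-p: keep the smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append(idx)
        cum += p
        if cum >= top_p:
            break
    return kept

# Toy vocabulary of 5 tokens: only the most probable survive both filters.
print(top_k_top_p_filter([2.0, 0.5, 1.0, 3.0, 0.1], top_k=3, top_p=0.7))
# → [3, 0]
```

The surviving indices are then what the sampler actually draws from, which is why lowering top_p or top_k makes generation more deterministic.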

Support Matrix

We test Optimum-NVIDIA on 4090, L40S, and H100 Tensor Core GPUs, though it is expected to work on any GPU based on the following architectures:

  • Volta
  • Turing
  • Ampere
  • Hopper
  • Ada-Lovelace

Note that FP8 support is only available on GPUs based on Hopper and Ada-Lovelace architectures.
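The support matrix above can be expressed as a small lookup. This is a hypothetical helper (the names are ours, not part of optimum-nvidia's API) that answers whether FP8 is available for a given architecture:

```python
# Architectures listed in the support matrix above.
SUPPORTED_ARCHS = {"volta", "turing", "ampere", "hopper", "ada-lovelace"}
# FP8 is only available on Hopper and Ada Lovelace.
FP8_ARCHS = {"hopper", "ada-lovelace"}

def supports_fp8(arch: str) -> bool:
    """Return True if the given GPU architecture supports FP8 inference."""
    arch = arch.lower()
    if arch not in SUPPORTED_ARCHS:
        raise ValueError(f"Unsupported architecture: {arch}")
    return arch in FP8_ARCHS

print(supports_fp8("hopper"))  # → True
print(supports_fp8("ampere"))  # → False (FP16 and other precisions only)
```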

Optimum-NVIDIA works on Linux and will support Windows soon.

Optimum-NVIDIA currently accelerates text-generation with LlamaForCausalLM, and we are actively working to expand support to include more model architectures and tasks.

Contributing

Check out our Contributing Guide

About

License: Apache License 2.0

