Hugging Face's repositories
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
text-generation-inference
Large Language Model Text Generation Inference
accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
text-embeddings-inference
A blazing fast inference solution for text embeddings models
huggingface.js
Utilities to use the Hugging Face Hub API
dataset-viewer
Lightweight web API for visualizing and exploring any dataset - computer vision, speech, text, and tabular - stored on the Hugging Face Hub
optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
optimum-benchmark
A unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
optimum-neuron
Easy, fast, and very cheap training and inference on AWS Trainium and Inferentia chips.
diffusion-fast
Faster generation with text-to-image diffusion models.
optimum-habana
Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU)
optimum-tpu
Google TPU optimizations for Transformers models
visual-blocks-custom-components
Custom Hugging Face Nodes for Google Visual Blocks for ML