regisss's repositories

accelerate

πŸš€ A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

blog

Public repo for HF blog posts

Language: Jupyter Notebook · Stars: 0 · Issues: 0

codecarbon

Track emissions from compute and recommend ways to reduce their environmental impact.

Language: Jupyter Notebook · License: MIT · Stars: 0 · Issues: 0

course

The Hugging Face course

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

diffusers

πŸ€— Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

doc-builder

The package used to build the documentation of our Hugging Face repos

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

exporters

Export Hugging Face models to Core ML and TensorFlow Lite

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

nfl_helmet_assignment

NFL Health & Safety - Helmet Assignment: Segment and label helmets in video footage

Language: Jupyter Notebook · Stars: 0 · Issues: 1

onnxruntime

ONNX Runtime: cross-platform, high-performance ML inference and training accelerator

Language: C++ · License: MIT · Stars: 0 · Issues: 0

optimum

🏎️ Accelerate training and inference of πŸ€— Transformers with easy-to-use hardware optimization tools

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

optimum-amd

AMD-related optimizations for transformer models

Language: Jupyter Notebook · License: MIT · Stars: 0 · Issues: 0

optimum-furiosa

Accelerated inference of πŸ€— models using FuriosaAI NPU chips.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 0 · Issues: 0

optimum-graphcore-fork

Blazing-fast training of πŸ€— Transformers on Graphcore IPUs

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

optimum-habana

Easy and lightning-fast training of πŸ€— Transformers on Habana Gaudi processors (HPUs)

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

optimum-neuron

Easy, fast, and low-cost training and inference on AWS Trainium and Inferentia chips.

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

transformers

πŸ€— Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

optimum-benchmark

A unified multi-backend utility for benchmarking Transformers and Diffusers, with full support for Optimum's hardware optimizations and quantization schemes.

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

text-generation-inference

Large Language Model Text Generation Inference

Language: Python · License: NOASSERTION · Stars: 0 · Issues: 0

transformers-bloom-inference

Fast Inference Solutions for BLOOM

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0