Younes Belkada's repositories
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.
DataDreamer
Prompt. Generate Synthetic Data. Train & Align Models.
hf-torch-compile-benchmark
A repository to benchmark the expected speedups using `torch.compile` and `torch.scaled_dot_product_attention`
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
StarCoderReview
Get StarCoder to review your PRs
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
alpaca-lora
Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware
bitsandbytes-1
8-bit CUDA functions for PyTorch
blog
Public repo for HF blog posts
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
EETQ
Easy and Efficient Quantization for Transformers
fsdp_qlora
Training LLMs with QLoRA + FSDP
guidance
A guidance language for controlling large language models.
lion-pytorch
🦁 Lion, a new optimizer discovered by Google Brain that is purportedly better than Adam(W), in PyTorch
LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
llama.cpp
LLM inference in C/C++
llm-awq
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
LOMO
LOMO: LOw-Memory Optimization
quanto
A PyTorch quantization toolkit
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
unsloth
5x faster, 60% less memory QLoRA fine-tuning