Yeojoon's starred repositories

LLMs-from-scratch

Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step

Language: Jupyter Notebook · License: NOASSERTION · Stars: 27,820

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stars: 36,413

simple-local-rag

Build a RAG (Retrieval Augmented Generation) pipeline from scratch and have it all run locally.

Language: Jupyter Notebook · Stars: 457
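
To make the idea concrete, a local RAG pipeline boils down to embedding text chunks, retrieving the ones closest to a query, and stuffing them into the prompt. The sketch below illustrates only the retrieval step using the sentence-transformers package; the chunks, model name, and query are illustrative placeholders, not code from this repository.

```python
# Minimal sketch of the retrieval step in a local RAG pipeline.
# Assumes the `sentence-transformers` package; the chunks, model name,
# and query are hypothetical placeholders, not this repository's code.
from sentence_transformers import SentenceTransformer, util

chunks = [
    "The GPU has 24 GB of VRAM.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "PyTorch tensors live on a device such as cuda:0.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # small local embedding model
chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

query = "What is RAG?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank chunks by cosine similarity and keep the best match as context.
scores = util.cos_sim(query_embedding, chunk_embeddings)[0]
best = chunks[int(scores.argmax())]
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```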

micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API

Language: Jupyter Notebook · License: MIT · Stars: 10,099
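
To give a flavor of the engine above, the snippet below builds a tiny computation graph and backpropagates through it with micrograd's `Value` class, roughly following the usage shown in its README.

```python
# Tiny sketch of reverse-mode autodiff with micrograd's Value class
# (usage roughly as shown in the repository's README).
from micrograd.engine import Value

a = Value(2.0)
b = Value(-3.0)
c = a * b + b**2          # builds a small computation graph
d = c.relu()              # ReLU nonlinearity provided by the library

d.backward()              # backpropagate through the graph
print(d.data)             # forward value
print(a.grad, b.grad)     # d(d)/da and d(d)/db
```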

logix

AI Logging for Interpretability and Explainability🔬

Language: Python · License: Apache-2.0 · Stars: 77

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stars: 34,897
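
As a rough sketch of how DeepSpeed attaches to a training loop: the model is wrapped via `deepspeed.initialize` together with a JSON-style config that turns on features such as ZeRO sharding and fp16. The model, config values, and hyperparameters below are placeholders, and the exact config schema should be checked against the DeepSpeed documentation.

```python
# Rough sketch of wrapping a model with DeepSpeed (values are illustrative
# placeholders; consult the DeepSpeed docs for the full config schema).
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},   # ZeRO stage-2 optimizer-state sharding
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# Returns the wrapped engine plus optimizer/dataloader/scheduler handles.
# Typically launched with the `deepspeed` launcher so distributed env vars are set.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

# Training step: the engine owns backward() and step().
# loss = engine(batch).pow(2).mean()
# engine.backward(loss)
# engine.step()
```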

GPU-Puzzles

Solve puzzles. Learn CUDA.

Language: Jupyter Notebook · License: MIT · Stars: 8,874
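
The puzzles above are solved by writing small GPU kernels through Numba's CUDA interface; as a flavor of that style (an illustrative guess, not a solution taken from the repository), a map-style kernel might look like the following.

```python
# Flavor of a Numba CUDA "map" kernel in the spirit of the early puzzles
# (illustrative only, not a solution copied from the repository).
import numpy as np
from numba import cuda

@cuda.jit
def add_ten(out, a):
    i = cuda.threadIdx.x            # one thread per element
    if i < a.size:                  # guard against extra threads
        out[i] = a[i] + 10

a = np.arange(8, dtype=np.float32)
out = np.zeros_like(a)
add_ten[1, 8](out, a)               # launch: 1 block, 8 threads
print(out)                          # [10. 11. ... 17.]
```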

bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.

Language: Python · License: MIT · Stars: 6,104
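
A common entry point to bitsandbytes is its 8-bit optimizers, which act as drop-in replacements for their full-precision PyTorch counterparts. A minimal sketch, with the model and learning rate as placeholders:

```python
# Sketch: swapping 32-bit Adam for bitsandbytes' 8-bit variant.
# The model and learning rate are placeholders; requires a CUDA GPU.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()

# Drop-in replacement for torch.optim.Adam with 8-bit optimizer states.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

x = torch.randn(16, 4096, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```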

torchtune

A Native-PyTorch Library for LLM Fine-tuning

Language: Python · License: BSD-3-Clause · Stars: 4,033

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python · License: MIT · Stars: 10,405
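
The idea behind LoRA is to freeze a pretrained weight matrix W and learn a low-rank update, so the effective weight becomes W + (alpha/r)·B·A with small trainable matrices A and B. The sketch below illustrates that idea in plain PyTorch; it is not loralib's actual API.

```python
# Illustration of the low-rank adaptation idea (not loralib's API):
# a frozen base weight combined with a trainable rank-r update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)          # freeze the pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = x W^T + scaling * x A^T B^T  (only A and B receive gradients)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768)
y = layer(torch.randn(2, 768))
print(y.shape)  # torch.Size([2, 768])
```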

LLM-Adapters

Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"

Language: Python · License: Apache-2.0 · Stars: 1,049

lm-evaluation-harness

A framework for few-shot evaluation of language models.

Language: Python · License: MIT · Stars: 6,547
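
For a sense of how the harness is typically driven, recent releases expose a `simple_evaluate` entry point; the call below follows the v0.4-era interface and should be treated as an assumption to verify against the installed version.

```python
# Sketch of running a few-shot evaluation with lm-evaluation-harness.
# `simple_evaluate` and its arguments follow recent (v0.4-era) releases;
# older versions use a different entry point, so verify against your install.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                        # Hugging Face backend
    model_args="pretrained=gpt2",      # checkpoint to load
    tasks=["hellaswag"],               # benchmark task(s)
    num_fewshot=0,
)
print(results["results"]["hellaswag"])
```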

lighteval

Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends

Language: Python · License: MIT · Stars: 685

llmtools

Finetuning Large Language Models on One Consumer GPU in Under 4 Bits

Language: Python · Stars: 697

hivemind

Decentralized deep learning in PyTorch. Built to train models on thousands of volunteer machines across the world.

Language: Python · License: MIT · Stars: 2,001

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stars: 27,533
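
vLLM's offline inference API is compact: construct an `LLM`, pass prompts and `SamplingParams`, and read back the generated text. The model name below is a placeholder; any supported Hugging Face causal LM id works.

```python
# Sketch of offline batched generation with vLLM's Python API.
# The model name is an illustrative placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)      # first sampled completion per prompt
```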

flash-attention

Fast and memory-efficient exact attention

Language: Python · License: BSD-3-Clause · Stars: 13,567
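
FlashAttention computes exact attention without materializing the full attention matrix; its fused kernel takes half-precision `(batch, seqlen, heads, head_dim)` tensors on a CUDA GPU. The shapes below are placeholders in a rough usage sketch.

```python
# Sketch of calling the fused FlashAttention kernel directly.
# Requires a CUDA GPU and fp16/bf16 inputs; shapes are placeholders.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact attention, computed without building the (seqlen x seqlen) matrix.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```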

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stars: 8,289

SqueezeLLM

[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization

Language: Python · License: MIT · Stars: 631

CoLLiE

Collaborative Training of Large Language Models in an Efficient Way

Language: Python · License: Apache-2.0 · Stars: 407

low-bit-optimizers

Low-bit optimizers for PyTorch

Language: Python · License: Apache-2.0 · Stars: 111

FedML

FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, TensorOpera AI (https://TensorOpera.ai) is your generative AI platform at scale.

Language: Python · License: Apache-2.0 · Stars: 4,157

Awesome-Federated-Learning

FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai

Stars: 1,923

betty

Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization

Language: Python · License: Apache-2.0 · Stars: 329

FedTorch

FedTorch is a generic repository for benchmarking different federated and distributed learning algorithms using PyTorch Distributed API.

Language: Python · License: GPL-2.0 · Stars: 185

FedAc-NeurIPS20

Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020)

Language: Python · Stars: 14
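
As background for the federated-learning entries above (FedML, FedTorch, FedAc-NeurIPS20), the baseline FedAvg step is a data-size-weighted average of locally trained models. The sketch below illustrates that averaging in plain PyTorch and is not code from any of these repositories.

```python
# Generic FedAvg aggregation step (illustrative; not from FedTorch/FedML/FedAc).
# Each client trains locally, then the server averages parameters,
# weighting by the number of local examples.
import copy
import torch

def fedavg(global_model, client_states, client_sizes):
    """Average client state_dicts into the global model, weighted by data size."""
    total = sum(client_sizes)
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = sum(
            state[key] * (n / total) for state, n in zip(client_states, client_sizes)
        )
    global_model.load_state_dict(avg_state)
    return global_model

# Toy usage with two "clients" holding slightly different weights.
global_model = torch.nn.Linear(4, 2)
clients = [copy.deepcopy(global_model) for _ in range(2)]
with torch.no_grad():
    clients[1].weight.add_(0.1)      # pretend client 1 drifted during local training
fedavg(global_model, [c.state_dict() for c in clients], client_sizes=[100, 300])
```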

gitignore

A collection of useful .gitignore templates

License: CC0-1.0 · Stars: 161,404