~'s starred repositories

project-based-learning

Curated list of project-based tutorials

grok-1

Grok open release

Language: Python · License: Apache-2.0 · Stargazers: 49376 · Issues: 562 · Issues: 206

MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming

Language: Python · License: MIT · Stargazers: 43255 · Issues: 891 · Issues: 619

LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Language: Python · License: Apache-2.0 · Stargazers: 18892 · Issues: 159 · Issues: 1454

llama2.c

Inference Llama 2 in one file of pure C

awesome-ai-agents

A list of AI autonomous agents

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stargazers: 7999 · Issues: 87 · Issues: 1739
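
The TensorRT-LLM entry above describes a Python API for defining models and running inference. As a rough illustration only, here is a minimal sketch assuming the high-level `LLM` API available in recent TensorRT-LLM releases; the model name and sampling settings are placeholders, not part of the starred repo's description.

```python
# Minimal sketch of TensorRT-LLM's high-level Python API (assumed recent release).
# The model identifier and sampling parameters below are illustrative placeholders.
from tensorrt_llm import LLM, SamplingParams

def main():
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Builds (or loads) a TensorRT engine for the given model and runs inference on it.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()
```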

tree-of-thought-llm

[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Language: Python · License: MIT · Stargazers: 4515 · Issues: 120 · Issues: 54

OLMo

Modeling, training, eval, and inference code for OLMo

Language: Python · License: Apache-2.0 · Stargazers: 4309 · Issues: 44 · Issues: 186

torchtune

A Native-PyTorch Library for LLM Fine-tuning

Language: Python · License: BSD-3-Clause · Stargazers: 3841 · Issues: 43 · Issues: 469

Eureka

Official Repository for "Eureka: Human-Level Reward Design via Coding Large Language Models" (ICLR 2024)

Language: Jupyter Notebook · License: MIT · Stargazers: 2764 · Issues: 25 · Issues: 36

Awesome-LLM-Inference

📖 A curated list of awesome LLM inference papers with code, covering TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, continuous batching, FlashAttention, PagedAttention, etc.

AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)

Language: Python · License: Apache-2.0 · Stargazers: 2091 · Issues: 29 · Issues: 137

OpenRLHF

An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)

Language: Python · License: Apache-2.0 · Stargazers: 1763 · Issues: 21 · Issues: 179

AgentTuning

AgentTuning: Enabling Generalized Agent Abilities for LLMs

Xwin-LM

Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment

RRHF

[NeurIPS 2023] RRHF & Wombat

StyleSelectorXL

This repository contains an Automatic1111 extension that allows users to select and apply different styles to their inputs using SDXL 1.0.

bytepiece

A purer tokenizer with a higher compression rate

Language: Python · License: Apache-2.0 · Stargazers: 436 · Issues: 9 · Issues: 16

GPT-Fathom

GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings.

Language: Python · License: MIT · Stargazers: 350 · Issues: 1 · Issues: 6

pretraining-with-human-feedback

Code accompanying the paper Pretraining Language Models with Human Preferences

Language: Python · License: MIT · Stargazers: 171 · Issues: 6 · Issues: 8

cpl

Code for Contrastive Preference Learning (CPL)

Language: Python · License: MIT · Stargazers: 145 · Issues: 3 · Issues: 10

ReMax

Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"

EMO

[ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691)

Click

Code and data for the ACL 2023 Findings paper "Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning"

GIFT-Graph-guided-Feature-Transfer-Network

Source code for GIFT (CIKM 22)

Language: Python · Stargazers: 12 · Issues: 1 · Issues: 0