hxdaze / awesome-decentralized-llm

Repos and resources for running LLMs locally. (e.g. LLaMA, Cerebras, RWKV)

awesome-decentralized-llm

This is a collection of resources that I will at some point clean up and organize.

Repositories

  • xturing - Build and control your own LLMs (2023-04-03, stochastic.ai)

  • GPTQ-for-LLaMA - 4-bit quantization of LLaMA using GPTQ. (2023-04-01, qwopqwop200, Meta ToS)

  • GPT4All - LLM trained on ~800k GPT-3.5-Turbo generations, based on LLaMA. (2023-03-28, Nomic AI, OpenAI ToS)

  • Dolly - Large language model trained on the Databricks Machine Learning Platform (2023-03-24, Databricks Labs, Apache)

  • bloomz.cpp - Inference of HuggingFace's BLOOM-like models in pure C/C++. (2023-03-16, Nouamane Tazi, MIT License)

  • alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM (2023-03-16, Kevin Kwok, MIT License)

  • Stanford Alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data. (2023-03-13, Stanford CRFM, Apache License, Non-Commercial Data, Meta/OpenAI ToS)

  • llama.cpp - Port of Facebook's LLaMA model in C/C++. (2023-03-10, Georgi Gerganov, MIT License)

  • ChatRWKV - Like ChatGPT, but open source and powered by the RWKV (100% RNN) language model. (2023-01-09, PENG Bo, Apache License)

  • RWKV-LM - RNN with Transformer-level LLM performance. Combines the best of RNNs and transformers: fast inference, low VRAM usage, fast training. (2022?, PENG Bo, Apache License)
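Several of the projects above (GPTQ-for-LLaMA, llama.cpp, alpaca.cpp) rely on 4-bit weight quantization to make LLMs fit on consumer hardware. The sketch below is not GPTQ itself (which minimizes layer-wise reconstruction error) but a naive round-to-nearest scheme, just to give intuition for how float weights are stored as 16 integer levels plus a scale:

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Naive symmetric round-to-nearest 4-bit quantization.

    Illustrative only: real tools (GPTQ, llama.cpp's quantizers) use
    smarter grouping and error correction. Here we map each float
    weight onto a signed int4 level in [-8, 7] with a single scale.
    """
    scale = np.max(np.abs(weights)) / 7  # use the symmetric part of int4's range
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int4 levels and the scale."""
    return q.astype(np.float32) * scale

# Toy weight vector; real layers hold millions of such values.
w = np.array([0.12, -0.53, 0.91, -0.07], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
```

Storing 4 bits per weight instead of 16 or 32 is what shrinks a 7B-parameter model from ~13 GB (fp16) to roughly 4 GB, at the cost of the small rounding error visible in `w_hat`.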

Spaces, Models & Datasets

Resources

About

License: MIT License