# awesome-decentralized-llm

This is a collection of resources that I will at some point clean up and organize.
## Repositories

- xturing - Build and control your own LLMs. (2023-04-03, stochastic.ai)
- GPTQ-for-LLaMA - 4-bit quantization of LLaMA using GPTQ. (2023-04-01, qwopqwop200, Meta ToS)
- GPT4All - LLM trained with ~800k GPT-3.5-Turbo generations, based on LLaMA. (2023-03-28, Nomic AI, OpenAI ToS)
- Dolly - Large language model trained on the Databricks Machine Learning Platform. (2023-03-24, Databricks Labs, Apache License)
- bloomz.cpp - Inference of HuggingFace's BLOOM-like models in pure C/C++. (2023-03-16, Nouamane Tazi, MIT License)
- alpaca.cpp - Locally run an instruction-tuned chat-style LLM. (2023-03-16, Kevin Kwok, MIT License)
- Stanford Alpaca - Code and documentation to train Stanford's Alpaca models and generate the data. (2023-03-13, Stanford CRFM, Apache License, Non-Commercial Data, Meta/OpenAI ToS)
- llama.cpp - Port of Facebook's LLaMA model in C/C++. (2023-03-10, Georgi Gerganov, MIT License)
- ChatRWKV - Like ChatGPT, but powered by the RWKV (100% RNN) language model, and open source. (2023-01-09, PENG Bo, Apache License)
- RWKV-LM - RNN with Transformer-level LLM performance. Combines the best of RNNs and Transformers: fast inference, low VRAM use, fast training. (2022?, PENG Bo, Apache License)
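Several of the repositories above lean on low-bit quantization to make LLaMA-class models runnable on consumer hardware (GPTQ-for-LLaMA, llama.cpp's 4-bit formats). As a rough illustration of the underlying idea only — this is plain round-to-nearest quantization, *not* the GPTQ algorithm, which additionally minimizes layer-wise reconstruction error — here is a minimal sketch:

```python
# Minimal round-to-nearest 4-bit quantization sketch (illustrative only;
# real GPTQ / llama.cpp formats use per-group scales and smarter rounding).

def quantize_4bit(weights):
    """Map floats to 4-bit codes (0..15) with a per-tensor scale and zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0          # 2**4 - 1 = 15 quantization levels
    zero = lo
    return [round((w - zero) / scale) for w in weights], scale, zero

def dequantize_4bit(codes, scale, zero):
    """Recover approximate floats from the 4-bit codes."""
    return [c * scale + zero for c in codes]

weights = [-1.2, -0.3, 0.0, 0.45, 0.9, 1.2]
codes, scale, zero = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale, zero)

# Each reconstructed weight lies within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

The point of the sketch: 4-bit storage cuts weight memory to roughly a quarter of fp16, at the cost of a bounded per-weight rounding error, which is why a 7B or 13B model fits on a laptop.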
## Spaces, Models & Datasets

- Cerebras-GPT (7 models) (2023-03-28, Hugging Face, Cerebras, Apache License)
- Alpaca Data Cleaned (2023-03-21, Gene Ruebsamen, Apache License & OpenAI ToS)
- Alpaca Dataset (2023-03-13, Hugging Face, Tatsu-Lab, Meta/OpenAI ToS)
- Alpaca Model Search (Hugging Face, Meta/OpenAI ToS)
## Resources

- Running GPT4All on a Mac Using Python langchain in a Jupyter Notebook (2023-04-04, Tony Hirst, Blog Post)
- Cerebras-GPT vs LLaMA AI Model Comparison (2023-03-29, LunaSec, Blog Post)
- Cerebras-GPT: A Family of Open, Compute-efficient LLMs (2023-03-28, Cerebras, Blog Post)
- Hello Dolly: Democratizing the magic of ChatGPT with open models (2023-03-24, Databricks, Blog Post)
- The RWKV language model: An RNN with the advantages of a transformer (2023-03-23, Johan Sokrates Wind, Blog Post)
- Bringing Whisper and LLaMA to the masses (2023-03-15, The Changelog & Georgi Gerganov, Podcast Episode)
- Alpaca: A Strong, Replicable Instruction-Following Model (2023-03-13, Stanford CRFM, Project Homepage)
- Large language models are having their Stable Diffusion moment (2023-03-10, Simon Willison, Blog Post)
- Running LLaMA 7B and 13B on a 64GB M2 MacBook Pro with llama.cpp (2023-03-10, Simon Willison, Blog/Today I Learned)
- Introducing LLaMA: A foundational, 65-billion-parameter large language model (2023-02-24, Meta AI, Meta ToS)