Awesome Totally Open ChatGPT

A list of totally open alternatives to ChatGPT

ChatGPT is GPT-3.5 finetuned with RLHF (Reinforcement Learning from Human Feedback) for human instruction and chat.
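
As a concrete (if toy) illustration of the "human feedback" part, here is a minimal sketch of the pairwise loss commonly used to train RLHF reward models. This is a generic illustration in Python, not OpenAI's code; the reward values are invented for the example.

```python
# Pairwise (Bradley-Terry) loss commonly used for RLHF reward models:
# the human-preferred ("chosen") response should score higher than the
# less-preferred ("rejected") one.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log(sigmoid(r_chosen - r_rejected))."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Loss is small when the ranking is already correct, large when inverted.
print(preference_loss(2.0, 0.5))  # ~0.20
print(preference_loss(0.5, 2.0))  # ~1.70
```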

Alternatives are projects featuring different instruct-finetuned language models for chat. Projects are not counted if they are:

  • Alternative frontend projects that simply call OpenAI's APIs.
  • Projects using language models that are not finetuned for human instruction or chat.

Tags:

  • Bare: only source code, no data, no model weights, no chat system
  • Standard: yes data, yes model weights, bare chat via API
  • Full: yes data, yes model weights, fancy chat system including TUI and GUI
  • Complicated: semi open source, not really open source, based on a closed model, etc.

Table of Contents

  1. The template
  2. The list

The template

Append the new project at the end of the file:

## [{owner}/{project-name}](https://github.com/link/to/project)

Description goes here

Tags: Bare/Standard/Full/Complicated

The list

kuleshov/minillm

MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs. While llama.cpp enables running LLMs on Apple hardware, MiniLLM supports a larger set of models on most recent NVIDIA GPUs.

lucidrains/PaLM-rlhf-pytorch

Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Tags: Bare

togethercomputer/OpenChatKit

OpenChatKit provides a powerful, open-source base to create both specialized and general-purpose chatbots for various applications.

Tags: Full

oobabooga/text-generation-webui

A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.

Tags: Full

KoboldAI/KoboldAI-Client

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author’s Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.

Tags: Full

LAION-AI/Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and retrieves information dynamically to do so.

Tags: Full

tatsu-lab/stanford_alpaca

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.

Related links:

  • See these Reddit comments first #1

Tags: Complicated
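
Alpaca-style instruction tuning hinges on a fixed prompt template that wraps each (instruction, input, response) triple. The sketch below shows that template as used in the project's released code; the instruction and input strings are made up for illustration.

```python
# The prompt template used by Stanford Alpaca's released training code
# (the instruction/input values below are made-up examples).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize the following text.",
    input="ChatGPT is GPT-3.5 finetuned with RLHF...",
)
print(prompt)  # the model is trained to continue with the response text
```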

BlinkDL/ChatRWKV

ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model and open source.

Tags: Full

THUDM/ChatGLM-6B

ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6 GB of GPU memory is required at the INT4 quantization level).

Tags: Full
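
The 6 GB figure is easy to sanity-check with back-of-the-envelope arithmetic, sketched below. The weight sizes follow directly from the parameter count; treating the remaining memory as headroom for activations and the KV cache is an illustrative assumption, not an official breakdown.

```python
# Rough memory arithmetic for deploying a 6.2B-parameter model at INT4.
params = 6.2e9

fp16_gb = params * 2 / 1e9    # FP16: 2 bytes per weight
int4_gb = params * 0.5 / 1e9  # INT4: 4 bits = 0.5 bytes per weight

print(f"FP16 weights: {fp16_gb:.1f} GB")  # ~12.4 GB: too big for a 6 GB card
print(f"INT4 weights: {int4_gb:.1f} GB")  # ~3.1 GB: fits in 6 GB, leaving
                                          # headroom for activations/KV cache
```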

bigscience-workshop/xmtf

This repository provides an overview of all components used to create BLOOMZ, mT0, and xP3, introduced in the paper Crosslingual Generalization through Multitask Finetuning.

Tags: Standard

carperai/trlx

A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF), supporting online RL for models up to 20B parameters and offline RL for larger models. Basically what you would use to finetune GPT into ChatGPT.

Tags: Bare
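
To give a feel for the workflow, here is a sketch in the style of trlx's high-level training API, with a toy reward function standing in for a learned reward model. The exact function signature is an assumption and may differ across trlx versions, so check the repo's examples before relying on it.

```python
# Online RLHF sketch in the style of trlx's high-level API
# (assumed signature -- see the repo's examples for the current one).
import trlx

def reward_fn(samples, **kwargs):
    # Toy reward: prefer shorter completions. A real setup would score
    # each sample with a trained reward model instead.
    return [-len(sample) for sample in samples]

trainer = trlx.train(
    "gpt2",               # small base model, just for the sketch
    reward_fn=reward_fn,  # called on batches of generated samples
)
```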

databrickslabs/dolly

A script to fine-tune the GPT-J 6B model on the Alpaca dataset. Insightful if you want to fine-tune LLMs.

Tags: Bare
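
One detail worth understanding before reading any such script: instruction fine-tuning typically computes the loss only on the response tokens, not on the prompt. Below is a generic illustration of that masking step with toy token IDs; it is not dolly's actual code.

```python
# Generic label-masking step common to instruction fine-tuning scripts
# (toy token IDs; an illustration, not dolly's code).
prompt_ids = [12, 345, 678]   # tokenized instruction/prompt
response_ids = [910, 11, 12]  # tokenized target response

input_ids = prompt_ids + response_ids
# -100 is PyTorch cross-entropy's ignore_index, so prompt positions
# contribute nothing to the loss; only the response is learned.
labels = [-100] * len(prompt_ids) + response_ids

print(input_ids)  # [12, 345, 678, 910, 11, 12]
print(labels)     # [-100, -100, -100, 910, 11, 12]
```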

LianjiaTech/BELLE

The goal of this project is to promote the development of an open-source community for Chinese large-scale conversational language models. It optimizes for Chinese performance on top of the original Stanford Alpaca recipe, and the model is finetuned only on data generated via ChatGPT (no other data). This repo contains:

  • 175 Chinese seed tasks used for generating the data
  • Code for generating the data
  • 0.5M generated examples used for fine-tuning the model
  • A model finetuned from BLOOMZ-7B1-mt on the data generated by this project

Tags: Standard
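
The data-generation step described above follows the self-instruct pattern: seed tasks are used to prompt ChatGPT for new instruction-response pairs. The sketch below is a hypothetical outline of that loop; call_chatgpt is a placeholder for a real API call and is not BELLE's code.

```python
# Hypothetical outline of self-instruct-style data generation as BELLE
# describes it; `call_chatgpt` stands in for a real ChatGPT API call.
import random

def call_chatgpt(prompt: str) -> str:
    return "<new instruction-response pair>"  # placeholder response

seed_tasks = [
    "Translate the following sentence into English.",
    "Summarize this paragraph in one sentence.",
]  # BELLE uses 175 Chinese seed tasks

dataset = []
for _ in range(5):  # BELLE scales this loop to ~0.5M examples
    shots = random.sample(seed_tasks, k=2)
    prompt = ("Following these example tasks, write a new instruction "
              "and answer:\n" + "\n".join(shots))
    dataset.append(call_chatgpt(prompt))

print(len(dataset), "generated examples")
```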

ethanyanjiali/minChatGPT

A minimal example of aligning language models with RLHF, similar to ChatGPT.

Tags: Standard

License

Creative Commons Zero v1.0 Universal