
The same beloved Alpaca model, wrapped in a ChatGPT-like frontend. Run your AI assistant locally.


CatAI



Run an Alpaca model on your computer with a chat UI.

Your own AI assistant, running locally on your computer.

Inspired by Dalai, Node-Llama, Alpaca.cpp

Installation & Use

npm install -g catai

catai install Vicuna-7B
catai serve


Features

  • Auto detect programming language 🧑‍💻
  • Auto detect code block 📃
  • Click on user icon to show original message 💬
  • Real-time text streaming ⏱️
  • Fast model downloads 🚀

Intro

You can use any Alpaca model as long as your computer can handle it.

catai install Vicuna-13B

You can also download a custom model like this:

catai install https://example.com/model.tar.bin --tag myModel

If you want to switch between models, you can use the catai use command.

catai use Vicuna-7B
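For example, if you installed a custom model with --tag myModel as shown above, you should be able to switch to it by that tag (exact behavior may depend on your CatAI version):

catai use myModel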

You can use any of the UIs from the client directory (the default is catai).

catai serve --ui chatGPT

Cross-platform

You can use it on Windows, Linux and Mac.

Since version 1.6.0, this package can depend on llama-node, which supports:

  • darwin-x64
  • darwin-arm64
  • linux-x64-gnu
  • win32-x64-msvc

Memory usage

CatAI runs on most modern computers; unless yours is very old, it should work.

According to a llama.cpp discussion thread, here are the memory requirements:

  • 7B => ~4 GB
  • 13B => ~8 GB
  • 30B => ~16 GB

Configuration

You can change the configuration by editing the config.js file.

catai config --editor [editor]
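For example, assuming the nano editor is installed on your system, you could open the config with:

catai config --editor nano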

After you change the configuration, you need to restart the server.

  • 💡 To improve the model's understanding, try changing the context size.
  • 💡 To increase the length of the model's output, try changing the numPredict size (see the sketch below).
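As a rough illustration, the relevant entries in config.js might look something like the sketch below; the exact option names and defaults here are assumptions based on the tips above, so check them against your own config.js:

// Hypothetical config.js excerpt — names and defaults are illustrative only
export const CONTEXT_SIZE = 2048; // larger context: the model follows more of the conversation, but uses more RAM
export const NUM_PREDICT = 512;   // higher numPredict: longer generated answers per prompt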

List models

You can list all the models that you have installed.

catai list

Uninstall models

You can uninstall models that you don't need.

catai remove Vicuna-7B

Uninstall package

You can uninstall the package.

catai remove all # remove all downloaded data
npm uninstall -g catai

Good to know

  • All downloaded data is stored in the ~/catai folder.
  • The download is multi-threaded, so it may use a lot of bandwidth, but it finishes faster!

Development

If you want to run the source code locally, you can follow the steps below.

To run the client:

cd client/catai
npm install
npm run dev

To run the server:

cd server
npm install
npm run install-model Vicuna-7B
npm start

Troubleshooting

Error loading model OR executable error

Try changing the config:

export const SELECTED_BINDING = 'alpaca-cpp';

It may be slower, but it is more likely to work with Alpaca models.

Windows Subsystem for Linux has no installed distributions

This is a problem with the zx dependency; try running the command inside git-bash.

License

This project uses Alpaca.cpp to run Alpaca models on your computer, so any license that applies to Alpaca.cpp also applies to this project.

Credits

The GPT frontend is built on top of the chatGPT Frontend mimic.


Star please

If you like this repo, star it ✨                                                    
