R00tendo / webllama

An easy way to self-host an LLM chat website.

WebLLama

Webllama is a minimal, self-hostable web interface for the Ollama project.

Installation and usage

  1. Install Ollama from https://ollama.com/
  2. Clone, build, and start webllama:

     git clone https://github.com/R00tendo/webllama
     cd webllama/
     npm i
     npm run build
     ollama pull dolphin-llama3 # or whatever model you want to use
     npm run start

  3. Start the Ollama API with ollama serve
  4. Navigate to http://localhost:3000
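Behind the scenes, webllama talks to the REST API that ollama serve exposes on port 11434. A rough sketch of what such a request looks like in Node.js 18+ (for built-in fetch); the helper names here are illustrative, not taken from webllama's source:

```javascript
// Build a request body for Ollama's /api/generate endpoint.
// The endpoint and field names follow Ollama's public REST API.
function buildGenerateRequest(model, prompt) {
  return {
    model,         // model name, e.g. "dolphin-llama3" from webllama.json
    prompt,        // the user's chat message
    stream: false, // request one JSON response instead of a token stream
  };
}

// Send the request to a locally running `ollama serve` (default port 11434).
async function ask(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest("dolphin-llama3", prompt)),
  });
  const data = await res.json();
  return data.response; // the completion text
}
```

With stream set to true instead, Ollama returns the answer token by token, which is what a chat UI would typically consume.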

Configuring

If you want to change the model, edit webllama.json in the project root. Default:

{
    "model": "dolphin-llama3"
}
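For example, to switch to Meta's llama3 model, pull it first with ollama pull llama3, set it in webllama.json, and restart webllama:

{
    "model": "llama3"
}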

Screenshots

About


License: GNU General Public License v3.0


Languages

CSS 98.2%, JavaScript 1.7%, SCSS 0.1%