myexchworld / akash-chat

LLM-based AI chat service running on Akash

Home Page: https://chat.akash.network/

Akash-Chat

How to Deploy

Requirements:

  • MySQL Database for statistics (optional)
  • Node.js
  • Instance of Ollama (local or on Akash)
  • Docker
  1. Install all dependencies

    • Install Yarn with npm install --global yarn.
    • Run yarn in the frontend and backend folders.
  2. Download font

  3. Deploy Ollama instance

    • The text is generated by one or more Ollama instances running on Akash, but you can also use a local instance.

    3.1. Deploy on Akash

    • The easiest option is to deploy Ollama on Akash. Use this SDL for deployment.
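      As a rough sketch of the shape an Akash SDL takes (the image tag, resource sizes, and pricing here are assumptions; prefer the linked SDL):
        version: "2.0"
        services:
          ollama:
            image: ollama/ollama
            expose:
              - port: 11434
                as: 11434
                to:
                  - global: true
        profiles:
          compute:
            ollama:
              resources:
                cpu:
                  units: 4
                memory:
                  size: 16Gi
                storage:
                  size: 50Gi
                gpu:
                  units: 1
                  attributes:
                    vendor:
                      nvidia:
          placement:
            akash:
              pricing:
                ollama:
                  denom: uakt
                  amount: 10000
        deployment:
          ollama:
            akash:
              profile: ollama
              count: 1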

    3.2. Deploy locally

    • To deploy locally, run the following command to automatically pull the latest Ollama Docker image and start it:
      docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
      After completion, run different models with the command:
      docker exec -it ollama ollama run mistral
      Find a full list of models on the Ollama website.
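      To check that the instance responds, you can query Ollama's HTTP API directly (assuming the mistral model pulled above):
      curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'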
  4. Set up the GPU Load Balancer

    • Ollama supports switching models on the fly; HAProxy provides load balancing across instances. Adjust the HAProxy config to match your number of instances, then build the Docker image from the backend/haproxy folder:
      docker build -t my-haproxy .
      Start it with the command:
      docker run -d --name my-running-haproxy -p 3333:3333 -p 3332:3332 my-haproxy
      This exposes the load balancer on port 3332 and the stats page on port 3333. If you run this in production, change the admin password in the config; a sketch of such a config follows below.
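      The repo ships its own config in backend/haproxy; purely as an illustration of a two-instance round-robin setup (addresses and credentials are placeholders):
        global
            daemon
        defaults
            mode http
            timeout connect 5s
            timeout client  300s
            timeout server  300s
        listen stats
            bind *:3333
            stats enable
            stats uri /
            stats auth admin:changeme    # placeholder; change for production
        frontend ollama_in
            bind *:3332
            default_backend ollama_nodes
        backend ollama_nodes
            balance roundrobin
            server ollama1 ollama-one.example.com:11434 check
            server ollama2 ollama-two.example.com:11434 check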
  5. Set up the MySQL DB (optional)

    • To log request counts in production, set up a MySQL database and create the table using the supplied schema in backend/logdb.sql. To disable logging, set LOGDB to false in your .env file or environment.
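      The authoritative schema is the supplied backend/logdb.sql; the table below is only a hypothetical illustration of what request-count logging might involve:
        -- Hypothetical example; use the real schema from backend/logdb.sql.
        CREATE TABLE request_log (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
          model      VARCHAR(64)  NOT NULL,
          created_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (id)
        );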
  6. Start/Build the Backend

    • Navigate to the backend folder and run yarn dev to start it locally, or yarn build to build the source for an Akash deployment. To deploy to Akash, run
      docker build -t yourusername/backend:version .
      push the image to a registry of your choice, and create a deployment with it on Akash. Make sure to set the exposed port (3001) in your deployment file, and set CORS_ORIGIN in your environment or .env to allow requests from your frontend.
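      Putting the variables mentioned above together, a local backend .env might look like this (only CORS_ORIGIN and LOGDB are named in this guide; the DB_* names are assumed placeholders, not confirmed by the repo):
        CORS_ORIGIN=http://localhost:3000
        LOGDB=false
        # If LOGDB is enabled, the backend also needs MySQL credentials, e.g.:
        # DB_HOST=localhost
        # DB_USER=akashchat
        # DB_PASSWORD=changeme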
  7. Start/Build the Frontend

    • Rename the .env.example file to .env and set the variables to point at your backend URL. Navigate to the frontend folder and run yarn start to run it locally, or yarn build to build for an Akash deployment. To deploy to Akash, run
      docker build -t yourusername/frontend:version .
      and use the supplied SDL for the deployment.
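      The exact keys are defined in .env.example; as an assumed example (the variable name below is a placeholder):
        # Placeholder key; check .env.example for the real variable name.
        REACT_APP_BACKEND_URL=http://localhost:3001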

Now you should be set up. If you encounter any problems, feel free to open an issue.
