LAION-AI / Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

Home Page: https://open-assistant.io

What do I do after installing Open Assistant from GitHub?

CT1800098 opened this issue · comments

I have no idea what I should do to get the local version up and running after installing it on my PC.

What do you want to work on?
Backend,
Inference, or
Frontend?

What is the difference between them?

What do you want to work on? Backend, Inference, or Frontend?

I sort of want to do what the web version does

The docs detail how you can set up and start the backend on your local PC:

Create a venv.
Clone the repo and install the required libraries as described in the backend README.
Make sure you have Postgres and Redis running.
Start the backend server using the script inside the backend directory (again, instructions are in the backend README).

Check the API documentation on your local machine at localhost:8080/docs; a quick way to verify the server is up is sketched below.
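
Once the backend is started, a minimal smoke test in Python looks like this (the port 8080 and the /docs path are assumed from the URL above; adjust if your setup differs):

```python
# Quick check that the local backend is serving its docs page.
# Port 8080 is assumed from the localhost:8080/docs URL above.
import urllib.request

with urllib.request.urlopen("http://localhost:8080/docs") as resp:
    print(resp.status)  # 200 means the docs page is being served
```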

So I just follow the instructions in the README in the backend directory?

A small remark: unless you have a very powerful GPU, you will only be able to run the website locally, but not the chat (= inference).

What is the difference between them?

Backend is for working on the Open Assistant web backend.
Frontend is for improving the UI of the same site.
Inference is for working on the LLMs.

A small remark: unless you have a very powerful GPU, you will only be able to run the website locally, but not the chat (= inference).

How much VRAM do I need?

What is an LLM?

The chat itself.
Large language models.
They have so many parameters that you need more hardware, as @stefangrotz said.

How much VRAM do I need to run it?

For the 70B model you'll need 48 GB of VRAM.
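
A rough back-of-the-envelope calculation shows where numbers like that come from (this counts the weights only; activations, KV cache, and framework overhead come on top, and the exact figure depends on the quantization scheme):

```python
# Approximate memory needed just to hold a model's weights.
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    # bits -> bytes -> gigabytes
    return n_params * bits_per_param / 8 / 1e9

n_params = 70e9  # a 70B-parameter model

for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(n_params, bits):.0f} GB")
# 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB
# A 4-bit quantized 70B model fits in 48 GB with room for overhead.
```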

If you want to run models locally, I recommend https://gpt4all.io/
It's for language models that are optimized for consumer hardware, and it is easy to use.
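
For what that looks like in practice, here is a minimal sketch using the gpt4all Python bindings (the model file name is just an example from the GPT4All catalogue; it is downloaded automatically on first use):

```python
from gpt4all import GPT4All  # pip install gpt4all

# Example model name; pick any model from the GPT4All catalogue.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    reply = model.generate("What is a language model?", max_tokens=100)
    print(reply)
```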

Is it censored or uncensored?