Ollama is an AI model management tool that allows users to install and use custom large language models locally.
The project aims to:
- Create a Discord bot that utilizes Ollama to chat with users!
- User Preferences on Chat
- Message Persistence on Channels and Threads
- Threads
- Channels
- Containerization with Docker
- Slash Commands Compatible
- Generated Token Length Handling for responses over 2000 characters
- Token Length Handling of incoming messages of any size
- User vs. Server Preferences
- Redis Caching
- Administrator Role Compatible
- Multi-User Chat Generation (Multiple users chatting at the same time)
- Automatic and Manual model pulling through the Discord client
- Allow others to create their own models personalized for their own servers!
- Documentation on creating your own LLM
- Documentation on web scraping and cleaning
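As an illustration, the token-length handling listed above has to contend with Discord's 2000-character message cap, so a generated response longer than that must be split before sending. A minimal sketch (the `splitMessage` helper below is hypothetical, not the project's actual implementation):

```typescript
// Discord rejects messages whose content exceeds 2000 characters.
const DISCORD_MAX_LENGTH = 2000;

// Split a long generated response into chunks that fit Discord's limit.
function splitMessage(text: string, maxLength: number = DISCORD_MAX_LENGTH): string[] {
  const chunks: string[] = [];
  let remaining = text;
  while (remaining.length > maxLength) {
    // Prefer breaking on the last newline inside the window to keep
    // formatting intact; fall back to a hard cut if there is none.
    let cut = remaining.lastIndexOf("\n", maxLength);
    if (cut <= 0) cut = maxLength;
    chunks.push(remaining.slice(0, cut));
    remaining = remaining.slice(cut).replace(/^\n/, "");
  }
  if (remaining.length > 0) chunks.push(remaining);
  return chunks;
}
```

Each chunk can then be sent as its own message; breaking on newlines avoids splitting a sentence or code fence mid-line where possible.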
- Clone this repo using `git clone https://github.com/kevinthedang/discord-ollama.git`, or just use GitHub Desktop to clone the repo.
- You will need a `.env` file in the root of the project directory with the bot's token. A `.env.sample` is provided for you as a reference for the required environment variables.
  - For example, `CLIENT_TOKEN = [Bot Token]`
- Please refer to the docs for bot setup.
- Local Machine Setup
- Docker Setup for Servers and Local Machines
- Nvidia GPUs are recommended for now, but support for other GPUs is in development.
- Local use is not recommended.
- Creating a Discord App
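The `.env` step above can be sketched without extra dependencies. A hypothetical loader (the project itself may well use a library such as dotenv; `parseEnv` and `loadEnv` are illustrative names):

```typescript
import { readFileSync } from "node:fs";

// Parse KEY=VALUE lines from .env-style contents, skipping blanks and comments.
function parseEnv(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}

// Load variables from a file, e.g. loadEnv(".env").CLIENT_TOKEN
function loadEnv(path: string): Record<string, string> {
  return parseEnv(readFileSync(path, "utf8"));
}
```

Note that spaces around `=` are tolerated, matching the `CLIENT_TOKEN = [Bot Token]` example above.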
- NodeJS
  - This project runs on `lts/hydrogen`.
    - To run dev with `ts-node`/`nodemon`, using `v18.18.2` is recommended.
    - To run dev with `tsx`, you can use `v20.10.0` or earlier.
  - This project supports any NodeJS version above `16.x.x` to only allow ESModules.
- Ollama
> [!CAUTION]
> `v18.X.X` or `lts/hydrogen` will not run properly with `npm run dev-mon`. It is recommended to just use `npm run dev-tsx` for development. The nodemon version will likely be removed in a future update.
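The version constraints above can be checked at startup. A minimal, hypothetical sketch (not part of the project) that inspects a `process.version`-style string:

```typescript
// Extract the major version from a string like "v20.10.0"
// (the shape of NodeJS's process.version).
function majorVersion(version: string): number {
  return parseInt(version.replace(/^v/, "").split(".")[0], 10);
}

// The project requires NodeJS above 16.x.x so that ESModules are available.
function meetsMinimumNode(version: string): boolean {
  return majorVersion(version) > 16;
}

// Usage at startup:
// if (!meetsMinimumNode(process.version)) throw new Error("NodeJS above 16.x.x required");
```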
discord-ollama © 2023 by Kevin Dang is licensed under CC BY 4.0