lalanikarim / ai-chatbot-ollama

Langchain + Streamlit + Ollama w/ Mistral

title: Ai Chatbot w/ Langchain, Ollama, and Streamlit
emoji: 📊
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit

Streamlit + Langchain + Ollama w/ Mistral

Run your own AI Chatbot locally on a GPU or even a CPU.

To make that possible, we use the Mistral 7B model.
We use an LLM inference engine called Ollama to run the model and to serve
an inference API endpoint, and we have LangChain connect to that endpoint instead of running the LLM directly.
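Once Ollama is serving, you can exercise that endpoint directly over HTTP before wiring up LangChain. A minimal sanity-check sketch, assuming Ollama's default port 11434 and that the mistral model has already been pulled:

```python
# Quick sanity check of the local Ollama inference endpoint.
# Assumes `ollama serve` is running on the default port (11434)
# and that `ollama pull mistral` has completed.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```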

This AI chatbot lets you define its personality, and it answers questions accordingly.
There is no chat memory in this iteration, so you won't be able to ask follow-up questions; the chatbot essentially behaves like a question/answer bot.
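To make that concrete, here is a minimal single-turn sketch (illustrative, not the repo's actual code) that bakes the personality into the prompt and connects to Ollama through LangChain's Ollama wrapper:

```python
# Minimal single-turn sketch (not the repo's main.py):
# the "personality" lives in the prompt, and with no chat memory
# each question is answered independently.
from langchain.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = Ollama(model="mistral")  # talks to the local Ollama endpoint

prompt = PromptTemplate(
    input_variables=["personality", "question"],
    template=(
        "You are a chatbot with this personality: {personality}\n"
        "Answer the question below in character.\n\n"
        "Question: {question}\nAnswer:"
    ),
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(personality="a patient teacher", question="What is Ollama?"))
```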

TL;DR instructions

  1. Install Ollama
  2. Install LangChain
  3. Install Streamlit
  4. Run the Streamlit app

Step-by-step instructions

The setup assumes you already have Python installed and the venv module available.

  1. Install Ollama from ollama.ai.
  2. Start Ollama:
ollama serve
  3. Download the Mistral LLM using Ollama:
ollama pull mistral
  4. Download the code or clone the repository.
  5. Inside the root folder of the repository, initialize a Python virtual environment:
python -m venv .venv
  6. Activate the virtual environment:
source .venv/bin/activate
  7. Install the required packages (LangChain and Streamlit):
pip install -r requirements.txt
  8. Start Streamlit (a sketch of the kind of app this launches appears after these steps):
streamlit run main.py
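For reference, a single-turn Streamlit chatbot along these lines could look like the following. This is a sketch of the general shape only; the repo's actual main.py may be structured differently:

```python
# Illustrative sketch of a single-turn Streamlit chatbot UI;
# the repository's actual main.py may differ.
import streamlit as st
from langchain.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

st.title("AI Chatbot w/ Langchain, Ollama, and Streamlit")

personality = st.text_input("Chatbot personality", value="a helpful assistant")
question = st.text_input("Ask a question")

if question:
    llm = Ollama(model="mistral")  # local Ollama endpoint must be running
    prompt = PromptTemplate(
        input_variables=["personality", "question"],
        template=(
            "You are a chatbot with this personality: {personality}\n"
            "Question: {question}\nAnswer:"
        ),
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    with st.spinner("Thinking..."):
        st.write(chain.run(personality=personality, question=question))
```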

About

Langchain + Streamlit + Ollama w/ Mistral

License: MIT License

