Vectorize GitHub discussions of your choice, then ask questions and get answers powered by LLM models. You can plug in any LLM model of your choice; I have been using Ollama.
- Install Ollama (https://ollama.com/download) and litellm:

  ```bash
  pip install litellm
  ```

- Open a terminal and run the Gemma 2B model:

  ```bash
  ollama run gemma:2b
  ```

  You can run any model of your choice here, e.g. `mistral:7b` or `llama2:7b`.

- Open another terminal and run:

  ```bash
  litellm --model ollama/gemma:2b
  ```

  This command starts serving `gemma:2b` at http://0.0.0.0:8000 (a quick way to verify the proxy is shown after this list).

- Clone this repository and run:

  ```bash
  make init
  ```

  This installs all the required libraries for the project and asks for the link of a repository whose GitHub discussions should be scraped and stored in Chroma DB in vector form (see the storage sketch below). When asked, enter any repository of your choice that has GitHub discussions, e.g. https://github.com/kong/kong.

- Finally, run inference with:

  ```bash
  make run
  ```

  and ask your queries. All queries are answered only in the context of the scraped GitHub discussions (see the retrieval sketch below).
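
To sanity-check the litellm proxy before running anything else, note that it exposes an OpenAI-compatible API. The snippet below is a minimal sketch, assuming the proxy started above is listening at http://0.0.0.0:8000 and that the `openai` Python package is installed; the prompt text is just an example.

```python
# Minimal sketch: verify the litellm proxy (OpenAI-compatible) is serving gemma:2b.
# Assumes `litellm --model ollama/gemma:2b` is running at http://0.0.0.0:8000.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000", api_key="not-needed")  # local proxy ignores the key
reply = client.chat.completions.create(
    model="ollama/gemma:2b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```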
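
Conceptually, `make init` turns each scraped discussion into a document in a Chroma collection. The sketch below illustrates that idea with the `chromadb` client; the collection name, storage path, and sample documents are assumptions for illustration, not the project's actual identifiers.

```python
# Illustrative sketch of storing scraped discussions in Chroma.
# Collection name, path, and documents are hypothetical examples.
import chromadb

chroma = chromadb.PersistentClient(path="chroma_db")  # assumed on-disk location
collection = chroma.get_or_create_collection("github_discussions")

# One document per scraped discussion (title + body + answers), embedded by
# Chroma's default embedding function.
collection.add(
    ids=["discussion-1", "discussion-2"],
    documents=[
        "How do I configure rate limiting? ... accepted answer ...",
        "How does plugin ordering work? ... accepted answer ...",
    ],
    metadatas=[{"repo": "kong/kong"}, {"repo": "kong/kong"}],
)
```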
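
`make run` then answers questions with retrieval-augmented generation: fetch the most relevant discussion chunks from Chroma and pass them to the local model as context. Below is a rough sketch of that flow under the same assumptions as above; the collection name, prompt wording, and number of results are illustrative.

```python
# Rough sketch of retrieval-augmented answering: Chroma retrieval + local LLM.
# Collection name, prompt wording, and n_results are illustrative assumptions.
import chromadb
from openai import OpenAI

chroma = chromadb.PersistentClient(path="chroma_db")
collection = chroma.get_or_create_collection("github_discussions")
llm = OpenAI(base_url="http://0.0.0.0:8000", api_key="not-needed")

question = "How do I enable rate limiting?"
hits = collection.query(query_texts=[question], n_results=3)  # nearest discussion chunks
context = "\n\n".join(hits["documents"][0])

answer = llm.chat.completions.create(
    model="ollama/gemma:2b",
    messages=[
        {"role": "system", "content": "Answer only from the provided GitHub discussions."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```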
Enjoy