jeffWelling / codey

A simple little coding buddy in a website, like ChatGPT but running locally.

Codey

This is my first attempt at using LLMs to write a coding buddy. It was written on an Apple Silicon laptop, and the model you choose must fit within your system's memory constraints.
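Since the model has to fit in RAM, here is a quick way to check total memory from the standard library (the `sysconf` names are common POSIX extensions available on both macOS and Linux; this snippet is an illustration, not part of codey):

```python
import os

# Total physical memory = page size * number of physical pages.
page_size = os.sysconf("SC_PAGE_SIZE")
phys_pages = os.sysconf("SC_PHYS_PAGES")
total_gib = page_size * phys_pages / 1024**3
print(f"Total RAM: about {total_gib:.1f} GiB")
```

As a rough rule of thumb, leave headroom beyond the model's download size, since inference needs working memory on top of the weights.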

Getting Started

Choose a model. You'll need one that fits within your available memory; the default is llama3:latest, one of the smallest models I could find. Set your model in my_model on line 19.
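As a sketch of what that configuration line might look like (the variable name `my_model` comes from the README; the `Ollama` import is an assumption about the llama_index integration, not a quote from codey.py):

```python
# Hypothetical sketch of the model setting in codey.py.
# The model must already be available locally, e.g. via `ollama pull llama3:latest`,
# and must be small enough to fit in your machine's RAM.
my_model = "llama3:latest"

from llama_index.llms.ollama import Ollama  # assumed integration package

llm = Ollama(model=my_model, request_timeout=120.0)
```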

Set up a virtual env

```shell
python3 -m venv .venv
source .venv/bin/activate
```

Install the required packages

```shell
pip install -r requirements.txt
```

Start the server

```shell
streamlit run codey.py
```

Questions

Feel free to ask questions and file issues, but this is really nothing more than some glue holding together streamlit and llama_index. I'm happy to help, but I'm no expert, and you may need to ask around in those communities for assistance.
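For anyone curious what that glue looks like, here is a minimal sketch of a Streamlit chat loop over llama_index (the API names are real, but the structure is an illustration of the pattern, not the actual codey.py):

```python
import streamlit as st
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama

st.title("Codey")
llm = Ollama(model="llama3:latest")  # assumed default model

# Streamlit reruns the script on every interaction, so keep the
# conversation in session_state to persist it across reruns.
if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for msg in st.session_state.history:
    with st.chat_message(msg.role):
        st.write(msg.content)

# Take a new prompt, send the full history to the model, show the reply.
if prompt := st.chat_input("Ask a coding question"):
    st.session_state.history.append(ChatMessage(role="user", content=prompt))
    with st.chat_message("user"):
        st.write(prompt)
    reply = llm.chat(st.session_state.history)
    st.session_state.history.append(reply.message)
    with st.chat_message("assistant"):
        st.write(reply.message.content)
```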

License

This project is licensed under the BSD-3-Clause license.

Copyright (c) 2024, Jeff Welling
