
productive-llms

A course that investigates the best ways for a software developer to get value out of LLMs.

Local LLMs

The llama.cpp repo provides a way to install and run LLMs locally on an M2 MacBook, with GPU acceleration out of the box.
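A minimal sketch of running a downloaded GGUF model with GPU offload. It assumes the llama-cpp-python bindings (`pip install llama-cpp-python`), which the course text does not mention, and the model path is a placeholder:

```python
# Run a local GGUF model with Metal GPU offload via llama-cpp-python
# (assumption: these bindings, not the bare llama.cpp CLI).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.Q6_K.gguf",  # hypothetical local path
    n_gpu_layers=-1,  # offload all layers to the GPU on Apple Silicon
    n_ctx=4096,       # context window size
)
out = llm("Q: What is llama.cpp? A:", max_tokens=64)
print(out["choices"][0]["text"])
```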

Models can be downloaded from Hugging Face. Grab models in GGUF format that already come with a quantization level applied.
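A sketch of fetching a single GGUF file with the huggingface_hub library; the repo id and filename below are example names, not taken from the course text:

```python
# Download one quantized GGUF file from Hugging Face into the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-GGUF",      # example repo
    filename="llama-2-13b.Q6_K.gguf",          # example quantized file
)
print(path)  # local cache path of the downloaded model file
```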

Memory requirements:

  • negligible for a 7B model
  • 20 GB for a 13B model at q6_K quantization (a back-of-envelope estimate follows this list)
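The figures above are empirical; a rough sanity check just multiplies parameter count by bits per weight. The 6.56 bits/weight rate for q6_K is an approximation of llama.cpp's stated effective rate, and the result covers the weights only; KV cache and runtime overhead come on top, which is presumably where the 20 GB figure comes from:

```python
# Back-of-envelope estimate of quantized weight size.
# Assumption: q6_K stores roughly 6.56 bits per weight.

def weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in gigabytes."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"7B  at q6_K: ~{weight_gb(7, 6.56):.1f} GB")   # ~5.7 GB
print(f"13B at q6_K: ~{weight_gb(13, 6.56):.1f} GB")  # ~10.7 GB
```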

Easy way of testing local LLMs

Get the Text Generation WebUI tool. It accepts the names of custom repos on Hugging Face and downloads them by itself.
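For reference, that download step amounts to pulling a repo snapshot from Hugging Face. A rough stand-alone equivalent using huggingface_hub (the repo name is a placeholder, and this is not the tool's actual code):

```python
# Pull every file of a Hugging Face repo into the local cache,
# roughly what a "give me this repo name" download boils down to.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TheBloke/Llama-2-13B-GGUF")  # example repo
print(local_dir)
```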

Code support

Tools: https://continue.dev connects any LLM to VS Code or PyCharm.
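Continue can talk to locally served models through OpenAI-compatible endpoints, and llama.cpp ships a server that exposes one. A minimal sketch for sanity-checking such an endpoint before wiring it into the editor, assuming a server is already listening on localhost:8080:

```python
# Query a local OpenAI-compatible chat endpoint (assumption: llama.cpp's
# server running on localhost:8080 with a model loaded).
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 16,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```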

Main question: How do I pass the whole repository as context?
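One naive baseline is to concatenate the repository's files into a single prompt. The sketch below is purely illustrative, not the course's answer; it mostly shows why this approach collides with the model's context window:

```python
# Naively flatten a repository into one prompt string.
# A whole repo usually far exceeds a local model's context window,
# which is exactly what makes the question above non-trivial.
from pathlib import Path

def repo_as_prompt(root: str, suffixes=(".py", ".md")) -> str:
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

prompt = repo_as_prompt(".")
print(f"{len(prompt)} characters (~{len(prompt) // 4} tokens)")
```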

Testing support

Documentation support

Help in unfamiliar languages


License: MIT