vicuna-tools / vicuna-installation-guide

The "vicuna-installation-guide" provides step-by-step instructions for installing and configuring Vicuna 13 and 7B

Vicuna Installation Guide

Detailed instructions for installing and configuring Vicuna

Installation - Usage

Latest changes

  • updated the guide to Vicuna 1.5 (10.10.23)
  • fixed the guide
  • added instructions for the 7B model
  • fixed the wget command
  • modified chat-with-vicuna-v1.txt in my llama.cpp fork
  • updated this guide to Vicuna version 1.1

Requirements

The commands below assume git, wget, and make with a working C/C++ compiler are installed.
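
On Debian/Ubuntu-based systems, for example, these can be installed as follows (the package names are an assumption and differ on other distributions):

# build-essential provides make and the C/C++ compilers used to build llama.cpp
sudo apt update && sudo apt install -y build-essential git wget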

Installation

One-line install script for Vicuna-1.5-13B

git clone https://github.com/fredi-python/llama.cpp.git && cd llama.cpp && make -j && cd models && wget -c https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF/resolve/main/vicuna-13b-v1.5.Q4_K_M.gguf

One-line install script for Vicuna-1.5-7B

git clone https://github.com/fredi-python/llama.cpp.git && cd llama.cpp && make -j && cd models && wget -c https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/resolve/main/vicuna-7b-v1.5.Q4_K_M.gguf
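
Either one-liner finishes inside the models/ folder. As a minimal follow-up sketch (assuming the 13B file; swap in the 7B filename if you used the second script), step back to the repository root and start the chat example described under Usage below:

cd ..
./main -m models/vicuna-13b-v1.5.Q4_K_M.gguf --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-vicuna-v1.txt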

Manual Installation

1. Clone the llama.cpp repository

git clone https://github.com/fredi-python/llama.cpp.git

2. Change directory

cd llama.cpp

3. Make it!

make -j
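
Note: make -j with no number lets make spawn as many parallel jobs as it can. A sketch of two common variants, assuming a Linux system with nproc available and, for the second line, that this fork still carries upstream's LLAMA_CUBLAS Makefile switch:

# cap parallel jobs at the number of CPU cores
make -j"$(nproc)"

# optional: build with cuBLAS GPU support (assumption: the fork exposes this flag)
make -j"$(nproc)" LLAMA_CUBLAS=1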

4. Move to the llama.cpp/models folder

cd models

5. a) Download the latest Vicuna model (13B) from Hugging Face

wget -c https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF/resolve/main/vicuna-13b-v1.5.Q4_K_M.gguf

5. b) Download the latest Vicuna model (7B) from Hugging Face

wget -c https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/resolve/main/vicuna-7b-v1.5.Q4_K_M.gguf
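
wget -c resumes an interrupted download, but it is worth checking that the file arrived intact. A quick sanity check (filenames as above; compare size and checksum against the file details shown on the Hugging Face model page):

ls -lh vicuna-13b-v1.5.Q4_K_M.gguf
sha256sum vicuna-13b-v1.5.Q4_K_M.gguf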

Usage

Navigate back to the llama.cpp folder

cd ..

Example of how to run the 13B model with llama.cpp's chat-with-vicuna-v1.txt prompt

./main -m models/vicuna-13b-v1.5.Q4_K_M.gguf --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-vicuna-v1.txt
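
The 7B model runs the same way with its own filename. A rough sketch with a couple of extra flags main accepts (the thread count and context size below are example values; tune them for your hardware):

./main -m models/vicuna-7b-v1.5.Q4_K_M.gguf -t 8 -c 2048 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-vicuna-v1.txt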
