lxe / cerebras-lora-alpaca

LoRA weights for Cerebras-GPT-2.7b finetuned on Alpaca dataset with shorter prompt

title: Lora Cerebras Gpt2.7b Alpaca Shortprompt
emoji: 🐨
colorFrom: yellow
colorTo: pink
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
license: apache-2.0

🦙🧠 Cerebras-GPT2.7B LoRA Alpaca ShortPrompt

Open In Colab Open In Spaces

Scripts to finetune Cerebras GPT2.7B on the Alpaca dataset, as well as inference demos.

📈 Warnings

The model tends to be fairly coherent, but it also hallucinates a lot of factually incorrect responses. Avoid using it for anything that requires factual accuracy.

📚 Instructions

  1. Use a machine with an NVIDIA GPU with 12-24 GB of VRAM.

  2. Get the environment ready (a quick sanity check for this step follows the list)

conda create -n cerebras-lora python=3.10
conda activate cerebras-lora
conda install -y cuda -c nvidia/label/cuda-11.7.0
conda install -y pytorch=1.13.1 pytorch-cuda=11.7 -c pytorch
  3. Clone the repo and install requirements
git clone https://github.com/lxe/cerebras-lora-alpaca.git && cd cerebras-lora-alpaca
pip install -r requirements.txt
  4. Run the inference demo
python app.py
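
Before launching the demo, it can help to confirm that the CUDA-enabled PyTorch install from step 2 is actually visible to Python. A minimal sanity check (assuming the conda environment created above is active):

import torch
# Confirm the CUDA 11.7 / PyTorch 1.13.1 install is usable before loading the 2.7B model.
print(torch.__version__)                  # expect 1.13.1
print(torch.cuda.is_available())          # expect True on a working setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # your NVIDIA GPU
    free, total = torch.cuda.mem_get_info()
    print(f"{free / 1e9:.1f} GB free of {total / 1e9:.1f} GB total VRAM")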
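
app.py wraps the model in a Gradio UI; if you only want programmatic generation, the core of the demo looks roughly like the sketch below. This is a hedged sketch rather than the repo's exact code: the base model is the public cerebras/Cerebras-GPT-2.7B checkpoint, the adapter path is a placeholder for wherever your LoRA weights live, and the short prompt format shown is an assumption.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
BASE_MODEL = "cerebras/Cerebras-GPT-2.7B"     # public base checkpoint
LORA_WEIGHTS = "./lora-weights"               # placeholder: path to your finetuned adapter
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16)  # attach the LoRA adapter
model.eval()
prompt = "Human: Write a haiku about GPUs.\n\nAssistant:"  # assumed short-prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))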

To reproduce the finetuning results, do the following:

  1. Install jupyter and run it
pip install jupyter
jupyter notebook
  2. Navigate to the inference.ipynb notebook and test out the inference demo.

  3. Navigate to the finetune.ipynb notebook and reproduce the finetuning results.

  • Finetuning takes about 5 hours with the default settings.
  • Adjust the batch size and gradient accumulation steps to fit your GPU (see the sketch after this list).
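
The effective batch size is the per-device batch size times the number of gradient accumulation steps, so the two can be traded off against each other to fit in VRAM without changing the optimization. A hedged illustration using Hugging Face TrainingArguments (the exact setup in finetune.ipynb may differ; the numbers and paths here are examples only):

from transformers import TrainingArguments
# 4 samples per device x 32 accumulation steps = effective batch size of 128.
# On a smaller GPU, halve per_device_train_batch_size and double
# gradient_accumulation_steps to keep the effective batch size unchanged.
args = TrainingArguments(
    output_dir="lora-cerebras-gpt2.7b-alpaca",   # placeholder output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
)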

📝 License

Apache 2.0

Languages

Jupyter Notebook 92.9%, Python 7.1%