zetavg / LLaMA-LoRA-Tuner

UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. Includes a Gradio ChatGPT-like chat UI to demonstrate your language models.


offload-between-cpu-and-gpu

seanychen opened this issue

[Screenshot: error message from the fine-tuning run]

Question 1: I got this error message when fine-tuning with my own dataset. What is causing it?
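For context, a minimal sketch of a possible workaround, assuming (based on the issue title) that the screenshot shows the common bitsandbytes/accelerate error about model layers being dispatched to CPU or disk when the GPU runs out of memory. The base model name here is a placeholder, not necessarily what the tool uses:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Allow layers that do not fit on the GPU to be kept on the CPU in fp32
# while the rest of the model is loaded in 8-bit on the GPU.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # needed when offloading with 8-bit
)

model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",   # placeholder base model, adjust to yours
    quantization_config=bnb_config,
    device_map="auto",                 # let accelerate split layers across GPU and CPU
    torch_dtype=torch.float16,
)
```

If the screenshot shows a different error, the actual fix may differ; posting the full traceback as text would help narrow it down.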

Question 2: Could it become a disk or memory problem if I tune and save too many LoRA models? How should I delete previously trained models?
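Each saved LoRA model is a directory of adapter weights, so removing the directory frees the space it used. A minimal sketch, assuming the tool stores models under a `data/lora_models` directory (adjust the path to wherever you pointed the tool's data dir):

```python
import shutil
from pathlib import Path

lora_models_dir = Path("./data/lora_models")  # assumed location, adjust as needed

# List the saved LoRA models (one subdirectory per model).
for model_dir in sorted(lora_models_dir.iterdir()):
    print(model_dir.name)

# Delete a model you no longer need ("my-old-lora-model" is a hypothetical name).
shutil.rmtree(lora_models_dir / "my-old-lora-model")
```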