zetavg / LLaMA-LoRA-Tuner

UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT-like Chat UI to demonstrate your language models.


Anyone get this to install / run on anywhere but Colab?

quantumalchemy opened this issue

Seems like there are lots of missing dependencies, e.g. "llamaconverter requires the protobuf". I installed it, but no go; lots of other issues (see the snippet below).
I did get inference and fine-tuning working on Colab with the fix in #29 (comment).
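For reference, that protobuf message usually comes from the Hugging Face transformers LLaMA tokenizer conversion step. A minimal sketch of the usual workaround, assuming that is the cause; the version pin is a guess, not something taken from this repo:

```bash
# Sketch: "requires the protobuf" during LLaMA tokenizer loading is commonly
# worked around by installing a 3.20.x protobuf plus sentencepiece.
# The exact versions here are assumptions; adjust to your environment.
pip install "protobuf==3.20.3" sentencepiece
```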

It would be great to run this locally; Docker would be even better.
(I'm looking into that with a different Python version (it states python=3.8, but that doesn't seem to work with the Hugging Face stuff), pip, etc. -- so maybe that's the way to go; rough setup sketch below.)
This is the only tool I've actually had success fine-tuning an LLM with -- oobabooga kept breaking, and I never got a clean LoRA / training session without it crapping out halfway through. So this thing works!
But damn, I can't get it running on anything other than Google Colab -- what are your experiences?
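For context, the local setup being attempted looks roughly like this. This is only a sketch: the Python version, the requirements file name, and the `app.py` flags and base model are assumptions based on the repo's Colab usage, so check them against the current README.

```bash
# Sketch of a local (non-Colab) setup; file names, versions and flags are assumptions.
conda create -n llama-lora-tuner python=3.8 -y
conda activate llama-lora-tuner

git clone https://github.com/zetavg/LLaMA-LoRA-Tuner.git
cd LLaMA-LoRA-Tuner
pip install -r requirements.txt   # requirements file name may differ in the repo

# Launch the Gradio UI; the data dir and base model below are just examples.
python app.py --data_dir='./data' --base_model='decapoda-research/llama-7b-hf'
```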

I've installed it on an Oracle Cloud GPU instance using a Data Science Marketplace image; it works with no issues.

Have you tried with a clean Linux OS installation (RHEL or Ubuntu)?