zetavg / LLaMA-LoRA-Tuner

UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT-like Chat UI to demonstrate your language models.


Running locally without git and internet

im50889 opened this issue · comments

Hi,
I have already downloaded the base model (33 files), and they are all in one folder. Can you please let me know what changes are needed to run this model locally without any internet connection? Currently it is trying to download the base model from git and failing.

Thanks

Hi, in the base model dropdown selector at the top right, you can actually type in any custom value. If you set it to the absolute path of the directory where you stored the base model, it should work!
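For context, a value typed into that dropdown can be either a Hugging Face Hub model id or a local directory path. A minimal sketch of how such a value could be distinguished (this is a hypothetical helper for illustration, not code from the tuner itself):

```python
import os

def resolve_base_model(value):
    """Treat a value that exists as a directory on disk as a local
    model folder; anything else is assumed to be a Hub model id.
    (Hypothetical helper -- not part of the tuner's codebase.)"""
    if os.path.isdir(value):
        return os.path.abspath(value), True
    return value, False
```

So entering something like `/home/me/models/my-base-model` (an absolute path that exists) would be picked up as a local folder, while `some-org/some-model` would be treated as a Hub id.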

Thanks a lot for your response. When I give the complete folder path for the base model, it still gives me an error because it is trying to connect to the internet to download files. I downloaded all the files from Hugging Face and put them into the base folder. My question is: how should I convert the 33 files into a single file so that the base model can be loaded using the from_pretrained API?

    model_class.from_pretrained(
        model_name,
        device_map={"": device},
        low_cpu_mem_usage=True,
        from_tf=from_tf,
        force_download=force_download,
        trust_remote_code=Config.trust_remote_code,
        use_auth_token=Config.hf_access_token,
    )
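One note on the "33 files into a single file" part of the question: a Hugging Face sharded checkpoint generally does not need to be merged by hand. If the folder contains an index file such as `pytorch_model.bin.index.json`, `from_pretrained` follows its `weight_map` to locate each shard automatically. A small sketch (hypothetical helper, assuming that standard index filename) that just lists the shard files the loader would read:

```python
import json
import os

def list_checkpoint_shards(model_dir):
    """List the shard filenames referenced by a sharded Hugging Face
    checkpoint's index file. The index maps each weight tensor to the
    shard file containing it, which is how from_pretrained loads a
    multi-file checkpoint without any manual merging.
    (Hypothetical helper for inspection, not part of the tuner.)"""
    index_path = os.path.join(model_dir, "pytorch_model.bin.index.json")
    with open(index_path) as f:
        index = json.load(f)
    return sorted(set(index["weight_map"].values()))
```

If such an index file is present next to the shard files, pointing `from_pretrained` at the folder should be enough; no single-file conversion is required.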

While I don't currently have an answer to that, if you're still stuck on running without internet, another way that might work is to run it the first time on a computer that has internet (or run download_base_model.py), and then copy the Hugging Face cache (~/.cache/huggingface) to the other computer without an internet connection.

That way, it should just use the cache without trying to download models from the HF Hub.
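The cache-copy step above can be sketched as follows. This is an illustrative helper under the assumption that the cache lives at the default `~/.cache/huggingface` location; on the offline machine you can additionally set `HF_HUB_OFFLINE=1` (and `TRANSFORMERS_OFFLINE=1`) so the libraries read the cache without attempting any network requests:

```python
import os
import shutil

def copy_hf_cache(src_home, dst_home):
    """Copy the Hugging Face cache from one home directory to another,
    e.g. from a machine that downloaded the model to an offline one.
    (Illustrative sketch, assuming the default cache location.)"""
    src = os.path.join(src_home, ".cache", "huggingface")
    dst = os.path.join(dst_home, ".cache", "huggingface")
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# On the offline machine, before launching the app:
# os.environ["HF_HUB_OFFLINE"] = "1"
# os.environ["TRANSFORMERS_OFFLINE"] = "1"
```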