AI4Finance-Foundation / FinGPT

FinGPT: Open-Source Financial Large Language Models! 🔥 We release the trained models on Hugging Face.

Home Page: https://ai4finance.org


An ImportError when I run the notebook "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb"

YRookieBoy opened this issue · comments

Hi,
When I try to run "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb" in Google Colab, I came across a problem.
The code is:

model_name = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    quantization_config=q_config,
    trust_remote_code=True,
    device='cuda'
)

and the error is:

ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`

model = prepare_model_for_int8_training(model, use_gradient_checkpointing=True)

Lastly, I ran the code in Google Colab Pro, and I am sure both packages are installed.
Please help me solve this problem, thank you so much!
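For context, the `q_config` referenced in the snippet above is presumably an 8-bit `BitsAndBytesConfig`, since the ImportError comes from the `load_in_8bit=True` path. A sketch under that assumption (not the notebook's exact cell):

```python
from transformers import BitsAndBytesConfig

# Assumed reconstruction: the beginner notebook loads ChatGLM2-6B in 8-bit,
# which is what makes Accelerate and bitsandbytes hard requirements at load time.
q_config = BitsAndBytesConfig(load_in_8bit=True)
```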

Hi, based on my experience, you can try reinstalling these two packages when this error shows, then restart your kernel and rerun your code. Hope this works.
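Concretely, the reinstall-and-restart fix usually amounts to something like the following in a Colab cell (prefix each line with `!` inside a notebook; the unpinned versions here are an assumption, not the notebook's exact requirements):

```shell
# Reinstall the two packages the ImportError complains about.
pip install --upgrade --force-reinstall accelerate bitsandbytes

# Check that both import cleanly; if this succeeds, restart the kernel
# (Runtime -> Restart runtime in Colab) so the new versions are picked up.
python -c "import accelerate, bitsandbytes"
```

Restarting the runtime matters because packages already imported into the running kernel are not replaced by a reinstall.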

Thank you very much! I have already run the code successfully.