AI4Finance-Foundation / FinGPT

FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.

Home Page: https://ai4finance.org

Error in AutoModel.from_pretrained: requires Accelerate and bitsandbytes, but both of them are already installed

protocold opened this issue

I got the error below when running "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb", but both the Accelerate and bitsandbytes packages are already installed. Have I done anything wrong?

I was running it in a Jupyter notebook in a local environment (Windows/WSL2) with CUDA installed.
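For completeness, a quick in-kernel sanity check (a minimal sketch, assuming standard installs of torch, accelerate, and bitsandbytes) can confirm that both packages import and that CUDA is visible from the kernel itself, not just from the shell:

```python
# Run this inside the notebook kernel, not a separate shell: the kernel's
# environment can differ from the one pip installed into.
import torch
import accelerate
import bitsandbytes

print("CUDA available:", torch.cuda.is_available())
print("accelerate:", accelerate.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
```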


```
ImportError                               Traceback (most recent call last)
Cell In[22], line 5
      3 model_name = "THUDM/chatglm2-6b"
      4 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
----> 5 model = AutoModel.from_pretrained(
      6     model_name,
      7     quantization_config=q_config,
      8     trust_remote_code=True,
      9     device='cuda'
     10 )
     11 model = prepare_model_for_int8_training(model, use_gradient_checkpointing=True)

File ~/anaconda3/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:479, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    475     model_class = get_class_from_dynamic_module(
    476         class_ref, pretrained_model_name_or_path, **hub_kwargs, **kwargs
    477     )
    478     _ = hub_kwargs.pop("code_revision", None)
--> 479     return model_class.from_pretrained(
    480         pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    481     )
    482 elif type(config) in cls._model_mapping.keys():
    483     model_class = _get_model_class(config, cls._model_mapping)

File ~/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py:2257, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2255 if load_in_8bit or load_in_4bit:
   2256     if not (is_accelerate_available() and is_bitsandbytes_available()):
-> 2257         raise ImportError(
   2258             "Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of"
   2259             " bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or"
   2260             " `pip install bitsandbytes`"
   2261         )
   2263 if torch_dtype is None:
   2264     # We force the `dtype` to be float16, this is a requirement from `bitsandbytes`
   2265     logger.info(
   2266         f"Overriding torch_dtype={torch_dtype} with `torch_dtype=torch.float16` due to "
   2267         "requirements of `bitsandbytes` to enable model loading in 8-bit or 4-bit. "
   2268         "Pass your own torch_dtype to specify the dtype of the remaining non-linear layers or pass"
   2269         " torch_dtype=torch.float16 to remove this warning."
   2270     )

ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`
```
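The failing check is on line 2256 above: `is_accelerate_available() and is_bitsandbytes_available()`. Both helpers can be probed directly from the kernel (a minimal sketch; in recent transformers versions they are importable from `transformers.utils`):

```python
# Probe the same availability checks that from_pretrained evaluates.
# If either prints False even though `pip show` lists the package,
# the kernel is looking at a different environment or needs a restart.
from transformers.utils import is_accelerate_available, is_bitsandbytes_available

print("accelerate available:  ", is_accelerate_available())
print("bitsandbytes available:", is_bitsandbytes_available())
```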

I have edited the above to hopefully provide more detail. Any help?

I had the same issue and managed to fix it by running

```
pip install transformers==4.32.0
```

and restarting the session.
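If that alone doesn't help, pip may be installing into a different environment than the one the notebook kernel runs (easy to do under conda on WSL2). Installing through the kernel's own interpreter rules that out; this is a sketch, with only transformers pinned to the version above:

```python
# Install into the exact interpreter the notebook kernel is running, then
# restart the kernel so transformers re-runs its import-time checks.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "transformers==4.32.0", "accelerate", "bitsandbytes",
])
```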

You need to restart the session when this happens. You may also refer to this repo for a more detailed explanation, with my articles inside: https://github.com/AI4Finance-Foundation/FinGPT-Research