Error running medAlpaca in Colab
DriesSmit opened this issue
Dries Smit commented
Hello there.
I tried running the model in Colab, with a GPU runtime attached, but got the error below. Any help would be greatly appreciated.
Code:
!git clone https://github.com/kbressem/medAlpaca.git
%cd medAlpaca
!pip install -r requirements.txt
from medalpaca.inferer import Inferer
model = Inferer(
    model_name="medalpaca/medalapca-lora-7b-8bit",
    prompt_template="medalpaca/prompt_templates/medalpaca.json",
    base_model="decapoda-research/llama-7b-hf",
    peft=True,
    load_in_8bit=True,
)
Error message:
Loading checkpoint shards: 100% 33/33 [01:05<00:00, 2.06s/it]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /usr/local/lib/python3.10/dist-packages/peft/utils/config.py:106 in from_pretrained │
│ │
│ 103 │ │ │ config_file = os.path.join(path, CONFIG_NAME) │
│ 104 │ │ else: │
│ 105 │ │ │ try: │
│ ❱ 106 │ │ │ │ config_file = hf_hub_download( │
│ 107 │ │ │ │ │ pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **k │
│ 108 │ │ │ │ ) │
│ 109 │ │ │ except Exception: │
│ │
│ /usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py:118 in _inner_fn │
│ │
│ 115 │ │ if check_use_auth_token: │
│ 116 │ │ │ kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha │
│ 117 │ │ │
│ ❱ 118 │ │ return fn(*args, **kwargs) │
│ 119 │ │
│ 120 │ return _inner_fn # type: ignore │
│ 121 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: hf_hub_download() got an unexpected keyword argument 'torch_dtype'
During handling of the above exception, another exception occurred:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 3>:3 │
│ │
│ /content/medAlpaca/medalpaca/inferer.py:54 in __init__ │
│ │
│ 51 │ │ │ │ "This would load the base model only" │
│ 52 │ │ │ ) │
│ 53 │ │ │
│ ❱ 54 │ │ self.model = self._load_model( │
│ 55 │ │ │ model_name = model_name, │
│ 56 │ │ │ base_model = base_model or model_name, │
│ 57 │ │ │ load_in_8bit = load_in_8bit, │
│ │
│ /content/medAlpaca/medalpaca/inferer.py:94 in _load_model │
│ │
│ 91 │ │ ) │
│ 92 │ │ │
│ 93 │ │ if peft: │
│ ❱ 94 │ │ │ model = PeftModel.from_pretrained( │
│ 95 │ │ │ │ model, │
│ 96 │ │ │ │ model_id=model_name, │
│ 97 │ │ │ │ torch_dtype=torch_dtype, │
│ │
│ /usr/local/lib/python3.10/dist-packages/peft/peft_model.py:180 in from_pretrained │
│ │
│ 177 │ │ │
│ 178 │ │ # load the config │
│ 179 │ │ config = PEFT_TYPE_TO_CONFIG_MAPPING[ │
│ ❱ 180 │ │ │ PeftConfig.from_pretrained(model_id, subfolder=kwargs.get("subfolder", None) │
│ 181 │ │ ].from_pretrained(model_id, subfolder=kwargs.get("subfolder", None), **kwargs) │
│ 182 │ │ │
│ 183 │ │ if (getattr(model, "hf_device_map", None) is not None) and len( │
│ │
│ /usr/local/lib/python3.10/dist-packages/peft/utils/config.py:110 in from_pretrained │
│ │
│ 107 │ │ │ │ │ pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **k │
│ 108 │ │ │ │ ) │
│ 109 │ │ │ except Exception: │
│ ❱ 110 │ │ │ │ raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name │
│ 111 │ │ │
│ 112 │ │ loaded_attributes = cls.from_json_file(config_file) │
│ 113 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Can't find 'adapter_config.json' at 'medalpaca/medalapca-lora-7b-8bit'
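Side note: the ValueError appears to mask the earlier TypeError. PEFT forwards torch_dtype into hf_hub_download, which rejects it, and the except clause then reports it as a missing config file. To check whether the repo id itself resolves, the file can be fetched directly without any extra kwargs (a minimal sketch, reusing the repo id from the code above):
from huggingface_hub import hf_hub_download

# Fetch the adapter config directly; a wrong repo id fails here with an
# explicit repository-not-found error instead of PEFT's generic message.
config_path = hf_hub_download(
    repo_id="medalpaca/medalapca-lora-7b-8bit",
    filename="adapter_config.json",
)
print(config_path)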
Keno commented
I believe there is a typo in your code: it says 'medalapca', not 'medalpaca'.
Dries Smit commented
Thanks for the quick response :) I am using this config, which does seem to spell it 'medalapca'. Let me know if I am misunderstanding something.
Keno commented
Then the typo is on my end. It should be spelled the way it is in the Hugging Face repo.
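If in doubt, the exact repo id can be double-checked against the Hub (a quick sketch using the Hugging Face Hub API):
from huggingface_hub import HfApi

# List the model repos under the medalpaca organization to confirm the
# exact spelling of the adapter repo id before passing it to Inferer.
api = HfApi()
for model_info in api.list_models(author="medalpaca"):
    print(model_info.modelId)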
Dries Smit commented
Ah okay. Thanks for pointing that out.