OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters


AttributeError: module 'clip' has no attribute 'load'

parasmech opened this issue · comments

Hello,

I have downloaded the model and run demo.py, but I am getting this error:
Loading LLaMA-Adapter from ckpts/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth

AttributeError Traceback (most recent call last)
in <cell line: 11>()
9
10 # choose from BIAS-7B, LORA-BIAS-7B, CAPTION-7B.pth
---> 11 model, preprocess = llama.load("BIAS-7B", llama_dir, device)
12 model.eval()
13

1 frames
/content/llama/llama_adapter.py in init(self, llama_ckpt_dir, llama_tokenizer, max_seq_len, max_batch_size, clip_model, v_embed_dim, v_depth, v_num_heads, v_mlp_ratio, query_len, query_layer, w_bias, w_lora, lora_rank, w_new_gate, phase)
36
37 # 1. clip and clip projector
---> 38 self.clip, self.clip_transform = clip.load(clip_model)
39
40 clip_dim = self.clip.visual.proj.shape[1]

AttributeError: module 'clip' has no attribute 'load'

Please help.

working now

So what's the cause of this problem? I'm facing the same error, but this issue seems to have been closed without any reference to a solution.
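A likely cause (an assumption, since the thread never states the fix): there are two unrelated packages importable as `clip`. OpenAI's CLIP, installed with `pip install git+https://github.com/openai/CLIP.git`, exposes a top-level `clip.load`; the separate `clip` package on PyPI does not, so importing the wrong one produces exactly this AttributeError. A minimal diagnostic sketch to tell them apart:

```python
def diagnose_clip(module):
    """Return a hint depending on whether `module` looks like OpenAI's CLIP.

    OpenAI's CLIP exposes a top-level `load` function; the unrelated
    PyPI package named `clip` does not, which triggers the
    AttributeError seen above.
    """
    if hasattr(module, "load"):
        return "ok: this looks like OpenAI's CLIP"
    return ("wrong 'clip' package; try: pip uninstall clip && "
            "pip install git+https://github.com/openai/CLIP.git")

# Usage sketch: run this in the environment where demo.py fails.
# import clip
# print(diagnose_clip(clip))
```

If the diagnostic reports the wrong package, uninstalling `clip` and reinstalling from the OpenAI CLIP repository should make `clip.load(clip_model)` in `llama_adapter.py` work.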