FlagAI-Open / FlagAI

FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models.


[Question]: Model hub is not reachable!

susht3 opened this issue · comments

commented

Description

The network is working, but the connection fails:
******************** txt_img_matching altclip-xlmr-l
Model hub is not reachable.
Model hub is not reachable!
Traceback (most recent call last):
File "inference.py", line 8, in <module>
loader = AutoLoader(
File "/nvme/nvme0/github/FlagAI/flagai/auto_model/auto_loader.py", line 218, in __init__
self.model = getattr(LazyImport(self.model_name[0]),
File "/nvme/nvme0/github/FlagAI/flagai/model/mm/AltCLIP.py", line 446, in from_pretrain
super().download(download_path, model_name, only_download_config=only_download_config)
File "/nvme/nvme0/github/FlagAI/flagai/model/base_model.py", line 264, in download
if model_id and model_id != "null":
UnboundLocalError: local variable 'model_id' referenced before assignment
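The UnboundLocalError suggests that in `download` (flagai/model/base_model.py), `model_id` is only assigned on the path where the hub request succeeds, so when the hub is unreachable the later check reads an unbound local. A minimal sketch of the bug pattern and a defensive fix; the function body below is a hypothetical simplification for illustration, not the actual FlagAI code:

```python
def download(model_name, hub_reachable):
    # Hypothetical simplification of the pattern in
    # flagai/model/base_model.py::download.
    model_id = None  # fix: initialize before the conditional assignment
    if hub_reachable:
        # In FlagAI this assignment happens only after a successful
        # request to the model hub; the hub name here is made up.
        model_id = f"hub/{model_name}"
    # Without the initialization above, this check raises
    # UnboundLocalError whenever the hub is unreachable.
    if model_id and model_id != "null":
        return f"downloading {model_id}"
    return "falling back to local checkpoint"

print(download("altclip-xlmr-l", hub_reachable=False))
```

With the guard in place, an unreachable hub degrades to the local-checkpoint path instead of crashing.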

Alternatives

No response

commented

It still worked this morning; it started failing in the afternoon.

commented

Can the model be loaded directly, without the download step?

commented

I deleted the "download" line (super().download(download_path, model_name, only_download_config=only_download_config))
and loaded the checkpoint directly, but got a model-size mismatch error:

Traceback (most recent call last):
File "inference.py", line 8, in <module>
loader = AutoLoader(
File "/mnt/lustrenew/FlagAI/flagai/auto_model/auto_loader.py", line 218, in __init__
self.model = getattr(LazyImport(self.model_name[0]),
File "/mnt/lustrenew/FlagAI/flagai/model/mm/AltCLIP.py", line 452, in from_pretrain
return CLIPHF.from_pretrained(pretrained_model_name_or_path)
File "/mnt/lustre/miniconda3/envs/py3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2326, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/mnt/lustre/miniconda3/envs/py3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2595, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for CLIPHF:
size mismatch for vision_model.embeddings.class_embedding: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.embeddings.position_ids: copying a param with shape torch.Size([1, 257]) from checkpoint, the shape in current model is torch.Size([1, 50]).
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1024, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([768, 3, 32, 32]).
size mismatch for vision_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([257, 1024]) from checkpoint, the shape in current model is torch.Size([50, 768]).
size mismatch for vision_model.pre_layrnorm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.pre_layrnorm.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
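The mismatched shapes are internally consistent and point at a config problem rather than a corrupt checkpoint: the checkpoint holds a ViT-L/14 vision tower (hidden size 1024, 14×14 patches, 257 positions), while the freshly constructed model used a ViT-B/32 config (hidden size 768, 32×32 patches, 50 positions), which is what transformers builds when it falls back to a default config. The arithmetic below checks this reading; the 224-pixel image size is an assumption that is consistent with both position counts:

```python
def num_positions(image_size, patch_size):
    # CLIP-style vision towers use one position embedding per image
    # patch, plus one for the [CLS]/class embedding.
    return (image_size // patch_size) ** 2 + 1

# Checkpoint side of the error: patch 14 -> 257 positions (dim 1024, ViT-L/14)
print(num_positions(224, 14))
# Constructed-model side: patch 32 -> 50 positions (dim 768, ViT-B/32)
print(num_positions(224, 32))
```

If this reading is right, the fix is to make sure the vision config matching altclip-xlmr-l (e.g. its config.json) is present in the local directory passed to from_pretrained, rather than letting a default config be used.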

Could you share your code? I'd like to try it.