sun-hailong / LAMDA-PILOT

🎉 PILOT: A Pre-trained Model-Based Continual Learning Toolbox

Home Page: https://arxiv.org/abs/2309.07117


Pretrained Model Loading Error

zzsyjl opened this issue

Hi, I experienced the same error when running the experiments on L2P, DualPrompt, and CodaPrompt. How can I fix it?

Traceback (most recent call last):
  File "/data/hdc/jinglong/LAMDA-PILOT/main.py", line 25, in <module>
    main()
  File "/data/hdc/jinglong/LAMDA-PILOT/main.py", line 11, in main
    train(args)
  File "/data/hdc/jinglong/LAMDA-PILOT/trainer.py", line 18, in train
    _train(args)
  File "/data/hdc/jinglong/LAMDA-PILOT/trainer.py", line 62, in _train
    model = factory.get_model(args["model_name"], args)
  File "/data/hdc/jinglong/LAMDA-PILOT/utils/factory.py", line 34, in get_model
    return Learner(args)
  File "/data/hdc/jinglong/LAMDA-PILOT/models/l2p.py", line 20, in __init__
    self._network = PromptVitNet(args, True)
  File "/data/hdc/jinglong/LAMDA-PILOT/utils/inc_net.py", line 517, in __init__
    self.backbone = get_backbone(args, pretrained)
  File "/data/hdc/jinglong/LAMDA-PILOT/utils/inc_net.py", line 100, in get_backbone
    model = timm.create_model(
  File "/data/hdc/jinglong/anaconda3/envs/torch2/lib/python3.9/site-packages/timm/models/_factory.py", line 114, in create_model
    model = create_fn(
  File "/data/hdc/jinglong/LAMDA-PILOT/backbone/vision_transformer_l2p.py", line 810, in vit_base_patch16_224_l2p
    model = _create_vision_transformer('vit_base_patch16_224', pretrained=pretrained, **model_kwargs)
  File "/data/hdc/jinglong/LAMDA-PILOT/backbone/vision_transformer_l2p.py", line 723, in _create_vision_transformer
    pretrained_custom_load='npz' in pretrained_cfg['url'],
TypeError: 'PretrainedCfg' object is not subscriptable

Hi @zzsyjl,
This looks like a version incompatibility with timm. You can resolve it by installing an older timm release, e.g. `pip install timm==0.6.12`.
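If downgrading timm is not convenient, a version-tolerant workaround is to read the checkpoint URL through a small accessor instead of subscripting `pretrained_cfg` directly. The sketch below is not part of the repository; it assumes that newer timm releases pass a `PretrainedCfg` dataclass exposing a `url` attribute, while timm <= 0.6.x passes a plain dict (which is what the failing line in `_create_vision_transformer` expects). The helper name `_cfg_url` is hypothetical.

```python
# Hypothetical compatibility helper (not in the repo). Assumption: newer timm
# passes a PretrainedCfg dataclass with a `url` attribute, while older timm
# (<= 0.6.x) passes a plain dict -- subscripting the dataclass raises
# "'PretrainedCfg' object is not subscriptable".
def _cfg_url(pretrained_cfg):
    """Return the pretrained checkpoint URL for either representation."""
    if isinstance(pretrained_cfg, dict):
        # Older timm: dict-style config.
        return pretrained_cfg.get('url', '') or ''
    # Newer timm: dataclass-style config; `url` may be None.
    return getattr(pretrained_cfg, 'url', '') or ''


# The failing line in backbone/vision_transformer_l2p.py could then read:
#   pretrained_custom_load='npz' in _cfg_url(pretrained_cfg),
```

That said, pinning `timm==0.6.12` as suggested above is the simpler route, since the toolbox's backbones were written against that dict-style API.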