[Question]: FlagAI\examples\AltCLIP>python altclip_inference.py reports an error
panghongwei17 opened this issue · comments
panghongwei17 commented
Description
size mismatch for vision_model.encoder.layers.11.mlp.fc1.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for vision_model.encoder.layers.11.mlp.fc2.weight: copying a param with shape torch.Size([1024, 4096]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for vision_model.encoder.layers.11.mlp.fc2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.encoder.layers.11.layer_norm2.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.encoder.layers.11.layer_norm2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.post_layernorm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for vision_model.post_layernorm.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for visual_projection.weight: copying a param with shape torch.Size([768, 1024]) from checkpoint, the shape in current model is torch.Size([768, 768]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
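The traceback says the checkpoint was saved from a larger vision tower (hidden size 1024, MLP size 4096) than the one the local code builds (768 / 3072), so copying the parameters fails. A minimal PyTorch sketch of the same failure, and of roughly what `ignore_mismatched_sizes=True` does, using an illustrative single layer rather than FlagAI's actual model classes:

```python
import torch
import torch.nn as nn

# The checkpoint came from a larger vision tower: fc1 maps 1024 -> 4096.
ckpt = {"weight": torch.zeros(4096, 1024), "bias": torch.zeros(4096)}

# The locally built model uses the smaller sizes (768 -> 3072),
# which is exactly the mismatch the traceback reports.
model = nn.Linear(768, 3072)

try:
    model.load_state_dict(ckpt)
    err = ""
except RuntimeError as e:
    err = str(e)
print("size mismatch" in err)  # same error class as in the traceback

# Roughly what ignore_mismatched_sizes=True does: drop tensors whose
# shapes disagree with the model, then load non-strictly. The skipped
# weights stay randomly initialized, so the real fix is still to get
# the checkpoint and the model-building code back in sync.
own = model.state_dict()
filtered = {k: v for k, v in ckpt.items()
            if k in own and own[k].shape == v.shape}
missing, unexpected = model.load_state_dict(filtered, strict=False)
print(sorted(missing))  # nothing matched, both tensors were skipped
```

This is why the suggestion below (pull the latest code and re-download the model) is the better fix: it restores matching shapes instead of silently discarding weights.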
BAAI-OpenPlatform commented
I cannot reproduce this error. Could you try pulling and installing the latest code, then deleting the locally cached model and loading the model again?
BAAI-OpenPlatform commented
This issue has been closed; feel free to reopen it if you still have questions.