rabbityl / lepard

[CVPR 2022, Oral] Learning Partial point cloud matching in Rigid and Deformable scenes

loading state_dict error during 3dmatch inference

hadleyhzy34 opened this issue

Thanks for sharing such inspiring work. I was trying to reproduce the inference results on the 3DMatch dataset and, following the instructions, started testing with this command:
python main.py configs/test/3dmatch.yaml
However, inference failed with the error below:

File "/home/hadley/Development/lepard/lib/trainer.py", line 80, in _load_pretrain
    if os.path.isfile(resume):
  File "/home/hadley/anaconda3/envs/torch_12/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Pipeline:
	Missing key(s) in state_dict: "backbone.encoder_blocks.0.KPConv.weights", "backbone.encoder_blocks.0.KPConv.kernel_points", "backbone.encoder_blocks.1.unary1.mlp.weight", "backbone.encoder_blocks.1.KPConv.weights", "backbone.encoder_blocks.1.KPConv.kernel_points", "backbone.encoder_blocks.1.unary2.mlp.weight", "backbone.encoder_blocks.1.unary_shortcut.mlp.weight", "backbone.encoder_blocks.2.unary1.mlp.weight", "backbone.encoder_blocks.2.KPConv.weights", "backbone.encoder_blocks.2.KPConv.kernel_points", "backbone.encoder_blocks.2.unary2.mlp.weight", "backbone.encoder_blocks.3.unary1.mlp.weight", "backbone.encoder_blocks.3.KPConv.weights", "backbone.encoder_blocks.3.KPConv.kernel_points", "backbone.encoder_blocks.3.unary2.mlp.weight", "backbone.encoder_blocks.3.unary_shortcut.mlp.weight", "backbone.encoder_blocks.4.unary1.mlp.weight", "backbone.encoder_blocks.4.KPConv.weights", "backbone.encoder_blocks.4.KPConv.kernel_points", "backbone.encoder_blocks.4.unary2.mlp.weight", "backbone.encoder_blocks.5.unary1.mlp.weight", "backbone.encoder_blocks.5.KPConv.weights", "backbone.encoder_blocks.5.KPConv.kernel_points", "backbone.encoder_blocks.5.unary2.mlp.weight", "backbone.encoder_blocks.6.unary1.mlp.weight", "backbone.encoder_blocks.6.KPConv.weights", "backbone.encoder_blocks.6.KPConv.kernel_points", "backbone.encoder_blocks.6.unary2.mlp.weight", "backbone.encoder_blocks.6.unary_shortcut.mlp.weight", "backbone.encoder_blocks.7.unary1.mlp.weight", "backbone.encoder_blocks.7.KPConv.weights", "backbone.encoder_blocks.7.KPConv.kernel_points", "backbone.encoder_blocks.7.unary2.mlp.weight", "backbone.encoder_blocks.8.unary1.mlp.weight", "backbone.encoder_blocks.8.KPConv.weights", "backbone.encoder_blocks.8.KPConv.kernel_points", "backbone.encoder_blocks.8.unary2.mlp.weight", "backbone.encoder_blocks.9.unary1.mlp.weight", "backbone.encoder_blocks.9.KPConv.weights", "backbone.encoder_blocks.9.KPConv.kernel_points", "backbone.encoder_blocks.9.unary2.mlp.weight", "backbone.encoder_blocks.9.unary_shortcut.mlp.weight", "backbone.encoder_blocks.10.unary1.mlp.weight", "backbone.encoder_blocks.10.KPConv.weights", "backbone.encoder_blocks.10.KPConv.kernel_points", "backbone.encoder_blocks.10.unary2.mlp.weight", "backbone.coarse_out.weight", "backbone.coarse_out.bias", "backbone.coarse_in.weight", "backbone.coarse_in.bias", "backbone.decoder_blocks.1.mlp.weight", "backbone.decoder_blocks.3.mlp.weight", "backbone.decoder_blocks.5.mlp.weight", "backbone.fine_out.weight", "backbone.fine_out.bias", "coarse_transformer.layers.0.q_proj.weight", "coarse_transformer.layers.0.k_proj.weight", "coarse_transformer.layers.0.v_proj.weight", "coarse_transformer.layers.0.merge.weight", "coarse_transformer.layers.0.mlp.0.weight", "coarse_transformer.layers.0.mlp.2.weight", "coarse_transformer.layers.0.norm1.weight", "coarse_transformer.layers.0.norm1.bias", "coarse_transformer.layers.0.norm2.weight", "coarse_transformer.layers.0.norm2.bias", "coarse_transformer.layers.1.q_proj.weight", "coarse_transformer.layers.1.k_proj.weight", "coarse_transformer.layers.1.v_proj.weight", "coarse_transformer.layers.1.merge.weight", "coarse_transformer.layers.1.mlp.0.weight", "coarse_transformer.layers.1.mlp.2.weight", "coarse_transformer.layers.1.norm1.weight", "coarse_transformer.layers.1.norm1.bias", "coarse_transformer.layers.1.norm2.weight", "coarse_transformer.layers.1.norm2.bias", "coarse_transformer.layers.2.0.src_proj.weight", "coarse_transformer.layers.2.0.tgt_proj.weight", "coarse_transformer.layers.2.0.instNormLayer.weight", 
"coarse_transformer.layers.2.0.instNormLayer.bias", "coarse_transformer.layers.2.0.edgeNormLayer.weight", "coarse_transformer.layers.2.0.edgeNormLayer.bias", "coarse_transformer.layers.3.q_proj.weight", "coarse_transformer.layers.3.k_proj.weight", "coarse_transformer.layers.3.v_proj.weight", "coarse_transformer.layers.3.merge.weight", "coarse_transformer.layers.3.mlp.0.weight", "coarse_transformer.layers.3.mlp.2.weight", "coarse_transformer.layers.3.norm1.weight", "coarse_transformer.layers.3.norm1.bias", "coarse_transformer.layers.3.norm2.weight", "coarse_transformer.layers.3.norm2.bias", "coarse_transformer.layers.4.q_proj.weight", "coarse_transformer.layers.4.k_proj.weight", "coarse_transformer.layers.4.v_proj.weight", "coarse_transformer.layers.4.merge.weight", "coarse_transformer.layers.4.mlp.0.weight", "coarse_transformer.layers.4.mlp.2.weight", "coarse_transformer.layers.4.norm1.weight", "coarse_transformer.layers.4.norm1.bias", "coarse_transformer.layers.4.norm2.weight", "coarse_transformer.layers.4.norm2.bias", "coarse_matching.src_proj.weight", "coarse_matching.tgt_proj.weight", "coarse_matching.instNormLayer.weight", "coarse_matching.instNormLayer.bias", "coarse_matching.edgeNormLayer.weight", "coarse_matching.edgeNormLayer.bias". 
	Unexpected key(s) in state_dict: "kpf_encoder.encoder_blocks.0.KPConv.weights", "kpf_encoder.encoder_blocks.0.KPConv.kernel_points", "kpf_encoder.encoder_blocks.1.unary1.mlp.weight", "kpf_encoder.encoder_blocks.1.KPConv.weights", "kpf_encoder.encoder_blocks.1.KPConv.kernel_points", "kpf_encoder.encoder_blocks.1.unary2.mlp.weight", "kpf_encoder.encoder_blocks.1.unary_shortcut.mlp.weight", "kpf_encoder.encoder_blocks.2.unary1.mlp.weight", "kpf_encoder.encoder_blocks.2.KPConv.weights", "kpf_encoder.encoder_blocks.2.KPConv.kernel_points", "kpf_encoder.encoder_blocks.2.unary2.mlp.weight", "kpf_encoder.encoder_blocks.3.unary1.mlp.weight", "kpf_encoder.encoder_blocks.3.KPConv.weights", "kpf_encoder.encoder_blocks.3.KPConv.kernel_points", "kpf_encoder.encoder_blocks.3.unary2.mlp.weight", "kpf_encoder.encoder_blocks.3.unary_shortcut.mlp.weight", "kpf_encoder.encoder_blocks.4.unary1.mlp.weight", "kpf_encoder.encoder_blocks.4.KPConv.weights", "kpf_encoder.encoder_blocks.4.KPConv.kernel_points", "kpf_encoder.encoder_blocks.4.unary2.mlp.weight", "kpf_encoder.encoder_blocks.5.unary1.mlp.weight", "kpf_encoder.encoder_blocks.5.KPConv.weights", "kpf_encoder.encoder_blocks.5.KPConv.kernel_points", "kpf_encoder.encoder_blocks.5.unary2.mlp.weight", "kpf_encoder.encoder_blocks.6.unary1.mlp.weight", "kpf_encoder.encoder_blocks.6.KPConv.weights", "kpf_encoder.encoder_blocks.6.KPConv.kernel_points", "kpf_encoder.encoder_blocks.6.unary2.mlp.weight", "kpf_encoder.encoder_blocks.6.unary_shortcut.mlp.weight", "kpf_encoder.encoder_blocks.7.unary1.mlp.weight", "kpf_encoder.encoder_blocks.7.KPConv.weights", "kpf_encoder.encoder_blocks.7.KPConv.kernel_points", "kpf_encoder.encoder_blocks.7.unary2.mlp.weight", "kpf_encoder.encoder_blocks.8.unary1.mlp.weight", "kpf_encoder.encoder_blocks.8.KPConv.weights", "kpf_encoder.encoder_blocks.8.KPConv.kernel_points", "kpf_encoder.encoder_blocks.8.unary2.mlp.weight", "kpf_encoder.encoder_blocks.9.unary1.mlp.weight", "kpf_encoder.encoder_blocks.9.KPConv.weights", "kpf_encoder.encoder_blocks.9.KPConv.kernel_points", "kpf_encoder.encoder_blocks.9.unary2.mlp.weight", "kpf_encoder.encoder_blocks.9.unary_shortcut.mlp.weight", "kpf_encoder.encoder_blocks.10.unary1.mlp.weight", "kpf_encoder.encoder_blocks.10.KPConv.weights", "kpf_encoder.encoder_blocks.10.KPConv.kernel_points", "kpf_encoder.encoder_blocks.10.unary2.mlp.weight", "feat_proj.weight", "feat_proj.bias", "transformer_encoder.layers.0.self_attn.in_proj_weight", "transformer_encoder.layers.0.self_attn.in_proj_bias", "transformer_encoder.layers.0.self_attn.out_proj.weight", "transformer_encoder.layers.0.self_attn.out_proj.bias", "transformer_encoder.layers.0.multihead_attn.in_proj_weight", "transformer_encoder.layers.0.multihead_attn.in_proj_bias", "transformer_encoder.layers.0.multihead_attn.out_proj.weight", "transformer_encoder.layers.0.multihead_attn.out_proj.bias", "transformer_encoder.layers.0.linear1.weight", "transformer_encoder.layers.0.linear1.bias", "transformer_encoder.layers.0.linear2.weight", "transformer_encoder.layers.0.linear2.bias", "transformer_encoder.layers.0.norm1.weight", "transformer_encoder.layers.0.norm1.bias", "transformer_encoder.layers.0.norm2.weight", "transformer_encoder.layers.0.norm2.bias", "transformer_encoder.layers.0.norm3.weight", "transformer_encoder.layers.0.norm3.bias", "transformer_encoder.layers.1.self_attn.in_proj_weight", "transformer_encoder.layers.1.self_attn.in_proj_bias", "transformer_encoder.layers.1.self_attn.out_proj.weight", "transformer_encoder.layers.1.self_attn.out_proj.bias", 
"transformer_encoder.layers.1.multihead_attn.in_proj_weight", "transformer_encoder.layers.1.multihead_attn.in_proj_bias", "transformer_encoder.layers.1.multihead_attn.out_proj.weight", "transformer_encoder.layers.1.multihead_attn.out_proj.bias", "transformer_encoder.layers.1.linear1.weight", "transformer_encoder.layers.1.linear1.bias", "transformer_encoder.layers.1.linear2.weight", "transformer_encoder.layers.1.linear2.bias", "transformer_encoder.layers.1.norm1.weight", "transformer_encoder.layers.1.norm1.bias", "transformer_encoder.layers.1.norm2.weight", "transformer_encoder.layers.1.norm2.bias", "transformer_encoder.layers.1.norm3.weight", "transformer_encoder.layers.1.norm3.bias", "transformer_encoder.layers.2.self_attn.in_proj_weight", "transformer_encoder.layers.2.self_attn.in_proj_bias", "transformer_encoder.layers.2.self_attn.out_proj.weight", "transformer_encoder.layers.2.self_attn.out_proj.bias", "transformer_encoder.layers.2.multihead_attn.in_proj_weight", "transformer_encoder.layers.2.multihead_attn.in_proj_bias", "transformer_encoder.layers.2.multihead_attn.out_proj.weight", "transformer_encoder.layers.2.multihead_attn.out_proj.bias", "transformer_encoder.layers.2.linear1.weight", "transformer_encoder.layers.2.linear1.bias", "transformer_encoder.layers.2.linear2.weight", "transformer_encoder.layers.2.linear2.bias", "transformer_encoder.layers.2.norm1.weight", "transformer_encoder.layers.2.norm1.bias", "transformer_encoder.layers.2.norm2.weight", "transformer_encoder.layers.2.norm2.bias", "transformer_encoder.layers.2.norm3.weight", "transformer_encoder.layers.2.norm3.bias", "transformer_encoder.layers.3.self_attn.in_proj_weight", "transformer_encoder.layers.3.self_attn.in_proj_bias", "transformer_encoder.layers.3.self_attn.out_proj.weight", "transformer_encoder.layers.3.self_attn.out_proj.bias", "transformer_encoder.layers.3.multihead_attn.in_proj_weight", "transformer_encoder.layers.3.multihead_attn.in_proj_bias", "transformer_encoder.layers.3.multihead_attn.out_proj.weight", "transformer_encoder.layers.3.multihead_attn.out_proj.bias", "transformer_encoder.layers.3.linear1.weight", "transformer_encoder.layers.3.linear1.bias", "transformer_encoder.layers.3.linear2.weight", "transformer_encoder.layers.3.linear2.bias", "transformer_encoder.layers.3.norm1.weight", "transformer_encoder.layers.3.norm1.bias", "transformer_encoder.layers.3.norm2.weight", "transformer_encoder.layers.3.norm2.bias", "transformer_encoder.layers.3.norm3.weight", "transformer_encoder.layers.3.norm3.bias", "transformer_encoder.layers.4.self_attn.in_proj_weight", "transformer_encoder.layers.4.self_attn.in_proj_bias", "transformer_encoder.layers.4.self_attn.out_proj.weight", "transformer_encoder.layers.4.self_attn.out_proj.bias", "transformer_encoder.layers.4.multihead_attn.in_proj_weight", "transformer_encoder.layers.4.multihead_attn.in_proj_bias", "transformer_encoder.layers.4.multihead_attn.out_proj.weight", "transformer_encoder.layers.4.multihead_attn.out_proj.bias", "transformer_encoder.layers.4.linear1.weight", "transformer_encoder.layers.4.linear1.bias", "transformer_encoder.layers.4.linear2.weight", "transformer_encoder.layers.4.linear2.bias", "transformer_encoder.layers.4.norm1.weight", "transformer_encoder.layers.4.norm1.bias", "transformer_encoder.layers.4.norm2.weight", "transformer_encoder.layers.4.norm2.bias", "transformer_encoder.layers.4.norm3.weight", "transformer_encoder.layers.4.norm3.bias", "transformer_encoder.layers.5.self_attn.in_proj_weight", 
"transformer_encoder.layers.5.self_attn.in_proj_bias", "transformer_encoder.layers.5.self_attn.out_proj.weight", "transformer_encoder.layers.5.self_attn.out_proj.bias", "transformer_encoder.layers.5.multihead_attn.in_proj_weight", "transformer_encoder.layers.5.multihead_attn.in_proj_bias", "transformer_encoder.layers.5.multihead_attn.out_proj.weight", "transformer_encoder.layers.5.multihead_attn.out_proj.bias", "transformer_encoder.layers.5.linear1.weight", "transformer_encoder.layers.5.linear1.bias", "transformer_encoder.layers.5.linear2.weight", "transformer_encoder.layers.5.linear2.bias", "transformer_encoder.layers.5.norm1.weight", "transformer_encoder.layers.5.norm1.bias", "transformer_encoder.layers.5.norm2.weight", "transformer_encoder.layers.5.norm2.bias", "transformer_encoder.layers.5.norm3.weight", "transformer_encoder.layers.5.norm3.bias", "transformer_encoder.norm.weight", "transformer_encoder.norm.bias", "correspondence_decoder.coor_mlp.0.weight", "correspondence_decoder.coor_mlp.0.bias", "correspondence_decoder.coor_mlp.2.weight", "correspondence_decoder.coor_mlp.2.bias", "correspondence_decoder.coor_mlp.4.weight", "correspondence_decoder.coor_mlp.4.bias", "correspondence_decoder.conf_logits_decoder.weight", "correspondence_decoder.conf_logits_decoder.bias", "feature_criterion.W", "feature_criterion_un.W".

What could possibly be the issue here? It looks as if the backbone (and other block) names changed between training and testing: keys like 'backbone.encoder_blocks....' are reported missing, while 'kpf_encoder.encoder_blocks...' show up as unexpected keys. Any hint or help would be appreciated!
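
For a mismatch like this, it can help to compare what is actually stored in the checkpoint with what the model expects. A minimal diagnostic sketch, assuming a standard torch checkpoint; the file path below is only a placeholder:

import torch

# Placeholder path: point this at the downloaded pretrained weights.
ckpt = torch.load("pretrained/3dmatch.pth", map_location="cpu")
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt

# Top-level module names stored in the checkpoint, e.g. "backbone" vs "kpf_encoder".
print(sorted({key.split(".")[0] for key in state}))

# The same view of the model being loaded (the Pipeline instance built for inference):
# print(sorted({key.split(".")[0] for key in model.state_dict()}))

If the two prefix sets differ, the checkpoint and the model definition simply do not belong together.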

commented

I cannot reproduce this error on my PC ... Did you download the right model?

My mistake, I forgot I had made some changes to the model earlier; it works now.
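
For anyone who hits the same missing/unexpected-key pattern after deliberately renaming submodules, one rough workaround is to remap the prefixes in the checkpoint before loading it. A minimal sketch; the rename map below is hypothetical and must match your own changes:

import torch

rename = {"kpf_encoder.": "backbone."}  # hypothetical old-prefix -> new-prefix map

ckpt = torch.load("pretrained/3dmatch.pth", map_location="cpu")
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt

remapped = {}
for key, value in state.items():
    for old, new in rename.items():
        if key.startswith(old):
            key = new + key[len(old):]
            break
    remapped[key] = value

# model.load_state_dict(remapped)                 # strict load with the remapped keys
# model.load_state_dict(remapped, strict=False)   # or tolerate any remaining mismatch

Reverting the local model changes, as was done here, is of course the cleaner fix.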