thuyngch / Human-Segmentation-PyTorch

Human segmentation models, training/inference code, and trained weights, implemented in PyTorch

error test video

NguyenDangBinh opened this issue

/Human-Segmentation-PyTorch$ python inference_video.py --watch --checkpoint ./checkpoint/UNet_ResNet18.pth
Traceback (most recent call last):
File "inference_video.py", line 80, in <module>
model.load_state_dict(trained_dict, strict=False)
File "/home/anaconda3/envs/humansegmentation/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for UNet:
size mismatch for decoder1.deconv.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([1280, 96, 4, 4]).
size mismatch for decoder1.deconv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for decoder2.deconv.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([96, 32, 4, 4]).
size mismatch for decoder2.deconv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for decoder3.deconv.weight: copying a param with shape torch.Size([128, 64, 4, 4]) from checkpoint, the shape in current model is torch.Size([32, 24, 4, 4]).
size mismatch for decoder3.deconv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for decoder4.deconv.weight: copying a param with shape torch.Size([64, 64, 4, 4]) from checkpoint, the shape in current model is torch.Size([24, 16, 4, 4]).
size mismatch for decoder4.deconv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for conv_last.0.weight: copying a param with shape torch.Size([3, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 16, 3, 3]).
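Note that passing strict=False does not prevent this error: strict=False only tolerates missing or unexpected keys, while parameters present in both the checkpoint and the model must still match in shape. A minimal self-contained sketch (the Linear shapes below are illustrative, not the repo's actual layers):

```python
import torch.nn as nn

# Illustrative only: two layers whose weights have different shapes,
# standing in for the ResNet18 vs. MobileNetV2 decoder layers.
src = nn.Linear(512, 256)
dst = nn.Linear(1280, 96)

# strict=False only relaxes missing/unexpected keys; shape mismatches
# are always collected and raised, producing the same "size mismatch"
# RuntimeError as in the traceback above.
try:
    dst.load_state_dict(src.state_dict(), strict=False)
except RuntimeError as e:
    print("size mismatch" in str(e))  # prints: True
```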

In inference_video.py, the backbone is set to "mobilenetv2". Therefore, if you use the UNet_ResNet18 checkpoint, you need to change the backbone to "resnet18" here: https://github.com/AntiAegis/Human-Segmentation-PyTorch/blob/master/inference_video.py#L73
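For reference, the change amounts to something like the following sketch. The constructor and argument names are assumptions based on the repository's model code, not verified; the key point is that the backbone argument must match the checkpoint being loaded.

```python
# inference_video.py, at the linked line (names assumed, not verified)
model = UNet(
    backbone="resnet18",   # was "mobilenetv2"; must match the checkpoint
    num_classes=2,
)
```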