thuyngch / Human-Segmentation-PyTorch

Human segmentation models, training/inference code, and trained weights, implemented in PyTorch


error: python inference_video.py --watch --checkpoint ./checkpoint/UNet_ResNet18.pth

NguyenDangBinh opened this issue

Dear author,
~/Human-Segmentation-PyTorch$ python inference_video.py --watch --checkpoint ./checkpoint/UNet_ResNet18.pth
OpenCV: FFMPEG: tag 0x5634504d/'MP4V' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Traceback (most recent call last):
File "inference_video.py", line 80, in
model.load_state_dict(trained_dict, strict=False)
File "/home/anaconda3/envs/humansegmentation/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for UNet:
size mismatch for decoder1.deconv.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([1280, 96, 4, 4]).
size mismatch for decoder1.deconv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for decoder2.deconv.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([96, 32, 4, 4]).
size mismatch for decoder2.deconv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for decoder3.deconv.weight: copying a param with shape torch.Size([128, 64, 4, 4]) from checkpoint, the shape in current model is torch.Size([32, 24, 4, 4]).
size mismatch for decoder3.deconv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for decoder4.deconv.weight: copying a param with shape torch.Size([64, 64, 4, 4]) from checkpoint, the shape in current model is torch.Size([24, 16, 4, 4]).
size mismatch for decoder4.deconv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for conv_last.0.weight: copying a param with shape torch.Size([3, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 16, 3, 3]).

Can you tell me what this error means and how to fix it?

You can check issue #9 for the answer.
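
For context: size mismatches like the ones above typically mean the model constructed in inference_video.py uses a different encoder than the one the checkpoint was trained with. The checkpoint shapes (512/256/128/64 channels) are consistent with a ResNet18 encoder, while the current-model shapes (1280/96/32/24/16) look like a MobileNetV2 encoder. Below is a minimal sketch of loading the weights into a matching architecture; the UNet constructor arguments and the "state_dict" key are assumptions, so verify them against models/UNet.py and inference_video.py in this repo.

```python
# Minimal sketch (assumptions: the UNet class accepts a `backbone` argument and
# the checkpoint stores its weights under a "state_dict" key -- check
# models/UNet.py and inference_video.py for the exact names).
import torch
from models import UNet  # assumed import path

# Build the model with the same backbone the checkpoint was trained on.
model = UNet(backbone="resnet18", num_classes=2)

# Load the checkpoint on CPU and unwrap the state dict if it is nested.
checkpoint = torch.load("./checkpoint/UNet_ResNet18.pth", map_location="cpu")
trained_dict = checkpoint["state_dict"] if "state_dict" in checkpoint else checkpoint

# With a matching architecture, strict loading should succeed without
# size-mismatch errors.
model.load_state_dict(trained_dict, strict=True)
model.eval()
```

If inference_video.py already exposes a way to select the backbone, passing the backbone that matches the checkpoint should have the same effect.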