TypeError: pic should be PIL Image or ndarray. Got <class 'NoneType'>
dqyyds opened this issue · comments
Thanks so much again for your amazing work. Below is the latest error I encountered when running python3 gendata.py ./images.
./images/2.jpg
Gathering dlatents...
Done!
100%|███████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 604.67it/s]
/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torch/nn/functional.py:3613: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode)
/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torch/nn/functional.py:3658: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
"The default behavior for interpolate/upsample with float scale_factor changed "
Projecting image(s) 1/1
loss: 1.871e+06, lpips_distance: 0.4391, lr: 0
noise_reg: 18.71
100%|██████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.08it/s]
Traceback (most recent call last):
File "gendata.py", line 112, in
faceimg = faceattr_trans(faceimg)
File "/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 60, in call
img = t(img)
File "/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 97, in call
return F.to_tensor(pic)
File "/home/ubuntu/extend/anaconda-temp/envs/dq_StyleFlow/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 102, in to_tensor
raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
TypeError: pic should be PIL Image or ndarray. Got <class 'NoneType'>
I would be super grateful if you could tell me how to fix it.
This means a face was not detected. There are some defects in the face detection. You can remove that image.
Thank you so much for your reply. I didn't remove the face detection. Instead, I added faceimg = transforms.ToPILImage()(faceimg), which had actually been removed, and it worked.
xxx: accuracy of age and eyeglasses is very poor
#faceimg = cv.resize(faceimg, (224,224))
#faceimg = faceimg[..., ::-1] # RGB
faceimg = transforms.ToPILImage()(faceimg)
faceimg = faceimg.resize((224,224))
faceimg = faceattr_trans(faceimg)
inputs = torch.unsqueeze(faceimg, 0).float().to(device)
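For reference, the NoneType error ultimately comes from the face-detection step (or cv.imread) returning None, which torchvision's ToTensor then rejects. Below is a minimal sketch of a guard one could place before the transform pipeline; the helper name is hypothetical, not taken from gendata.py:

```python
import numpy as np

def validate_face(faceimg):
    """Return faceimg unchanged if it looks like an image array;
    otherwise fail with a clear message instead of the opaque
    'pic should be PIL Image or ndarray' TypeError."""
    if faceimg is None:
        # cv.imread and face detectors return None on failure
        raise ValueError("face detector returned None (no face found); skip this image")
    if not isinstance(faceimg, np.ndarray) or faceimg.ndim != 3:
        raise TypeError("expected an H x W x 3 ndarray, got {}".format(type(faceimg)))
    return faceimg
```

With such a guard, an image without a detectable face fails early with an actionable message rather than deep inside torchvision.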
I would like to ask how this will affect the final experimental results, because I feel that resizing arbitrary face pictures this way is not ideal. Thank you for your time.
Remove the pictures in which no face is detected from your directory.
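One way to follow this advice is to sweep the input directory and set aside every image in which no face is found. The sketch below is hypothetical (the helper name and detector callable are not part of StyleFlow); detect_face would wrap whatever detector the project uses, e.g. dlib or an OpenCV Haar cascade:

```python
import os
import shutil

def filter_undetected(image_dir, detect_face, reject_dir=None):
    """Return the filenames for which detect_face(path) is falsy.
    If reject_dir is given, move those files there so gendata.py
    only ever sees images with a detectable face."""
    rejected = []
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue  # ignore non-image files
        path = os.path.join(image_dir, name)
        if not detect_face(path):
            rejected.append(name)
            if reject_dir is not None:
                os.makedirs(reject_dir, exist_ok=True)
                shutil.move(path, os.path.join(reject_dir, name))
    return rejected
```

Running this once before gendata.py avoids the NoneType crash entirely, at the cost of silently dropping images the detector cannot handle.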
Thank you so much for your kind reply. https://gitee.com/harazdbuild/aesthetics_fcy_dq/raw/master/73.jpg was the face I input, and https://gitee.com/harazdbuild/aesthetics_fcy_dq/raw/master/13.png was the result I got after I changed the expression.
What do you think of the result? Because I noticed that the generated face is not exactly the same as the custom face. I would be very grateful if you could reply to me, as my next work will be based on this. Thank you so much.
Your url cannot be accessed.
Seems okay. You can try the e4e encoder if the result is based on the pSp encoder. You can also try StyleSpace or StyleCLIP.