Need help running inference on grayscale images
mkothule opened this issue
I want to run the network on grayscale (single-channel) images. I get this error when running it on gray images:
Traceback (most recent call last):
  File "demo_imgs.py", line 100, in <module>
    demo(args)
  File "demo_imgs.py", line 50, in demo
    image1 = load_image(imfile1)
  File "demo_imgs.py", line 29, in load_image
    img = torch.from_numpy(img).permute(2, 0, 1).float()
RuntimeError: number of dims don't match in permute
I tried copying the same gray values into all 3 channels, but the results are not very good. I see that ETH3D is a grayscale image dataset, so I also tried the shared ETH3D model, but I still get the above error.
Can you please share what change is needed to adapt the network to gray images?
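For reference, here is a minimal sketch of the channel-replication change I tried in load_image (assuming the demo uses the usual RAFT-style loader with PIL and NumPy; DEVICE stands in for the demo's device string):

```python
import numpy as np
import torch
from PIL import Image

DEVICE = 'cuda'  # assumption: the demo moves tensors to CUDA

def load_image(imfile):
    img = np.array(Image.open(imfile)).astype(np.uint8)
    if img.ndim == 2:
        # A grayscale file loads as (H, W), which has only 2 dims, so
        # permute(2, 0, 1) fails; replicate the gray channel to (H, W, 3).
        img = np.stack([img] * 3, axis=-1)
    img = torch.from_numpy(img).permute(2, 0, 1).float()
    return img[None].to(DEVICE)
```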
Can you give me your gray images?
You can use the KITTI pretrained model; that will perform well.
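If you would rather not edit the demo code, another option is converting the gray images to 3-channel RGB up front; a minimal sketch (PIL's convert('RGB') copies the gray channel into R, G, and B; the filenames are hypothetical):

```python
from PIL import Image

# One-off conversion so the unmodified demo script can read the files.
for name in ['left_gray.png', 'right_gray.png']:  # hypothetical paths
    Image.open(name).convert('RGB').save(name.replace('_gray', '_rgb'))
```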
Thanks, gangweiX. I see sensible output with the KITTI 2015 pretrained model on the above images.