gangweiX / IGEV

[CVPR 2023] Iterative Geometry Encoding Volume for Stereo Matching and Multi-View Stereo

Need help doing inference on grayscale images

mkothule opened this issue

I want to run the network on grayscale (single-channel) images.

I get this error when running the network on gray images:

```
Traceback (most recent call last):
  File "demo_imgs.py", line 100, in <module>
    demo(args)
  File "demo_imgs.py", line 50, in demo
    image1 = load_image(imfile1)
  File "demo_imgs.py", line 29, in load_image
    img = torch.from_numpy(img).permute(2, 0, 1).float()
RuntimeError: number of dims don't match in permute
```

I tried copying the same gray values into all 3 channels, but the results are not very good.
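For reference, the channel-replication workaround can be sketched like this (a minimal, hypothetical `load_gray_as_rgb` helper; the names are my own, not from demo_imgs.py, which would need an equivalent change in its `load_image`):

```python
import numpy as np
import torch

def load_gray_as_rgb(img):
    """Convert an image array to the (3, H, W) float tensor the network expects.

    If the input is single-channel (H, W), replicate it across 3 channels
    so the RGB-trained network accepts it; 3-channel input passes through.
    """
    if img.ndim == 2:
        img = np.stack([img] * 3, axis=-1)  # (H, W) -> (H, W, 3)
    return torch.from_numpy(img).permute(2, 0, 1).float()  # -> (C, H, W)

# Dummy KITTI-sized grayscale frame for illustration.
gray = np.zeros((375, 1242), dtype=np.uint8)
tensor = load_gray_as_rgb(gray)
print(tensor.shape)  # torch.Size([3, 375, 1242])
```

This only fixes the `permute` crash; since the model was trained on RGB statistics, replicated-gray input may still degrade quality, as noted above.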

I see ETH3D is a grayscale dataset, so I also tried the shared ETH3D model, but I still get the above error.

Could you please share what change is needed to adapt the network to grayscale images?

Can you give me your gray images?

Currently I am using KITTI images (converted with RGB2Gray) for experimentation.
[Attached grayscale images: 000000_10_image_3, 000000_10_image_2]

You can use the KITTI pretrained model; that should perform well.

Thanks @gangweiX. I see sensible output with the kitti2015 pretrained model on the above images.