hmorimitsu / ptlflow

PyTorch Lightning Optical Flow models, scripts, and pretrained weights.


Different inference results with and without batch inference?

ZHAOZHIHAO opened this issue

Hi,

First, thanks for your nice library.

I tried to run inference on a batch as in #28, but the results do not seem to match those from non-batch inference. My code for the batch case is as follows:

    import numpy as np
    import torch

    from ptlflow.utils.io_adapter import IOAdapter

    # images: a list of numpy arrays in (H, W, C) format
    io_adapter = IOAdapter(model, images[0].shape[:2])
    inputs = io_adapter.prepare_inputs(np.array(images))
    # inputs["images"] has shape (1, N, C, H, W); drop the leading batch dim
    input_images = inputs["images"][0]
    # Pair each frame with its successor: resulting shape (N-1, 2, C, H, W)
    video1 = input_images[:-1]
    video2 = input_images[1:]
    input_images = torch.stack((video1, video2), dim=1)
    inputs["images"] = input_images
    out = model(inputs)
    out = io_adapter.unpad_and_unscale(out)

Best

Hi, thanks for reporting.

Could you also show how you do the non-batch inference? I ran a simple test and both give the same results, but maybe you are doing it in a different way. Please try to provide a more complete example showing the difference.
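
For reference, a pair-by-pair version along these lines would be the natural comparison. A minimal sketch, assuming the same `model` and `images` list as in your snippet, and reading the prediction from the `flows` key of the output dict:

    from ptlflow.utils.io_adapter import IOAdapter

    # Pair-by-pair (non-batch) inference over consecutive frames
    io_adapter = IOAdapter(model, images[0].shape[:2])
    flows = []
    for img1, img2 in zip(images[:-1], images[1:]):
        # prepare_inputs pads/scales a single frame pair
        inputs = io_adapter.prepare_inputs([img1, img2])
        out = model(inputs)
        out = io_adapter.unpad_and_unscale(out)
        # ptlflow models store the flow prediction under 'flows'
        flows.append(out["flows"])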

Best,

Hi, yesterday I was testing on CPU. I'll test on GPU and see if the results match.

While using the GPU, I found that the model takes more than 32 GB of memory when I run a batch of images sequentially, which is unreasonable. Wrapping the inference in `with torch.no_grad():` solves this problem.
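
A minimal sketch of where the `no_grad` goes, assuming a sequential loop like the one above: without it, PyTorch builds and retains the autograd graph for each forward pass, so memory keeps growing across frame pairs.

    import torch

    # Disable gradient tracking during inference so activations are
    # freed after each forward pass instead of kept for backprop
    with torch.no_grad():
        for img1, img2 in zip(images[:-1], images[1:]):
            inputs = io_adapter.prepare_inputs([img1, img2])
            out = model(inputs)
            out = io_adapter.unpad_and_unscale(out)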

Best

Hi,

I think the batch version is correct after all. I was feeding different inputs to the batch and sequential versions before, which caused the mismatch.

Best