researchmm / STTN

[ECCV'2020] STTN: Learning Joint Spatial-Temporal Transformations for Video Inpainting

Home Page: https://arxiv.org/abs/2007.10247

output resolution?

antithing opened this issue · comments

Thank you for making this code available; it works great. However, it downsamples the input to 432x240, as shown here:

w, h = 432, 240
ref_length = 20
neighbor_stride = 5
default_fps = 24

This downsamples the output as well, so the result is lower resolution than the input.

Can I run this on full-size (HD, 4K UHD) videos and get output at the same resolution as the input?

Can you also please explain what ref_length and neighbor_stride do, and when to change these values?

Thank you!

I managed to figure this out. You can set the image width and height in test.py to your image size, but you also need to change the patch sizes in model/sttn.py. This resolves issue #7.

test.py, line 40:
w, h = your_image_width, your_image_height

model/sttn.py, line 70:
patchsize = [(image_width / 4, image_height / 4), (image_width / 8, image_height / 8), (image_width / 16, image_height / 16), (image_width / 32, image_height / 32)]

This worked for me at 256x256 resolution. I was worried that the weights wouldn't load because of the patch size change, but it doesn't seem to be a problem.
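Concretely, the values I ended up with at 256x256 (assuming you keep the default four attention scales):

# test.py
w, h = 256, 256

# model/sttn.py -- every factor (4, 8, 16, 32) divides 256 evenly
patchsize = [(64, 64), (32, 32), (16, 16), (8, 8)]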

ref_length controls how reference frames are sampled: the model takes one frame every ref_length frames as a global reference for each inpainted frame. neighbor_stride is how far the loop jumps, or strides, along the frames. Neither has anything to do with the image size.
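For context, here is roughly what the inference loop in test.py does with them (a paraphrased sketch from memory, not the exact repo code):

ref_length = 20       # defaults quoted above
neighbor_stride = 5
video_length = 100    # example frame count

def get_ref_index(neighbor_ids, length):
    # take one global reference frame every ref_length frames,
    # skipping frames already in the local window
    return [i for i in range(0, length, ref_length) if i not in neighbor_ids]

for f in range(0, video_length, neighbor_stride):
    # local window of frames around position f
    neighbor_ids = list(range(max(0, f - neighbor_stride),
                              min(video_length, f + neighbor_stride + 1)))
    # distant frames attended to alongside the local window
    ref_ids = get_ref_index(neighbor_ids, video_length)
    # ... run the model on neighbor_ids + ref_ids and write the results back ...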

Hi, and thanks! I have tried this at 1920x1080 resolution and I get a CUDA out-of-memory error (single RTX 3090).

Would changing ref_length or neighbor_stride help with this?

I'm not sure. Increasing ref_length might help, since the model would then attend to fewer reference frames, but I haven't looked into it. I don't want to take this discussion off-topic, so I suggest closing the issue if my fix above solved your problem.
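As a rough sanity check (my own back-of-envelope arithmetic, not anything from the repo), activation memory scales at least linearly with frame area, so 1080p is a big jump from the default:

default_px = 432 * 240      # 103,680 pixels per frame (default resolution)
hd_px = 1920 * 1080         # 2,073,600 pixels per frame
print(hd_px / default_px)   # 20.0 -- roughly 20x the activations per frame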

Hard to say if it fixes the issue since I can't run it, but as it is running out of memory, I assume that it is trying. Thanks for your help!

Hi @alex-flwls, it seems patchsize in model/sttn.py is calculated differently from your method.

The code in test.py and model/sttn.py is listed below:

w, h = 432, 240
patchsize = [(108, 60), (36, 20), (18, 10), (9, 5)]

Using your method, the patchsize would be [(108.0, 60.0), (54.0, 30.0), (27.0, 15.0), (13.5, 7.5)]. How should we deal with floats rather than integers?

Hi, yes, I did some further work on this after my comment. The patch size calculation I outlined in my original comment works for 256x256, but for other resolutions you need to make sure the resolution is evenly divisible by each factor (i.e. the patch sizes come out as integers, not floats). You also don't have to go down by factors of 4, 8, 16, and 32; in the original code the factors are 4, 12, 24, and 48.
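To make that constraint explicit, a small helper along these lines (my own sketch, not part of the repo) could generate the list and fail loudly on a bad resolution:

def compute_patchsize(w, h, factors=(4, 12, 24, 48)):
    # Hypothetical helper mirroring the hard-coded list in model/sttn.py:
    # one (patch_w, patch_h) pair per attention scale, integers only.
    sizes = []
    for f in factors:
        if w % f or h % f:
            raise ValueError(f"{w}x{h} is not evenly divisible by {f}")
        sizes.append((w // f, h // f))
    return sizes

# Reproduces the shipped defaults:
# compute_patchsize(432, 240) -> [(108, 60), (36, 20), (18, 10), (9, 5)]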

@alex-flwls thanks for the further explanation. So the point of setting patchsize is to choose factors that divide the resolution evenly?

It's just a workaround; the code doesn't seem to have been written to support multiple resolutions, but you can get around that by adjusting the patch sizes.

@alex-flwls I'll try it, thanks.