knazeri / edge-connect

EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019 https://arxiv.org/abs/1901.00212

Home Page: http://openaccess.thecvf.com/content_ICCVW_2019/html/AIM/Nazeri_EdgeConnect_Structure_Guided_Image_Inpainting_using_Edge_Prediction_ICCVW_2019_paper.html

Hello, after reading your paper I have a question: why did you choose 178 as the crop size for the CelebA dataset?

FavorMylikes opened this issue · comments

Here is what the paper describes:

With CelebA, we cropped the center 178x178 of the images, then resized them to 256x256 using bilinear interpolation. For Paris StreetView, since the images in the dataset are elongated (936 x 537), we separate each image into three: 1) left 537 x 537, 2) middle 537 x 537, 3) right 537 x 537 of the image. These images are scaled down to 256x256 for our model, totaling 44,700 images.
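For reference, the preprocessing in the quoted paragraph reduces to simple crop-box arithmetic. Here is a minimal sketch (the helper names are mine, not from the EdgeConnect codebase; with Pillow you would pass each box to `Image.crop(box)` and then `resize((256, 256), Image.BILINEAR)`):

```python
def center_crop_box(width, height, crop):
    """Return the (left, top, right, bottom) box for a centered square crop."""
    left = (width - crop) // 2
    top = (height - crop) // 2
    return (left, top, left + crop, top + crop)

def streetview_crop_boxes(width, height):
    """Left / middle / right square crops of side `height` for an elongated image."""
    mid_left = (width - height) // 2
    return [
        (0, 0, height, height),                    # left
        (mid_left, 0, mid_left + height, height),  # middle
        (width - height, 0, width, height),        # right
    ]

# CelebA aligned images are 178x218, so a 178 center crop spans the full
# width and only trims 20 pixels from the top and bottom:
print(center_crop_box(178, 218, 178))    # (0, 20, 178, 198)

# Paris StreetView images are 936x537, split into three 537x537 crops:
print(streetview_crop_boxes(936, 537))
```

Since the aligned CelebA images are 178 pixels wide, a 178 crop keeps the full face region; a smaller crop size would zoom in and change what the network sees, which may explain why this number matters so much.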

And after a little testing, I feel this number has a big impact on the results.

So maybe you have some experience with it.

Could you share it? I would really appreciate it.