yc-cui / blindinpainting_vcnet

VCNet: a robust approach to blind image inpainting, ECCV2020


by Yi Wang, Ying-Cong Chen, Xin Tao, and Jiaya Jia. The training & testing specifications will be updated.

Introduction

This repository provides the implementation of our method from the ECCV 2020 paper 'VCNet: A Robust Approach to Blind Image Inpainting' (supplementary file). It studies how to automatically repair images with unknown contamination.

(Figure: learned semantic layouts)

Blind inpainting is the task of automatically completing visual content without a mask that specifies the missing areas of an image. Previous works assume the missing-region pattern is known, which limits their application scope. We instead relax this assumption by defining a new blind inpainting setting, in which a neural system is trained to be robust against a variety of unknown missing-region patterns.
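To make the setting concrete, below is a minimal sketch (our own illustration, not the paper's exact data pipeline) of how a training pair could be synthesized with the NumPy and OpenCV dependencies listed under Prerequisites: unrelated filler pixels are blended into a clean image under a random free-form stroke mask, and only the contaminated result is shown to the model.

import cv2
import numpy as np

def random_stroke_mask(h, w, num_strokes=5, max_thickness=20):
    """Draw a few random free-form strokes into a binary (0/255) mask."""
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(num_strokes):
        x, y = np.random.randint(0, w), np.random.randint(0, h)
        for _ in range(np.random.randint(3, 8)):  # a few connected segments
            nx = int(np.clip(x + np.random.randint(-60, 61), 0, w - 1))
            ny = int(np.clip(y + np.random.randint(-60, 61), 0, h - 1))
            cv2.line(mask, (int(x), int(y)), (nx, ny), 255,
                     int(np.random.randint(5, max_thickness)))
            x, y = nx, ny
    return mask

def make_blind_pair(clean, filler):
    """Blend unrelated `filler` pixels into `clean` under a stroke mask.

    Returns (contaminated, mask); in the blind setting, only the
    contaminated image is given to the model, which must infer the mask.
    """
    h, w = clean.shape[:2]
    mask = random_stroke_mask(h, w)
    m = mask[..., None].astype(np.float32) / 255.0  # (h, w, 1), in [0, 1]
    mixed = clean.astype(np.float32) * (1.0 - m) + filler.astype(np.float32) * m
    return mixed.astype(np.uint8), mask

At test time the trained model receives only the contaminated image and must detect and repair the corrupted regions jointly.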

Prerequisites

  • Python 3.5 (or higher)
  • TensorFlow 1.4 (or a later 1.x version; 2.x is not supported) with an NVIDIA GPU or CPU
  • OpenCV
  • numpy
  • scipy
  • easydict

These requirements apply to the TensorFlow implementation.

Pretrained models

  • FFHQ-HQ_p256, trained with stroke masks. (Password: ted9)
  • CelebA-HQ_p256, trained with stroke masks. (Password: 7dzs)
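The released checkpoints target TensorFlow 1.x. As a rough, hypothetical sketch of how such a checkpoint could be restored (the checkpoint path and tensor names below are placeholders for illustration, not documented names from these models):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x

ckpt = 'pretrained/celebahq_p256/model.ckpt'  # assumed path

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt + '.meta')
    saver.restore(sess, ckpt)
    graph = tf.get_default_graph()
    # Uncomment to list operation names and locate the real tensors:
    # print('\n'.join(op.name for op in graph.get_operations()))
    inp = graph.get_tensor_by_name('input:0')       # assumed name
    out = graph.get_tensor_by_name('inpainted:0')   # assumed name
    batch = np.zeros((1, 256, 256, 3), np.float32)  # dummy 256x256 RGB batch
    result = sess.run(out, feed_dict={inp: batch})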

Acknowledgments

Our code benefits a lot from pix2pixHD and Generative Image Inpainting with Contextual Attention.

Citation

If our method is useful for your research, please consider citing:

@article{wang2020vcnet,
    title={VCNet: A Robust Approach to Blind Image Inpainting},
    author={Wang, Yi and Chen, Ying-Cong and Tao, Xin and Jia, Jiaya},
    journal={arXiv preprint arXiv:2003.06816},
    year={2020}
}

Contact

For questions, please send an email to yiwang@cse.cuhk.edu.hk.


License: MIT

