
ECCV2022, Oral, VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder

Home Page: https://ycgu.site/projects/VQFR_Project/

VQFR (ECCV 2022 Oral)

This paper investigates the potential and limitations of a Vector-Quantized (VQ) dictionary for blind face restoration.
We propose a new framework, VQFR, incorporating a Vector-Quantized Dictionary and a Parallel Decoder. Compared with previous methods, VQFR produces more realistic facial details while keeping comparable fidelity.


VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder

[Paper]   [Project Page]   [Video]   [B站]   [Poster]   [Slides]
Yuchao Gu, Xintao Wang, Liangbin Xie, Chao Dong, Gen Li, Ying Shan, Ming-Ming Cheng
Nankai University; Tencent ARC Lab; Tencent Online Video; Shanghai AI Laboratory;
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences


🔧 Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/TencentARC/VQFR.git
    cd VQFR
  2. Install dependent packages

    # Build VQFR with extension
    pip install -r requirements.txt
    VQFR_EXT=True python setup.py develop
    
    # Following packages are required to run demo.py
    
    # Install basicsr - https://github.com/xinntao/BasicSR
    pip install basicsr
    
    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib
    
    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
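After installation, a quick way to confirm the required packages are importable is a small standard-library check. This is a convenience sketch, not part of the repo; it assumes `setup.py develop` registers the package under the name `vqfr`:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Packages installed in the steps above; 'vqfr' is an assumed package name.
required = ["vqfr", "basicsr", "facexlib", "realesrgan"]
print("Missing:", missing_packages(required))
```

An empty list means everything needed by `demo.py` resolved correctly.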

⚡ Quick Inference

Download pre-trained VQFR model [Google Drive|腾讯微云].

Inference

# for real-world image
python demo.py -i inputs/whole_imgs -o results -v 1.0 -s 2

# for cropped face
python demo.py -i inputs/cropped_faces/ -o results -v 1.0 -s 1 --aligned
Usage: python demo.py -i inputs/whole_imgs -o results -v 1.0 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           VQFR model version. Option: 1.0. Default: 1.0
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler, 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Input are aligned faces
  -ext                 Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
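For scripted batch runs, the demo CLI above can be driven from Python. A minimal standard-library sketch (flag names are copied from the usage text above; `build_demo_cmd` and `run_demo` are hypothetical helpers, not part of the repo):

```python
import subprocess

def build_demo_cmd(input_path, output="results", version="1.0",
                   upscale=2, aligned=False):
    """Build the demo.py command line from the options documented above."""
    cmd = ["python", "demo.py",
           "-i", str(input_path),
           "-o", output,
           "-v", version,
           "-s", str(upscale)]
    if aligned:
        cmd.append("--aligned")  # inputs are pre-aligned face crops
    return cmd

def run_demo(**kwargs):
    # Raises CalledProcessError if demo.py exits with a non-zero status.
    subprocess.run(build_demo_cmd(**kwargs), check=True)

print(" ".join(build_demo_cmd("inputs/whole_imgs")))
```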

💻 Training

We provide the training codes for VQFR (used in our paper).

Codebook Training

  • Pre-train the VQ codebook on the FFHQ dataset.
 python -m torch.distributed.launch --nproc_per_node=8 --master_port=2022 vqfr/train.py -opt options/train/VQGAN/train_vqgan_v1_B16_800K.yml --launcher pytorch
  • Or download our pretrained VQ codebook [Google Drive|腾讯微云] and put them in the experiments/pretrained_models folder.

Restoration Training

  • Modify the configuration file options/train/VQFR/train_vqfr_v1_B16_200K.yml accordingly.

  • Training

python -m torch.distributed.launch --nproc_per_node=8 --master_port=2022 vqfr/train.py -opt options/train/VQFR/train_vqfr_v1_B16_200K.yml --launcher pytorch

📏 Evaluation

We evaluate VQFR on one synthetic dataset (CelebA-Test) and three real-world datasets (LFW-Test, CelebChild, and WebPhoto-Test). To reproduce our evaluation results, perform the following steps:

  1. Download testing datasets (or VQFR results) by the following links:
| Testing Datasets | Short Description | Download | VQFR Results |
| --- | --- | --- | --- |
| CelebA-Test (LQ/HQ) | 3000 (LQ, HQ) synthetic images for testing | Google Drive / 腾讯微云 | Google Drive / 腾讯微云 |
| LFW-Test (LQ) | 1711 real-world images for testing | | |
| CelebChild (LQ) | 180 real-world images for testing | | |
| WebPhoto-Test (LQ) | 469 real-world images for testing | | |
  2. Install related packages and download pretrained models for the different metrics:
    # LPIPS
    pip install lpips

    # Deg.
    cd metric_paper/
    git clone https://github.com/ronghuaiyang/arcface-pytorch.git
    mv arcface-pytorch/ arcface/
    rm arcface/config/__init__.py arcface/models/__init__.py

    # put pretrained models of different metrics to "experiments/pretrained_models/metric_weights/"
| Metrics | Pretrained Weights | Download |
| --- | --- | --- |
| FID | inception_FFHQ_512.pth | Google Drive / 腾讯微云 |
| Deg. | resnet18_110.pth | |
| LMD | alignment_WFLW_4HG.pth | |
  3. Generate restoration results:
  • Set dataset_lq/dataset_gt to the testing dataset roots in test_vqfr_v1.yml.

  • Then run the following command:

    python vqfr/test.py -opt options/test/VQFR/test_vqfr_v1.yml
  4. Run evaluation:
    # LPIPS|PSNR/SSIM|LMD|Deg.
    python metric_paper/[calculate_lpips.py|calculate_psnr_ssim.py|calculate_landmark_distance.py|calculate_cos_dist.py] \
        -restored_folder folder_to_results -gt_folder folder_to_gt

    # FID|NIQE
    python metric_paper/[calculate_fid_folder.py|calculate_niqe.py] -restored_folder folder_to_results
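For reference, the PSNR reported by the scripts above follows the standard definition, 10·log10(MAX² / MSE). A stand-alone sketch (not the repo's implementation) for 8-bit images given as flat pixel sequences:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images,
    given as flat sequences of pixel values: 10 * log10(MAX^2 / MSE)."""
    assert len(img_a) == len(img_b) and img_a
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(psnr([0, 128, 255], [1, 128, 254]))  # small distortion -> high PSNR
```

Higher is better; identical images give infinite PSNR, and the worst case (all pixels off by the full range) gives 0 dB.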

📜 License

VQFR is released under the Apache License, Version 2.0.

👀 Acknowledgement

Thanks to the following open-source projects:

Taming-transformers

GFPGAN

DistSup

📋 Citation

@inproceedings{gu2022vqfr,
  title={VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder},
  author={Gu, Yuchao and Wang, Xintao and Xie, Liangbin and Dong, Chao and Li, Gen and Shan, Ying and Cheng, Ming-Ming},
  year={2022},
  booktitle={ECCV}
}

📧 Contact

If you have any questions, please email yuchaogu9710@gmail.com.
