Region-Aware Portrait Retouching with Sparse Interactive Guidance

Huimin Zeng, Jie Huang, Jiacheng Li, Zhiwei Xiong

IEEE Transactions on Multimedia 2023

[Paper] [arXiv]

💡 Overview

❗ Prerequisites

  • Python 3.7
  • PyTorch 1.8.1
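
Before going further, you can quickly confirm that your environment matches these versions (a minimal sanity check, not part of the original repo):

import torch

# confirm the installed versions match the prerequisites above
print("PyTorch:", torch.__version__)          # expected: 1.8.1
print("CUDA available:", torch.cuda.is_available())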

To get started, first clone the repository:

git clone https://github.com/ZeldaM1/interactive_portrat_retouching.git

Alternatively, you can use our Docker image by pulling it with the following command:

docker pull registry.cn-hangzhou.aliyuncs.com/zenghuimin/zhm_docker:py37-torch18

🙌 Quick start

You can try our demo!

  1. Download the pre-trained models.
  2. Put the downloaded pre-trained models into ./ckpt.
  3. Run the interactive portrait retouching demo:
cd code
python demo.py --checkpoint ckpt/c_ckpt.pth

If everything works, an interactive retouching GUI will appear.

You can also retouch your own portraits. All you need to do is change the input and output paths. Have fun!
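
If the demo fails to start, you can first verify that the downloaded checkpoint is readable (a minimal sketch; the exact contents of c_ckpt.pth depend on how it was saved, so the key names printed below are not guaranteed):

import torch

# load the checkpoint on CPU and inspect its top-level structure
ckpt = torch.load("ckpt/c_ckpt.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    # training checkpoints are usually (nested) dicts of tensors
    print(list(ckpt.keys())[:10])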

👇 Training

First, please prepare the dataset for training.

  1. Download the PPR10K dataset from the official link.
  2. Download the per-instance annotations here.
  3. Unzip the images and annotations of PPR10K to ./dataset and organize them as follows:
dataset
├── train 
│   ├── masks_360p
│   ├── masks_ins_360p
│   ├── source
│   ├── target_a
│   ├── target_b
│   └── target_c
└── val
    ├── masks_360p
    ├── masks_ins_360p
    ├── source
    ├── target_a
    ├── target_b
    └── target_c
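
Before training, you can confirm the layout with a quick check (a minimal script, not part of the original repo):

import os

# verify that every expected subfolder of the PPR10K layout exists
SUBDIRS = ["masks_360p", "masks_ins_360p", "source",
           "target_a", "target_b", "target_c"]
for split in ("train", "val"):
    for sub in SUBDIRS:
        path = os.path.join("dataset", split, sub)
        print(path, "ok" if os.path.isdir(path) else "MISSING")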

Our code adopts a three-stage training process. First, train the automatic branch:

cd code
python train.py -opt options/train/c_s1_base.yml

Then, train the interactive branch:

python train.py -opt_base options/train/c_s1_base.yml -opt options/train/c_s2_inter.yml

Finally, train the joint model:

python train_dual_branch.py -opt_base options/train/c_s1_base.yml -opt options/train/c_s3_joint.yml
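
The -opt_base/-opt pattern suggests that the later stages override a base configuration file. Below is a minimal sketch of such YAML merging (an assumption for illustration only; the actual parser lives under code/options and may merge keys differently):

import yaml

def load_options(base_path, override_path=None):
    # read the base config, then shallow-merge stage-specific overrides
    with open(base_path) as f:
        opt = yaml.safe_load(f)
    if override_path:
        with open(override_path) as f:
            opt.update(yaml.safe_load(f))
    return opt

opt = load_options("options/train/c_s1_base.yml",
                   "options/train/c_s3_joint.yml")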

👇 Testing

cd code
python test.py -opt options/test/auto/c_s3_joint.yml -model ./ckpt/c_ckpt.pth --save_results   # automatic retouching evaluation
python test.py -opt options/test/inter/c_s3_joint.yml -model ./ckpt/c_ckpt.pth --save_results  # interactive retouching evaluation
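
Retouching quality on PPR10K is commonly reported as PSNR against the expert-retouched targets. Here is a minimal sketch for scoring one saved result against its ground truth (the file paths are hypothetical; point them at wherever --save_results writes outputs):

import numpy as np
from PIL import Image

def psnr(pred, gt):
    # PSNR for 8-bit images: 10 * log10(MAX^2 / MSE)
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

pred = Image.open("results/0001.png")                 # hypothetical output path
gt = Image.open("../dataset/val/target_c/0001.png")   # matching expert target
print("PSNR: %.2f dB" % psnr(pred, gt))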

👍 Citation

If our work inspires your research or some part of the code is useful for your work, please star this repo ⭐ and cite our paper:

@ARTICLE{10081407,
  author={Zeng, Huimin and Huang, Jie and Li, Jiacheng and Xiong, Zhiwei},
  journal={IEEE Transactions on Multimedia}, 
  title={Region-Aware Portrait Retouching with Sparse Interactive Guidance}, 
  year={2023},
  volume={},
  number={},
  pages={1-13},
  doi={10.1109/TMM.2023.3262185}}

📧 Contact

If you have any questions, please feel free to contact us.

👏 Acknowledgement

Some parts of this repo are based on RITM and CSRNet.

📄 License

This project is released under the GNU General Public License v3.0.

