Mask Guided Matting via Progressive Refinement Network

This repository includes the official project of Mask Guided (MG) Matting, presented in our paper:

Mask Guided Matting via Progressive Refinement Network

Johns Hopkins University, Adobe Research

MODNet refers to the recently released trimap-free matting method https://github.com/ZHKKKe/MODNet. PS refers to Adobe Photoshop 2021.

Check out more visual results and comparisons of MG Matting against other matting methods. We provide visual comparisons in two settings: in the first, MG Matting and its competitors are trained on Composition-1k only (except LFM); in the second, all methods are trained with different internal datasets.


We note that MG Matting, though it does not yet utilize temporal information, can also potentially produce great results on videos. Please refer to the following links for video demos.

Video Demo 1 | Video Demo 2 | Video Demo 3 | Video Demo 4

Highlights

  • Trimap-free Alpha Estimation: MG Matting does not require a carefully annotated trimap as guidance input. Instead, it takes a general rough mask, which can be generated automatically by segmentation or saliency models, and predicts an alpha matte with fine details (see the sketch after this list);

  • Foreground Color Prediction: MG Matting predicts the foreground color in addition to the alpha matte. We noticed the inaccuracy of foreground annotations in Composition-1k and address it with Random Alpha Blending (sketched in the Dataset section below);

  • No Additional Training Data: MG Matting is trained only with the widely used, publicly available synthetic dataset Composition-1k, and shows great performance on both synthetic and real-world benchmarks.
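The coarse-to-fine behavior comes from the Progressive Refinement Network, which fuses alpha predictions across scales: the finer level only overwrites the coarser prediction inside an uncertainty band derived from the coarser alpha itself. Below is a minimal sketch of that fusion step as we read it from the paper; the function names, the dilation size, and the toy inputs are our own illustrative choices, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def self_guidance_mask(prev_alpha: torch.Tensor, dilate: int = 15) -> torch.Tensor:
    """Mark uncertain pixels (0 < alpha < 1) in the coarser prediction,
    then dilate so the finer level refines a small band around them."""
    uncertain = ((prev_alpha > 0.0) & (prev_alpha < 1.0)).float()
    # Max-pooling acts as a cheap binary dilation.
    return F.max_pool2d(uncertain, kernel_size=dilate, stride=1, padding=dilate // 2)

def progressive_fuse(prev_alpha: torch.Tensor, cur_alpha: torch.Tensor) -> torch.Tensor:
    """Keep the coarser alpha in confident regions; take the finer
    prediction only inside the self-guidance band."""
    g = self_guidance_mask(prev_alpha)
    return prev_alpha * (1.0 - g) + cur_alpha * g

# Toy usage: random maps stand in for network outputs at one scale.
prev_alpha = torch.rand(1, 1, 64, 64)  # upsampled coarse-level alpha
cur_alpha = torch.rand(1, 1, 64, 64)   # finer-level raw prediction
fused = progressive_fuse(prev_alpha, cur_alpha)
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

Restricting refinement to the uncertainty band is what lets a rough mask work as guidance: confident foreground/background regions pass through unchanged while detail is recovered only around boundaries.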

TODO

  • The inference demo and real-world portrait benchmark will be released soon. Until then, if you want to test your model on the real-world portrait benchmark or compare results with MG Matting, feel free to contact Qihang Yu (yucornetto@gmail.com).

Visualization Examples

We provide examples visually comparing MG Matting with other matting methods. We also note that our model can potentially handle video matting.

In addition, we use an internal portrait dataset consisting of 4,395 images to train an even stronger MG Matting model, and build a fully automatic matting system on top of it. We provide a visual comparison of this automatic matting system against other recent matting methods, including MODNet, and commercial software such as Adobe Photoshop.

Please refer to RESULT.md for more visualization results.

Dataset

In our experiments, only the Composition-1k training set is used to train the model. The resulting model is evaluated on three datasets: Composition-1k, Distinction-646, and our real-world portrait dataset.

For Composition-1k, please contact Brian Price (bprice@adobe.com) to request the dataset, and refer to GCA Matting for dataset preparation.
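For reference, Composition-1k training images follow the standard matting composite I = αF + (1 − α)B, compositing each annotated foreground over a background image. The snippet below sketches that composite, plus the Random Alpha Blending idea from the Highlights: blending two foreground images with a randomly drawn alpha matte yields a training pair whose ground-truth foreground is exact by construction. The function names, the sampling scheme, and the toy arrays are illustrative assumptions, not the repository's code.

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Standard matting composite: I = alpha * F + (1 - alpha) * B.
    alpha is HxWx1 in [0, 1]; fg/bg are HxWx3 in [0, 1]."""
    return alpha * fg + (1.0 - alpha) * bg

def random_alpha_blending(fg1, fg2, alpha_pool, rng=np.random.default_rng()):
    """Sketch of Random Alpha Blending: blend two foregrounds with a
    randomly drawn alpha matte. The ground-truth foreground (fg1) is
    exact by construction, sidestepping noisy foreground annotations."""
    alpha = alpha_pool[rng.integers(len(alpha_pool))]
    image = composite(fg1, fg2, alpha)  # fg2 plays the role of background
    return image, alpha, fg1

# Toy usage: random arrays stand in for real images and mattes.
h, w = 64, 64
fg1, fg2 = np.random.rand(h, w, 3), np.random.rand(h, w, 3)
alpha_pool = [np.random.rand(h, w, 1)]
image, alpha, gt_fg = random_alpha_blending(fg1, fg2, alpha_pool)
print(image.shape, alpha.shape, gt_fg.shape)
```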

For Distinction-646, please refer to HAttMatting for the dataset.

Our real-world portrait dataset will be released to the public soon.

Citation

If you find this work or code useful for your research, please use the following BibTeX entry:

@article{yu2020mask,
  title={Mask Guided Matting via Progressive Refinement Network},
  author={Yu, Qihang and Zhang, Jianming and Zhang, He and Wang, Yilin and Lin, Zhe and Xu, Ning and Bai, Yutong and Yuille, Alan},
  journal={arXiv preprint arXiv:2012.06722},
  year={2020}
}

License

Research only
