Paper | Code and Demo coming soon
Image-based virtual try-on enables users to virtually try on different garments by altering the original clothes in their photographs. Generative Adversarial Networks (GANs) dominate the research field of image-based virtual try-on, but have not resolved problems such as unnatural garment deformation and blurry generation quality. Recently, diffusion models have shown strong performance across various image generation tasks. While the generative quality of diffusion models is impressive, controllability remains a significant challenge when applying them to virtual try-on, and their many denoising iterations limit their potential for real-time applications. In this paper, we propose a Controllable Accelerated virtual Try-on with Diffusion Model, called CAT-DM. To enhance controllability, a basic diffusion-based virtual try-on network is designed, which utilizes ControlNet to introduce additional control conditions and improves the feature extraction of garment images. For acceleration, CAT-DM initiates the reverse denoising process from an implicit distribution generated by a pre-trained GAN-based model. Compared with previous diffusion-based try-on methods, CAT-DM not only retains the pattern and texture details of the in-shop garment but also reduces the number of sampling steps without compromising generation quality. Extensive experiments demonstrate the superiority of CAT-DM over both GAN-based and diffusion-based methods in producing more realistic images and accurately reproducing garment patterns.
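The acceleration idea can be summarized in a short sketch: instead of starting the reverse diffusion from pure Gaussian noise, the output of a pre-trained GAN-based try-on model is perturbed to an intermediate timestep, and only the remaining steps are denoised. The snippet below is a minimal illustration of that idea, not the CAT-DM implementation; the `gan_tryon` and `denoise_step` callables, the schedule, and the step counts are placeholders.

```python
import torch

# Minimal sketch of GAN-initialized, truncated reverse diffusion (illustrative only;
# the model, schedule, and step counts below are placeholders, not CAT-DM's code).
T = 1000                                    # full diffusion length
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    """Diffuse a clean image x0 to timestep t (standard DDPM forward process)."""
    a = alphas_cumprod[t].sqrt()
    s = (1.0 - alphas_cumprod[t]).sqrt()
    return a * x0 + s * noise

def gan_initialized_sampling(gan_tryon, denoise_step, person, garment,
                             t_start=100, n_steps=2):
    """Start the reverse process from a GAN try-on result instead of pure noise."""
    x_gan = gan_tryon(person, garment)                     # coarse GAN-based try-on
    x = q_sample(x_gan, t_start, torch.randn_like(x_gan))  # perturb to timestep t_start
    for t in torch.linspace(t_start, 0, n_steps + 1).long()[:-1]:
        x = denoise_step(x, t, person, garment)            # one reverse diffusion update
    return x
```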
Our experiments were conducted on two NVIDIA GeForce RTX 4090 graphics cards, each with 24 GB of video memory. Please note that our model cannot be trained on graphics cards with less than 24 GB of video memory.
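If you are unsure whether your GPU meets this requirement, a quick check like the one below (a generic PyTorch snippet, not part of this repository) prints the total memory of every visible device:

```python
import torch

# Report the total memory of every visible CUDA device.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")
```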
- Clone the repository
git clone https://github.com/zengjianhao/CAT-DM
- A suitable `conda` environment named `CAT-DM` can be created and activated with:
conda env create -f environment.yaml
conda activate CAT-DM
- If you want to change the name of the environment you created, you need to modify the `name` in both `environment.yaml` and `setup.py`.
- You need to make sure that `conda` is installed on your computer.
- If there is a network error, try updating the environment using `conda env update -f environment.yaml`.
- Install xFormers (a quick import check is shown after these commands):
git clone https://github.com/facebookresearch/xformers.git
cd xformers
git submodule update --init --recursive
pip install -r requirements.txt
pip install -U xformers
cd ..
rm -rf xformers
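To confirm that the installation succeeded, a short check like the following (a generic snippet, not part of this repository) can be run:

```python
import xformers
import xformers.ops  # memory-efficient attention operators

# A successful import and a version string indicate the install worked.
print("xFormers version:", xformers.__version__)
```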
- Open `src/taming-transformers/taming/data/utils.py`, delete `from torch._six import string_classes`, and change `elif isinstance(elem, string_classes):` to `elif isinstance(elem, str):`.
- Download the DressCode dataset
- Generate the mask images and the agnostic images (a wrapper that runs all three categories is sketched after these commands):
# Generate the dresses dataset mask images and the agnostic images
python tools/dresscode_mask.py DatasetPath/dresses DatasetPath/dresses/mask DatasetPath/dresses/agnostic
# Generate the lower_body dataset mask images and the agnostic images
python tools/dresscode_mask.py DatasetPath/lower_body DatasetPath/lower_body/mask DatasetPath/lower_body/agnostic
# Generate the upper_body dataset mask images and the agnostic images
python tools/dresscode_mask.py DatasetPath/upper_body DatasetPath/upper_body/mask DatasetPath/upper_body/agnostic
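If you prefer to process all three DressCode categories in one go, a small wrapper like the following may help. It is a convenience sketch, not part of the repository, and it assumes the same `tools/dresscode_mask.py` interface as the commands above:

```python
import subprocess

# Convenience wrapper that runs tools/dresscode_mask.py for every DressCode category.
# Replace DatasetPath with the actual location of your DressCode dataset.
DATASET = "DatasetPath"
for category in ["dresses", "lower_body", "upper_body"]:
    root = f"{DATASET}/{category}"
    subprocess.run(
        ["python", "tools/dresscode_mask.py", root, f"{root}/mask", f"{root}/agnostic"],
        check=True,
    )
```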
- Download the VITON-HD dataset
- Generate the mask images
# Generate the train dataset mask images
python tools/viton_mask.py DatasetPath/train DatasetPath/train/mask
# Generate the test dataset mask images
python tools/viton_mask.py DatasetPath/test DatasetPath/test/mask
- Download the Paint-by-Example model
- Download the DINOv2 ViT-L/14 model
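If you only need a quick way to fetch and sanity-check the DINOv2 ViT-L/14 weights, they can also be pulled through `torch.hub` (a generic snippet for reference; the configs in this repository may expect the checkpoint at a specific local path instead):

```python
import torch

# Download DINOv2 ViT-L/14 from the official facebookresearch/dinov2 hub entry.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
model.eval()

# Sanity check: a 224x224 input yields a 1024-dimensional global feature.
with torch.no_grad():
    feat = model(torch.randn(1, 3, 224, 224))
print(feat.shape)  # torch.Size([1, 1024])
```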
- Make the ControlNet model (a conceptual sketch of this step follows the commands):
- DressCode:
python tools/add_control.py checkpoints/pbe.ckpt checkpoints/pbe-dim5.ckpt configs/train-dresscode.yaml
- VITON-HD:
python tools/add_control.py checkpoints/pbe.ckpt checkpoints/pbe-dim6.ckpt configs/train-viton.yaml
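Conceptually, `tools/add_control.py` follows the standard ControlNet initialization recipe: the Paint-by-Example weights are copied into the new checkpoint, and the control branch is seeded from the corresponding U-Net weights so that training starts from the pre-trained model. The sketch below illustrates that recipe in general terms only; the actual key names, the handling of the extra input channels (`dim5`/`dim6`), and the output path are specific to this repository's script.

```python
import torch

# Conceptual ControlNet-style initialization (a sketch, not tools/add_control.py).
# Copy all Paint-by-Example weights, then seed the control branch ("control_model.*")
# with the matching U-Net weights. Key prefixes and the output name are assumptions.
src = torch.load("checkpoints/pbe.ckpt", map_location="cpu")["state_dict"]
dst = dict(src)  # start from an unchanged copy of the pre-trained weights

for key, value in src.items():
    if key.startswith("model.diffusion_model."):           # U-Net blocks of the base model
        control_key = "control_model." + key[len("model.diffusion_model."):]
        dst[control_key] = value.clone()

torch.save({"state_dict": dst}, "checkpoints/pbe-control-init.ckpt")  # hypothetical output name
```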
- Test the model
- DressCode:
bash test-dresscode.sh
- VITON-HD:
bash test-viton.sh
- Train the model
- DressCode:
bash train-dresscode.sh
- VITON-HD:
bash train-viton.sh
@article{zeng2023cat,
title={CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model},
author={Zeng, Jianhao and Song, Dan and Nie, Weizhi and Tian, Hongshuo and Wang, Tongtong and Liu, Anan},
journal={arXiv preprint arXiv:2311.18405},
year={2023}
}