We convert the original VITON data into different directories for easy use.
You can get the processed data from Google Drive or by running:
We use only L1 loss as the criterion in this code.
A TV-norm constraint on the offsets makes the GMM more robust.
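To sketch what these two terms compute, here is a minimal NumPy illustration (our own, not the repository's actual PyTorch code; the function names are hypothetical):

```python
import numpy as np

def l1_criterion(pred, target):
    # Plain L1 loss: mean absolute difference between the warped
    # cloth (or try-on output) and the ground truth.
    return np.abs(pred - target).mean()

def tv_norm(offsets):
    # Total-variation penalty on an (H, W, 2) grid of sampling offsets:
    # it penalizes large jumps between neighboring offsets, which keeps
    # the learned warp smooth and the GMM more robust.
    dh = np.abs(offsets[1:, :, :] - offsets[:-1, :, :]).mean()
    dw = np.abs(offsets[:, 1:, :] - offsets[:, :-1, :]).mean()
    return dh + dw
```

In training, the total objective would then be the L1 term plus a small weight times the TV term.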
An example training command is:

```bash
python train.py --name gmm_train_new --stage GMM --workers 4 --save_count 5000 --shuffle
```
Choose different source data for evaluation with the option `--datamode`.
An example testing command is:

```bash
python test.py --name gmm_traintest_new --stage GMM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/gmm_train_new/gmm_final.pth
```
You can see the results in TensorBoard, as shown below.
Use cp_dataset.py to keep the background, or cp_dataset_old to remove it.
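For intuition, background removal in this setting amounts to masking the person image with its parsing mask. A minimal NumPy sketch (our own illustration; the actual preprocessing in cp_dataset_old may differ):

```python
import numpy as np

def mask_background(image, person_mask, fill=1.0):
    # image: (H, W, 3) float array in [0, 1].
    # person_mask: (H, W) binary mask, 1 on the person, 0 on background.
    # Background pixels are replaced with a constant fill value (white here).
    out = image.copy()
    out[person_mask == 0] = fill
    return out
```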
Before training, you should generate warp-mask and warp-cloth by running the GMM test process with `--datamode train`.
Then move these files, or make soft links to them, under the directory data/train.
An example training command is:

```bash
python train.py --name tom_train_new --stage TOM --workers 4 --save_count 5000 --shuffle
```
You can see the results in TensorBoard, as shown below.
An example testing command is:

```bash
python test.py --name tom_test_new --stage TOM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/tom_train_new/tom_final.pth
```
You can see the results in TensorBoard, as shown below.
You can download them here.