
SimSwap-train

A reimplementation of the SimSwap training code
I used the Mid-Autumn Festival holiday to reimplement a version of SimSwap's training code, with support for training on high-resolution data. Sharing it with everyone.

Instructions

1. Environment Preparation

(1) Refer to the README of the official SimSwap repo to configure the environment and download the pretrained model.
(2) To support custom resolutions, modify two places in /*your envs*/site-packages/insightface/utils/face_align.py (see the sketch after this list):
  line 28: src_all = np.array([src1, src2, src3, src4, src5])
  line 53: src = src_all * image_size / 112
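For context, these two edits replace insightface's fixed 112/224 template lookup with a linear rescale of its five-point landmark templates, so any extract size works. A minimal illustration of why the rescale is valid, using the widely used ArcFace 112x112 template (the actual src1..src5 arrays are the ones already defined in face_align.py):

  import numpy as np

  # The standard ArcFace five-point template, defined in 112x112
  # coordinates (left eye, right eye, nose, mouth corners).
  arcface_112 = np.array([[38.2946, 51.6963], [73.5318, 51.5014],
                          [56.0252, 71.7366], [41.5493, 92.3655],
                          [70.7299, 92.2041]], dtype=np.float32)

  # Alignment estimates a similarity transform onto these points, so the
  # template scales linearly to any output resolution:
  image_size = 512
  template_512 = arcface_112 * image_size / 112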

2. Making Training Data

python make_dataset.py --dataroot ./dataset/CelebA --extract_size 512 --output_img_dir ./dataset/CelebA/imgs --output_latent_dir ./dataset/CelebA/latents

The aligned face images and their identity latents will be recorded in the output_img_dir and output_latent_dir directories, respectively.
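To sanity-check the prepared data, each aligned face should have a matching identity latent. A minimal loading sketch (the file extensions and shared base name here are assumptions; check what make_dataset.py actually wrote in your run):

  import os
  import numpy as np
  from PIL import Image

  def load_pair(img_dir, latent_dir, name):
      # NOTE: the '.jpg'/'.npy' extensions and shared base name are
      # assumptions; adjust to the files make_dataset.py produced.
      img = Image.open(os.path.join(img_dir, name + '.jpg')).convert('RGB')
      latent = np.load(os.path.join(latent_dir, name + '.npy'))
      return img, latent

  img, latent = load_pair('./dataset/CelebA/imgs',
                          './dataset/CelebA/latents', '000001')
  print(img.size, latent.shape)  # image resolution and ID-embedding shape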

3. Start Training

(1) New Training

CUDA_VISIBLE_DEVICES=0 python train.py --name CelebA_512 --dataroot ./dataset/CelebA --image_size 512 --display_winsize 512

Training visualizations, loss log files, and model weights will be stored in the checkpoints/name folder.
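The loss log can be inspected programmatically. Since the option set here (--name, --continue_train, checkpoints/name) follows pix2pixHD conventions, the sketch below assumes a pix2pixHD-style loss_log.txt; adjust the parsing if your log lines differ:

  import re

  def parse_loss_log(path):
      # Assumed line format: "(epoch: E, iters: I, time: T) name: value ..."
      records = []
      with open(path) as f:
          for line in f:
              header = re.match(r'\(epoch: (\d+), iters: (\d+)', line)
              if not header:
                  continue
              losses = {k: float(v)
                        for k, v in re.findall(r'(\w+): ([-+0-9.eE]+)',
                                               line.split(')', 1)[-1])}
              records.append((int(header.group(1)), int(header.group(2)), losses))
      return records

  records = parse_loss_log('./checkpoints/CelebA_512/loss_log.txt')
  if records:
      print(records[-1])  # most recent (epoch, iters, losses)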

(2) Finetuning

CUDA_VISIBLE_DEVICES=0 python train.py --name CelebA_512_finetune --dataroot ./dataset/CelebA --image_size 512 --display_winsize 512 --continue_train

If the checkpoints/name folder does not exist yet, train.py will first copy the official model from checkpoints/people to checkpoints/name and then start finetuning from those weights (a sketch of this bootstrap follows below).
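The same bootstrap can be done by hand if you want to inspect or swap the starting weights; a minimal sketch of the behaviour described above:

  import os
  import shutil

  name = 'CelebA_512_finetune'
  src_dir = './checkpoints/people'               # official pretrained model
  dst_dir = os.path.join('./checkpoints', name)  # this run's folder

  # Mirror of the --continue_train bootstrap: seed a new run folder with
  # the official SimSwap weights so finetuning resumes from them.
  if not os.path.isdir(dst_dir):
      shutil.copytree(src_dir, dst_dir)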

4. Training Results

(1) CelebA at 224x224 resolution

[result image]

(2) CelebA at 512x512 resolution

[result images]

5. Inference

  I applied spNorm to the high-resolution images during training, which helps the model learn. Therefore, during inference you need to change
  swap_result = swap_model(None, frame_align_crop_tenor, id_vetor, None, True)[0]
  to
  swap_result = swap_model(None, spNorm(frame_align_crop_tenor), id_vetor, None, True)[0]
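Here spNorm is SimSwap's SpecificNorm module (util/norm.py), which normalizes channels with ImageNet statistics. A functional sketch of the equivalent operation (for illustration only; use the repo's spNorm instance in practice):

  import torch

  def sp_norm(x):
      # Functional equivalent of SimSwap's SpecificNorm: channel-wise
      # normalization with ImageNet statistics.
      # x: float tensor of shape (N, 3, H, W) with values in [0, 1].
      mean = torch.tensor([0.485, 0.456, 0.406], device=x.device).view(1, 3, 1, 1)
      std = torch.tensor([0.229, 0.224, 0.225], device=x.device).view(1, 3, 1, 1)
      return (x - mean) / std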
