XWalways / SimSwap-train

Reimplement of SimSwap training code


  • 20210919: This code was originally written during the Mid-Autumn Festival;
  • 20211130: Later our team took on face-swapping work and made many improvements and optimizations, but those probably cannot be shared;
  • 20211211: I updated the usage documentation for this code; 512-pix training works. I hope it helps.


Instructions

1. Environment preparation

Step 1. Install Python packages

  1. Refer to the SimSwap preparation guide to install the required Python packages.
  2. Refer to the SimSwap preparation guide to download the 224-pix pretrained model (needed for finetuning; not needed for training from scratch) and the other necessary pretrained weights.

Step 2. Modify the insightface package to support arbitrary-resolution training

  • If you use Conda and your environment is named simswap, locate this file:
    C://Anaconda/envs/simswap/Lib/site-packages/insightface/utils/face_align.py

    Change lines 28 & 29:
    src = np.array([src1, src2, src3, src4, src5])
    src_map = {112: src, 224: src * 2}
    into
    src_all = np.array([src1, src2, src3, src4, src5])
    #src_map = {112: src, 224: src * 2}

    Change line 53:
    src = src_map[image_size]
    into
    src = src_all * image_size / 112

    After modifying the code, faces of any resolution can be extracted and passed to the model for training.
  • If you don't use Conda, locate the installed package and change the code in the same way as above.
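The effect of the patch can be sketched in a few lines: the five-point alignment template defined at 112 pixels is scaled linearly to the requested crop size instead of being looked up in a fixed {112, 224} table. The landmark values below are the standard ArcFace template shipped with insightface; `alignment_template` is an illustrative helper, not a function in the repo.

```python
import numpy as np

# Five facial landmarks (eye centers, nose tip, mouth corners) on a 112-pix
# crop; these are insightface's standard ArcFace alignment template values.
src_all = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float32)

def alignment_template(image_size: int) -> np.ndarray:
    """Scale the 112-pix template linearly to an arbitrary crop size,
    replacing the original fixed {112: src, 224: src * 2} lookup."""
    return src_all * image_size / 112
```

For `image_size=224` this reproduces the old hard-coded `src * 2` case, while 512 (or any other size) now also works.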



2. Preparing training data

Preparing image files

  • Put all the image files in your data path (e.g. ./dataset/CelebA)
  • We recommend the CelebA dataset, which contains clear and diverse face images.

Pre-Processing image files

  • Run the command:
    CUDA_VISIBLE_DEVICES=0 python make_dataset.py \
      --dataroot ./dataset/CelebA \
      --extract_size 512 \
      --output_img_dir ./dataset/CelebA/imgs \
      --output_latent_dir ./dataset/CelebA/latents

Getting extracted images and latents

  • When data processing is done, two folders will be created in ./dataset/CelebA/:
    ./dataset/CelebA/imgs/: extracted 512-pix face images
    ./dataset/CelebA/latents/: face identity latents extracted by the ArcFace network
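After extraction, every image should have a matching latent file. A quick sanity check can be sketched as follows; it assumes image and latent files share the same filename stem (e.g. `0001.jpg` pairs with `0001.npy`), which may differ in your setup:

```python
from pathlib import Path

def check_pairs(img_dir: str, latent_dir: str) -> list[str]:
    """Return filename stems of images that lack a matching latent file."""
    imgs = {p.stem for p in Path(img_dir).iterdir() if p.is_file()}
    latents = {p.stem for p in Path(latent_dir).iterdir() if p.is_file()}
    return sorted(imgs - latents)

# e.g. check_pairs("./dataset/CelebA/imgs", "./dataset/CelebA/latents")
# returns [] when every image has a latent.
```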



3. Start Training

Finetuning

  • Run the command:
    CUDA_VISIBLE_DEVICES=0 python train.py \
      --name CelebA_512_finetune \
      --which_epoch latest \
      --dataroot ./dataset/CelebA \
      --image_size 512 \
      --display_winsize 512 \
      --continue_train

    NOTICE:
      If checkpoints/CelebA_512_finetune does not yet exist, the official model will first be copied from checkpoints/people/latest_net_*.pth to checkpoints/CelebA_512_finetune/.
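That first-run behavior amounts to the following sketch; `ensure_finetune_checkpoint` is a hypothetical helper illustrating the described copy step, not the repo's actual code:

```python
import shutil
from pathlib import Path

def ensure_finetune_checkpoint(name: str, root: str = "checkpoints") -> None:
    """If checkpoints/<name>/ does not exist, seed it with the official
    latest_net_*.pth weights from checkpoints/people/."""
    dst = Path(root) / name
    if dst.exists():
        return  # existing finetune folder is left untouched
    dst.mkdir(parents=True)
    for src in (Path(root) / "people").glob("latest_net_*.pth"):
        shutil.copy(src, dst / src.name)
```

With `--continue_train`, training then resumes from whatever weights sit in that folder.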

New training

  • Run the command:
    CUDA_VISIBLE_DEVICES=0 python train.py \
      --name CelebA_512 \
      --which_epoch latest \
      --dataroot ./dataset/CelebA \
      --image_size 512 \
      --display_winsize 512

  • When training is done, several files will be created in the checkpoints/CelebA_512 folder:
    web/: training-process visualization files
    latest_net_G.pth: Latest checkpoint of G network
    latest_net_D1.pth: Latest checkpoint of D1 network
    latest_net_D2.pth: Latest checkpoint of D2 network
    loss_log.txt: records losses over the whole training process
    iter.txt: records iteration state
    opt.txt: records the options used for the training
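To monitor a run, loss_log.txt can be parsed with a small script. The line format assumed below is the pix2pixHD style that SimSwap's training code is based on, e.g. `(epoch: 3, iters: 400, time: 0.512) G_GAN: 0.821 D_real: 0.433`; adjust the pattern if your log differs:

```python
import re

# Assumed pix2pixHD-style log line; verify against your own loss_log.txt.
LINE = re.compile(r"\(epoch: (\d+), iters: (\d+)[^)]*\)\s*(.*)")

def parse_loss_line(line: str):
    """Return (epoch, iters, {loss_name: value}), or None for non-loss lines."""
    m = LINE.match(line.strip())
    if not m:
        return None
    losses = {k: float(v)
              for k, v in re.findall(r"(\w+): ([-\d.eE]+)", m.group(3))}
    return int(m.group(1)), int(m.group(2)), losses
```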




4. Training Results

(1) CelebA at 224x224 resolution

[result image]

(2) CelebA at 512x512 resolution

[result images]



5. Inference

Face swapping for video with 1 face

  • Run the command:
    python test_video_swapsingle.py \
      --image_size 512 \
      --use_mask \
      --name CelebA_512_finetune \
      --Arc_path arcface_model/arcface_checkpoint.tar \
      --pic_a_path ./demo_file/Iron_man.jpg \
      --video_path ./demo_file/multi_people_1080p.mp4 \
      --output_path ./output/multi_test_swapsingle.mp4 \
      --temp_path ./temp_results

Face swapping for video/images with more faces

Differences between our code and the official code

  • param crop_size -> image_size
  • I apply spNorm to the high-resolution images during training, which helps the model learn.
  • This code is compatible with the official SimSwap pretrained weights.



License: Other

