The dataset is created by us using the Pillow library for Python 3. Images are 500x500 px. The `assets` folder contains all the data files required to create the dataset images:
- `assets/alice_in_wonderland.txt` - the Alice in Wonderland text file, used to generate the English sentences that are placed in the images.
- `assets/spots` - images of transparent spots/stains found on the internet.
We create the input images by taking a random sentence from the .txt file and placing it at a random position in the image. These images are saved in the `input_images` folder. We then take an input image from the `input_images` folder and add to it a random spot/stain image taken from the `assets/spots` folder.
We randomize some parameters:
- Position of the added spot/stain
- Angle of the image
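The generation steps above could be sketched with Pillow roughly as follows (a minimal sketch; function names, font, and canvas color are assumptions, not the project's actual code):

```python
import random
from PIL import Image, ImageDraw

def make_input_image(sentence, size=500):
    # White 500x500 canvas with the sentence drawn at a random position
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    x = random.randint(0, size // 2)
    y = random.randint(0, size - 20)
    draw.text((x, y), sentence, fill="black")
    return img

def add_spot(input_img, spot_img):
    # Rotate the spot by a random angle, then paste it at a random position,
    # using its alpha channel as the paste mask (the spots are transparent)
    spot = spot_img.convert("RGBA").rotate(random.uniform(0, 360), expand=True)
    x = random.randint(0, max(0, input_img.width - spot.width))
    y = random.randint(0, max(0, input_img.height - spot.height))
    out = input_img.copy()
    out.paste(spot, (x, y), spot)
    return out
```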
After that, the output image is saved.

A PyTorch implementation of AttGAN - Arbitrary Facial Attribute Editing: Only Change What You Want
Test on the CelebA validation set
Inverting 13 attributes respectively. From left to right: Input, Reconstruction, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young
The original TensorFlow version can be found here.
- Python 3
- PyTorch 0.4.0
- TensorboardX
pip3 install -r requirements.txt
If you'd like to train with multiple GPUs, please install PyTorch v0.4.0 instead of v1.0.0 or above. The so-called stable releases of PyTorch have a number of problems with regard to `nn.DataParallel()`, e.g. pytorch/pytorch#15716, pytorch/pytorch#16532, etc.
pip3 install --upgrade torch==0.4.0
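For context, multi-GPU training relies on wrapping a module in `nn.DataParallel`; a minimal sketch of the pattern (the `Linear` stand-in is an assumption for illustration, not the AttGAN generator):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in module for illustration
if torch.cuda.device_count() > 1:
    # Replicates the module and splits each input batch across the visible GPUs
    model = nn.DataParallel(model).cuda()

out = model(torch.randn(4, 10))  # batch of 4 -> output shape (4, 2)
```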
- Dataset
  - CelebA dataset
    - Images should be placed in `./data/img_align_celeba/*.jpg`
    - Attribute labels should be placed in `./data/list_attr_celeba.txt`
  - HD-CelebA (optional)
    - Please see here.
  - CelebA-HQ dataset (optional)
    - Please see here.
    - Images should be placed in `./data/celeba-hq/celeba-*/*.jpg`
    - Image list should be placed in `./data/image_list.txt`
- Pretrained models: download the models you need and unzip the files to `./output/` as below,

  output
  ├── 128_shortcut1_inject0_none
  ├── 128_shortcut1_inject1_none
  ├── 256_shortcut1_inject0_none
  ├── 256_shortcut1_inject1_none
  ├── 256_shortcut1_inject0_none_hq
  ├── 256_shortcut1_inject1_none_hq
  ├── 384_shortcut1_inject0_none_hq
  └── 384_shortcut1_inject1_none_hq
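A quick way to sanity-check that the unzipped checkpoints match the layout above (a hypothetical helper, not part of the repo):

```python
import os

# Experiment folder names from the tree above
EXPECTED_MODELS = [
    "128_shortcut1_inject0_none",
    "128_shortcut1_inject1_none",
    "256_shortcut1_inject0_none",
    "256_shortcut1_inject1_none",
    "256_shortcut1_inject0_none_hq",
    "256_shortcut1_inject1_none_hq",
    "384_shortcut1_inject0_none_hq",
    "384_shortcut1_inject1_none_hq",
]

def missing_models(root="./output"):
    # Return the expected experiment folders that are absent under `root`
    return [d for d in EXPECTED_MODELS
            if not os.path.isdir(os.path.join(root, d))]
```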
To train an AttGAN on CelebA 128x128:
CUDA_VISIBLE_DEVICES=0 \
python train.py \
--img_size 128 \
--shortcut_layers 1 \
--inject_layers 1 \
--experiment_name 128_shortcut1_inject1_none \
--gpu
To train an AttGAN on CelebA-HQ 256x256 with multiple GPUs:
CUDA_VISIBLE_DEVICES=0 \
python train.py \
--data CelebA-HQ \
--img_size 256 \
--shortcut_layers 1 \
--inject_layers 1 \
--experiment_name 256_shortcut1_inject1_none_hq \
--gpu \
--multi_gpu
To visualize training details with TensorBoard:
tensorboard \
--logdir ./output
To test a single attribute:
CUDA_VISIBLE_DEVICES=0 \
python test.py \
--experiment_name 128_shortcut1_inject1_none \
--test_int 1.0 \
--gpu
To test multiple attributes:
CUDA_VISIBLE_DEVICES=0 \
python test_multi.py \
--experiment_name 128_shortcut1_inject1_none \
--test_atts Pale_Skin Male \
--test_ints 0.5 0.5 \
--gpu
For our dataset, you need to create a custom test folder at `Bylevels-AttGAN/data/custom`. Example:
CUDA_VISIBLE_DEVICES=0 \
python3 test_multi.py --experiment_name 128_shortcut1_inject1_none_16000_bytype \
--test_atts Clean Stain_Level_1 \
--test_ints -1 1 \
--gpu \
--custom_img
To test by sliding the attribute intensity:
CUDA_VISIBLE_DEVICES=0 \
python test_slide.py \
--experiment_name 128_shortcut1_inject1_none \
--test_att Male \
--test_int_min -1.0 \
--test_int_max 1.0 \
--n_slide 10 \
--gpu
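`test_slide.py` presumably sweeps the attribute intensity from `--test_int_min` to `--test_int_max` in `--n_slide` evenly spaced steps; the swept values would look roughly like this (an assumption about the sweep, not the script's exact code):

```python
def slide_intensities(lo=-1.0, hi=1.0, n=10):
    # n evenly spaced attribute intensities from lo to hi, endpoints included
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]
```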
To test with custom images:
CUDA_VISIBLE_DEVICES=0 \
python test.py \
--experiment_name 384_shortcut1_inject1_none_hq \
--test_int 1.0 \
--gpu \
--custom_img
Your custom images are supposed to be in `./data/custom`, and you also need an attribute list of the images at `./data/list_attr_custom.txt`. Please crop and resize them into square images in advance.
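The attribute list presumably follows the CelebA `list_attr_celeba.txt` layout: the image count on the first line, the attribute names on the second, then one line per image with ±1 labels. A hypothetical writer for the custom stain attributes (the file layout is an assumption based on the CelebA format):

```python
def write_attr_list(path, attrs, labels):
    # attrs: list of attribute names, e.g. ["Clean", "Stain_Level_1"]
    # labels: {filename: [+1/-1 per attribute, same order as `attrs`]}
    lines = [str(len(labels)), " ".join(attrs)]
    for name, vals in labels.items():
        lines.append(name + " " + " ".join(f"{v:d}" for v in vals))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```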