JiahuiYu / generative_inpainting

DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral

Home Page: http://jiahuiyu.com/deepfill/

Format of 'flist'

AterLuna opened this issue

I want to train the network with a new dataset.
For training, I tried to modify the inpaint.yml file, but I'm not sure how to set the dataset path.
It seems that I have to add the new dataset as an 'flist' file to DATA_FLIST, but I cannot find out how to make appropriate 'flist' files.
Is there any reference for the flist file format, or some examples of it?

Hi AterLuna,
You have to write the code yourself to generate the flist file. Here is my code:

#!/usr/bin/python

import argparse
import os
from random import shuffle

parser = argparse.ArgumentParser()
parser.add_argument('--folder_path', default='./training_data', type=str,
                    help='The folder path')
parser.add_argument('--train_filename', default='./data_flist/train_shuffled.flist', type=str,
                    help='The output filename.')
parser.add_argument('--validation_filename', default='./data_flist/validation_shuffled.flist', type=str,
                    help='The output filename.')
parser.add_argument('--is_shuffled', default=1, type=int,
                    help='Whether to shuffle the file lists (1 = yes, 0 = no)')

if __name__ == "__main__":

    args = parser.parse_args()

    # get the list of directories
    dirs = os.listdir(args.folder_path)
    dirs_name_list = []

    # make 2 lists to save file paths
    training_file_names = []
    validation_file_names = []

    # print all directory names
    for dir_item in dirs:
        # modify to full path -> directory
        dir_item = args.folder_path + "/" + dir_item
        # print(dir_item)

        training_folder = os.listdir(dir_item + "/training")
        for training_item in training_folder:
            training_item = dir_item + "/training" + "/" + training_item
            training_file_names.append(training_item)

        validation_folder = os.listdir(dir_item + "/validation")
        for validation_item in validation_folder:
            validation_item = dir_item + "/validation" + "/" + validation_item
            validation_file_names.append(validation_item)
    # print all file paths
    for i in training_file_names:
        print(i)
    for i in validation_file_names:
        print(i)

    # This would print all the files and directories

    # shuffle file names if set
    if args.is_shuffled == 1:
        shuffle(training_file_names)
        shuffle(validation_file_names)

    # create the output files if they do not exist
    if not os.path.exists(args.train_filename):
        os.mknod(args.train_filename)

    if not os.path.exists(args.validation_filename):
        os.mknod(args.validation_filename)

    # write to file
    fo = open(args.train_filename, "w")
    fo.write("\n".join(training_file_names))
    fo.close()

    fo = open(args.validation_filename, "w")
    fo.write("\n".join(validation_file_names))
    fo.close()

    # print process
    print("Written file is: ", args.train_filename, ", is_shuffle: ", args.is_shuffled)



@TrinhQuocNguyen Thanks for your response. @AterLuna In addition, the format of the file lists is attached below. You can use either absolute or relative paths for training.

/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00027049.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00017547.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00023248.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00029613.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00007055.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00021404.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00008928.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00003579.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00010811.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00014556.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00015131.png
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00015634.png
...
...
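
In other words, an flist file is just a plain text file with one image path per line. A minimal sketch of how such a list can be loaded and sanity-checked (not the repository's exact code, but consistent with the open() call visible in the train.py traceback later in this thread):

import os

def load_flist(flist_path):
    # every non-empty line of the .flist file is one image path
    with open(flist_path) as f:
        fnames = [line.strip() for line in f if line.strip()]
    # warn about listed paths that do not exist on disk
    missing = [p for p in fnames if not os.path.exists(p)]
    if missing:
        print('Warning: {} listed files are missing, e.g. {}'.format(len(missing), missing[0]))
    return fnames

# e.g. fnames = load_flist('data_flist/train_shuffled.flist')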

@TrinhQuocNguyen Thank you for sharing your code. I'll try it with my dataset.
@JiahuiYu Thank you for your example.

@AterLuna Suppose you want to use my code: you need to make a new directory named training_data. In that directory, make two more directories, training and validation, and put your images in those (see the sketch below).
@JiahuiYu You are super welcome, thank you for your contribution.
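
For convenience, the folders described above (plus the ./data_flist output folder that the flist script writes into) can be created up front. A minimal sketch, assuming the default paths used in the script:

import os

# create the input folders expected by the script and the output folder
# for the generated .flist files (os.mknod fails if ./data_flist is missing)
for d in ['training_data/training', 'training_data/validation', 'data_flist']:
    os.makedirs(d, exist_ok=True)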

@TrinhQuocNguyen
In your code, at line 41: for validation_item in training_folder:
maybe it should be for validation_item in validation_folder?

I have modified it a little for anyone who needs it. Moreover, you may apply "data augmentation" and put the results into other folders (folder1, folder2, ...) without it getting messy 😄
The directory tree should look like:

- model_logs
- neuralgym_logs
- training_data
  -- training
    --- <folder1>
    --- <folder2>
    --- .....
  -- validation
    --- <val_folder1>
    --- <val_folder2>
    --- .....
- <this_file.py>
import argparse
import os
from random import shuffle

parser = argparse.ArgumentParser()
parser.add_argument('--folder_path', default='./training_data', type=str,
                    help='The folder path')
parser.add_argument('--train_filename', default='./data_flist/train_shuffled.flist', type=str,
                    help='The train filename.')
parser.add_argument('--validation_filename', default='./data_flist/validation_shuffled.flist', type=str,
                    help='The validation filename.')
parser.add_argument('--is_shuffled', default=1, type=int,
                    help='Whether to shuffle the file lists (1 = yes, 0 = no)')

if __name__ == "__main__":

    args = parser.parse_args()

    # get the list of directories and separate them into 2 types: training and validation
    training_dirs = os.listdir(args.folder_path + "/training")
    validation_dirs = os.listdir(args.folder_path + "/validation")

    # make 2 lists to save file paths
    training_file_names = []
    validation_file_names = []

    # append all files into 2 lists
    for training_dir in training_dirs:
        # append each file into the list file names
        training_folder = os.listdir(args.folder_path + "/training" + "/" + training_dir)
        for training_item in training_folder:
            # modify to full path -> directory
            training_item = args.folder_path + "/training" + "/" + training_dir + "/" + training_item
            training_file_names.append(training_item)

    # append all files into 2 lists
    for validation_dir in validation_dirs:
        # append each file into the list file names
        validation_folder = os.listdir(args.folder_path + "/validation" + "/" + validation_dir)
        for validation_item in validation_folder:
            # modify to full path -> directory
            validation_item = args.folder_path + "/validation" + "/" + validation_dir + "/" + validation_item
            validation_file_names.append(validation_item)

    # print all file paths
    for i in training_file_names:
        print(i)
    for i in validation_file_names:
        print(i)

    # shuffle file names if set
    if args.is_shuffled == 1:
        shuffle(training_file_names)
        shuffle(validation_file_names)

    # create the output files if they do not exist
    if not os.path.exists(args.train_filename):
        os.mknod(args.train_filename)

    if not os.path.exists(args.validation_filename):
        os.mknod(args.validation_filename)

    # write to file
    fo = open(args.train_filename, "w")
    fo.write("\n".join(training_file_names))
    fo.close()

    fo = open(args.validation_filename, "w")
    fo.write("\n".join(validation_file_names))
    fo.close()

    # print process
    print("Written file is: ", args.train_filename, ", is_shuffle: ", args.is_shuffled)


I created the folder tree, put the images in the folders, and launched the file with your code, and it returned this:

./training_data/training/folder1/asd.jpg
./training_data/training/folder6/vdikkj.jpeg
./training_data/training/folder4/vdb.jpeg
./training_data/training/folder2/images.jpeg
./training_data/training/folder3/scv.jpeg
./training_data/training/folder5/waq.jpeg
./training_data/training/folder8/123.jpeg
./training_data/training/folder9/das.jpeg
./training_data/training/folder7/index.jpeg
./training_data/validation/val_folder6/vdikkj.jpeg
./training_data/validation/val_folder9/das.jpeg
./training_data/validation/val_folder1/asd.jpg
./training_data/validation/val_folder2/images.jpeg
./training_data/validation/val_folder7/index.jpeg
./training_data/validation/val_folder3/scv.jpeg
./training_data/validation/val_folder5/waq.jpeg
./training_data/validation/val_folder8/123.jpeg
./training_data/validation/val_folder4/vdb.jpeg
Traceback (most recent call last):
File "this_file.py", line 58, in
os.mknod(args.train_filename)
FileNotFoundError: [Errno 2] No such file or directory

Can you help me, please?

@ChiediVenia Please make sure that your input file exists. Normally it can be fixed by a careful check.

Congratulations on the work, and thanks for the reply.
What should the input file be?
I have only created the folders following the outline and put the images in the subfolders. I haven't created any other files or folders.

@ChiediVenia I wonder if you already have the following files and folders.

parser.add_argument('--folder_path', default='./training_data', type=str,
                    help='The folder path')
parser.add_argument('--train_filename', default='./data_flist/train_shuffled.flist', type=str,
                    help='The output filename.')
parser.add_argument('--validation_filename', default='./data_flist/validation_shuffled.flist', type=str,
                    help='The output filename.')

Thank you for your help

os.mknod is not available to (non-root) users on macOS; the solution can be found in "Replaces os.mknod with portable equivalent".

In short, replace os.mknod with os.open, like this:

os.open(args.train_filename, os.O_CREAT)
# os.mknod(args.train_filename)
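
A small caveat with that replacement: os.open returns a file descriptor, so it is cleaner to close it afterwards (or to use the built-in open instead). A minimal sketch, not necessarily the linked change's exact code:

# portable "create the file if it does not exist", closing the descriptor afterwards
fd = os.open(args.train_filename, os.O_CREAT)
os.close(fd)
# or simply:
# open(args.train_filename, 'a').close()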

@TerminatorSd I appreciate your information on this issue!

I have generated the flist file according to the previous Q&A and changed its path in inpaint.yml, but the output still seems to be incorrect. I hope you can help.
$ python train.py
[2019-03-09 09:16:41 @__init__.py:79] Set root logger. Unset logger with neuralgym.unset_logger().
[2019-03-09 09:16:41 @__init__.py:80] Saving logging to file: neuralgym_logs/20190309091641423503.
[2019-03-09 09:16:45 @config.py:92] ---------------------------------- APP CONFIG ----------------------------------
[2019-03-09 09:16:45 @config.py:119] DATASET: celebahq
[2019-03-09 09:16:45 @config.py:119] RANDOM_CROP: False
[2019-03-09 09:16:45 @config.py:119] VAL: False
[2019-03-09 09:16:45 @config.py:119] LOG_DIR: full_model_celeba_hq_256
[2019-03-09 09:16:45 @config.py:119] MODEL_RESTORE:
[2019-03-09 09:16:45 @config.py:119] GAN: wgan_gp
[2019-03-09 09:16:45 @config.py:119] PRETRAIN_COARSE_NETWORK: False
[2019-03-09 09:16:45 @config.py:119] GAN_LOSS_ALPHA: 0.001
[2019-03-09 09:16:45 @config.py:119] WGAN_GP_LAMBDA: 10
[2019-03-09 09:16:45 @config.py:119] COARSE_L1_ALPHA: 1.2
[2019-03-09 09:16:45 @config.py:119] L1_LOSS_ALPHA: 1.2
[2019-03-09 09:16:45 @config.py:119] AE_LOSS_ALPHA: 1.2
[2019-03-09 09:16:45 @config.py:119] GAN_WITH_MASK: False
[2019-03-09 09:16:45 @config.py:119] DISCOUNTED_MASK: True
[2019-03-09 09:16:45 @config.py:119] RANDOM_SEED: False
[2019-03-09 09:16:45 @config.py:119] PADDING: SAME
[2019-03-09 09:16:45 @config.py:119] NUM_GPUS: 1
[2019-03-09 09:16:45 @config.py:119] GPU_ID: -1
[2019-03-09 09:16:45 @config.py:119] TRAIN_SPE: 10000
[2019-03-09 09:16:45 @config.py:119] MAX_ITERS: 1000000
[2019-03-09 09:16:45 @config.py:119] VIZ_MAX_OUT: 10
[2019-03-09 09:16:45 @config.py:119] GRADS_SUMMARY: False
[2019-03-09 09:16:45 @config.py:119] GRADIENT_CLIP: False
[2019-03-09 09:16:45 @config.py:119] GRADIENT_CLIP_VALUE: 0.1
[2019-03-09 09:16:45 @config.py:119] VAL_PSTEPS: 1000
[2019-03-09 09:16:45 @config.py:111] DATA_FLIST:
[2019-03-09 09:16:45 @config.py:119] celebahq: ['data/celeba_hq/train_shuffled.flist', 'data/celeba_hq/validation_static_view.flist']
[2019-03-09 09:16:45 @config.py:119] celeba: ['data/celeba/train_shuffled.flist', 'data/celeba/validation_static_view.flist']
[2019-03-09 09:16:45 @config.py:119] places2: ['data/places2/train_shuffled.flist', 'data/places2/validation_static_view.flist']
[2019-03-09 09:16:45 @config.py:119] imagenet: ['data/imagenet/train_shuffled.flist', 'data/imagenet/validation_static_view.flist']
[2019-03-09 09:16:45 @config.py:119] STATIC_VIEW_SIZE: 30
[2019-03-09 09:16:45 @config.py:119] IMG_SHAPES: [256, 256, 3]
[2019-03-09 09:16:45 @config.py:119] HEIGHT: 128
[2019-03-09 09:16:45 @config.py:119] WIDTH: 128
[2019-03-09 09:16:45 @config.py:119] MAX_DELTA_HEIGHT: 32
[2019-03-09 09:16:45 @config.py:119] MAX_DELTA_WIDTH: 32
[2019-03-09 09:16:45 @config.py:119] BATCH_SIZE: 16
[2019-03-09 09:16:45 @config.py:119] VERTICAL_MARGIN: 0
[2019-03-09 09:16:45 @config.py:119] HORIZONTAL_MARGIN: 0
[2019-03-09 09:16:45 @config.py:119] AE_LOSS: True
[2019-03-09 09:16:45 @config.py:119] L1_LOSS: True
[2019-03-09 09:16:45 @config.py:119] GLOBAL_DCGAN_LOSS_ALPHA: 1.0
[2019-03-09 09:16:45 @config.py:119] GLOBAL_WGAN_LOSS_ALPHA: 1.0
[2019-03-09 09:16:45 @config.py:119] LOAD_VGG_MODEL: False
[2019-03-09 09:16:45 @config.py:119] VGG_MODEL_FILE: data/model_zoo/vgg16.npz
[2019-03-09 09:16:45 @config.py:119] FEATURE_LOSS: False
[2019-03-09 09:16:45 @config.py:119] GRAMS_LOSS: False
[2019-03-09 09:16:45 @config.py:119] TV_LOSS: False
[2019-03-09 09:16:45 @config.py:119] TV_LOSS_ALPHA: 0.0
[2019-03-09 09:16:45 @config.py:119] FEATURE_LOSS_ALPHA: 0.01
[2019-03-09 09:16:45 @config.py:119] GRAMS_LOSS_ALPHA: 50
[2019-03-09 09:16:45 @config.py:119] SPATIAL_DISCOUNTING_GAMMA: 0.9
[2019-03-09 09:16:45 @config.py:94] --------------------------------------------------------------------------------
/bin/sh: nvidia-smi: command not found
[2019-03-09 09:16:45 @gpus.py:39] Error reading GPU information, set no GPU.
Traceback (most recent call last):
File "train.py", line 39, in
with open(config.DATA_FLIST[config.DATASET][0]) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/celeba_hq/train_shuffled.flist'
(ysn) [409@mu01 generative_inpainting]$ python train.py
[2019-03-09 09:17:51 @__init__.py:79] Set root logger. Unset logger with neuralgym.unset_logger().
[2019-03-09 09:17:51 @__init__.py:80] Saving logging to file: neuralgym_logs/20190309091751386103.
[... APP CONFIG output identical to the first run ...]
/bin/sh: nvidia-smi: command not found
[2019-03-09 09:17:55 @gpus.py:39] Error reading GPU information, set no GPU.
Traceback (most recent call last):
File "train.py", line 39, in
with open(config.DATA_FLIST[config.DATASET][0]) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/celeba_hq/train_shuffled.flist'
(ysn) [409@mu01 generative_inpainting]$ python train.py
[2019-03-09 09:18:24 @__init__.py:79] Set root logger. Unset logger with neuralgym.unset_logger().
[2019-03-09 09:18:24 @__init__.py:80] Saving logging to file: neuralgym_logs/20190309091824142401.
[2019-03-09 09:18:28 @config.py:92] ---------------------------------- APP CONFIG ----------------------------------
[2019-03-09 09:18:28 @config.py:119] DATASET: celebahq
[2019-03-09 09:18:28 @config.py:119] RANDOM_CROP: False
[2019-03-09 09:18:28 @config.py:119] VAL: False
[2019-03-09 09:18:28 @config.py:119] LOG_DIR: full_model_celeba_hq_256
[2019-03-09 09:18:28 @config.py:119] MODEL_RESTORE:
[2019-03-09 09:18:28 @config.py:119] GAN: wgan_gp
[2019-03-09 09:18:28 @config.py:119] PRETRAIN_COARSE_NETWORK: False
[2019-03-09 09:18:28 @config.py:119] GAN_LOSS_ALPHA: 0.001
[2019-03-09 09:18:28 @config.py:119] WGAN_GP_LAMBDA: 10
[2019-03-09 09:18:28 @config.py:119] COARSE_L1_ALPHA: 1.2
[2019-03-09 09:18:28 @config.py:119] L1_LOSS_ALPHA: 1.2
[2019-03-09 09:18:28 @config.py:119] AE_LOSS_ALPHA: 1.2
[2019-03-09 09:18:28 @config.py:119] GAN_WITH_MASK: False
[2019-03-09 09:18:28 @config.py:119] DISCOUNTED_MASK: True
[2019-03-09 09:18:28 @config.py:119] RANDOM_SEED: False
[2019-03-09 09:18:28 @config.py:119] PADDING: SAME
[2019-03-09 09:18:28 @config.py:119] NUM_GPUS: 1
[2019-03-09 09:18:28 @config.py:119] GPU_ID: -1
[2019-03-09 09:18:28 @config.py:119] TRAIN_SPE: 10000
[2019-03-09 09:18:28 @config.py:119] MAX_ITERS: 1000000
[2019-03-09 09:18:28 @config.py:119] VIZ_MAX_OUT: 10
[2019-03-09 09:18:28 @config.py:119] GRADS_SUMMARY: False
[2019-03-09 09:18:28 @config.py:119] GRADIENT_CLIP: False
[2019-03-09 09:18:28 @config.py:119] GRADIENT_CLIP_VALUE: 0.1
[2019-03-09 09:18:28 @config.py:119] VAL_PSTEPS: 1000
[2019-03-09 09:18:28 @config.py:111] DATA_FLIST:
[2019-03-09 09:18:28 @config.py:119] celebahq: ['data_flist/train_shuffled.flist', 'data_flist/validation_static_view.flist']
[2019-03-09 09:18:28 @config.py:119] celeba: ['data/celeba/train_shuffled.flist', 'data/celeba/validation_static_view.flist']
[2019-03-09 09:18:28 @config.py:119] places2: ['data/places2/train_shuffled.flist', 'data/places2/validation_static_view.flist']
[2019-03-09 09:18:28 @config.py:119] imagenet: ['data/imagenet/train_shuffled.flist', 'data/imagenet/validation_static_view.flist']
[2019-03-09 09:18:28 @config.py:119] STATIC_VIEW_SIZE: 30
[2019-03-09 09:18:28 @config.py:119] IMG_SHAPES: [256, 256, 3]
[2019-03-09 09:18:28 @config.py:119] HEIGHT: 128
[2019-03-09 09:18:28 @config.py:119] WIDTH: 128
[2019-03-09 09:18:28 @config.py:119] MAX_DELTA_HEIGHT: 32
[2019-03-09 09:18:28 @config.py:119] MAX_DELTA_WIDTH: 32
[2019-03-09 09:18:28 @config.py:119] BATCH_SIZE: 16
[2019-03-09 09:18:28 @config.py:119] VERTICAL_MARGIN: 0
[2019-03-09 09:18:28 @config.py:119] HORIZONTAL_MARGIN: 0
[2019-03-09 09:18:28 @config.py:119] AE_LOSS: True
[2019-03-09 09:18:28 @config.py:119] L1_LOSS: True
[2019-03-09 09:18:28 @config.py:119] GLOBAL_DCGAN_LOSS_ALPHA: 1.0
[2019-03-09 09:18:28 @config.py:119] GLOBAL_WGAN_LOSS_ALPHA: 1.0
[2019-03-09 09:18:28 @config.py:119] LOAD_VGG_MODEL: False
[2019-03-09 09:18:28 @config.py:119] VGG_MODEL_FILE: data/model_zoo/vgg16.npz
[2019-03-09 09:18:28 @config.py:119] FEATURE_LOSS: False
[2019-03-09 09:18:28 @config.py:119] GRAMS_LOSS: False
[2019-03-09 09:18:28 @config.py:119] TV_LOSS: False
[2019-03-09 09:18:28 @config.py:119] TV_LOSS_ALPHA: 0.0
[2019-03-09 09:18:28 @config.py:119] FEATURE_LOSS_ALPHA: 0.01
[2019-03-09 09:18:28 @config.py:119] GRAMS_LOSS_ALPHA: 50
[2019-03-09 09:18:28 @config.py:119] SPATIAL_DISCOUNTING_GAMMA: 0.9
[2019-03-09 09:18:28 @config.py:94] --------------------------------------------------------------------------------
/bin/sh: nvidia-smi: command not found
[2019-03-09 09:18:28 @gpus.py:39] Error reading GPU information, set no GPU.
[2019-03-09 09:19:59 @dataset.py:26] --------------------------------- Dataset Info ---------------------------------
[2019-03-09 09:19:59 @dataset.py:36] file_length: 245047716
[2019-03-09 09:19:59 @dataset.py:36] random: False
[2019-03-09 09:19:59 @dataset.py:36] random_crop: False
[2019-03-09 09:19:59 @dataset.py:36] filetype: image
[2019-03-09 09:19:59 @dataset.py:36] shapes: [[256, 256, 3]]
[2019-03-09 09:19:59 @dataset.py:36] dtypes: [tf.float32]
[2019-03-09 09:19:59 @dataset.py:36] return_fnames: False
[2019-03-09 09:19:59 @dataset.py:36] batch_phs: [<tf.Tensor 'Placeholder:0' shape=(?, 256, 256, 3) dtype=float32>]
[2019-03-09 09:19:59 @dataset.py:36] enqueue_size: 32
[2019-03-09 09:19:59 @dataset.py:36] queue_size: 256
[2019-03-09 09:19:59 @dataset.py:36] nthreads: 16
[2019-03-09 09:19:59 @dataset.py:36] fn_preprocess: None
[2019-03-09 09:19:59 @dataset.py:36] index: 0
[2019-03-09 09:19:59 @dataset.py:37] --------------------------------------------------------------------------------
[2019-03-09 09:20:12 @inpaint_model.py:159] Set batch_predicted to x2.
[2019-03-09 09:20:12 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:20:12 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:20:22 @inpaint_model.py:241] Set L1_LOSS_ALPHA to 1.200000
[2019-03-09 09:20:22 @inpaint_model.py:242] Set GAN_LOSS_ALPHA to 0.001000
[2019-03-09 09:20:22 @inpaint_model.py:245] Set AE_LOSS_ALPHA to 1.200000
[2019-03-09 09:20:33 @inpaint_model.py:159] Set batch_predicted to x2.
[2019-03-09 09:20:33 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:20:33 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:20:34 @inpaint_model.py:241] Set L1_LOSS_ALPHA to 1.200000
[2019-03-09 09:20:34 @inpaint_model.py:242] Set GAN_LOSS_ALPHA to 0.001000
[2019-03-09 09:20:34 @inpaint_model.py:245] Set AE_LOSS_ALPHA to 1.200000
[2019-03-09 09:20:35 @trainer.py:61] ------------------------- Context Of Secondary Trainer -------------------------
[2019-03-09 09:20:35 @trainer.py:63] optimizer: <tensorflow.python.training.adam.AdamOptimizer object at 0x2ad6aa020e10>
[2019-03-09 09:20:35 @trainer.py:63] var_list: [<tf.Variable 'discriminator/discriminator_local/conv1/kernel:0' shape=(5, 5, 3, 64) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv1/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv2/kernel:0' shape=(5, 5, 64, 128) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv2/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv3/kernel:0' shape=(5, 5, 128, 256) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv3/bias:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv4/kernel:0' shape=(5, 5, 256, 512) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_local/conv4/bias:0' shape=(512,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv1/kernel:0' shape=(5, 5, 3, 64) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv1/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv2/kernel:0' shape=(5, 5, 64, 128) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv2/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv3/kernel:0' shape=(5, 5, 128, 256) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv3/bias:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv4/kernel:0' shape=(5, 5, 256, 256) dtype=float32_ref>, <tf.Variable 'discriminator/discriminator_global/conv4/bias:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'discriminator/dout_local_fc/kernel:0' shape=(32768, 1) dtype=float32_ref>, <tf.Variable 'discriminator/dout_local_fc/bias:0' shape=(1,) dtype=float32_ref>, <tf.Variable 'discriminator/dout_global_fc/kernel:0' shape=(65536, 1) dtype=float32_ref>, <tf.Variable 'discriminator/dout_global_fc/bias:0' shape=(1,) dtype=float32_ref>]
[2019-03-09 09:20:35 @trainer.py:63] graph_def: <function multigpu_graph_def at 0x2acd5c696e18>
[2019-03-09 09:20:35 @trainer.py:63] graph_def_kwargs: {'model': <inpaint_model.InpaintCAModel object at 0x2acd65667358>, 'data': <neuralgym.data.data_from_fnames.DataFromFNames object at 0x2acd969a1f28>, 'config': {}, 'loss_type': 'd'}
[2019-03-09 09:20:35 @trainer.py:63] feed_dict: {}
[2019-03-09 09:20:35 @trainer.py:63] max_iters: 5
[2019-03-09 09:20:35 @trainer.py:63] log_dir: /tmp/neuralgym
[2019-03-09 09:20:35 @trainer.py:63] spe: 1
[2019-03-09 09:20:35 @trainer.py:63] grads_summary: True
[2019-03-09 09:20:35 @trainer.py:63] log_progress: False
[2019-03-09 09:20:35 @trainer.py:64] --------------------------------------------------------------------------------
[2019-03-09 09:20:46 @inpaint_model.py:159] Set batch_predicted to x2.
[2019-03-09 09:20:46 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:20:46 @inpaint_ops.py:201] Use spatial discounting l1 loss.
[2019-03-09 09:21:12 @inpaint_model.py:241] Set L1_LOSS_ALPHA to 1.200000
[2019-03-09 09:21:12 @inpaint_model.py:242] Set GAN_LOSS_ALPHA to 0.001000
[2019-03-09 09:21:12 @inpaint_model.py:245] Set AE_LOSS_ALPHA to 1.200000
2019-03-09 09:21:25.130822: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-03-09 09:21:25.991171: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[2019-03-09 09:21:40 @data_from_fnames.py:153] image is None, sleep this thread for 0.1s.
(the same "image is None" message is repeated for each of the 16 loading threads)
Exception in thread Thread-10:
Traceback (most recent call last):
File "/home/409/anaconda3/envs/ysn/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/409/anaconda3/envs/ysn/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/409/anaconda3/envs/ysn/lib/python3.6/site-packages/neuralgym/data/feeding_queue_runner.py", line 194, in _run
data = func()
File "/home/409/anaconda3/envs/ysn/lib/python3.6/site-packages/neuralgym/data/data_from_fnames.py", line 143, in
feed_dict_op=[lambda: self.next_batch()],
File "/home/409/anaconda3/envs/ysn/lib/python3.6/site-packages/neuralgym/data/data_from_fnames.py", line 182, in next_batch
img = cv2.resize(img, tuple(self.shapes[i][:-1][::-1]))
cv2.error: OpenCV(4.0.0) /io/opencv/modules/imgproc/src/resize.cpp:3784: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

(the same traceback is repeated for each of the remaining loading threads, Thread-2 through Thread-17)

[2019-03-09 09:21:43 @trainer.py:59] -------------------------- Context Of Primary Trainer --------------------------
[2019-03-09 09:21:43 @trainer.py:63] optimizer: <tensorflow.python.training.adam.AdamOptimizer object at 0x2ad6aa020e10>
[2019-03-09 09:21:43 @trainer.py:63] var_list: [<tf.Variable 'inpaint_net/conv1/kernel:0' shape=(5, 5, 5, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv1/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv2_downsample/kernel:0' shape=(3, 3, 32, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv2_downsample/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv3/kernel:0' shape=(3, 3, 64, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv3/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv4_downsample/kernel:0' shape=(3, 3, 64, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv4_downsample/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv5/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv5/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv6/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv6/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv7_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv7_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv8_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv8_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv9_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv9_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv10_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv10_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv11/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv11/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv12/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv12/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv13_upsample/conv13_upsample_conv/kernel:0' shape=(3, 3, 128, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv13_upsample/conv13_upsample_conv/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv14/kernel:0' shape=(3, 3, 64, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv14/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv15_upsample/conv15_upsample_conv/kernel:0' shape=(3, 3, 64, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv15_upsample/conv15_upsample_conv/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv16/kernel:0' shape=(3, 3, 32, 16) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv16/bias:0' shape=(16,) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv17/kernel:0' shape=(3, 3, 16, 3) dtype=float32_ref>, <tf.Variable 'inpaint_net/conv17/bias:0' shape=(3,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv1/kernel:0' shape=(5, 5, 5, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv1/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv2_downsample/kernel:0' shape=(3, 3, 32, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv2_downsample/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv3/kernel:0' shape=(3, 3, 32, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv3/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv4_downsample/kernel:0' shape=(3, 3, 64, 64) dtype=float32_ref>, <tf.Variable 
'inpaint_net/xconv4_downsample/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv5/kernel:0' shape=(3, 3, 64, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv5/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv6/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv6/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv7_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv7_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv8_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv8_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv9_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv9_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv10_atrous/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/xconv10_atrous/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv1/kernel:0' shape=(5, 5, 5, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv1/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv2_downsample/kernel:0' shape=(3, 3, 32, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv2_downsample/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv3/kernel:0' shape=(3, 3, 32, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv3/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv4_downsample/kernel:0' shape=(3, 3, 64, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv4_downsample/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv5/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv5/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv6/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv6/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv9/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv9/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv10/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/pmconv10/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv11/kernel:0' shape=(3, 3, 256, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv11/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv12/kernel:0' shape=(3, 3, 128, 128) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv12/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv13_upsample/allconv13_upsample_conv/kernel:0' shape=(3, 3, 128, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv13_upsample/allconv13_upsample_conv/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv14/kernel:0' shape=(3, 3, 64, 64) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv14/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv15_upsample/allconv15_upsample_conv/kernel:0' shape=(3, 3, 64, 32) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv15_upsample/allconv15_upsample_conv/bias:0' shape=(32,) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv16/kernel:0' shape=(3, 3, 32, 16) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv16/bias:0' shape=(16,) dtype=float32_ref>, <tf.Variable 
'inpaint_net/allconv17/kernel:0' shape=(3, 3, 16, 3) dtype=float32_ref>, <tf.Variable 'inpaint_net/allconv17/bias:0' shape=(3,) dtype=float32_ref>]
[2019-03-09 09:21:43 @trainer.py:63] graph_def: <function multigpu_graph_def at 0x2acd5c696e18>
[2019-03-09 09:21:43 @trainer.py:63] gradient_processor: None
[2019-03-09 09:21:43 @trainer.py:63] graph_def_kwargs: {'model': <inpaint_model.InpaintCAModel object at 0x2acd65667358>, 'data': <neuralgym.data.data_from_fnames.DataFromFNames object at 0x2acd969a1f28>, 'config': {}, 'loss_type': 'g'}
[2019-03-09 09:21:43 @trainer.py:63] feed_dict: {}
[2019-03-09 09:21:43 @trainer.py:63] max_iters: 1000000
[2019-03-09 09:21:43 @trainer.py:63] log_dir: model_logs/20190309092022480276_mu01_celebahq_NORMAL_wgan_gp_full_model_celeba_hq_256
[2019-03-09 09:21:43 @trainer.py:63] spe: 10000
[2019-03-09 09:21:43 @trainer.py:63] grads_summary: False
[2019-03-09 09:21:43 @trainer.py:63] log_progress: True
[2019-03-09 09:21:43 @trainer.py:63] global_step: <tf.Variable 'global_step:0' shape=() dtype=int32_ref>
[2019-03-09 09:21:43 @trainer.py:63] global_step_add_one: Tensor("add_one_to_global_step:0", shape=(), dtype=int32_ref)
[2019-03-09 09:21:43 @trainer.py:63] sess_config: gpu_options {
allow_growth: true
}
allow_soft_placement: true

[2019-03-09 09:21:43 @trainer.py:63] sess: <tensorflow.python.client.session.Session object at 0x2ad6b1460f28>
[2019-03-09 09:21:43 @trainer.py:63] summary_writer: <tensorflow.python.summary.writer.writer.FileWriter object at 0x2ad6b1460f60>
[2019-03-09 09:21:43 @trainer.py:63] saver: <tensorflow.python.training.saver.Saver object at 0x2ad6b1470438>
[2019-03-09 09:21:43 @trainer.py:63] start_queue_runners: True
[2019-03-09 09:21:43 @trainer.py:63] global_variables_initializer: True
[2019-03-09 09:21:43 @trainer.py:64] --------------------------------------------------------------------------------
[2019-03-09 09:21:44 @logger.py:43] Trigger callback: Trigger WeightsViewer: logging model weights...
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv1/kernel:0, shape: [5, 5, 5, 32], size: 4000
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv1/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv2_downsample/kernel:0, shape: [3, 3, 32, 64], size: 18432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv2_downsample/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv3/kernel:0, shape: [3, 3, 64, 64], size: 36864
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv3/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv4_downsample/kernel:0, shape: [3, 3, 64, 128], size: 73728
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv4_downsample/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv5/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv5/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv6/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv6/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv7_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv7_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv8_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv8_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv9_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv9_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv10_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv10_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv11/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv11/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv12/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv12/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv13_upsample/conv13_upsample_conv/kernel:0, shape: [3, 3, 128, 64], size: 73728
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv13_upsample/conv13_upsample_conv/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv14/kernel:0, shape: [3, 3, 64, 64], size: 36864
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv14/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv15_upsample/conv15_upsample_conv/kernel:0, shape: [3, 3, 64, 32], size: 18432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv15_upsample/conv15_upsample_conv/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv16/kernel:0, shape: [3, 3, 32, 16], size: 4608
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv16/bias:0, shape: [16], size: 16
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv17/kernel:0, shape: [3, 3, 16, 3], size: 432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/conv17/bias:0, shape: [3], size: 3
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv1/kernel:0, shape: [5, 5, 5, 32], size: 4000
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv1/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv2_downsample/kernel:0, shape: [3, 3, 32, 32], size: 9216
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv2_downsample/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv3/kernel:0, shape: [3, 3, 32, 64], size: 18432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv3/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv4_downsample/kernel:0, shape: [3, 3, 64, 64], size: 36864
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv4_downsample/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv5/kernel:0, shape: [3, 3, 64, 128], size: 73728
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv5/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv6/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv6/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv7_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv7_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv8_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv8_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv9_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv9_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv10_atrous/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/xconv10_atrous/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv1/kernel:0, shape: [5, 5, 5, 32], size: 4000
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv1/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv2_downsample/kernel:0, shape: [3, 3, 32, 32], size: 9216
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv2_downsample/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv3/kernel:0, shape: [3, 3, 32, 64], size: 18432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv3/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv4_downsample/kernel:0, shape: [3, 3, 64, 128], size: 73728
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv4_downsample/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv5/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv5/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv6/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv6/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv9/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv9/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv10/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/pmconv10/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv11/kernel:0, shape: [3, 3, 256, 128], size: 294912
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv11/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv12/kernel:0, shape: [3, 3, 128, 128], size: 147456
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv12/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv13_upsample/allconv13_upsample_conv/kernel:0, shape: [3, 3, 128, 64], size: 73728
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv13_upsample/allconv13_upsample_conv/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv14/kernel:0, shape: [3, 3, 64, 64], size: 36864
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv14/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv15_upsample/allconv15_upsample_conv/kernel:0, shape: [3, 3, 64, 32], size: 18432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv15_upsample/allconv15_upsample_conv/bias:0, shape: [32], size: 32
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv16/kernel:0, shape: [3, 3, 32, 16], size: 4608
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv16/bias:0, shape: [16], size: 16
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv17/kernel:0, shape: [3, 3, 16, 3], size: 432
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: inpaint_net/allconv17/bias:0, shape: [3], size: 3
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv1/kernel:0, shape: [5, 5, 3, 64], size: 4800
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv1/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv2/kernel:0, shape: [5, 5, 64, 128], size: 204800
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv2/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv3/kernel:0, shape: [5, 5, 128, 256], size: 819200
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv3/bias:0, shape: [256], size: 256
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv4/kernel:0, shape: [5, 5, 256, 512], size: 3276800
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_local/conv4/bias:0, shape: [512], size: 512
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv1/kernel:0, shape: [5, 5, 3, 64], size: 4800
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv1/bias:0, shape: [64], size: 64
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv2/kernel:0, shape: [5, 5, 64, 128], size: 204800
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv2/bias:0, shape: [128], size: 128
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv3/kernel:0, shape: [5, 5, 128, 256], size: 819200
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv3/bias:0, shape: [256], size: 256
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv4/kernel:0, shape: [5, 5, 256, 256], size: 1638400
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/discriminator_global/conv4/bias:0, shape: [256], size: 256
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/dout_local_fc/kernel:0, shape: [32768, 1], size: 32768
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/dout_local_fc/bias:0, shape: [1], size: 1
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/dout_global_fc/kernel:0, shape: [65536, 1], size: 65536
[2019-03-09 09:21:44 @weights_viewer.py:43] - weight name: discriminator/dout_global_fc/bias:0, shape: [1], size: 1
[2019-03-09 09:21:44 @logger.py:43] Trigger callback: Total counts of trainable weights: 10674312.
[2019-03-09 09:21:44 @weights_viewer.py:60] Total size of trainable weights: 0G 10M 184K 136B (Assuming32-bit data type.)

I shall be very grateful to you

@YangSN0719 Hi, most likely your file list is incorrect. The error means no image file could be found at the paths in the file list you provided. Please check carefully. One suggestion is to use absolute paths in the file list.
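
As a quick sanity check (not part of the repository), a few lines of Python can verify that every entry in a generated flist actually points to an existing image. The script name and the default path below are placeholders:

#!/usr/bin/python
# check_flist.py -- hypothetical helper, not part of the repository.
# Verifies that every line in an flist points to an existing file.
import os
import sys

flist_path = sys.argv[1] if len(sys.argv) > 1 else './data_flist/train_shuffled.flist'

with open(flist_path) as f:
    paths = [line.strip() for line in f if line.strip()]

missing = [p for p in paths if not os.path.isfile(p)]
print('checked %d paths, %d missing' % (len(paths), len(missing)))
for p in missing[:10]:
    print('missing:', p)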

Thank you very much for your reply. As a beginner, I am very sorry to disturb you. I will check your source file again @JiahuiYu

@TrinhQuocNguyen
First of all, thank you very much for your source code, but the flist file I generated is unusually large even though there are not many pictures; I only selected 5000 images for training and 300 for testing. A screenshot of the generated file is below. I would appreciate your reply.
[screenshot of the generated flist file]

@TrinhQuocNguyen Sorry, as a beginner I don't mean to disturb you. I will look at your source file again. Could you send me your inpaint.yml file for reference? Thank you very much.

@YangSN0719 I think you need a line break between different image files, right? Each image file occupies one line.

That is what I think: you need a line break between image files, just as the author has shown:

/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00027049.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00017547.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00023248.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00029613.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00007055.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00021404.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00008928.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00003579.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00010811.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00014556.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00015131.png                                                                                                                                                                                                               
/home/jiahui.yu/data/celeba_hq/celeba_hq_images/img00015634.png
...
...

@TrinhQuocNguyen @JiahuiYu
Thank you very much for your reply. My problem has been solved and training is underway.
Please refer to the following link:
https://blog.csdn.net/Gavinmiaoc/article/details/81250782

Hi, how many validation images should I use in the training process? Do they have any influence on training?

@lx120 The number of validation images does not affect training. They are used only for validation.

Please, I need help training your model.

I get an error while training.

How many channels should the training images have? 3 or 4?

Thank you very much for your source code. My question is: does the training input need three paired images (a raw image with the mask applied, a mask, and the inpainted image)? It seems the file list you provided contains only one type. What should I do if I want to train with such three-image inputs?

It's OK, the mask is auto-generated in the code. Thanks.
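
For context on what "auto-generated" means here: during training the code samples a random rectangular hole as the mask, so the flist only needs the raw images. The snippet below is a minimal NumPy sketch of that idea with assumed sizes (256x256 images, 128x128 holes); it is only an illustration, not the repository's actual inpaint_ops.py implementation:

import numpy as np

def random_rect_mask(height=256, width=256, hole_h=128, hole_w=128):
    # Pick a random top-left corner so the hole fits inside the image.
    top = np.random.randint(0, height - hole_h + 1)
    left = np.random.randint(0, width - hole_w + 1)
    mask = np.zeros((height, width, 1), dtype=np.float32)
    mask[top:top + hole_h, left:left + hole_w, :] = 1.0  # 1 marks the missing region
    return mask

# Example: a 256x256 mask with a 128x128 hole.
m = random_rect_mask()
print(m.shape, m.sum())  # (256, 256, 1) 16384.0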

@TrinhQuocNguyen Thank you for sharing your codes. I'll try it with my dataset.
@JiahuiYu Thank you for your example.

Dear author, I want to ask whether I need to prepare mask images for training. Thank you.

@TrinhQuocNguyen
A shell command can generate the file list just as well, for example:
find folder/ -name "*.png" | sort > filepath.txt
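
If you prefer to stay in Python (for example, to shuffle the list in the same step), roughly the same thing can be done with glob. The folder name and output path below are placeholders:

import glob
from random import shuffle

# Collect all PNGs under folder/ (recursively) and write one path per line.
paths = sorted(glob.glob('folder/**/*.png', recursive=True))
shuffle(paths)  # optional, matches the --is_shuffled behaviour of the script above

with open('filepath.txt', 'w') as f:
    f.write('\n'.join(paths))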

Must it be divided into a training set and a validation set?

I wrote something that is easy to modify.

from random import shuffle
import os

train_filename = 'G:/image_inpainting/train_shuffled.flist'
validation_filename = 'G:/image_inpainting/validation_shuffled.flist'
training_file_names = []
validation_file_names = []
training_path = 'G:/image_inpainting/training'
validation_path = 'G:/image_inpainting/validation'
training_items = os.listdir(training_path)
validation_items = os.listdir(validation_path)

# build full paths (note the '/': plain concatenation would drop the separator
# and produce paths like 'G:/image_inpainting/trainingimg.png')
for training_item in training_items:
    training_file_names.append(training_path + '/' + training_item)

for validation_item in validation_items:
    validation_file_names.append(validation_path + '/' + validation_item)

shuffle(training_file_names)
shuffle(validation_file_names)

# write one path per line
fo = open(train_filename, "w")
fo.write("\n".join(training_file_names))
fo.close()

fo = open(validation_filename, "w")
fo.write("\n".join(validation_file_names))
fo.close()

Hello, I am a newbie.
Where should I place this code: alongside inpaint.yml, or in a new file?

Small detail about the folder structure, in case it helps: the code works as-is if the folder hierarchy looks like this:

project_root/
├─ training_data/
│  ├─ name_of_db/
│  │  ├─ training/
│  │  ├─ validation/

I want to fix my photos, but running inpaint_ops.py doesn't work.
I hope the author can provide more detailed information; I want to learn.

Hi @JiahuiYu, @TrinhQuocNguyen
The flist.txt file only contains paths to the normal images? Where do I put the masks for those images? Thank you.

May I ask which directory the .flist file should be placed in after it is written?

Hello, I ran your code and encountered this error. How can I solve it? Isn't this code supposed to create the .flist file?

Traceback (most recent call last):
File ".\1.py", line 59, in
with open(args.train_filename, 'w') as f:
FileNotFoundError: [Errno 2] No such file or directory: './data_flist/train_shuffled.flist'
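
A likely cause of this error: open() (and os.mknod()) will not create missing parent directories, and ./data_flist does not exist yet in your working directory. A minimal sketch of a fix, assuming the same default output paths as the script above, is to create that folder before writing:

import os

train_filename = './data_flist/train_shuffled.flist'          # same defaults as above
validation_filename = './data_flist/validation_shuffled.flist'

# open(..., 'w') does not create missing directories, so create ./data_flist first.
# exist_ok=True makes this safe to run even if the folder already exists.
for path in (train_filename, validation_filename):
    os.makedirs(os.path.dirname(path), exist_ok=True)

# After this, the existing write code (open(...) / fo.write(...)) will succeed.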