timctho / convolutional-pose-machines-tensorflow

About the data loader

momo1986 opened this issue · comments

g = Ensemble_data_generator.ensemble_data_generator(FLAGS.train_img_dir,
                                                    FLAGS.bg_img_dir,
                                                    FLAGS.batch_size, FLAGS.input_size, True, True,
                                                    FLAGS.augmentation_config, FLAGS.hnm, FLAGS.do_cropping)
g_eval = Ensemble_data_generator.ensemble_data_generator(FLAGS.val_img_dir,
                                                         FLAGS.bg_img_dir,
                                                         FLAGS.batch_size, FLAGS.input_size, True, True,
                                                         FLAGS.augmentation_config, FLAGS.hnm, FLAGS.do_cropping)

Hello, thanks for sharing.
I have three questions:

  1. What is the meaning of bg_img_dir?
  2. Do val_img_dir, train_img_dir, and bg_img_dir refer to the tf-records generated by create_cpm_tfr_fulljoints.py?
  3. Since your data_loader is private, can you help define the data type of ensemble_data_generator? Is there a similar API in Keras, MXNet, or TensorFlow that I can refer to?
    Thanks & regards!
    Neo

Hello, @timctho
Hello, Tim.
If you are online, could you help answer the question?
Thanks.

  1. I synthesized data by attaching hands to different background images
  2. Yes
  3. The data generator's goal is to provide numpy data pairs of (image, related_joints) with shapes ([N, H, W, 3], [NumJoints, 2]); you can simply write a Python generator to do that
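Since the original loader is private, here is a minimal sketch of such a generator. The function name and the random arrays are illustrative only; a real loader would decode actual images and read ground-truth joint coordinates (e.g. from the tf-records), and in a batched setting the joints array typically gains a leading N dimension, i.e. [N, NumJoints, 2]:

```python
import numpy as np

def ensemble_data_generator_sketch(image_paths, batch_size=8,
                                   input_size=256, num_joints=21):
    """Yield (images, joints) batches shaped ([N, H, W, 3], [N, NumJoints, 2]).

    Hypothetical stand-in for the private loader: random data replaces
    real decoded images and ground-truth joint coordinates.
    """
    while True:
        images = np.random.rand(batch_size, input_size,
                                input_size, 3).astype(np.float32)
        joints = (np.random.rand(batch_size, num_joints, 2)
                  .astype(np.float32) * input_size)
        yield images, joints

# Usage: pull one batch and check the shapes.
g = ensemble_data_generator_sketch([], batch_size=4)
images, joints = next(g)
# images.shape -> (4, 256, 256, 3), joints.shape -> (4, 21, 2)
```

A generator like this can also be wrapped with tf.data.Dataset.from_generator if you want TensorFlow to handle batching and prefetching.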

Hello, @timctho

  1. For the background images, can I pass None and still run the program? If not, is a label file, like those for the training and validation data-sets, essential for the supplemental background images used in training?
  2. The return value of the data generator is the feature and label, i.e. the image with hands and the corresponding joints, isn't it?
    Thanks!
    Momo

Hi Tim, can you explain the shape in detail? What does N mean? When I run run_training.py, I have a problem with the data import: the shape shows as [?, H, W, 3].

@timctho Hi, if you are online, can you reply to my question? Thank you very much.

@luchen828 @timctho @xiaoyongzhu How can we test an image using our own trained model? It can't load the model (.data-30000/.index/*.meta) when testing. How can we convert it into the pkl format?


When I read the file CPM_hand.py, I found that self.batch_size = tf.cast(tf.shape(self.input_images)[0], dtype=tf.float32), so N is equal to the batch size.
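In other words, N is the leading (batch) dimension of the input tensor; the `?` seen at graph-construction time just means that dimension is not fixed until real data arrives. A plain-numpy illustration, outside the TensorFlow graph:

```python
import numpy as np

# A batch of 4 RGB images of size 256x256: shape [N, H, W, 3] with N = 4.
input_images = np.zeros((4, 256, 256, 3), dtype=np.float32)

# Analogue of tf.cast(tf.shape(input_images)[0], tf.float32):
# the first element of the shape tuple is the batch size N.
batch_size = float(input_images.shape[0])
# batch_size is 4.0
```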