PeterouZh / CIPS-3D

3D-aware GANs based on NeRF (arXiv).

How to train on my own dataset?

sunshineatnoon opened this issue · comments

Hi, thanks for open-sourcing this awesome work. I would like to train the model on my own dataset. So far, I have pre-processed all images to 256x256 using scripts/dataset_tool.py. Here are the issues I ran into when trying to train on my own images:

  • How do I generate the image list? I used the following command to generate one, but I'm not sure it's correct; I never actually saw the datasets/ffhq/ffhq_256.txt file when training on the FFHQ dataset. (I've put a sketch of what I assume the list looks like at the end of this post.)
      python3 -m tl2.tools.get_data_list --source_dir datasets/my_images/downsample_ffhq_256x256/ --outfile datasets/my_images/ffhq_256.txt  --ext *.png
    
  • How do I change the yaml file ffhq_exp.yaml to point to my own dataset directory? (My guess at the relevant section is also sketched after this list.)
  • How do I pass hyperparameters to the model? I tried the training command from the old README:
    export CUDA_HOME=/usr/local/cuda-10.2/
    export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
    export PYTHONPATH=.
    python exp/dev/nerf_inr/scripts/train_v16.py \
        --port 8888 \
        --tl_config_file configs/train_ffhq.yaml \
        --tl_command train_ffhq \
        --tl_outdir results/train_ffhq \
        --tl_opts curriculum.new_attrs.image_list_file datasets/ffhq/images256x256_image_list.txt \
          D_first_layer_warmup True
    
    But I'm not sure how to train on 32x32 images (I'd like a quick tryout) or how to change the batch_size, etc. I looked into the tl2 library but couldn't find any documentation. (My guess at the override syntax is at the end of this post.)
    Thanks for your time; any help would be appreciated!
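
For context on the first question: I'm assuming the list file produced by tl2.tools.get_data_list is just a plain text file with one image path per line. Here is a minimal Python sketch that would produce an equivalent list under that assumption (the format is my guess, not confirmed from the repo):

    # Assumption: the image list is one path per line, like the output of
    # tl2.tools.get_data_list. This is a guess, not taken from the repo.
    from pathlib import Path

    source_dir = Path("datasets/my_images/downsample_ffhq_256x256")
    outfile = Path("datasets/my_images/ffhq_256.txt")

    # Collect every .png under source_dir and write one path per line.
    paths = sorted(str(p) for p in source_dir.rglob("*.png"))
    outfile.write_text("\n".join(paths) + "\n")
    print(f"Wrote {len(paths)} entries to {outfile}")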
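
For the second question, judging only from the dotted key curriculum.new_attrs.image_list_file in the --tl_opts override above, I imagine the yaml contains something like the snippet below. The train_ffhq top-level key (taken from --tl_command) and the surrounding structure are my assumptions, not the actual contents of ffhq_exp.yaml:

    # Hypothetical excerpt: only the dotted key path
    # curriculum.new_attrs.image_list_file is taken from the --tl_opts
    # override above; the rest of the structure is assumed.
    train_ffhq:
      curriculum:
        new_attrs:
          image_list_file: datasets/my_images/ffhq_256.txt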
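
For the third question, --tl_opts already passes two space-separated key/value pairs in the README command, so I assume extra hyperparameters can be appended the same way. Something like the command below, where curriculum.0.img_size and curriculum.0.batch_size are purely hypothetical key names I made up; the real keys would have to come from the yaml:

    # Hypothetical: extra dotted-key overrides appended to --tl_opts.
    # Only image_list_file and D_first_layer_warmup appear in the README;
    # the img_size / batch_size keys are invented for illustration.
    python exp/dev/nerf_inr/scripts/train_v16.py \
        --port 8888 \
        --tl_config_file configs/train_ffhq.yaml \
        --tl_command train_ffhq \
        --tl_outdir results/train_ffhq \
        --tl_opts curriculum.new_attrs.image_list_file datasets/my_images/ffhq_256.txt \
          curriculum.0.img_size 32 \
          curriculum.0.batch_size 16 \
          D_first_layer_warmup True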