RPM-Robotics-Lab / sRGB-TIR

Repository for synthetic RGB to Thermal Infrared translation module from "Edge-guided multidomain RGB to TIR translation", ICRA 2023 submission

how to process the dataset for training

flynightair opened this issue · comments

Thanks for sharing this wonderful work. I want to train the model for RGB-NIR translation. Is the method suitable for that, and how should I process the dataset?
Looking forward to your reply, thanks.

Hello flynightair!
Thanks for your interest in our work.

Sorry for the delayed response; I had to attend multiple conferences in a row (including ICRA 2023 in London), so I was not able to answer questions promptly.

For custom data training, first make four folders named "TrainA", "TrainB", "TestA", and "TestB". Put the data you want to translate into the corresponding folder, e.g. TrainA should contain the RGB training images and TrainB the NIR training images.
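The folder setup above can be sketched in a couple of shell commands. The `datasets/tir2rgb/` root matches the `data_root` mentioned later in this thread; the source paths in the commented lines are placeholders for wherever your RGB and NIR images actually live.

```shell
# Create the four dataset folders (TrainA/TrainB/TestA/TestB)
# under the datasets/tir2rgb/ root used as data_root in the config.
mkdir -p datasets/tir2rgb/TrainA datasets/tir2rgb/TrainB \
         datasets/tir2rgb/TestA  datasets/tir2rgb/TestB

# Example (placeholder paths -- adjust to your own data):
# cp /path/to/rgb/train/*.png datasets/tir2rgb/TrainA/
# cp /path/to/nir/train/*.png datasets/tir2rgb/TrainB/
# cp /path/to/rgb/test/*.png  datasets/tir2rgb/TestA/
# cp /path/to/nir/test/*.png  datasets/tir2rgb/TestB/
```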

Afterwards, you need to make the relevant adjustments in a new config file. Here are the settings that matter:

```yaml
# data options
input_dim_a: 3            # change this if your NIR data is not in 3-channel images
input_dim_b: 3            # number of image channels [1/3]
num_workers: 8            # number of data loading threads
new_size_a: 640           # change this according to your image dimension size
new_size_b: 400           # change this according to your image dimension size
# new_size: 256           # ignore this part
crop_image_height: 400    # random crop of this height; if you don't want a random crop, set it to the same value as new_size_b
crop_image_width: 640     # random crop of this width; if you don't want a random crop, set it to the same value as new_size_a
data_root: ./datasets/tir2rgb/  # most importantly, set the dataset location
```

For `data_root`, I had all the Train and Test folders under the `datasets/tir2rgb` directory, so you may do something similar.
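Before launching training, it can help to sanity-check that the folders under `data_root` actually contain images, since an empty or mistyped path is a common source of silent failures. A minimal standard-library sketch (the `DATA_ROOT` value and folder names follow the layout described above; adjust them if yours differs):

```python
import os

DATA_ROOT = "./datasets/tir2rgb/"  # must match data_root in the config
FOLDERS = ["TrainA", "TrainB", "TestA", "TestB"]
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"}

def count_images(folder):
    """Count image files in a dataset folder (0 if it doesn't exist)."""
    path = os.path.join(DATA_ROOT, folder)
    if not os.path.isdir(path):
        return 0
    return sum(
        1 for name in os.listdir(path)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    )

for folder in FOLDERS:
    n = count_images(folder)
    status = "ok" if n > 0 else "MISSING OR EMPTY"
    print(f"{folder}: {n} images ({status})")
```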

With your new images and config in place, just run the code with the new config as mentioned above.

As for training configs, they are all included in the arXiv version of our paper: https://arxiv.org/pdf/2301.12689.pdf

I will link it in the README for future reference when I get back to the office.

Please let me know if it works.

Dong-Guw

Thanks for your reply, it's very helpful.