RGB images for pretrained model vs BGR images for semantic segmentation
abhiagwl4262 opened this issue · comments
For ImageNet training, this repo - https://github.com/HRNet/HRNet-Image-Classification - was used, where data is loaded with torchvision.datasets.ImageFolder. That loader uses PIL, which opens images in RGB mode.
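A minimal sketch of what this means in practice (the tiny red test image and the file path are made up for illustration; torchvision's default ImageFolder loader effectively does `Image.open(path).convert("RGB")`):

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Hypothetical tiny image: pure red, so the channel order is obvious.
arr = np.zeros((4, 4, 3), dtype=np.uint8)
arr[..., 0] = 255  # first channel = red -> RGB order

path = os.path.join(tempfile.mkdtemp(), "red.png")
Image.fromarray(arr, mode="RGB").save(path)

# What ImageFolder's default loader does under the hood.
img = Image.open(path).convert("RGB")

# The red value comes back in the first channel: R, G, B order.
assert np.asarray(img)[0, 0].tolist() == [255, 0, 0]
```

So anything trained through this pipeline sees tensors whose channels are ordered R, G, B.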
In this repo for segmentation, however, the data is loaded using cv2, see this:
HRNet-Semantic-Segmentation/lib/datasets/cocostuff.py
Lines 93 to 96 in 0bbb288
cv2 loads images in BGR format, and image loading with cv2 is slower than with PIL as well.
My bad, wrong observation. In this repo the image is read in BGR but converted back to RGB in the image_transform function. Still, it would be better to read with PIL directly if training is eventually going to happen on RGB images.