HRNet / HRNet-Semantic-Segmentation

This is the official implementation of semantic segmentation for HRNet (https://arxiv.org/abs/1908.07919). The OCR approach is rephrased as the Segmentation Transformer: https://arxiv.org/abs/1909.11065.

Cityscapes performance on PyTorch 1.8.1

tnhgiang opened this issue

Hello everyone,
I'm unable to reproduce the Cityscapes validation results on PyTorch 1.8.1 and CUDA 11.3. With the seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml configuration and the HRNetV2-W48 + OCR checkpoint, the mIoU is just 0.0004.
Is there a problem with loading the checkpoint, or with my environment?

Thank you all!
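For anyone debugging the same question, a quick way to tell whether the checkpoint is actually being loaded is to diff its state_dict keys against the model's. Below is a minimal sketch in plain PyTorch; the prefix handling and the way the released checkpoints are packed are assumptions, and the usage line is only an illustration of however you build the model from the .yaml config:

```python
import torch

def report_key_mismatch(model, checkpoint_path):
    """Compare a checkpoint's state_dict keys against a model's state_dict."""
    ckpt = torch.load(checkpoint_path, map_location='cpu')
    # Some released checkpoints wrap the weights in a 'state_dict' entry and/or
    # prefix keys with 'module.' / 'model.'; normalise before comparing.
    state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
    state = {k.replace('module.', '', 1).replace('model.', '', 1): v
             for k, v in state.items()}

    model_keys = set(model.state_dict().keys())
    ckpt_keys = set(state.keys())
    print('matched keys:          ', len(model_keys & ckpt_keys))
    print('model keys not loaded: ', len(model_keys - ckpt_keys))
    print('checkpoint keys unused:', len(ckpt_keys - model_keys))

# Illustrative usage (build `model` from the same .yaml you evaluate with):
# report_key_mismatch(model, 'pretrained_models/hrnet_ocr_cs_8162_torch11.pth')
```

If "matched keys" comes out near zero, the network is being evaluated with its randomly initialised weights, which is consistent with the numbers in the log below.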

Validation log

2022-06-01 18:06:40,966 Namespace(cfg='experiments/cityscapes/seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml', opts=['TEST.MODEL_FILE', 'pretrained_models/hrnet_ocr_cs_8162_torch11.pth'])
2022-06-01 18:06:40,966 {'AUTO_RESUME': False,
 'CUDNN': CfgNode({'BENCHMARK': True, 'DETERMINISTIC': False, 'ENABLED': True}),
 'DATASET': {'DATASET': 'cityscapes',
             'EXTRA_TRAIN_SET': '',
             'NUM_CLASSES': 19,
             'ROOT': 'data/',
             'TEST_SET': 'list/cityscapes/val.lst',
             'TRAIN_SET': 'train.lst'},
 'DEBUG': {'DEBUG': False,
           'SAVE_BATCH_IMAGES_GT': False,
           'SAVE_BATCH_IMAGES_PRED': False,
           'SAVE_HEATMAPS_GT': False,
           'SAVE_HEATMAPS_PRED': False},
 'GPUS': (0,),
 'LOG_DIR': 'log',
 'LOSS': {'BALANCE_WEIGHTS': [1],
          'CLASS_BALANCE': False,
          'OHEMKEEP': 131072,
          'OHEMTHRES': 0.9,
          'USE_OHEM': False},
 'MODEL': {'ALIGN_CORNERS': False,
           'EXTRA': {'FINAL_CONV_KERNEL': 1,
                     'STAGE1': {'BLOCK': 'BOTTLENECK',
                                'FUSE_METHOD': 'SUM',
                                'NUM_BLOCKS': [4],
                                'NUM_CHANNELS': [64],
                                'NUM_MODULES': 1,
                                'NUM_RANCHES': 1},
                     'STAGE2': {'BLOCK': 'BASIC',
                                'FUSE_METHOD': 'SUM',
                                'NUM_BLOCKS': [4, 4],
                                'NUM_BRANCHES': 2,
                                'NUM_CHANNELS': [48, 96],
                                'NUM_MODULES': 1},
                     'STAGE3': {'BLOCK': 'BASIC',
                                'FUSE_METHOD': 'SUM',
                                'NUM_BLOCKS': [4, 4, 4],
                                'NUM_BRANCHES': 3,
                                'NUM_CHANNELS': [48, 96, 192],
                                'NUM_MODULES': 4},
                     'STAGE4': {'BLOCK': 'BASIC',
                                'FUSE_METHOD': 'SUM',
                                'NUM_BLOCKS': [4, 4, 4, 4],
                                'NUM_BRANCHES': 4,
                                'NUM_CHANNELS': [48, 96, 192, 384],
                                'NUM_MODULES': 3}},
           'NAME': 'seg_hrnet',
           'NUM_OUTPUTS': 1,
           'OCR': {'DROPOUT': 0.05,
                   'KEY_CHANNELS': 256,
                   'MID_CHANNELS': 512,
                   'SCALE': 1},
           'PRETRAINED': '../../../../dataset/pretrained_models/hrnetv2_w48_imagenet_pretrained_top1_21.pth'},
 'OUTPUT_DIR': 'output',
 'PIN_MEMORY': True,
 'PRINT_FREQ': 100,
 'RANK': 0,
 'TEST': {'BASE_SIZE': 2048,
          'BATCH_SIZE_PER_GPU': 4,
          'FLIP_TEST': False,
          'IMAGE_SIZE': [2048, 1024],
          'MODEL_FILE': 'pretrained_models/hrnet_ocr_cs_8162_torch11.pth',
          'MULTI_SCALE': False,
          'NUM_SAMPLES': 0,
          'OUTPUT_INDEX': -1,
          'SCALE_LIST': [1]},
 'TRAIN': {'BASE_SIZE': 2048,
           'BATCH_SIZE_PER_GPU': 3,
           'BEGIN_EPOCH': 0,
           'DOWNSAMPLERATE': 1,
           'END_EPOCH': 484,
           'EXTRA_EPOCH': 0,
           'EXTRA_LR': 0.001,
           'FLIP': True,
           'FREEZE_EPOCHS': -1,
           'FREEZE_LAYERS': '',
           'IGNORE_LABEL': 255,
           'IMAGE_SIZE': [1024, 512],
           'LR': 0.01,
           'LR_FACTOR': 0.1,
           'LR_STEP': [90, 110],
           'MOMENTUM': 0.9,
           'MULTI_SCALE': True,
           'NESTEROV': False,
           'NONBACKBONE_KEYWORDS': [],
           'NONBACKBONE_MULT': 10,
           'NUM_SAMPLES': 0,
           'OPTIMIZER': 'sgd',
           'RANDOM_BRIGHTNESS': False,
           'RANDOM_BRIGHTNESS_SHIFT_VALUE': 10,
           'RESUME': True,
           'SCALE_FACTOR': 16,
           'SHUFFLE': True,
           'WD': 0.0005},
 'WORKERS': 4}
2022-06-01 18:06:41,409 => init weights from normal distribution
2022-06-01 18:06:46,600 
Total Parameters: 65,859,379
----------------------------------------------------------------------------------------------------------------------------------
Total Multiply Adds (For Convolution and Linear Layers only): 174.0439453125 GFLOPs
----------------------------------------------------------------------------------------------------------------------------------
Number of Layers
Conv2d : 307 layers   BatchNorm2d : 306 layers   ReLU : 269 layers   Bottleneck : 4 layers   BasicBlock : 104 layers   HighResolutionModule : 8 layers   
2022-06-01 18:06:48,799 processing: 0 images
2022-06-01 18:06:48,800 mIoU: 0.0000
2022-06-01 18:07:33,005 processing: 100 images
2022-06-01 18:07:33,006 mIoU: 0.0003
2022-06-01 18:08:17,309 processing: 200 images
2022-06-01 18:08:17,309 mIoU: 0.0003
2022-06-01 18:09:01,853 processing: 300 images
2022-06-01 18:09:01,854 mIoU: 0.0002
2022-06-01 18:09:46,928 processing: 400 images
2022-06-01 18:09:46,928 mIoU: 0.0003
2022-06-01 18:10:31,389 MeanIU:  0.0004, Pixel_Acc:  0.0071,             Mean_Acc:  0.0526, Class IoU: 
2022-06-01 18:10:31,389 [0.         0.         0.         0.         0.         0.
 0.         0.         0.         0.         0.         0.
 0.         0.         0.         0.         0.         0.
 0.00709307]
2022-06-01 18:10:31,390 Mins: 3
2022-06-01 18:10:31,390 Done
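For reference, the MeanIU / Pixel_Acc / Mean_Acc / Class IoU values printed at the end follow the standard confusion-matrix definitions; a small sketch (independent of this repo's own utilities, which are assumed to compute the equivalent):

```python
import numpy as np

def seg_metrics(confusion):
    """confusion[i, j] = pixels with ground-truth class i predicted as class j."""
    tp = np.diag(confusion).astype(np.float64)
    gt = confusion.sum(axis=1).astype(np.float64)    # TP + FN per class
    pred = confusion.sum(axis=0).astype(np.float64)  # TP + FP per class

    iou = tp / np.maximum(gt + pred - tp, 1)         # per-class IoU
    pixel_acc = tp.sum() / max(confusion.sum(), 1)   # overall pixel accuracy
    mean_acc = (tp / np.maximum(gt, 1)).mean()       # mean per-class recall
    return iou.mean(), pixel_acc, mean_acc, iou
```

In the log above the Class IoU vector is zero for every class except one entry near 0.007, i.e. the predictions have effectively collapsed, which is what randomly initialised weights typically produce.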

Finally, I was able to reproduce the validation results.

Hi @tnhgiang,

Could you share your solution for this issue?

@GewelsJI The problem was with loading the checkpoint. You should use the HRNetV2-W48 checkpoint (not the HRNetV2-W48 + OCR one) with the seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml configuration.

Hope this helps!
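A mismatch like this stays silent when the loading code filters checkpoint keys before copying them in. A small sketch of a stricter alternative, again with the prefix handling as an assumption about how the released checkpoints are packed:

```python
import torch

def load_checkpoint_strict(model, path):
    state = torch.load(path, map_location='cpu')
    state = state['state_dict'] if isinstance(state, dict) and 'state_dict' in state else state
    state = {k.replace('module.', '', 1).replace('model.', '', 1): v
             for k, v in state.items()}
    # strict=True raises a RuntimeError listing missing/unexpected keys when the
    # checkpoint does not match the architecture built from the .yaml config,
    # instead of silently evaluating randomly initialised weights.
    model.load_state_dict(state, strict=True)
```

With a matching pair (the HRNetV2-W48 checkpoint with the seg_hrnet config, or the OCR checkpoint with its OCR config) the strict load goes through; with a mismatched pair it fails immediately with a readable list of keys.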