biubug6 / Pytorch_Retinaface

RetinaFace gets 80.99% on the WiderFace hard validation set using mobilenet0.25.

Fine-tuning Resnet 50 model

otsebriy opened this issue · comments

I have tried to fine-tune Resnet50_Final.pth on my own annotated data, but the results look as if the model was trained from scratch, without the pretrained weights being loaded. What could be the problem?
Current config below:

train:
  training_dataset: ml/output/face_detection/random350_frames_faces/annotations_training/annotations.txt
  network: resnet50 # Backbone network mobile0.25 or resnet50
  num_workers: 4 # Number of workers used in dataloading
  lr: 0.001 # initial learning rate
  momentum: 0.9 # momentum factor for SGD
  resume_net: ./weights/pretrained/Resnet50_Final.pth # path to pretrained weights
  resume_epoch: 0 # epoch to resume from (0 restarts the training schedule from the beginning)
  weight_decay: 0.0005 # Weight decay for SGD
  gamma: 0.1 # Gamma update for SGD
  save_folder: ml/experiments/face_detection/weights/train_results

  cfg_re50:
    name: Resnet50
    min_sizes: [[16, 32], [64, 128], [256, 512]]
    steps: [8, 16, 32]
    variance: [0.1, 0.2]
    clip: false
    loc_weight: 2.0
    gpu_train: true
    batch_size: 6
    ngpu: 1
    epoch: 100
    decay1: 70
    decay2: 90
    image_size: 840
    pretrain: true
    return_layers:
      layer2: 1
      layer3: 2
      layer4: 3
    in_channel: 256
    out_channel: 256

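One frequent cause of a resumed checkpoint appearing untrained is a key mismatch: checkpoints saved from a model wrapped in `nn.DataParallel` carry a `module.` prefix on every key, which a plain single-GPU model does not expect, so the weights silently fail to match. Pytorch_Retinaface's `train.py` normalizes these keys when `resume_net` is set; a minimal, torch-free sketch of that key-normalization step (the function name `strip_module_prefix` is mine, and plain dicts stand in for tensors):

```python
def strip_module_prefix(state_dict):
    """Return a copy of state_dict with any leading 'module.' (added by
    nn.DataParallel when saving) removed from each key, so the keys match
    a plain, single-GPU model. Values are passed through untouched."""
    cleaned = {}
    for key, value in state_dict.items():
        name = key[len("module."):] if key.startswith("module.") else key
        cleaned[name] = value
    return cleaned

# Mixed checkpoint keys, as they might appear in a DataParallel save.
ckpt = {"module.body.conv1.weight": [0.1], "module.fpn.merge1.bias": [0.2]}
print(strip_module_prefix(ckpt))
# {'body.conv1.weight': [0.1], 'fpn.merge1.bias': [0.2]}
```

If the checkpoint was saved without `DataParallel`, the function leaves keys unchanged, so it is safe to apply unconditionally before `net.load_state_dict(...)`. Calling `load_state_dict` with `strict=True` (the default) is also worth checking, since it raises on any mismatch instead of failing silently.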
Hmm, actually, I didn't try it with resume_epoch, but I will.
Maybe someone has other thoughts?
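For context on `resume_epoch`: the config's `lr`, `gamma`, `decay1`, and `decay2` imply a step-decay schedule, so with `resume_epoch: 0` fine-tuning restarts at the full initial rate of 0.001, whereas resuming past `decay1` would start at a 10x lower rate, which is often what you want for fine-tuning. A sketch of that step-decay rule (the helper name `step_decay_lr` is mine; exact boundary handling in `train.py` may differ):

```python
def step_decay_lr(initial_lr, gamma, epoch, decay1=70, decay2=90):
    """Step-decay rule implied by the config: the learning rate is
    multiplied by gamma once past decay1 and again past decay2.
    step_index counts how many decay milestones have been reached."""
    step_index = int(epoch >= decay1) + int(epoch >= decay2)
    return initial_lr * (gamma ** step_index)

# With lr=0.001, gamma=0.1, decay1=70, decay2=90 from the config:
#   epoch 0  -> full lr (what resume_epoch: 0 gives you)
#   epoch 75 -> lr * 0.1
#   epoch 95 -> lr * 0.01
for epoch in (0, 75, 95):
    print(epoch, step_decay_lr(0.001, 0.1, epoch))
```

So if the goal is gentle fine-tuning rather than retraining, either lower `lr` directly or set `resume_epoch` beyond `decay1` so the schedule resumes at the decayed rate.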