junsukchoe / ADL

Attention-based Dropout Layer for Weakly Supervised Object Localization (CVPR 2019 Oral)

VGG + CUB params

ahmdtaha opened this issue

Thanks for sharing your code. Can you please share, or at least confirm, the params used to train VGG on CUB?
According to the README example:

  • base-lr 0.01 [Not mentioned in the paper]
  • batch 32 [Not mentioned in the paper]
  • epoch = 105 [Not mentioned in the paper, deduced from utils_args]
  • attdrop 3 4 53 [briefly mentioned in the paper]
  • threshold 0.80 [clearly mentioned in the paper]
  • keep_prob 0.25 [clearly mentioned in the paper]

Are these the right params?
I noticed you used different params in some of the closed issues, e.g. #2:

  • epoch = 200
  • base-lr 0.001

Hi ahmdtaha,

I used these parameters:

  • base-lr: 0.01
  • batch: 128
  • attdrop: 3 4 53
  • threshold: 0.80
  • keep_prob: 0.25

See also #2.
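
For reference, here are the confirmed VGG + CUB settings collected as a plain Python dict (illustrative only; the keys mirror the flags discussed above and nothing in the repo reads this dict):

vgg_cub_params = {
    "base_lr": 0.01,
    "batch": 128,           # differs from the 32 in the README example
    "attdrop": [3, 4, 53],
    "threshold": 0.80,
    "keep_prob": 0.25,
}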

Thanks for your reply, and best wishes for your future plans.
The batch size differs from the one in utils_args, but that's a minor change.
Can you confirm the number of epochs as well?

Thank you!

You don't need to modify the number of epochs.

In utils_args.py, I set the appropriate number of epochs for CUB:

if args.cub:
    args.laststride = 1
    args.stepscale = 5.0
    args.epoch = 105

So you will actually run 525 epochs. Note that 200 epochs also works well.
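
To make the arithmetic explicit, here is a minimal sketch of how these two settings combine into the 525 figure (illustrative variable names, not from the repo's scheduling code):

scheduled_epochs = 105      # args.epoch set for CUB in utils_args.py
stepscale = 5.0             # args.stepscale set for CUB in utils_args.py
effective_epochs = int(scheduled_epochs * stepscale)
print(effective_epochs)     # 525, matching the figure quoted above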

Can you also share the params you used for ResNet50 on CUB as well as ImageNet?

@won-bae

For ResNet, I use threshold 0.90 and keep_prob 0.25. Batch size is 128 for both datasets.

For ResNet50-SE, what is the --attdrop? I.e., where are the ADL components inserted?
For Cam_vgg, attdrop is [3, 4, 53]. What is the list for resnet50-se?

@ahmdtaha It's 31 41 5 for resnet50-se.
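
Gathering the ResNet50-SE numbers from this thread in one place (illustrative Python dict only, not consumed by the repo's argument parser):

resnet50_se_params = {
    "threshold": 0.90,
    "keep_prob": 0.25,
    "batch": 128,           # used for both CUB and ImageNet
    "attdrop": [31, 41, 5],
}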