bnu-wangxun / Deep_Metric

Deep Metric Learning

Can you share the hyperparameters to reproduce the results on CUB-200-2011?

bjkite opened this issue · comments

I used the hyperparameters given in the run script to reproduce the WeightLoss result, but I cannot match the reported numbers. My best result is as follows:

Epoch-130 0.6411 0.7437 0.8301 0.8981 0.9458 0.9731

Would you please give me some help?

First, performance on CUB is not very stable; you need to run more iterations, and repeat the run several times.
Second, the batch size should be 70-80, num_instances should be 5, and use Adam with a 1e-5 learning rate. My result is always higher than 0.65 on the CUB dataset.
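The settings above can be sketched as a small config. This is a hedged sketch: the key names below (`batch_size`, `num_instances`, etc.) are illustrative, not the repo's actual flag names, and the batch size is just one value from the recommended 70-80 range.

```python
# Hyperparameters reported in this thread for CUB-200-2011 (illustrative names).
cub_config = {
    "optimizer": "adam",
    "lr": 1e-5,            # Adam learning rate recommended above
    "batch_size": 75,      # thread recommends 70-80
    "num_instances": 5,    # images sampled per class in each batch
}

# With P*K batch sampling (common in deep metric learning), each batch then
# covers batch_size // num_instances distinct classes.
num_classes_per_batch = cub_config["batch_size"] // cub_config["num_instances"]
print(num_classes_per_batch)
```

So a batch of 75 with 5 instances per class covers 15 classes per iteration.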

I also suggest trying the SGD optimizer in place of Adam; I tried SGD on Cars and got slightly better performance than Adam.

How about the parameters for In-Shop?

A larger batch size (> 200) is enough. No other tricks.

We reran our script, and the performance with an embedding size of 512 is as follows:
Epoch-200 0.6595 0.7601 0.8427 0.9067 0.9465 0.9738

@bjkite I think the gap from the reported performance comes from hard mining:
you should change the hard_mining setting in the loss: /losses/Weight.py

self.hard_mining = hard_mining

Make sure self.hard_mining is not None.
Then you will get similar performance.
I will fix this in the coming days.
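To illustrate what the hard_mining switch typically changes, here is a hedged, torch-free sketch. The real logic lives in /losses/Weight.py and differs in detail; the heuristic below (keep only negatives closer than the farthest positive) is one common hard-mining rule, and the function name is hypothetical.

```python
# Hypothetical sketch of a hard_mining toggle in a pair-based loss.
def select_negatives(pos_dists, neg_dists, hard_mining=True):
    """Pick which negative-pair distances contribute to the loss.

    pos_dists: distances from the anchor to its positives.
    neg_dists: distances from the anchor to its negatives.
    """
    if not hard_mining:
        # hard_mining left as None/False: every negative contributes.
        return list(neg_dists)
    # Hard mining: keep only negatives closer than the hardest
    # (i.e. farthest) positive, since easy negatives add no signal.
    margin = max(pos_dists)
    return [d for d in neg_dists if d < margin]

# Example: positives at distances [0.4, 0.7], negatives at [0.3, 0.6, 0.9].
hard = select_negatives([0.4, 0.7], [0.3, 0.6, 0.9], hard_mining=True)
print(hard)  # the negative at 0.9 is pruned as "easy"
```

With the toggle off, all three negatives would be kept, which matches the symptom in this thread: the loss still trains but converges to a weaker optimum.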