megvii-research / BBN

The official PyTorch implementation of the paper "BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition".

Home Page: https://arxiv.org/abs/1912.02413

CIFAR-10 results and imb_type

latstars opened this issue

If I follow the default settings and run python main/train.py --cfg configs/cifar10.yaml, I cannot achieve results similar to those reported in your paper.

I only achieve the following:
Epoch:200 Batch: 0/109 Batch_Loss:0.223 Batch_Accuracy:98.27%
Epoch:200 Batch:100/109 Batch_Loss:0.195 Batch_Accuracy:99.03%
---Epoch:200/200 Avg_Loss:0.220 Epoch_Accuracy:98.77% Epoch_Time: 0.13min---
------- Valid: Epoch:200 Valid_Loss:0.930 Valid_Acc:79.48%-------
--------------Best_Epoch:200 Best_Acc:79.48%--------------
-------------------Train Finished :BBN.CIFAR10.res32.200epoch-------------------
Could you help me reproduce your results?
Furthermore, what does imb_type in lib/dataset/imbalance_cifar.py mean? Is it mentioned in your paper? What should I set to reproduce your results?

Looking forward to your reply!

Could you help me reproduce your results?
Have you kept your settings the same as reported in the paper? It should be very easy to reproduce our results without any modification to the basic settings.
You can send the log to my email: zhouboyan94@gmail.com.

What does imb_type in lib/dataset/imbalance_cifar.py mean? Is it mentioned in your paper? What should I set to reproduce your results?
We use the code from LDAM to produce the imbalanced CIFAR datasets, to ensure a fair comparison. The imb_type argument selects how the imbalanced dataset is generated (e.g. exp or step). In practice, you can ignore imb_type in the code.
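
For concreteness, here is a minimal sketch of how exp and step imbalance types typically determine the per-class sample counts. This is my own sketch modeled on the LDAM-style schedule, not code copied from this repo, and the function and argument names are illustrative:

```python
# Hedged sketch (not the repo's exact code) of per-class sample counts
# for an imbalanced CIFAR split, in the style of LDAM's imbalance_cifar.py.
def get_img_num_per_cls(cls_num, imb_type, imb_factor, img_max=5000):
    # img_max: images per class in the balanced set (5000 for CIFAR-10).
    # imb_factor: rarest-class count / largest-class count,
    #             e.g. 0.02 for an imbalance factor of 50.
    img_num_per_cls = []
    if imb_type == 'exp':
        # Counts decay exponentially from img_max down to img_max * imb_factor.
        for cls_idx in range(cls_num):
            num = img_max * (imb_factor ** (cls_idx / (cls_num - 1.0)))
            img_num_per_cls.append(int(num))
    elif imb_type == 'step':
        # Half the classes keep img_max; the other half drop to the tail count.
        img_num_per_cls = [int(img_max)] * (cls_num // 2)
        img_num_per_cls += [int(img_max * imb_factor)] * (cls_num - cls_num // 2)
    else:
        # Anything else: keep the dataset balanced.
        img_num_per_cls = [int(img_max)] * cls_num
    return img_num_per_cls

# Long-tailed CIFAR-10 with imbalance factor 50:
print(get_img_num_per_cls(10, 'exp', 1 / 50))
# roughly [5000, 3237, 2096, 1357, 878, 568, 368, 238, 154, 100]
```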

With the provided environment (a quick version check follows the list):
torch == 1.0.1
torchvision == 0.2.2.post3
tensorboardX == 1.8
Python 3
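
In case it helps others match this setup, a small version check. This is my own snippet, not part of the repo; note that the torchvision wheel is published on PyPI as 0.2.2.post3:

```python
# Hedged helper (not from the BBN repo) to confirm the installed
# versions match the environment listed above.
import sys
import pkg_resources  # ships with setuptools

print("Python", sys.version.split()[0])  # expect 3.x
for pkg in ("torch", "torchvision", "tensorboardX"):
    print(pkg, pkg_resources.get_distribution(pkg).version)
```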
Running with this setup, I achieved Best_Acc: 81.68% for long-tailed CIFAR-10 with an imbalance factor of 50. Thanks to the author for the reply.
I will try it again when I am free.