statsu1990 / yoto_class_balanced_loss

Unofficial implementation of YOTO (You Only Train Once) applied to Class balanced loss



You Only Train Once: Loss-Conditional Training of Deep Networks
https://openreview.net/pdf?id=HyxY6JHKwr

Class-Balanced Loss Based on Effective Number of Samples
https://arxiv.org/abs/1901.05555

Overview

I performed image classification by applying YOTO to class-balanced loss. With YOTO, a single trained model can be switched at test time between behaving like a model tuned for the major classes and one tuned for the minor classes.
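In YOTO terms, this works by sampling the loss hyperparameter at every training step and feeding it to the network as an extra conditioning input; at test time a fixed value is chosen. A minimal numpy sketch of the two pieces — the log-uniform sampling range and the FiLM-style conditioning details are my assumptions, not necessarily this repo's choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_beta(low=0.9, high=0.9999):
    """Sample the loss hyperparameter beta for one training batch.
    YOTO samples it from a distribution each step; a log-uniform
    distribution over 1 - beta is my assumption here."""
    u = rng.uniform(np.log(1.0 - high), np.log(1.0 - low))
    return 1.0 - np.exp(u)

def film_condition(features, beta, scale_w, scale_b, shift_w, shift_b):
    """FiLM-style conditioning as in the YOTO paper: map the sampled
    hyperparameter to a per-channel scale and shift, then modulate
    the feature vector. The log-scale input is an assumption."""
    x = np.log(1.0 - beta)
    gamma = scale_w * x + scale_b  # per-channel scale
    delta = shift_w * x + shift_b  # per-channel shift
    return gamma * features + delta
```

At test time, the same `film_condition` is simply called with whichever fixed β the user wants the model to emphasize.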

Verification

I made CIFAR-10 unbalanced and used it for verification: the number of training samples in classes 1, 3, 5, 7, and 9 was reduced to 1/10, while the test data was left unchanged.
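The subsampling described above can be sketched with numpy; the function name and the fixed seed are mine, not taken from this repo:

```python
import numpy as np

def make_imbalanced(labels, minor_classes=(1, 3, 5, 7, 9), keep_ratio=0.1, seed=0):
    """Return sorted indices that keep all samples of the major classes
    and only `keep_ratio` of each minor class. Apply the returned indices
    to both the images and the labels of the training split."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if c in minor_classes:
            idx = rng.choice(idx, size=int(len(idx) * keep_ratio), replace=False)
        keep.append(idx)
    return np.sort(np.concatenate(keep))
```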

For the base model, I used ResNet18. By combining YOTO with class-balanced loss, I trained a single model in which β, the hyperparameter controlling the minor-class weights, can be changed at test time.
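For reference, class-balanced loss weights each class by the inverse of its "effective number" of samples, with β controlling how strongly small classes are up-weighted. A small sketch of that weighting; normalizing the weights to sum to the number of classes follows the paper's common practice and is my assumption for this repo:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta):
    """Per-class weights from the effective number of samples
    (Cui et al., 2019): w_c is proportional to (1 - beta) / (1 - beta**n_c),
    normalized so the weights sum to the number of classes."""
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective = (1.0 - np.power(beta, n)) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(n) / w.sum()
```

These weights would then be passed to the cross-entropy loss; as β approaches 1, minor classes receive progressively larger weights.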

The classification accuracy for each class is shown in the figure below.

By changing β at test time, a single trained model can act either as a model with good performance on the major classes or as one with good performance on the minor classes.
YOTO's performance is high when 1−β is small. This is strange, and I am not sure why. When 1−β is large, the ratio between the weights of the major and minor classes grows to nearly 20. My personal guess is that training becomes unstable when this hyperparameter is fixed, but may remain stable when YOTO is used.

More details

https://st1990.hatenablog.com/entry/2020/05/04/012738

About


License: MIT License


Languages

Language: Python 100.0%