jiawei-ren / BalancedMSE

[CVPR 2022 Oral] Balanced MSE for Imbalanced Visual Regression https://arxiv.org/abs/2203.16427

Home Page:https://sites.google.com/view/balanced-mse/home

How to apply BalancedMSE for the d-dimensional regression task?

ChengBinJin opened this issue · comments

@jiawei-ren
I was wondering how I should apply Balanced MSE to a D-dimensional regression task. I checked the tutorial code, but the example is written for one-dimensional regression.

import torch
import torch.nn.functional as F

def bmc_loss(pred, target, noise_var):
    """Balanced MSE (BMC) loss for 1-dimensional regression.
    pred, target: (N, 1) tensors; noise_var: scalar tensor."""
    # Pairwise logits: -(pred_i - target_j)^2 / (2 * sigma^2), shape (N, N)
    logits = - (pred - target.T).pow(2) / (2 * noise_var)
    # Contrastive-style objective: each prediction should match its own target
    loss = F.cross_entropy(logits, torch.arange(pred.shape[0]))
    loss = loss * (2 * noise_var).detach()  # optional: restore the loss scale

    return loss

For example, pred has shape (N, D) and target also has shape (N, D), where N is the batch size and D is the dimension of the regression task.
If I follow the example above, F.cross_entropy raises an error.

Oops, my misunderstanding.

@ChengBinJin It's great that you have figured it out! Meanwhile, we have updated the multi-dimensional implementation of Balanced MSE in synthetic_benchmark/loss.py for your reference.
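For readers arriving at this issue, here is a minimal sketch of extending the loss to D-dimensional targets by summing squared errors over the feature dimension. This assumes isotropic noise; the function name `bmc_loss_md` and the broadcasting approach are illustrative, and the repository's `synthetic_benchmark/loss.py` is the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def bmc_loss_md(pred, target, noise_var):
    """Sketch of Balanced MSE (BMC) for D-dimensional regression.
    pred, target: (N, D) tensors; noise_var: scalar tensor (isotropic noise assumed)."""
    # Pairwise squared distances between every prediction and every target: (N, N)
    sq_dist = (pred.unsqueeze(1) - target.unsqueeze(0)).pow(2).sum(dim=-1)
    logits = -sq_dist / (2 * noise_var)
    # Each prediction should be closest to its own target (diagonal labels)
    loss = F.cross_entropy(logits, torch.arange(pred.shape[0]))
    loss = loss * (2 * noise_var).detach()  # optional: restore the loss scale
    return loss
```

The key change from the 1-d version is replacing the scalar difference `(pred - target.T)` with an explicit broadcast over batch pairs followed by a sum over the D feature dimensions, so the logits matrix stays (N, N) regardless of D.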