Why is torch.distributions.utils.probs_to_logits needed?
tangyipeng100 opened this issue · comments
Yipeng Tang commented
I have noticed that torch.distributions.utils.probs_to_logits is applied after forward propagation:
train.py, lines 115-117:
scores = model(points)  # [b, 40960, 6] -> [b, 14, 40960]
logp = torch.distributions.utils.probs_to_logits(scores, is_binary=False)
Why is torch.distributions.utils.probs_to_logits needed here? Can I remove it and use scores directly to compute the loss?
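For context, here is a minimal sketch of what probs_to_logits does and why removing it depends on the loss function. The shapes and the assumption that `scores` are probabilities (e.g. a softmax output) are illustrative guesses, not taken from the repository's actual model:

```python
import torch
import torch.nn.functional as F

# Hypothetical tensor standing in for the model output in the issue:
# assume `scores` are per-class probabilities over 14 classes
# (probs_to_logits expects probabilities, not raw network outputs).
scores = torch.softmax(torch.randn(2, 14, 5), dim=1)

# For is_binary=False, probs_to_logits is essentially
# torch.log(clamp_probs(probs)): it clamps the probabilities away
# from 0 for numerical stability, then takes the log.
logp = torch.distributions.utils.probs_to_logits(scores, is_binary=False)

target = torch.randint(0, 14, (2, 5))

# logp are log-probabilities, which is exactly what nll_loss expects.
loss_nll = F.nll_loss(logp, target)

# Passing `scores` (plain probabilities) to nll_loss instead would
# silently compute a different quantity, and passing them to
# cross_entropy would apply log_softmax on top of probabilities.
```

So if the training loss is NLLLoss, the conversion is needed (and the clamping avoids log(0)); it could only be dropped by switching to a loss that consumes probabilities or raw logits directly.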