question about the loss
harlem867 opened this issue · comments
Hi @xuannianz, thank you for this great project!
I have noticed that in your implementation, the Gaussian loss is

```python
pi = tf.constant(np.pi)
Z = (2 * pi * (sigma + sigma_const) ** 2) ** 0.5
probability_density = tf.exp(-0.5 * (x - mu) ** 2 / ((sigma + sigma_const) ** 2)) / Z
nll = -tf.log(probability_density + 1e-7)
```
and I want to know how mu and sigma change during training.
My guess is that mu should approach the true values of x, y, w, h, and the sigmas should shrink toward 0. Is this right?
Absolutely.
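This can be checked with a toy gradient-descent sketch (a minimal example of my own, not the repo's training loop): minimizing the same NLL for a fixed target drives mu to the target and sigma toward 0.

```python
import numpy as np

sigma_const = 0.3
x = 2.0               # "true" regression target (arbitrary choice for the demo)
mu, sigma = 0.0, 1.0  # initial predictions
lr = 0.05

for _ in range(2000):
    s = sigma + sigma_const
    # Closed-form gradients of -log N(x; mu, s):
    grad_mu = (mu - x) / s ** 2
    grad_sigma = 1.0 / s - (x - mu) ** 2 / s ** 3
    mu -= lr * grad_mu
    sigma -= lr * grad_sigma
    sigma = max(sigma, 0.0)  # keep sigma non-negative

print(mu, sigma)  # mu converges to x, sigma shrinks to 0
```

Note that the optimal spread is s = |x - mu|, so as mu closes in on x the gradient pushes sigma down until it hits 0 and the effective spread is just sigma_const.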
Given the Gaussian loss function for x, y, w, h, the loss can take negative values.
How far can the training loss be reduced during training with your Gaussian loss (sigma_const=0.3)?
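A negative value is expected here, since a probability density can exceed 1 and the NLL is its negative log. A quick per-coordinate lower-bound calculation (my own sketch, assuming the loss above with mu = x and sigma driven to 0):

```python
import numpy as np

sigma_const = 0.3
# At convergence mu == x and sigma == 0, so the density is the Gaussian peak
# 1 / sqrt(2 * pi * sigma_const^2), which is greater than 1 for sigma_const < ~0.4.
peak_density = 1.0 / np.sqrt(2 * np.pi * sigma_const ** 2)
nll_min = -np.log(peak_density + 1e-7)
print(peak_density)  # > 1, so its negative log is below zero
print(nll_min)
```

So with sigma_const = 0.3 each coordinate's NLL can go to roughly -0.29 at best, which bounds how far the summed Gaussian loss can drop.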
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.