PaulAlbert31 / LabelNoiseCorrection

Official implementation of the ICML 2019 paper "Unsupervised Label Noise Modeling and Loss Correction"


the code about Label Noise modeling

RumengYi opened this issue · comments

Hi, I am very interested in your work, but I encountered some problems when running your code. In the code for label noise modeling, why do I get lookup = 0.9999 after running bmm_model.fit(loss_tr)?
Looking forward to your reply.
(screenshot attached: 20200206200752)

Hello RumengYi,

Thank you for your interest in our work. The lookup variable is a list that contains the probability values for each loss value given by the BMM (1 corresponding to 100% prob of being noisy). If you are training in a setup where the losses from clean and noisy samples are being separated, the lookup variable should only give you high probability values for high loss values (noisy samples).

From the information in the screenshot, I cannot tell why your lookup variable looks this way. I suggest you run some of the experiments we provide and move to your own setup from there.

Feel free to ask any further questions you might have.
Best,
Eric

Thanks. :)

Dear EricArazo:
I encountered another problem and hope to get your help:
To reproduce the curve in Figure 2 (CIFAR-10 with noise level 50, after epoch 10 of training with standard cross-entropy), I used the following code but failed to draw the curve corresponding to the estimated BMM model in Figure 2:
https://github.com/PaulAlbert31/LabelNoiseCorrection/blob/master/utils.py#L744
Can you tell me which code is used to draw the curve corresponding to the estimated BMM model in Figure 2?
Thank you.

Hello RumengYi,
The curves in Figure 2 were produced with Python and matplotlib from the posterior probabilities given by the BMM for all possible values of the loss; the mixture curve corresponds to the sum of the probability of being noisy and the probability of being clean for each loss value. We no longer have the code we used, but it should be straightforward to replicate. Note that the loss was computed with a forward pass at the end of the epoch while training with 50% label noise.

Thank you very much, I will try.