Repositories under the noisy-labels topic:
A curated list of resources for Learning with Noisy Labels
Curated list of open source tooling for data-centric AI on unstructured data.
A curated, regularly updated list of resources for Learning with Noisy Labels
A toolkit to test, validate, and evaluate your models, and to surface, curate, and prioritize the most valuable data for labeling.
Official implementation of "Early-Learning Regularization Prevents Memorization of Noisy Labels"
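For orientation, a minimal PyTorch sketch of the early-learning regularization idea described in that paper: a running average of the model's own predictions serves as a per-sample target, and a log(1 - <target, prediction>) penalty is added to cross entropy. This is a simplified reading, not the official code; the `beta`, `lam`, and per-sample `indices` choices below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class ELRLoss(torch.nn.Module):
    """Simplified early-learning regularization (ELR) loss."""

    def __init__(self, num_samples, num_classes, beta=0.7, lam=3.0):
        super().__init__()
        # Running (EMA) targets, one probability vector per training example.
        self.register_buffer("targets", torch.zeros(num_samples, num_classes))
        self.beta = beta  # EMA momentum (illustrative value)
        self.lam = lam    # regularization weight (illustrative value)

    def forward(self, logits, labels, indices):
        probs = F.softmax(logits, dim=1).clamp(1e-4, 1.0 - 1e-4)
        probs = probs / probs.sum(dim=1, keepdim=True)
        # Update running targets with detached current predictions.
        self.targets[indices] = (
            self.beta * self.targets[indices] + (1.0 - self.beta) * probs.detach()
        )
        ce = F.cross_entropy(logits, labels)
        # Minimizing log(1 - <t_i, p_i>) pulls predictions toward the
        # early-learning targets rather than the (possibly noisy) labels.
        reg = torch.log(1.0 - (self.targets[indices] * probs).sum(dim=1)).mean()
        return ce + self.lam * reg
```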
NeurIPS'19: Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting (PyTorch implementation for noisy labels).
Code for ICCV2019 "Symmetric Cross Entropy for Robust Learning with Noisy Labels"
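A compact sketch of the symmetric cross-entropy loss from that paper: the usual CE term plus a reverse CE term in which prediction and one-hot label swap roles, with log(0) on the off-label entries clamped to a finite constant. The `alpha`/`beta` defaults below are placeholders rather than tuned settings.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, labels, num_classes, alpha=0.1, beta=1.0):
    """Symmetric cross entropy: alpha * CE + beta * reverse CE."""
    ce = F.cross_entropy(logits, labels)

    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    # Clamp the one-hot target away from zero so log() stays finite.
    one_hot = F.one_hot(labels, num_classes).float().clamp(min=1e-4, max=1.0)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()

    return alpha * ce + beta * rce
```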
Noise-Tolerant Paradigm for Training Face Recognition CNNs [Official, CVPR 2019]
[ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels
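The normalized-loss idea can be sketched as an "active + passive" pair; below, normalized cross entropy combined with mean absolute error in PyTorch. The `alpha`/`beta` weights are placeholders, and the paper considers several pairings tuned per dataset, so treat this only as a sketch of the construction.

```python
import torch
import torch.nn.functional as F

def nce_plus_mae(logits, labels, alpha=1.0, beta=1.0):
    """Active-passive loss sketch: normalized cross entropy + mean absolute error."""
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    p_y = probs.gather(1, labels.unsqueeze(1)).squeeze(1)

    # Normalized CE: divide -log p_y by the sum of -log p_k over all classes.
    nce = (-log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)) / (-log_probs.sum(dim=1))

    # MAE between the prediction and the one-hot label equals 2 * (1 - p_y).
    mae = 2.0 * (1.0 - p_y)

    return (alpha * nce + beta * mae).mean()
```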
The official implementation of the ACM MM'21 paper "Co-learning: Learning from Noisy Labels with Self-supervision".
[ICML2022 Long Talk] Official PyTorch implementation of "To Smooth or Not? When Label Smoothing Meets Noisy Labels"
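Both this entry and "Delving Deep into Label Smoothing" below build on standard label smoothing, which mixes the one-hot target with a uniform distribution. A minimal sketch follows; recent PyTorch also exposes the same behavior through the `label_smoothing` argument of `F.cross_entropy`.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, labels, epsilon=0.1):
    """Cross entropy against smoothed targets: (1 - eps) * one-hot + eps / K."""
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    targets = torch.full_like(log_probs, epsilon / num_classes)
    targets.scatter_(1, labels.unsqueeze(1), 1.0 - epsilon + epsilon / num_classes)
    return -(targets * log_probs).sum(dim=1).mean()
```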
NLNL: Negative Learning for Noisy Labels
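The core of negative learning, sketched below: each example is paired with a randomly drawn complementary label (a class the annotation says it is not), and the model is penalized for assigning that class probability. The full NLNL pipeline alternates this with selective positive learning; only the negative loss is shown, and the sampling here is a simplification.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, given_labels, num_classes):
    """Negative learning: optimize "not class k" with -log(1 - p_k)."""
    # Draw a complementary label different from the (possibly noisy) given label.
    offsets = torch.randint(1, num_classes, given_labels.shape, device=logits.device)
    comp_labels = (given_labels + offsets) % num_classes

    probs = F.softmax(logits, dim=1).clamp(max=1.0 - 1e-7)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp).mean()
```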
ICML 2019: Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels
MoPro: Webly Supervised Learning
[ICLR2021] Official PyTorch implementation of "When Optimizing f-Divergence is Robust with Label Noise"
Adaptive Early-Learning Correction for Segmentation from Noisy Annotations (CVPR 2022 Oral)
The official code for the paper "Delving Deep into Label Smoothing", IEEE TIP 2021
[NeurIPS 2020] Disentangling Human Error from the Ground Truth in Segmentation of Medical Images
PyTorch implementation of "Contrast to Divide: self-supervised pre-training for learning with noisy labels"
Official Implementation of Unweighted Data Subsampling via Influence Function - AAAI 2020
Deep Learning for Suicide and Depression Identification with Unsupervised Label Correction (ICANN 2021)
Official implementation of the CVPR 2022 paper "UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning"
noisy labels; missing labels; semi-supervised learning; entropy; uncertainty; robustness and generalisation.
PyTorch implementation for Partially View-aligned Representation Learning with Noise-robust Contrastive Loss (CVPR 2021)
Official data release to reproduce Confident Learning paper results
AAAI 2021: Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise
This is a summary of research on noisy correspondence. There may be omissions; if anything is missing, please get in touch with us at linyijie.gm@gmail.com, yangmouxing@gmail.com, or qinyang.gm@gmail.com.
Sample Prior Guided Robust Model Learning to Suppress Noisy Labels
Use large language models such as OpenAI's GPT-3.5 for data annotation and model enhancement. This framework combines human expertise with LLMs, employs iterative active learning for continuous improvement, and integrates Cleanlab (confident learning) to ensure high-quality datasets and better model performance.
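For context, flagging likely label errors with Cleanlab's confident-learning filter looks roughly like this (assuming cleanlab 2.x; the `.npy` paths are placeholders for your own out-of-sample predicted probabilities and noisy labels).

```python
import numpy as np
from cleanlab.filter import find_label_issues

# Out-of-sample predicted probabilities (e.g., from cross-validation) and the
# given, possibly noisy, labels for the same examples. Paths are placeholders.
pred_probs = np.load("pred_probs.npy")  # shape (n_examples, n_classes)
labels = np.load("labels.npy")          # shape (n_examples,)

# Indices of likely mislabeled examples, ranked by the model's self-confidence.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"{len(issue_indices)} candidate label issues found")
```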
SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise (BMVC2022)
Reinforcement Learning with Perturbed Reward, AAAI 2020