jwkanggist / automl-papers-in-practice

A collection of papers on the noisy label problem

AutoML

This repository is a curated collection of AutoML papers.

Noisy Label Problem

Problem Statements
  • 2014 | A Comprehensive Introduction to Label Noise | Benoît Frénay, et al. | ESANN | PDF
Adversarial ML
  • 2014 | Intriguing properties of neural networks | C. Szegedy, et al. | ICLR | PDF

  • 2015 | Explaining and harnessing adversarial examples | I. J. Goodfellow, et al. | ICLR | PDF

  • 2017 | Adversarial machine learning at scale | A. Kurakin et al. | ICLR | PDF

  • 2017 | Adversarial examples in the physical world | A. Kurakin et al. | ICLR | PDF

  • 2017 | Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, et al. | ICLR | PDF

  • 2017 | Towards Evaluating the Robustness of Neural Networks | Nicholas Carlini, David Wagner | IEEE S&P | PDF

  • 2018 | Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Anish Athalye et al. | ICML | PDF

Memorization Effect from Corrupted Labels
  • 2017 | A Closer Look at Memorization in Deep Networks | D. Arpit, et al. | ICML | PDF
  • 2018 | Dimensionality-Driven Learning with Noisy Labels | X. Ma et al. | ICML | PDF
  • 2019 | Searching to Exploit Memorization Effect in Learning from Corrupted Labels | Hansi Yang, et al. | Not published | PDF
Curriculum Learning (model driven)
  • 2009 | Curriculum Learning | Yoshua Bengio et al. | ICML 2009 | PDF
  • 2018 | MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels | Lu Jiang, et al. | ICML | PDF
  • 2018 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels | Bo Han, et al. | NIPS | PDF
  • 2018 | Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Network | ICML 2018 | ArXiv
  • 2019 | Deep Self-Learning From Noisy Labels | Jiangfan Han et al. | CVPR 2019 | CVPROpenAccess
  • 2019 | Learning to Learn From Noisy Labeled Data | J. Li et al. | CVPR 2019 | CVPROpenAccess
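
The co-teaching paper above (Han et al., NIPS 2018) trains two networks that each pick small-loss samples to teach the other, since samples with corrupted labels tend to incur large loss early in training. A minimal NumPy sketch of that sample-exchange step (an illustration of the idea, not the authors' code; `forget_rate` follows the paper's terminology):

```python
import numpy as np

def small_loss_selection(losses_a, losses_b, forget_rate):
    """Co-teaching-style sample exchange (a sketch of the idea):
    each network keeps the fraction (1 - forget_rate) of samples on which
    the *other* network has the smallest loss, and trains only on those."""
    n_keep = int(len(losses_a) * (1.0 - forget_rate))
    keep_for_b = np.argsort(losses_a)[:n_keep]  # chosen by network A, trains B
    keep_for_a = np.argsort(losses_b)[:n_keep]  # chosen by network B, trains A
    return keep_for_a, keep_for_b

# Toy batch: sample 2 has a large loss under both networks (likely noisy label).
la = np.array([0.1, 0.3, 2.5, 0.2])
lb = np.array([0.2, 0.1, 3.0, 0.4])
ka, kb = small_loss_selection(la, lb, forget_rate=0.25)
```

With `forget_rate=0.25`, each network drops the single largest-loss sample, so the suspected-noisy sample 2 is excluded from both update sets.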
Noisy Label Supervision (Loss based)
  • 2017 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach | G. Patrini, et al. | CVPR 2017 | ArXiv
  • 2017 | Training deep neural networks using a noise adaptation layer | J. Goldberger, et al. | ICLR
  • 2018 | Masking: A new perspective of noisy supervision | Bo Han et al. | NIPS 2018 | NIPSProc

  • 2018 | Learning from noisy singly-labeled data | Ashish Khetan, et al. | ICLR | PDF

  • 2018 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels | Zhilu Zhang, et al. | NIPS | PDF

  • 2019 | Learning with Bad Training Data via Iterative Trimmed Loss Minimization | Yanyao Shen, et al. | ICML | PDF

  • 2019 | Learning with Limited Data for Multilingual Reading Comprehension | Kyungjae Lee, et al. | EMNLP 2019 | EMNLP2019
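
As a concrete instance of the loss-based approaches listed above, the generalized cross entropy of Zhang & Sabuncu (NIPS 2018) interpolates between cross entropy and the noise-robust MAE via L_q = (1 - p_y^q) / q. A minimal sketch (q = 0.7 is the paper's default; the variable names are mine):

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy (Zhang & Sabuncu, 2018):
    L_q = (1 - p_y^q) / q, where p_y is the predicted probability of the
    true class.  q -> 0 recovers cross entropy; q = 1 gives MAE."""
    p_y = probs[np.arange(len(labels)), labels]
    return (1.0 - p_y ** q) / q

# Two samples: one confidently correct, one less so.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
loss = gce_loss(probs, labels)
```

At q = 1 the expression reduces exactly to 1 - p_y, i.e. the mean absolute error, which bounds the per-sample loss and so limits how much a single mislabeled example can dominate the gradient.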

Meta-Learning Approach
  • 2018 | Learning to Reweight Examples for Robust Deep Learning | Mengye Ren, et al. | ICML | PDF
Ensembles for Out-of-Distribution Detection
  • 2017 | (DeepMind) Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles | Balaji Lakshminarayanan, et al. | NIPS | PDF
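
The deep-ensemble recipe in the paper above is simple: train several networks independently, average their predictive distributions, and treat the spread as an uncertainty signal. A rough NumPy sketch of the scoring step (an illustration, not DeepMind's code; member probabilities are toy values):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Deep-ensemble-style uncertainty: average the predictive
    distributions of independently trained members, then score
    uncertainty as the entropy of the averaged distribution."""
    mean_probs = np.mean(member_probs, axis=0)            # (n_samples, n_classes)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    return mean_probs, entropy

# Three hypothetical members: they agree on sample 0, disagree on sample 1.
members = np.array([
    [[0.9, 0.1], [0.9, 0.1]],
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.9, 0.1], [0.5, 0.5]],
])
mean_probs, unc = ensemble_predict(members)
```

Member disagreement pushes the averaged distribution toward uniform, so sample 1 receives a higher entropy score than sample 0, which is what flags out-of-distribution or ambiguous inputs.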

Active Learning

  • 2010 | Active Learning Literature Survey - Chap 1, 2 | Burr Settles | University of Wisconsin–Madison | PDF
  • 2010 | Active Learning Literature Survey - Chap 3 | Burr Settles | University of Wisconsin–Madison | PDF
  • 2017 | Learning Active Learning from Data | Ksenia Konyushkova, et al. | NIPS
  • 2019 | Learning Loss for Active Learning | Donggeun Yoo, et al. | CVPR | PDF Slide
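
The simplest query strategy covered in Settles' survey above is uncertainty sampling: label the pool points whose predictive distribution is most uncertain. A minimal sketch using entropy as the uncertainty measure (one of several strategies in the survey; variable names are mine):

```python
import numpy as np

def entropy_query(probs, n_query):
    """Uncertainty sampling: request labels for the n_query unlabeled
    points whose predictive distribution has the highest entropy."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:n_query]  # most uncertain first

# Toy unlabeled pool under a binary classifier.
pool = np.array([[0.98, 0.02],   # confident -> low query priority
                 [0.55, 0.45],   # near the decision boundary
                 [0.80, 0.20]])
picked = entropy_query(pool, n_query=1)
```

Here the point nearest the decision boundary (index 1) is selected first, since its near-uniform prediction maximizes the entropy.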

Data Profiling

  • 2012 | The Influence of Corpus Quality on Statistical Measurements on Language Resources | Thomas Eckart, et al. | LREC 2012 | Proceedings

Acknowledgement

Special thanks to everyone who contributed to this project.

Name | Bio
Jaewook Kang | Research Scientist @ Naver Clova

Contact & Feedback

If you have any suggestions (missing papers, new papers, key researchers, or typos), feel free to open a pull request. You can also reach out by email:
