cleanlab

Finding label errors in datasets and learning with noisy labels.

Home page: https://pypi.org/project/cleanlab/

cleanlab is a machine learning Python package for learning with noisy labels and finding label errors in datasets. cleanlab CLEANs LABels. It is powered by the theory of confident learning, published in this paper and explained in this blog. Using the confidentlearning-reproduce repo, cleanlab v0.1.0 reproduces results in the CL paper.

cleanlab documentation is available in this blog post.

So fresh, so cleanlab

cleanlab finds and cleans label errors in any dataset using state-of-the-art algorithms for learning with noisy labels by characterizing label noise. cleanlab is fast: it's built on optimized algorithms and parallelized across CPU threads automatically. cleanlab implements the family of theory and algorithms called confident learning, with provable guarantees of exact noise estimation and label error finding, even when model output probabilities are noisy/imperfect. By default, cleanlab requires no hyper-parameters.

How does confident learning work? See: TUTORIAL: confident learning with just numpy and for-loops.

cleanlab supports multi-label, multiclass, sparse matrices, and more.

cleanlab is:

  1. fast - Single-shot, non-iterative, parallelized algorithms (e.g. < 1 second to find label errors in ImageNet)
  2. robust - Provable generalization and risk minimization guarantees, even with imperfect probability estimation.
  3. general - Works with any probabilistic classifier: PyTorch, TensorFlow, MXNet, Caffe2, scikit-learn, etc.
  4. unique - The only package for multiclass learning with noisy labels or finding label errors for any dataset / classifier.

Find label errors with PyTorch, TensorFlow, MXNet, etc. in 1 line of code.
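A minimal sketch using the cleanlab.pruning API (numpy_array_of_noisy_labels and numpy_array_of_predicted_probabilities are placeholders; psx should hold out-of-sample predicted probabilities, e.g. from cross-validation):

```python
from cleanlab.pruning import get_noise_indices

# Indices of likely label errors, ordered by normalized margin (most severe first).
ordered_label_errors = get_noise_indices(
    s=numpy_array_of_noisy_labels,               # shape (n,)
    psx=numpy_array_of_predicted_probabilities,  # shape (n, m)
    sorted_index_method='normalized_margin',
)
```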

Pre-computed out-of-sample predicted probabilities for CIFAR-10 train set are available here: [LINK].

Learning with noisy labels in 3 lines of code!
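For example, a sketch using any scikit-learn-compatible classifier (X_train, train_noisy_labels, and X_test are placeholders):

```python
from sklearn.linear_model import LogisticRegression

from cleanlab.classification import LearningWithNoisyLabels

lnl = LearningWithNoisyLabels(clf=LogisticRegression())  # Wrap any classifier.
lnl.fit(X=X_train, s=train_noisy_labels)  # s holds the noisy labels.
predicted_test_labels = lnl.predict(X_test)  # Predict as if trained without label errors.
```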

Check out these examples and tests (includes how to use PyTorch, FastText, etc.).

Installation

Python 2.7, 3.4, 3.5, and 3.6 are supported.

Stable release:
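The release on PyPI (see the home page link above):

```bash
$ pip install cleanlab
```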

Developer (unstable) release:
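Installing straight from the GitHub repository (URL as in the citation below):

```bash
$ pip install git+https://github.com/cgnorthcutt/cleanlab.git
```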

To install the codebase (enabling you to make modifications):
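A typical clone-and-editable-install, assuming the repo's standard setup.py:

```bash
$ git clone https://github.com/cgnorthcutt/cleanlab.git
$ cd cleanlab
$ pip install -e .
```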

Citations and Related Publications

If you use this package in your work, please cite the confident learning paper:

@misc{northcutt2019confidentlearning,
  title={Confident Learning: Estimating Uncertainty in Dataset Labels},
  author={Curtis G. Northcutt and Lu Jiang and Isaac L. Chuang},
  year={2019},
  eprint={1911.00068},
  archivePrefix={arXiv},
  primaryClass={stat.ML}
}

and the cleanlab code base here:

@misc{northcutt2019cleanlab,
  author = {Curtis Northcutt},
  title = {Clean Lab},
  year = {2019},
  howpublished = {\url{https://github.com/cgnorthcutt/cleanlab}},
  note = {commit xxxxxxx, version xxxx}
}

If used for binary classification, cleanlab also implements this paper:

@inproceedings{northcutt2017rankpruning,
 author={Northcutt, Curtis G. and Wu, Tailin and Chuang, Isaac L.},
 title={Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels},
 booktitle = {Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence},
 series = {UAI'17},
 year = {2017},
 location = {Sydney, Australia},
 numpages = {10},
 url = {http://auai.org/uai2017/proceedings/papers/35.pdf},
 publisher = {AUAI Press},
} 

Reproducing Results in the Confident Learning Paper

See cleanlab/examples. You'll need to git clone confidentlearning-reproduce, which contains the data and files needed to reproduce the CIFAR-10 results.

cleanlab: Find Label Errors in ImageNet

Use cleanlab to identify ~100,000 label errors in the 2012 ImageNet training dataset.

Top label issues in the 2012 ILSVRC ImageNet train set identified using cleanlab. Label errors are boxed in red; ontological issues in green; multi-label images in blue.

cleanlab: Find Label Errors in MNIST

Use cleanlab to identify ~50 label errors in the MNIST dataset.

Label errors of the original MNIST train dataset identified algorithmically using cleanlab. Depicts the 24 least confident labels, ordered left-right, top-down by increasing self-confidence (probability of belonging to the given label), denoted conf in teal. The label with the largest predicted probability is in green. Overt errors are in red.

cleanlab Generality: View performance across 4 distributions and 9 classifiers.

Use cleanlab to learn with noisy labels regardless of dataset distribution or classifier.

Each sub-figure in the figure above depicts the decision boundary learned using cleanlab.classification.LearningWithNoisyLabels in the presence of extreme (~35%) label errors. Label errors are circled in green. Label noise is class-conditional (not simply uniformly random). Columns are organized by the classifier used, except the left-most column, which depicts the ground-truth dataset distribution. Rows are organized by the dataset used.

The code to reproduce this figure is available here.

Each figure depicts accuracy scores on a test set as decimal values:

  1. LEFT (in black): The classifier test accuracy trained with perfect labels (no label errors).
  2. MIDDLE (in blue): The classifier test accuracy trained with noisy labels using cleanlab.
  3. RIGHT (in white): The baseline classifier test accuracy trained with noisy labels.

As an example, here is the noise matrix (noisy channel) P(s | y) characterizing the label noise for the first dataset row in the figure. s represents the observed noisy labels and y represents the latent, true labels. The trace of this matrix is 2.6; a trace of 4 would imply no label noise. A cell in this matrix is read as: "A random 38% of '3' labels were flipped to '2' labels."

P(s|y)   y=0    y=1    y=2    y=3
s=0      0.55   0.01   0.07   0.06
s=1      0.22   0.87   0.24   0.02
s=2      0.12   0.04   0.64   0.38
s=3      0.11   0.08   0.05   0.54

Get started with easy, quick examples.

New to cleanlab? Start with:

  1. Visualizing confident learning
  2. A simple example of learning with noisy labels on the multiclass Iris dataset.

These examples show how easy it is to characterize label noise in datasets, learn with noisy labels, identify label errors, estimate latent priors and noisy channels, and more.

Use cleanlab with any model (TensorFlow, Caffe2, PyTorch, etc.)

All of the features of the cleanlab package work with any model. Yes, any model. Feel free to use PyTorch, TensorFlow, Caffe2, scikit-learn, MXNet, etc. If you use a scikit-learn classifier, all cleanlab methods will work out-of-the-box. It's also easy to use your favorite model from a non-scikit-learn package: just wrap your model in a Python class that inherits from sklearn.base.BaseEstimator:
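A minimal template (the class name and the empty method bodies are placeholders to fill in with calls to your framework; the signatures follow the scikit-learn estimator convention):

```python
from sklearn.base import BaseEstimator


class YourFavoriteModel(BaseEstimator):  # Inherits the scikit-learn estimator API.
    def __init__(self):
        pass

    def fit(self, X, y, sample_weight=None):
        pass

    def predict(self, X):
        pass

    def predict_proba(self, X):
        pass

    def score(self, X, y, sample_weight=None):
        pass
```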

Want to see a working example? Here's a compliant PyTorch MNIST CNN class.

As you can see here, technically you don’t actually need to inherit from sklearn.base.BaseEstimator, as you can just create a class that defines .fit(), .predict(), and .predict_proba(), but inheriting makes downstream scikit-learn applications like hyper-parameter optimization work seamlessly. For example, the LearningWithNoisyLabels() model is fully compliant.

Note: some libraries exist to do this for you. For PyTorch, check out the skorch Python library, which will wrap your PyTorch model into a scikit-learn compliant model.

Documentation by Example

cleanlab Core Package Components

  1. cleanlab/classification.py - The LearningWithNoisyLabels() class for learning with noisy labels.
  2. cleanlab/latent_algebra.py - Equalities when noise information is known.
  3. cleanlab/latent_estimation.py - Estimates and fully characterizes all variants of label noise.
  4. cleanlab/noise_generation.py - Generate mathematically valid synthetic noise matrices.
  5. cleanlab/polyplex.py - Characterizes joint distribution of label noise EXACTLY from noisy channel.
  6. cleanlab/pruning.py - Finds the indices of the examples with label errors in a dataset.

Many of these methods have default parameters that won’t be covered here. Check out the method docstrings for full documentation.

Estimate the confident joint, the latent noisy channel matrix P(s | y) and its inverse P(y | s), the latent prior of the unobserved, actual true labels p(y), and the predicted probabilities.

s denotes a random variable representing the observed, noisy label and y denotes a random variable representing the hidden, actual label. Both s and y take any of the m classes as values. The cleanlab package supports different levels of granularity of computation depending on the needs of the user. Because of this, it provides multiple alternatives, each no more than a few lines of code, for estimating these latent distribution arrays, so the user can reduce computation time by computing only what they need, as seen in the examples below.

Throughout these examples, you'll see a variable called confident_joint. The confident joint is an m x m matrix (m is the number of classes) that counts, for every observed, noisy class, the number of examples that confidently belong to every latent, hidden class. It counts the number of examples that we are confident are labeled correctly or incorrectly for every pair of observed and unobserved classes. The confident joint is an unnormalized estimate of the complete-information latent joint distribution, P(s, y). Most of the methods in the cleanlab package start by first estimating the confident_joint. You can learn more about this in the confident learning paper.

Option 1: Compute the confident joint and predicted probs first. Stop if that’s all you need.
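A sketch of this option, based on the v1-era latent_estimation API (X_train and train_labels_with_errors are placeholders; estimate_confident_joint_and_cv_pred_proba computes psx out-of-sample via cross-validation):

```python
from cleanlab.latent_estimation import (
    estimate_confident_joint_and_cv_pred_proba,
    estimate_latent,
)

# Compute the m x m confident joint and the n x m predicted probabilities (psx).
confident_joint, psx = estimate_confident_joint_and_cv_pred_proba(
    X=X_train,
    s=train_labels_with_errors,
)

# Continue only if you also need the latent distributions:
# p(y) as est_py, P(s|y) as est_nm, and P(y|s) as est_inv.
est_py, est_nm, est_inv = estimate_latent(confident_joint, s=train_labels_with_errors)
```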

Option 2: Estimate the latent distribution matrices in a single line of code.
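A sketch, with the same placeholders as above:

```python
from cleanlab.latent_estimation import estimate_py_noise_matrices_and_cv_pred_proba

# One call returns p(y), P(s|y), P(y|s), the confident joint, and psx.
est_py, est_nm, est_inv, confident_joint, psx = \
    estimate_py_noise_matrices_and_cv_pred_proba(
        X=X_train,
        s=train_labels_with_errors,
    )
```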

Option 3: Skip computing the predicted probabilities if you already have them.
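A sketch, assuming psx already holds your out-of-sample predicted probabilities:

```python
from cleanlab.latent_estimation import estimate_py_and_noise_matrices_from_probabilities

# Skips model training entirely; works directly from labels and probabilities.
est_py, est_nm, est_inv, confident_joint = \
    estimate_py_and_noise_matrices_from_probabilities(
        s=train_labels_with_errors,
        psx=psx,
    )
```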

Completely characterize label noise in a dataset:

The joint probability distribution of noisy and true labels, P(s,y), completely characterizes label noise with a class-conditional m x m matrix.
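A sketch of estimating this joint (noisy_labels and psx are placeholders as before):

```python
from cleanlab.latent_estimation import estimate_joint

# Estimate the m x m joint distribution P(s, y); its entries sum to 1.
joint = estimate_joint(s=noisy_labels, psx=psx)
```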

Methods to Standardize Research with Noisy Labels

cleanlab supports a number of functions to generate noise for benchmarking and standardization in research. This next example shows how to generate valid, class-conditional, uniformly random noisy channel matrices:
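A sketch using cleanlab.noise_generation (the trace and sparsity values shown are arbitrary example choices):

```python
import numpy as np

from cleanlab.noise_generation import generate_noise_matrix_from_trace

K = 4  # number of classes
noise_matrix = generate_noise_matrix_from_trace(
    K=K,
    trace=2.6,                   # in (1, K]; a smaller trace means more label noise
    py=np.full(K, 1.0 / K),      # prior p(y) over the true labels, length K
    frac_zero_noise_rates=0.5,   # fraction of off-diagonal entries forced to zero
)
```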

For a given noise matrix, this example shows how to generate noisy labels. Methods can be seeded for reproducibility.
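For example (y_hidden_actual_labels is a placeholder array of true labels):

```python
import numpy as np

from cleanlab.noise_generation import generate_noisy_labels

np.random.seed(0)  # Seed for reproducibility.

# Flip each true label to a noisy label with probabilities given by the
# columns of noise_matrix, i.e. draw s ~ P(s|y).
s_noisy_labels = generate_noisy_labels(y_hidden_actual_labels, noise_matrix)
```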

The Polyplex

The key to learning in the presence of label errors is estimating the joint distribution between the actual, hidden labels ‘y’ and the observed, noisy labels ‘s’. Using cleanlab and the theory of confident learning, we can completely characterize the trace of the latent joint distribution, trace(P(s,y)), given p(y), for any fraction of label errors, i.e. for any trace of the noisy channel, trace(P(s|y)).

You can check out how to do this yourself here:

  1. Drawing Polyplices
  2. Computing Polyplices

License

Copyright (c) 2017-2019 Curtis Northcutt. Released under the MIT License. See LICENSE for details.
