`splink` implements Fellegi-Sunter's canonical model of record linkage in Apache Spark, including the EM algorithm to estimate the parameters of the model.

The aims of `splink` are to:
- Work at much greater scale than current open source implementations (100 million records+).
- Get results faster than current open source implementations, with runtimes of less than an hour.
- Have a highly transparent methodology, so that match scores can be easily explained both graphically and in words.
- Have accuracy similar to some of the best alternatives.
`splink` is a Python package. It uses the Spark Python API to execute data linking jobs in a Spark cluster. It has been tested on Apache Spark 2.3 and 2.4.
Install `splink` using:

```
pip install splink
```
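For orientation, here is a minimal sketch of a deduplication job, modelled on the examples in the splink_demos notebooks. The file path, blocking rule, and comparison columns are illustrative assumptions, and the exact constructor signature may vary between versions, so treat the demos as the authoritative reference:

```python
from pyspark.sql import SparkSession

from splink import Splink

spark = SparkSession.builder.getOrCreate()

# Data to be deduplicated; the path and column names are illustrative
df = spark.read.parquet("people.parquet")

# The settings dictionary controls blocking and comparisons.
# The keys follow the splink settings schema; the specific rule
# and columns below are illustrative only.
settings = {
    "link_type": "dedupe_only",
    "blocking_rules": ["l.surname = r.surname"],
    "comparison_columns": [
        {"col_name": "first_name"},
        {"col_name": "dob"},
    ],
}

# Estimates m and u probabilities via EM, then scores record pairs.
# Argument order follows the splink_demos examples; check the demos
# if your splink version differs.
linker = Splink(settings, df, spark)
df_scored = linker.get_scored_comparisons()
```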
You can run demos of `splink` in an interactive Jupyter notebook by clicking the button below:
The best documentation is currently a series of demonstration notebooks in the splink_demos repo.
We also provide an interactive `splink` settings editor and example settings here. A tool to generate custom m and u probabilities can be found here.
The statistical model behind `splink` is the same as that used in the R fastLink package. Accompanying the fastLink package is an academic paper that describes this model. This is the best place to start for users wanting to understand the theory behind how `splink` works.
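In outline (this is the standard Fellegi-Sunter formulation, paraphrased rather than quoted from the fastLink paper): each candidate record pair is summarised by a comparison vector $\gamma = (\gamma_1, \dots, \gamma_K)$, and the m and u probabilities mentioned above are the likelihoods of each comparison outcome among matches and non-matches:

```latex
m_k = \Pr(\gamma_k \mid \text{match}), \qquad
u_k = \Pr(\gamma_k \mid \text{non-match})

\frac{\Pr(\text{match} \mid \gamma)}{\Pr(\text{non-match} \mid \gamma)}
    = \frac{\lambda}{1 - \lambda} \prod_{k=1}^{K} \frac{m_k}{u_k}
```

where $\lambda$ is the prior probability that a random pair is a match. The EM algorithm estimates $\lambda$ and the $m_k$, $u_k$ parameters from the data, and because the final score is a product of per-column ratios $m_k/u_k$, each match score can be decomposed and explained column by column.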
You can read a short blog post about `splink` here.
You can find a short video introducing `splink` and running through an introductory demo here.
A 'best practices and performance tuning' tutorial can be found here.
We are very grateful to ADR UK (Administrative Data Research UK) for providing funding for this work as part of the Data First project.
We are also very grateful to colleagues at the UK's Office for National Statistics for their expert advice and peer review of this work.