moj-analytical-services / splink

Fast, accurate and scalable probabilistic data linkage with support for multiple SQL backends

Home Page: https://moj-analytical-services.github.io/splink/

[Splink 4] Should the db api distinguish between creating a table and collecting results?

RobinL opened this issue

At the moment, when a result is required, we do not distinguish between two cases:

  1. Cases where we want a table to be materialised in the backend database, but not sent to Python.
  2. Cases where we want a result to be 'collected' from the database and sent to Python, but don't need a table to be materialised in the database.

Currently we always use (1), and sometimes call methods such as to_pandas_dataframe on the resultant SplinkDataFrame to subsequently send the result from the database to Python.

We have no way of doing (2).
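
To make the distinction concrete, here is a minimal self-contained sketch at the database level, using DuckDB as an example backend (the table and column names are made up for illustration):

```python
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE input_table AS SELECT 1 AS unique_id, 'bob' AS first_name")

# Case (1): materialise a table in the backend database.
# Nothing is sent to Python; we only need a handle to 'blocked_pairs'.
con.execute("CREATE TABLE blocked_pairs AS SELECT * FROM input_table")

# Case (2): collect a result straight into Python as a pandas DataFrame.
# No table is created in the backend.
estimates = con.execute("SELECT count(*) AS n FROM input_table").df()
```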

One result of this lack of clarity is that there's a lot of fudging in Spark, where dataframes work differently:

  • Spark is lazy: calculations only happen when they're 'collected', which in Spark means when we trigger an action such as saving to parquet
  • This means that when you execute SQL to 'create a table', you're just queueing up a DAG. It's not executed or materialised
  • Which means we have to manually intervene to tell Spark when we want tables to be physically created (see the sketch below)
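
To illustrate that laziness (a generic PySpark sketch, not Splink internals; the table name and output path are made up):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.createDataFrame([(1, "bob")], ["unique_id", "first_name"]).createOrReplaceTempView(
    "input_table"
)

# This only builds a logical plan (a DAG); nothing is computed yet.
result = spark.sql("SELECT unique_id, upper(first_name) AS first_name FROM input_table")

# Execution is triggered only by an action, e.g. physically writing the table...
result.write.mode("overwrite").parquet("/tmp/materialised_table")

# ...or collecting the rows back to the Python driver.
rows = result.collect()
```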

Quite a bit of this complexity might go away if we allowed two forms of SQL execution (sketched below):

  • SQL statements where we want a table to be materialised and a SplinkDataFrame returned, i.e. we don't immediately need the result in the Python client (e.g. predict())
  • SQL statements where we don't want a table to be materialised, but we do immediately need the result in the Python client (e.g. during EM training, when we compute the new values of m and u)
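
For instance, the db api could expose two entry points along these lines. This is a rough sketch only; DatabaseAPI, sql_to_splink_dataframe and sql_to_result are hypothetical names, not a settled Splink 4 interface:

```python
import pandas as pd


class SplinkDataFrame:
    """Lightweight handle to a table that lives in the backend database (stub)."""

    def __init__(self, physical_name: str):
        self.physical_name = physical_name


class DatabaseAPI:
    def sql_to_splink_dataframe(self, sql: str, output_tablename: str) -> SplinkDataFrame:
        # Case (1): materialise a table in the backend and return a handle to
        # it; nothing is sent to Python (e.g. the output of predict()).
        raise NotImplementedError

    def sql_to_result(self, sql: str) -> pd.DataFrame:
        # Case (2): run the SQL and collect the result straight into Python,
        # without materialising a backend table (e.g. the new m and u values
        # computed during EM training).
        raise NotImplementedError
```

With that split, the Spark implementation could make sql_to_splink_dataframe eagerly write the table (forcing the DAG to execute) and sql_to_result a plain collect, removing most of the manual intervention described above.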