Open Source Data Quality Monitoring
Datachecks is an open-source data monitoring tool that helps monitor the data quality of databases and data pipelines. It identifies potential data quality issues, helps trace their root cause, and helps improve data quality.
Datachecks can generate several metrics, such as row count, missing values, and invalid values, from multiple data sources. Below is the list of supported data sources and metrics.
APM (Application Performance Monitoring) tools are used to monitor the performance of applications. They are a mandatory part of the dev stack; without them, it is very difficult to monitor application performance.
But for data products, regular APM tools are not enough: a new kind of tool is needed to monitor data applications. Data monitoring tools watch the data quality of databases and data pipelines, identify potential issues, help find the root cause of data quality problems, and help improve data quality.
Metric | Description |
---|---|
Reliability Metrics | Reliability metrics detect whether tables/indices/collections are updating with timely data |
Numeric Distribution Metrics | Numeric distribution metrics detect changes in the numeric distributions, i.e. the spread of values, variance, skew and more |
Uniqueness Metrics | Uniqueness metrics detect when data constraints are breached, such as duplicates or an unexpected number of distinct values |
Completeness Metrics | Completeness metrics detect when there are missing values in datasets, e.g. null or empty values |
Validity Metrics | Validity metrics detect whether data is formatted correctly and represents a valid value |
Install datachecks with the command specific to your data source.

To install datachecks with all dependencies, use the command below.

```shell
pip install datachecks -U
```

To install only the postgres data source, use the command below.

```shell
pip install 'datachecks[postgres]' -U
```

To install only the opensearch data source, use the command below.

```shell
pip install 'datachecks[opensearch]' -U
```
Datachecks can be run using the command line interface. The CLI takes a config file as input; the config file declares the data sources and the metrics to be monitored.

```shell
datachecks inspect -C config.yaml
```
Declare the data sources in the `data_sources` section of the config file. A data source can be of type `postgres` or `opensearch`.

The configuration file can also use environment variables for the connection parameters. To use an environment variable in the config file, use the `!ENV` tag, e.g. `!ENV ${PG_USER}`.
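For example, the variables referenced by the `!ENV` tags can be exported in the shell before running the inspection (the variable names and values here are illustrative and must match your config file):

```shell
# Export the variables referenced by !ENV tags in config.yaml
export PG_USER=postgres
export OS_PASS=secret

# datachecks substitutes the variables when the config file is loaded
datachecks inspect -C config.yaml
```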
```yaml
data_sources:
  - name: content_datasource    # Name of the data source
    type: postgres              # Type of the data source
    connection:                 # Connection details of the data source
      host: 127.0.0.1           # Host of the data source
      port: 5431                # Port of the data source
      username: !ENV ${PG_USER} # Username of the data source
      password: !ENV ${OS_PASS} # Password of the data source
      database: postgres        # Database name of the data source
```
Metrics are defined in the `metrics` section of the config file.
```yaml
metrics:
  content_datasource:        # Data source for which the metric is defined
    count_content_hat:       # Name of the metric
      metric_type: row_count # Type of the metric
      table: example_table   # Table name to check for row count
      filter:                # Optional filter applied before computing the metric
        where_clause: "category = 'HAT' AND is_valid is True"
```
Datachecks supports SQL and search data sources. Below is the list of supported data sources.
A PostgreSQL data source can be defined in the config file as below.

```yaml
data_sources:
  - name: content_datasource # Name of the data source
    type: postgres           # Type of the data source
    connection:              # Connection details of the data source
      host:                  # Host of the data source
      port:                  # Port of the data source
      username:              # Username of the data source
      password:              # Password of the data source
      database:              # Database name of the data source
      schema:                # Schema name of the data source
```
An OpenSearch data source can be defined in the config file as below.

```yaml
data_sources:
  - name: content_datasource # Name of the data source
    type: opensearch         # Type of the data source
    connection:              # Connection details of the data source
      host:                  # Host of the data source
      port:                  # Port of the data source
      username:              # Username of the data source
      password:              # Password of the data source
```
- MySQL
- MongoDB
- Elasticsearch
- GCP BigQuery
- AWS RedShift
Reliability metrics detect whether tables/indices/collections are updating with timely data and whether the data is being updated at the expected volume.
Metric | Description |
---|---|
row_count | The number of rows in a table. |
document_count | The number of documents in a document DB or search index. |
freshness | Data freshness, sometimes referred to as data timeliness, is the frequency at which data is updated for consumption. It is an important data quality dimension and a pillar of data observability, because recently refreshed data is more accurate and thus more valuable. |
For SQL data sources, the freshness metric can be defined as below.

```yaml
<Datasource name>:
  last_updated_row:        # Metric name
    metric_type: freshness # Type of metric is FRESHNESS
    table: category_table  # Table to check when the data source is a SQL type
    field: last_updated    # Timestamp field used for the freshness check
```
For search data sources, the freshness metric can be defined as below.

```yaml
<Datasource name>:
  last_updated_doc:        # Metric name
    metric_type: freshness # Type of metric is FRESHNESS
    index: category_index  # Index to check when the data source is a search index type
    field: last_updated    # Timestamp field used for the freshness check
```
By using a numeric metric to perform basic calculations on your data, you can more easily assess trends.
Metric | Description |
---|---|
row_count | The number of rows in a table. |
document_count | The number of documents in a document DB or search index. |
max | Maximum value of a numeric column. |
min | Minimum value of a numeric column. |
average | Average value of a numeric column. |
variance | The statistical variance of the column. |
skew | The statistical skew of the column. |
kurtosis | The statistical kurtosis of the column. |
sum | The sum of the values in the column. |
percentile | The statistical percentile of the column. |
geometric_mean | The geometric mean of the column. |
harmonic_mean | The harmonic mean of the column. |
For SQL data sources, a numeric metric can be defined as below.

```yaml
<Datasource name>:
  <Metric name>:
    metric_type: <Metric type>     # Type of numeric metric
    table: <Table name>            # Table name to check for the numeric metric
    field: <Field name>            # Field name to check for the numeric metric
    filter:                        # Optional filter to apply on the table
      where_clause: <Where clause> # SQL where clause applied before computing the metric
```
For search data sources, a numeric metric can be defined as below.

```yaml
<Datasource name>:
  <Metric name>:
    metric_type: <Metric type>     # Type of numeric metric
    index: <Index name>            # Index name to check for the numeric metric
    field: <Field name>            # Field name to check for the numeric metric
    filter:                        # Optional filter to apply on the index
      search_query: <Search Query> # Search query applied before computing the metric
```
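As a concrete illustration of the SQL variant, an `average` metric over a numeric field might look like the sketch below; the data source, table, and field names (`orders`, `price`) are assumptions made up for this example.

```yaml
metrics:
  content_datasource:        # Data source declared in data_sources
    avg_order_price:         # Metric name (illustrative)
      metric_type: average   # Numeric metric type from the table above
      table: orders          # Hypothetical table name
      field: price           # Hypothetical numeric field
      filter:                # Optional filter
        where_clause: "status = 'COMPLETED'"
```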
Completeness metrics detect when there are missing values in datasets.
Metric | Description |
---|---|
null_count | The count of rows with a null value in the column. |
null_percentage | The percentage of rows with a null value in the column. |
empty_string | The count of rows with a zero-length string (i.e. "") as the value for the column. |
empty_string_percentage | The percentage of rows with a zero-length string (i.e. "") as the value for the column. |
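A completeness metric follows the same shape as the other metrics; for instance, counting null values in a column could be sketched as below, where the table and field names (`users`, `email`) are illustrative assumptions.

```yaml
metrics:
  content_datasource:          # Data source declared in data_sources
    null_email_count:          # Metric name (illustrative)
      metric_type: null_count  # Completeness metric type
      table: users             # Hypothetical table name
      field: email             # Column checked for null values
```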
Uniqueness metrics detect when schema and data constraints are breached.
Metric | Description |
---|---|
distinct_count | The count of distinct values in the column. Use this metric when you expect a fixed number of value options. |
duplicate_count | The count of rows with the same value for a particular column. |
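For example, a `distinct_count` check could be configured as in the sketch below; the table and field names (`products`, `category`) are illustrative assumptions.

```yaml
metrics:
  content_datasource:               # Data source declared in data_sources
    distinct_category_count:        # Metric name (illustrative)
      metric_type: distinct_count   # Uniqueness metric type
      table: products               # Hypothetical table name
      field: category               # Column whose distinct values are counted
```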