[FEAT] Save out `SplinkDataFrame` metadata
ADBond opened this issue
As mentioned in #1971, the parquet format supports arbitrary key-value metadata. As `SplinkDataFrame`s now support such metadata (in particular, used for storing table-creation thresholds), it would be nice if this could be written to/read from parquet.
Backend notes:
- Supported in `duckdb`, though I think only (currently, 0.10.0) via a literal struct in SQL (which would thus need to be carefully constructed), rather than via e.g. a subquery
- Doesn't appear to be directly supported in `spark`; could possibly go via `pyarrow`
- `athena` uses `arrow` under the hood, so should be okay
- `postgres`/`sqlite`: we don't currently have a `to_parquet()`, but could look into implementing one
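For the duckdb route, if I understand the current (0.10.0) support correctly, the `KV_METADATA` option to `COPY ... (FORMAT PARQUET, ...)` takes a literal struct, so the metadata would have to be spliced into the SQL string. A rough sketch of constructing such a statement (table name, keys, and values are illustrative; real code would also need to escape quotes in the values):

```python
# Illustrative metadata to embed in the parquet file
metadata = {"created_by": "splink", "threshold": "0.95"}

# Build the struct literal, e.g. {created_by: 'splink', threshold: '0.95'}
struct_literal = "{" + ", ".join(f"{k}: '{v}'" for k, v in metadata.items()) + "}"

sql = (
    "COPY (SELECT * FROM my_table) TO 'my_table.parquet' "
    f"(FORMAT PARQUET, KV_METADATA {struct_literal})"
)
print(sql)
```

The resulting statement could then be run through the duckdb connection; the careful-construction concern above is exactly this string-splicing step.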