ecmwf / cfgrib

A Python interface to map GRIB files to the NetCDF Common Data Model following the CF Convention using ecCodes

`cfgrib` loads all chunks into memory when indexing

guidocioni opened this issue · comments

Related to dask/dask#9451 (and probably to fsspec/kerchunk#198).

When indexing (with either `sel` or `isel`) over (lat, lon) in GRIB files loaded with `open_mfdataset` (and thus containing chunked data), cfgrib attempts to load all chunks into memory. This causes excessive RAM consumption and slow performance.
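For reference, a minimal sketch of the access pattern that triggers this (the file pattern and coordinates are made up):

```python
import xarray as xr

# Hypothetical multi-file GRIB collection, opened the usual way.
ds = xr.open_mfdataset(
    "era5_2m_temperature_*.grib",
    engine="cfgrib",
    combine="by_coords",
    parallel=True,
)

# Selecting a single grid point over (latitude, longitude): in principle
# only the chunks covering this point are needed, but every chunk is read.
point = ds.sel(latitude=52.5, longitude=13.4, method="nearest")
point.load()  # RAM usage grows to roughly the size of the whole dataset
```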

From the discussion we had, the hypothesis is that cfgrib needs to scan the entire file in order to subset along only a few dimensions.
Still, it should be possible to avoid loading the entire dataset into memory when performing the operation.

I'm interested in this too. I am trying to extract a small subset from an ERA5-Land file, but - independently of the chunk size - xarray/dask tries to read the entire file into memory.
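Roughly what I'm running (the file name, variable name, and chunk sizes below are illustrative):

```python
import xarray as xr

# Open a single ERA5-Land GRIB file with explicit dask chunks.
ds = xr.open_dataset(
    "era5_land.grib",
    engine="cfgrib",
    chunks={"time": 24, "latitude": 100, "longitude": 100},
)

# ERA5 latitudes are stored descending, hence the reversed slice.
subset = ds["t2m"].sel(latitude=slice(48, 46), longitude=slice(10, 12))
subset.load()  # still reads the entire file, regardless of the chunk sizes
```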

If I understand the problem correctly, this issue arises partly because ecCodes can only read the whole message (field) from disk, even if you only want some metadata. We have plans to improve that situation, but there is no firm time-frame for it yet. When we do, cfgrib should benefit enormously from it.
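To illustrate the point, here is a minimal sketch (hypothetical file name) of iterating over GRIB messages with the ecCodes Python bindings: each handle is created by reading the entire message from disk, even though only a single metadata key is queried afterwards.

```python
import eccodes

with open("data.grib", "rb") as f:
    while True:
        # Reads the whole GRIB message from disk to build the handle.
        gid = eccodes.codes_grib_new_from_file(f)
        if gid is None:  # end of file
            break
        # Only metadata is requested, but the full field was already read.
        print(eccodes.codes_get(gid, "shortName"))
        eccodes.codes_release(gid)
```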