tdunning / t-digest

A new data structure for accurate on-line accumulation of rank-based statistics such as quantiles and trimmed means

How to implement a sliding-window quantile?

lyupy opened this issue · comments

How can I implement a sliding-window quantile on a data stream, for example with a window size of 1000?
How do I remove elements that fall outside the window?

Pure sliding windows are probably not possible with a t-digest, and basing windows on counts rather than time is a bit unusual as well. You can implement a form of exponential windowing fairly easily, but it becomes very difficult to reason about the digest invariant if you do that.
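To make the exponential-windowing idea concrete, here is a hedged sketch. A real implementation would decay t-digest centroid weights in place; here a plain list of (value, weight) pairs stands in for the digest, and all names (`decay`, `add`, `weighted_quantile`) are illustrative rather than part of any t-digest API.

```python
# Exponential windowing sketch: instead of dropping old points, every
# point's weight decays by a constant factor as new data arrives, so
# recent observations dominate the quantile estimate.
def decay(samples, factor=0.999):
    """Multiply every (value, weight) pair's weight by the decay factor."""
    return [(v, w * factor) for v, w in samples]

def add(samples, value):
    """New observations arrive with full weight 1.0."""
    samples.append((value, 1.0))
    return samples

def weighted_quantile(samples, q):
    """Quantile of the decayed distribution: walk the sorted values until
    the cumulative weight reaches fraction q of the total weight."""
    ordered = sorted(samples)
    total = sum(w for _, w in ordered)
    cum = 0.0
    for v, w in ordered:
        cum += w
        if cum >= q * total:
            return v
    return ordered[-1][0]

samples = []
for x in range(1000):          # stream the values 0..999 in order
    samples = decay(samples)
    samples = add(samples, x)

# Because old values have decayed, the median sits well above 500,
# the median of the unweighted stream.
```

The difficulty the comment alludes to is visible here: once weights decay, the size bound that the digest invariant guarantees no longer has a simple interpretation in terms of a fixed window of points.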

What is usually done instead is to store a compressed digest per time period, typically one or five minutes long. At query time, you simply combine as many digests as necessary to cover the window you want. In many cases you store many digests for each time period, so the aggregation involves merging multiple digests at each time point.
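The bucketing scheme described above can be sketched as follows. A sorted list of raw samples stands in for a real compressed t-digest (in production each bucket would hold a serialized digest and `window_quantile` would merge digests rather than extend lists); the function names are assumptions for illustration.

```python
# Time-bucketed quantiles: one "digest" per minute, frozen once the
# minute ends; a query merges just the buckets covering the window.
from collections import defaultdict

buckets = defaultdict(list)          # minute index -> samples for that minute

def record(minute, value):
    buckets[minute].append(value)    # in production: digest.add(value)

def window_quantile(start_minute, end_minute, q):
    """Combine every per-minute bucket in [start, end] and query once."""
    merged = []
    for m in range(start_minute, end_minute + 1):
        merged.extend(buckets[m])    # in production: merged.add(digest_m)
    merged.sort()
    return merged[int(q * (len(merged) - 1))]

# Ten minutes of data, 100 increasing values per minute.
for minute in range(10):
    for v in range(100):
        record(minute, minute * 100 + v)

# A last-5-minutes window covers values 500..999, so its median is near 750.
```

The important property is that old buckets are immutable: expiring data is just a matter of which buckets the query touches, so nothing ever has to be removed from a digest.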

The key point here is that the total bandwidth of the metrics is heavily compressed, but accuracy is not lost. Suppose that you are storing 10,000 digests every minute, a few of which get a million values per second, most of which get thousands of samples per second, and some of which get only a few values per minute. The hot digests will have nearly the maximum number of centroids, but will be bounded in size to a few kB (for compression = 100 or 200). The cold digests will have only a few centroids and thus will be considerably smaller. Even so, the total number of bytes per second required to store your metrics will be less than 250 kB/s, which is very modest for such a large number of metrics. Moreover, a year of data at full resolution is less than 10 TB, which is (amazingly) now a relatively small amount of data.
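A quick back-of-envelope check of those storage numbers. The average digest size is an assumption consistent with the "few kB for hot, much smaller for cold" description above:

```python
# Storage arithmetic for 10,000 digests written once per minute.
DIGESTS_PER_MINUTE = 10_000
AVG_DIGEST_BYTES = 1_500        # assumed average: hot digests a few kB, cold far smaller

bytes_per_minute = DIGESTS_PER_MINUTE * AVG_DIGEST_BYTES
bytes_per_second = bytes_per_minute / 60          # 250,000 B/s = 250 kB/s

seconds_per_year = 365 * 24 * 3600
tb_per_year = bytes_per_second * seconds_per_year / 1e12
# About 7.9 TB/year, comfortably under the 10 TB figure quoted above.
```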

If you add aggregated digests for each day then querying any time period will be very fast.
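One way to see why daily rollups help is to count how many digests a query must merge. This is a hedged sketch of a greedy cover of the query range, using whole-day digests wherever one fits entirely inside the range and minute digests for the edges; the function name and the flat minute indexing are assumptions for illustration.

```python
# Count the digests merged for a query over [start_minute, end_minute],
# given both per-minute digests and per-day rollup digests.
def digests_to_merge(start_minute, end_minute, minutes_per_day=1440):
    count = 0
    m = start_minute
    while m <= end_minute:
        # Use a whole-day digest when one fits entirely inside the range.
        if m % minutes_per_day == 0 and m + minutes_per_day - 1 <= end_minute:
            m += minutes_per_day
        else:
            m += 1                   # fall back to a single minute digest
        count += 1
    return count

# A 30-day query merges 30 day digests instead of 43,200 minute digests.
```

Because merging digests is cheap and loses no accuracy beyond the digests' own bounds, this kind of hierarchy makes queries over long windows fast without storing anything extra at query time.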

In the end, given how well windowed aggregates work, the real question is whether you truly need a windowed digest at all.