awolden/brakes

Hystrix-compliant Node.js Circuit Breaker Library


Timing the statInterval based on bucketNum & bucketSpan

djcuvcuv opened this issue

In short, I am struggling to understand how to minimize redundancy in the data/stats within the "snapshot" events.

My goal is to ensure that each "snapshot" contains only data which has been measured after the previous "snapshot." In other words, I do not want a "snapshot" to report on data which has already been reported on in a previous "snapshot."

My issue is that I cannot determine which bucket or buckets are included in each "snapshot" event.

For example, if I configured the brakes instance with, say, { bucketNum: 6, bucketSpan: 60000, statInterval: 6 * 60000 }, would each "snapshot" event report all the data measured by all 6 buckets over the last 6 minutes? Or would it report only the data measured in the very last bucket?
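
For reference, here is a minimal sketch of the configuration in question with a "snapshot" listener attached (fetchWidget is a placeholder for whatever promise-returning function the breaker wraps):

const Brakes = require('brakes');

// Placeholder promise-returning function for the circuit breaker to wrap.
function fetchWidget(id) {
  return Promise.resolve({ id });
}

const brake = new Brakes(fetchWidget, {
  bucketSpan: 60000,       // each bucket covers 60 seconds of stats
  bucketNum: 6,            // keep a rolling window of 6 buckets (~6 minutes)
  statInterval: 6 * 60000  // emit a "snapshot" event every 6 minutes
});

brake.on('snapshot', snapshot => {
  // The question: does this snapshot cover only the newest bucket, or
  // everything still held in the 6-bucket rolling window?
  console.log(snapshot);
});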

Again, my goal is to select the above options such that my "snapshot" stats do not report data that has already been included in a previous "snapshot," while also not missing any unreported data in the time elapsed since the last "snapshot."

Any insight into this would be very much appreciated, and thank you in advance!

@djcuvcuv Unfortunately, the snapshot functionality isn't designed to line up directly with bucket expiration. If you were to use { bucketNum: 6, bucketSpan: 60000, statInterval: 6 * 60000 }, that would come close to what you are asking for, but there is no guarantee there wouldn't be some overlap between buckets at that point (e.g. one of the buckets could expire a second after the snapshot, which would change the numbers in the next snapshot). The stats are aggregated across all buckets that have not yet expired.
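
For what it's worth, if the goal is strictly non-overlapping per-interval numbers, one possible workaround is to tally the individual outcome events yourself and flush the tally on each snapshot, rather than relying on the rolling-window aggregates. A rough sketch (the wrapped function is a placeholder, and the 'success'/'failure'/'timeout' event names should be verified against the brakes version in use):

const Brakes = require('brakes');

// Minimal setup with the options discussed above; the wrapped function
// is a placeholder.
const brake = new Brakes(id => Promise.resolve(id), {
  bucketSpan: 60000,
  bucketNum: 6,
  statInterval: 6 * 60000
});

// Count outcome events directly so each interval's numbers are recorded
// exactly once, independent of bucket expiration.
let interval = { successful: 0, failed: 0, timedOut: 0 };

brake.on('success', () => { interval.successful += 1; });
brake.on('failure', () => { interval.failed += 1; });
brake.on('timeout', () => { interval.timedOut += 1; });

brake.on('snapshot', () => {
  // Report only what accumulated since the previous snapshot, then reset.
  console.log('since last snapshot:', interval);
  interval = { successful: 0, failed: 0, timedOut: 0 };
});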

@awolden Thanks for the response here. That is perfectly fine; I just wanted to confirm the nature and limitations of the functionality. This should not negatively affect my monitoring implementations much, if at all. Closing this issue, and thanks again!