danni-m / redis-timeseries

Future development of redis-timeseries is at github.com/RedisLabsModules/redis-timeseries.



Timestamp Too Old

TheMaverickProgrammer opened this issue · comments

"TSDB: timestamp is too old" errors are being thrown. I checked the timestamp via epoch conversion and they correctly reflect only two days ago. The TTL is set to 2 weeks. How can I resolve this error?

I deleted the keys and tried again with TTL 0, and it worked. However, the timestamps were not 2 weeks old, so this still should not have been a problem.

@TheMaverickProgrammer this happens if you insert samples that contain a timestamp older than the maximum timestamp that already exists in the key.
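
To illustrate that invariant, here is a toy Python model (not the module's actual C implementation) of an append-only series that rejects samples older than the newest stored timestamp:

```python
class SimpleSeries:
    """Toy model of an append-only time series (illustration only)."""

    def __init__(self):
        self.samples = []  # list of (timestamp, value), always ascending

    def add(self, timestamp, value):
        # Reject any sample older than the newest stored timestamp,
        # mirroring the "TSDB: timestamp is too old" error.
        if self.samples and timestamp < self.samples[-1][0]:
            raise ValueError("TSDB: timestamp is too old")
        self.samples.append((timestamp, value))

s = SimpleSeries()
s.add(1000, 1.0)
s.add(2000, 2.0)
# s.add(1500, 1.5)  # would raise: timestamp is too old
```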

You can't; the data structure is based on the assumption that inserts are ordered.
What you can do is have a system that buffers your input, reorders it, and then inserts it into Redis.
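
A minimal sketch of that buffer-and-reorder approach (hypothetical helper; `flush_fn` stands in for whatever actually issues the TS.ADD call against Redis):

```python
import heapq

class ReorderBuffer:
    """Buffer incoming samples and release them in timestamp order."""

    def __init__(self, flush_fn, max_pending=100):
        self._heap = []                # min-heap keyed by timestamp
        self._flush_fn = flush_fn
        self._max_pending = max_pending

    def add(self, timestamp, value):
        heapq.heappush(self._heap, (timestamp, value))
        # Once the window is full, release the oldest sample. A sample
        # arriving later with an even older timestamp would still fail,
        # so size the window to cover your worst expected disorder.
        while len(self._heap) > self._max_pending:
            self._flush_fn(*heapq.heappop(self._heap))

    def close(self):
        while self._heap:
            self._flush_fn(*heapq.heappop(self._heap))

# Out-of-order arrivals come out sorted:
out = []
buf = ReorderBuffer(lambda ts, v: out.append((ts, v)), max_pending=10)
for ts, v in [(3000, "c"), (1000, "a"), (2000, "b")]:
    buf.add(ts, v)
buf.close()
# out is now [(1000, "a"), (2000, "b"), (3000, "c")]
```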

Curious as to why setting retentionSecs to 0 worked fine.

@TheMaverickProgrammer retentionSecs=0 means you don't have any retention policy on the key.
This will make the key keep all the data.
I don't see why this would affect your issue of unordered samples.

Can we close this?

@TheMaverickProgrammer Please open/reopen ticket if you still have issues.

I've encountered the same limitation, but in my testing retentionSecs does not have any effect.

Needing to strictly order the data on the way in is a pretty significant limitation and makes concurrent writers difficult in my use case.

Is it technically feasible to support out-of-order inserts at all? Looking at chunk.c, I am thinking it would at the very least require a nasty (slow..) reordering of an entire chunk. I am not an experienced C programmer, but if it's possible I would be interested in investigating.

I understand that requiring strict ordering greatly simplifies the code, but I think it puts significant limitations on where it can be used, and is something that should be made clear in the readme.

An option I've explored to work around this was dropping the precision of my timestamps (say, bucketing to 60 seconds), but I've discovered that samples with the same timestamp are subject to last-write-wins.
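
A toy Python model of that bucketing workaround and the last-write-wins behavior it runs into (a sketch, not the module's implementation):

```python
BUCKET_MS = 60_000  # round timestamps down to 60-second buckets

def bucket(ts_ms):
    return ts_ms - (ts_ms % BUCKET_MS)

# Two samples landing in the same bucket leave only the later write:
series = {}
for ts, value in [(120_500, 1.0), (130_900, 2.0), (185_000, 3.0)]:
    series[bucket(ts)] = value

# 120_500 and 130_900 share the 120_000 bucket, so 1.0 is lost.
```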

Same as above: is it technically feasible to support multiple samples that arrive with the same timestamp?

@trist4n What is your use case?
It sounds like you have multiple sensors reporting the same data(?).

If you are writing to the same key from two different clients, LWW (last write wins) is being used. Actually, if you think about it, you are creating a race condition that the module cannot solve.
Maybe the new TS.INCRBY/TS.DECRBY with timestamp reset is a better fit?
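
A Python sketch of the counter-per-bucket idea behind that suggestion (not the module's implementation): instead of overwriting, same-bucket samples accumulate, so insert order between concurrent writers no longer matters.

```python
from collections import defaultdict

BUCKET_MS = 60_000  # round timestamps down to 60-second buckets

def bucket(ts_ms):
    return ts_ms - (ts_ms % BUCKET_MS)

# Accumulating instead of overwriting, TS.INCRBY-style:
totals = defaultdict(float)
for ts, value in [(120_500, 1.0), (130_900, 2.0), (185_000, 3.0)]:
    totals[bucket(ts)] += value

# Both same-bucket samples are kept: totals[120_000] == 3.0
```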