akkadotnet / Akka.Persistence.Redis

Redis storage for Akka.NET Persistence

highestSequenceNr will leak for actors with finite lifetime

kantora opened this issue · comments

Let me describe the case.
I have an actor (a PersistentFSM) that represents an order. While the order is active, everything is fine, but as soon as the order is closed (either way), the actor is removed from the system and will never come back (an archived order is quite another story).

All I can do is call DeleteMessages(long.MaxValue) and DeleteSnapshots(SnapshotSelectionCriteria.Latest) in the actor's removal procedure to remove the obsolete data from storage. But highestSequenceNr will still be left in storage, in order to satisfy the JournalSpec.Journal_should_not_reset_HighestSequenceNr_after_journal_cleanup test.
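For reference, here is a minimal sketch of that removal procedure, assuming a plain ReceivePersistentActor with hypothetical CloseOrder / OrderClosed types (a PersistentFSM would do the equivalent in its stop handler). Even after both deletes are confirmed, the *.highestSequenceNr key stays behind.

```csharp
using Akka.Actor;
using Akka.Persistence;

// Hypothetical command/event types used only for illustration.
public sealed class CloseOrder { }
public sealed class OrderClosed { }

public class OrderActor : ReceivePersistentActor
{
    private bool _messagesDeleted;
    private bool _snapshotsDeleted;

    public override string PersistenceId => "order-" + Self.Path.Name;

    public OrderActor()
    {
        Recover<OrderClosed>(_ => { /* rebuild state from the event */ });

        Command<CloseOrder>(_ =>
        {
            Persist(new OrderClosed(), evt =>
            {
                // The order is closed and this persistence id will never be
                // reused, so wipe its events and snapshots from Redis.
                DeleteMessages(long.MaxValue);
                DeleteSnapshots(SnapshotSelectionCriteria.Latest);
            });
        });

        // Stop only once both stores have confirmed the deletes.
        Command<DeleteMessagesSuccess>(_ => { _messagesDeleted = true; StopIfDone(); });
        Command<DeleteSnapshotsSuccess>(_ => { _snapshotsDeleted = true; StopIfDone(); });
    }

    private void StopIfDone()
    {
        if (_messagesDeleted && _snapshotsDeleted)
            Context.Stop(Self);
    }
}
```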

This is by design: we can't delete any data from the *.highestSequenceNr records. You could raise the question on the Akka JVM GitHub.

On Akka JVM my issue was closed with no result :( and with the advice to set timeouts on the records in Redis (a feature that was removed in this incarnation of the library).

I don't like the idea of inserting a kludge into my application that sets up a separate Redis connection just to remove the unused keys (sketched below).
Maybe introducing some special message to the journal / snapshot store to trigger cleanup would be a better idea?
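Purely to illustrate the kludge I mean, here is a sketch using StackExchange.Redis directly. The key pattern is an assumption on my part; the actual layout depends on the plugin version and any configured key prefix, so verify it against your own database (e.g. with SCAN) before deleting anything.

```csharp
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class HighestSequenceNrCleanup
{
    // Deletes the stale highestSequenceNr key(s) for a retired persistence id
    // over a separate Redis connection, outside of Akka.Persistence.
    public static async Task DeleteAsync(string connectionString, string persistenceId)
    {
        using var connection = await ConnectionMultiplexer.ConnectAsync(connectionString);
        var server = connection.GetServer(connection.GetEndPoints().First());
        var db = connection.GetDatabase();

        // Assumed pattern: scan for keys mentioning the persistence id and
        // containing "highestSequenceNr" instead of hard-coding one key name.
        foreach (var key in server.Keys(pattern: $"*{persistenceId}*highestSequenceNr*"))
        {
            await db.KeyDeleteAsync(key);
        }
    }
}
```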

I have a similar scenario with actors that are retired after a certain point in time. They will never be recreated, so they clean up their state from Redis. However, as described above, the highestSequenceNr record remains.

It would be great to be able to indicate to the journal when it is acceptable to clean up the highestSequenceNr record. I understand the importance of not reusing sequence numbers, but in cases where that is guaranteed not to happen, it seems sensible to be able to clean up rather than leave behind a growing list of stale records.
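As a rough illustration of what that opt-in could look like from the user's side, here is a hypothetical message pair; nothing like this exists in Akka.Persistence or Akka.Persistence.Redis today, it is only a sketch of the requested API.

```csharp
// Hypothetical request/response pair sketching the feature requested above.
// Neither type exists in Akka.Persistence or Akka.Persistence.Redis today.
public sealed class DeleteHighestSequenceNr
{
    public DeleteHighestSequenceNr(string persistenceId)
    {
        PersistenceId = persistenceId;
    }

    // The persistence id whose highestSequenceNr record may be removed,
    // on the caller's guarantee that the id will never be reused.
    public string PersistenceId { get; }
}

public sealed class DeleteHighestSequenceNrSuccess
{
    public DeleteHighestSequenceNrSuccess(string persistenceId)
    {
        PersistenceId = persistenceId;
    }

    public string PersistenceId { get; }
}
```

A retiring actor could send such a message to the journal as its final step, after DeleteMessages and DeleteSnapshots have been confirmed.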