kriszyp / weak-lru-cache

A cache using LRU and weak references to cache data in a way that works in harmony with garbage collection


What happens to the cache size if every setValue has expirationPriority = -1?

jeyendranbalakrishnan opened this issue · comments

If I set a fixed cacheSize with
const cache = new WeakLRUCache({ cacheSize: 1024 });
and then keep adding the same value at distinct keys, like

var i = 0;
while (true) {
    cache.setValue(i, {}, -1);
    i++;
}

does the cache size (i.e., the size of the underlying Map superclass of WeakLRUCache) keep growing until Node runs out of memory, or is the pure LRU behavior strictly enforced to ensure that the cache size will never grow beyond the configured limit (1024 in my case)?

An expiration priority of -1 means the entry will not use the LRU and is guaranteed to stay in the Map (until it is replaced, changed, or deleted). So yes, this code will keep growing until Node runs out of memory (and crashes with an out-of-memory error).

Thanks for the speedy clarification.
Follow up question: How can I configure WeakLRUCache as a straight LRU cache, i.e., keep filling up the cache until it is full, then discard the oldest entry whenever (and only when) a new key (not in the cache) has to be added?

If you don't set the priority (the default priority is 0), then it will use the LRU functionality of filling up the cache and expiring older entries. However, weak-lru-cache also employs multi-stage frequency caching, so entries that are accessed frequently get promoted to a higher stage and are not immediately discarded. This is usually a better caching mechanism, since frequently accessed entries are more likely to be accessed again in the future. Also, when an entry expires it remains in the map with a weak reference until it is garbage collected. This doesn't inhibit GC, and it allows you to continue to access an entry until the VM actually collects the object and it is deleted.
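To make the "expired entries remain as weak references" behavior above concrete, here is a minimal sketch of that policy using only a Map and WeakRef. This is NOT weak-lru-cache's actual implementation (which is more sophisticated, and would also need something like a FinalizationRegistry to prune dead entries); the class name SketchWeakLRU and its internals are invented for illustration. The key idea: eviction downgrades an entry to a WeakRef instead of deleting it, so it stays opportunistically readable until GC reclaims it.

```javascript
// Illustrative sketch only -- not the library's real code.
class SketchWeakLRU {
  constructor(cacheSize) {
    this.cacheSize = cacheSize;
    this.map = new Map(); // key -> value object, or WeakRef after eviction
  }
  setValue(key, value) {
    this.map.delete(key); // delete+set moves the key to the end (most recent)
    this.map.set(key, value);
    // Only strongly held entries count against the size limit
    let strong = 0;
    for (const v of this.map.values()) if (!(v instanceof WeakRef)) strong++;
    if (strong > this.cacheSize) {
      // Downgrade the least recently used strong entry to a WeakRef
      for (const [k, v] of this.map) {
        if (!(v instanceof WeakRef)) {
          this.map.set(k, new WeakRef(v)); // set on an existing key keeps its position
          break;
        }
      }
    }
  }
  getValue(key) {
    const entry = this.map.get(key);
    if (entry === undefined) return undefined;
    if (entry instanceof WeakRef) return entry.deref(); // may be undefined after GC
    return entry;
  }
}

const cache = new SketchWeakLRU(2);
cache.setValue('a', { n: 1 });
cache.setValue('b', { n: 2 });
cache.setValue('c', { n: 3 }); // 'a' is downgraded to a WeakRef, not deleted
// cache.getValue('a') still returns { n: 1 } until the VM collects the object
```

Note that WeakRef.deref() is non-deterministic by design: whether the downgraded entry is still readable depends entirely on when the garbage collector runs.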

Are you wanting to disable the frequent-access multi-stage promotion to ensure FIFO caching behavior? Or are you wanting to disable the weak referencing deferred deletion behavior?

Thanks for the detailed explanation.
I'm looking for pure LRU behavior where entries stay in the cache as long as it isn't full. When the cache is full and a new entry with a distinct key must be added, the oldest entry (in access order) should be converted to a weak reference, and the new entry always gets added.
So I'm looking for a "souped up" pure LRU cache, where the oldest expiring LRU entry doesn't immediately vanish but is still opportunistically available through the use of weak references. I believe this latter feature is what makes your creation so attractive.
In particular, unless the cache is full, entries that are added should live forever in the cache.

where the oldest expiring LRU entry doesn't immediately vanish but is still opportunistically available through the use of weak references.

Yes, that's the way weak-lru-cache works. I was just pointing out that it doesn't use only a "pure" LRU replacement/expiration policy; it also combines it with a least-frequent-use scheme (LFRU) to improve on the LRU policy for better caching efficiency and scan resistance. I'd recommend you try the default settings, as they should be more efficient than plain LRU. If you do need a way to disable the least-frequent-use partitioning/levels for plain LRU for some reason, I could add support for that.
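To illustrate why combining frequency with recency gives scan resistance, here is a minimal segmented-LRU sketch. This is an invented illustration of the general multi-stage idea, not weak-lru-cache's actual partitioning (the class SketchSegmentedLRU and its two-segment design are assumptions for the example): new entries start in a probationary segment, and a second access promotes them to a hot segment, so a one-off scan of cold keys churns only the probationary segment and cannot flush frequently used entries.

```javascript
// Illustrative sketch only -- not the library's real code.
class SketchSegmentedLRU {
  constructor(probationSize, hotSize) {
    this.probation = new Map(); // stage 0: new, unproven entries
    this.hot = new Map();       // stage 1: entries accessed more than once
    this.probationSize = probationSize;
    this.hotSize = hotSize;
  }
  set(key, value) {
    this.probation.set(key, value);
    if (this.probation.size > this.probationSize) {
      // Evict the least recently inserted probationary entry
      const oldest = this.probation.keys().next().value;
      this.probation.delete(oldest);
    }
  }
  get(key) {
    if (this.hot.has(key)) {
      const v = this.hot.get(key);
      this.hot.delete(key); // delete+set refreshes recency within the hot segment
      this.hot.set(key, v);
      return v;
    }
    if (this.probation.has(key)) {
      const v = this.probation.get(key);
      this.probation.delete(key);
      this.hot.set(key, v); // second access: promote to the hot segment
      if (this.hot.size > this.hotSize) {
        // Demote the hot segment's LRU entry back to probation
        const demotedKey = this.hot.keys().next().value;
        const demotedVal = this.hot.get(demotedKey);
        this.hot.delete(demotedKey);
        this.set(demotedKey, demotedVal);
      }
      return v;
    }
    return undefined;
  }
}

const c = new SketchSegmentedLRU(2, 2);
c.set('hotKey', 1);
c.get('hotKey'); // promoted to the hot segment
for (let i = 0; i < 100; i++) c.set('scan' + i, i); // scan churns only probation
c.get('hotKey'); // still 1: the hot entry survived the scan
```

In a plain LRU, the 100-key scan would have evicted 'hotKey' long before it was read again; the promotion stage is what protects it.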

That makes a lot of sense. I'll try your defaults and then experiment. Will reach out if I have questions afterwards.
Really appreciate your insights and responsiveness. Thank you.