Is it possible to disable the eviction?
erdihu opened this issue
I'd like to start with a thank you for this library. I am evaluating it as a replacement for `IMemoryCache` from Microsoft and wondering whether it is possible to disable automatic trimming/eviction and handle it manually. My understanding is that if I allocate the cache with a capacity large enough that I know it will never be fully utilized, the trimming/eviction process will not run at all. Is that correct?
Scenario:
- I need to cache a maximum of 500,000 items.
- I know for a fact that, due to the application's constraints, this limit will never be exceeded.
- A single thread will perform the add/remove operations; many threads will read from the cache concurrently.
If I create a cache object with `WithCapacity(1_000_000)`, will I effectively disable automatic eviction? Also, are there any downsides to allocating a capacity much larger than the application will ever use?
Your understanding is correct: the eviction policy is inactive until the capacity is exceeded, so nothing will be evicted before that point. The only downside is memory usage.
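For concreteness, a minimal sketch of that setup, assuming the library's builder API (`ConcurrentLruBuilder` with `WithCapacity`; the key/value types here are placeholders for your own):

```csharp
using System;
using BitFaster.Caching.Lru;

class Example
{
    static void Main()
    {
        // Capacity set well above the expected maximum of 500_000 items,
        // so the LRU eviction policy never has a reason to run.
        var cache = new ConcurrentLruBuilder<long, string>()
            .WithCapacity(1_000_000)
            .Build();

        // The single writer thread performs adds/removes...
        cache.AddOrUpdate(42, "value");

        // ...while many reader threads call TryGet concurrently.
        if (cache.TryGet(42, out var value))
        {
            Console.WriteLine(value);
        }

        cache.TryRemove(42);
    }
}
```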
If capacity is 1_000_000, for both `ConcurrentLru` and `ConcurrentLfu` the internal `ConcurrentDictionary` will be pre-allocated to about 100_000 elements (it will choose a prime number close to 10% of the ctor capacity arg) and will grow by doubling in size using the `ConcurrentDictionary` grow algorithm. The cache bookkeeping data structures add about 20% overhead on top of a raw `ConcurrentDictionary`. If memory footprint is a concern it could be worth measuring, but I wouldn't expect it to be of any consequence for one cache with 1 million elements.
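The growth arithmetic described above can be sketched roughly like this (an approximation only: the real `ConcurrentDictionary` rounds bucket counts to primes, and the exact grow trigger differs; this just shows the doubling shape and the ~20% bookkeeping estimate):

```csharp
using System;

class SizingSketch
{
    static void Main()
    {
        int capacity = 1_000_000;   // ctor capacity arg
        int buckets = capacity / 10; // pre-allocated to ~10% of capacity
        int items = 500_000;         // expected maximum item count

        // ConcurrentDictionary grows by doubling as it fills up.
        while (buckets < items)
        {
            buckets *= 2;            // 100_000 -> 200_000 -> 400_000 -> 800_000
        }

        // Cache bookkeeping adds roughly 20% on top of the raw dictionary.
        double withBookkeeping = buckets * 1.2;
        Console.WriteLine($"approx. buckets: {buckets}, with bookkeeping: ~{withBookkeeping}");
    }
}
```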
Thank you for a quick and thorough answer!