We often wonder which caching algorithm to use in which situation. By default we use LFU, but it helps to know when each algorithm actually fits. Here is a summary of the issues we face with the different caching algorithms.
Least Frequently Used (LFU) – LFU works on the principle of counting how many times each key is accessed. On the face of it this looks appealing, but as soon as you think about the bookkeeping required on every access, across threads, to maintain those counters, you are implicitly inviting performance and concurrency problems. Under high load, these issues make a real difference.
The next issue: when a new key is added to the cache, it is implicitly the least frequently used on the next eviction, since its access count is just one. If you try to work around this by configuring a minimum count, you invite another problem, where some rarely accessed objects never get evicted because they always stay below the minimum threshold.
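The two LFU problems above can be seen in even a minimal counting sketch. The class and method names below are illustrative, not from any framework; note that every read takes a lock to update a counter, and that a freshly inserted key (count of one) is the immediate eviction candidate.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal LFU sketch (illustrative only). Every get() increments a
// per-key counter under a lock; eviction does an O(n) scan for the
// smallest counter. Both choices are where LFU's overhead lives.
public class LfuCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> counts = new HashMap<>();

    public LfuCache(int capacity) {
        this.capacity = capacity;
    }

    public synchronized V get(K key) {
        if (!values.containsKey(key)) return null;
        counts.merge(key, 1L, Long::sum); // bookkeeping on every read
        return values.get(key);
    }

    public synchronized boolean containsKey(K key) {
        return values.containsKey(key);
    }

    public synchronized void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            // Evict the least frequently used key. A brand-new key
            // inserted just before this scan would be the first victim.
            K victim = null;
            long min = Long.MAX_VALUE;
            for (Map.Entry<K, Long> e : counts.entrySet()) {
                if (e.getValue() < min) {
                    min = e.getValue();
                    victim = e.getKey();
                }
            }
            values.remove(victim);
            counts.remove(victim);
        }
        values.put(key, value);
        counts.put(key, 1L);
    }
}
```

A newly put key starts with count one, so until it is read a few times it sits at the bottom of the frequency order, which is exactly the new-key problem described above.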
Least Recently Used (LRU) – This algorithm discards the least recently used item first. The JDK already provides a suitable data structure, LinkedHashMap; LRU is generally implemented with it and is relatively more efficient than LFU. But here too we have a problem: an item accessed a while ago may still be accessed more often overall than others currently in the cache, yet it will be evicted first. So LRU may not be as efficient as you would like. Even so, it is the default eviction algorithm in most caching frameworks.
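The LinkedHashMap-based approach mentioned above can be sketched in a few lines. This is a minimal, non-concurrent sketch; the capacity values are arbitrary, and a production cache would add synchronization on top.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch on top of LinkedHashMap's access-order mode.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true: iteration order follows access recency,
        // so the eldest entry is always the least recently used one.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by LinkedHashMap after each put; returning true here
        // evicts the least recently used entry once we exceed capacity.
        return size() > capacity;
    }
}
```

For example, with a capacity of two, putting "a" and "b", reading "a", and then putting "c" evicts "b", since "a" was touched more recently.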
Adaptive Replacement Cache (ARC) – ARC continuously balances between LRU and LFU to improve on the results of both, and is supposed to be the most efficient of the three.
For further reading, take a look at http://en.wikipedia.org/wiki/Cache_algorithms, http://www.almaden.ibm.com/cs/people/dmodha/ARC.pdf and http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf