| Venue | Category |
|---|---|
| HotStorage'20 | Cache |
It's Time to Revisit LRU vs. FIFO

1. Summary
    - Motivation of this paper
    - LRU vs. FIFO
    - Implementation and Evaluation
2. Strength (Contributions of the paper)
3. Weakness (Limitations of the paper)
4. Some Insights (Future work)
Motivation
Conventional opinion
while FIFO is much easier to implement, the improved hit ratio of LRU is assumed to outweigh this simplicity
Changes
new caches, such as front-ends to cloud storage, operate at very large scales, which makes keeping all cache metadata in RAM no longer feasible
new workloads have emerged that possess different characteristics
Setting
An object storage service stores large, immutable objects in the cloud using a RESTful API.
FIFO
drawback: an item that was reused just recently might still be evicted from the cache if it was inserted a long time ago.
management simplicity: hold all the items in the cache in a queue and evict the next in line
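The queue-based management described above can be sketched as follows. This is a minimal illustration (class and variable names are my own, not from the paper): the key property is that a hit never touches the queue, so FIFO needs no metadata update on the hot path.

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO cache sketch: all items sit in one queue,
    and eviction always takes the next in line (oldest insertion).
    A hit does NOT update any metadata, which is what makes FIFO
    cheap to manage at large scale."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = OrderedDict()  # insertion order = eviction order

    def get(self, key):
        # Hit: return the value without touching insertion order.
        return self.queue.get(key)

    def put(self, key, value):
        if key not in self.queue and len(self.queue) >= self.capacity:
            self.queue.popitem(last=False)  # evict the head of the queue
        self.queue[key] = value
```

This also exhibits FIFO's drawback noted below: a recently reused item is still evicted if it was inserted long ago.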
key point: LRU can no longer be assumed to be better than FIFO.
Two main trends
A new scale to caches
New workloads
a large public cloud object store (a new semantic behavior)
previous empirical caching studies may no longer hold for this new workload.
158 TB of traces from IBM Cloud Object Storage (COS)
Large caches and cost model
cache metadata
the metadata refers to the information stored in order to find data in the cache and choose the right item to evict from the cache according to the cache policy.
LRU: requires updates of the metadata upon a cache hit
FIFO: upon a cache miss, it evicts one or more objects from the head of the queue and inserts an object at the tail
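The metadata-update difference can be made concrete with a minimal LRU sketch (names are my own, for contrast with the FIFO description above): unlike FIFO, every hit must reorder the item, and at cloud scale that metadata no longer fits in RAM, so each hit can imply extra I/O.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: a hit must update metadata by moving
    the item to the recently-used end. This per-hit update is
    exactly what FIFO avoids."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # order = recency, LRU at the head

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # metadata update on every hit
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        self.items[key] = value
```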
cost model
a rough estimation of overall latency in an ideal setting where I/Os are performed sequentially
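The idea of comparing policies by cost rather than pure hit rate can be sketched as a simple expected-latency formula. The latency numbers and the `average_cost` helper below are illustrative assumptions of mine, not figures from the paper; the point is only that a policy with a slightly lower hit ratio can still win overall if each hit is cheaper to serve.

```python
def average_cost(hit_ratio, hit_latency_us, miss_latency_us):
    """Rough per-request latency in an idealized setting:
    hits are served from the cache, misses go to backing storage."""
    return hit_ratio * hit_latency_us + (1 - hit_ratio) * miss_latency_us

# Hypothetical numbers: LRU pays extra per-hit cost for metadata
# updates (e.g. writes to flash-resident metadata), FIFO does not.
lru_cost = average_cost(0.80, hit_latency_us=120, miss_latency_us=2000)
fifo_cost = average_cost(0.79, hit_latency_us=90, miss_latency_us=2000)
```

Under these assumed numbers FIFO's cheaper hits outweigh its slightly lower hit ratio, which mirrors the paper's argument for comparing costs, not just hit rates.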
Trace
Evaluation
Hit rate comparison
LRU is often slightly better than FIFO in terms of pure hit rate
Cost comparison
on the new traces, and under new media and cache settings, FIFO is actually a better choice than LRU
Smart paging
Frequency + recency + locality