
1 And in Conclusion…

Replacement policies:

Table 1: Comparison of cache replacement policies.

| Feature | LRU | FIFO | Random |
| --- | --- | --- | --- |
| Description | Replace the entry that has not been used for the longest time | Replace the oldest block in the set (queue) | Replace a random block |
| Implementation | Bit counters for each cache block; see quick check | FIFO queue or similar approximation | Flip a figurative hardware coin |
| Advantage | Exploits temporal locality | Reasonable approximation to LRU | Easy to implement |
| Disadvantage | Complicated hardware to keep track of access history | — | Terrible if the workload leverages high temporal locality |

Write policies:

Table 2: Comparison of cache write policies.

| Feature | Write-through | Write-back |
| --- | --- | --- |
| Description | Write to the cache and memory at the same time | Write to the cache; only write back to memory when the block is replaced |
| Implementation | Easy | More complicated |
| “Synchronized” with memory?[6] | Yes | No; sometimes the cache will have the most current copy of the data |
| Optimization | — | Include a dirty bit to only write back modified blocks |
| Hit time for writes | Longer: each write to a cache block also requires a write to memory | Faster: multiple writes within a cache block require only one write to memory (which happens only on replacement) |
| Miss penalty | Shorter: “read” misses are fast and never result in a write to memory | Longer on average: a “read” miss has variable time, since a write to memory may also be needed if the block being replaced is dirty |
| AMAT | Longer | Shorter |
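The traffic difference between the two policies can be sketched with a tiny (hypothetical, one-block) write-only cache model that counts how many writes actually reach memory; the function name and trace are invented for illustration.

```python
def writes_to_memory(trace, policy, capacity=1):
    """Count memory writes for a tiny fully associative cache where every
    access in `trace` is a write to the named block. Illustrative sketch."""
    cache = {}                             # block -> dirty bit
    mem_writes = 0
    for block in trace:
        if block not in cache:
            if len(cache) == capacity:
                victim, dirty = cache.popitem()   # evict a resident block
                if policy == "write-back" and dirty:
                    mem_writes += 1               # flush dirty block on eviction
            cache[block] = False
        if policy == "write-through":
            mem_writes += 1                # every write also goes to memory
        else:
            cache[block] = True            # write-back: just set the dirty bit
    return mem_writes

# Four writes to block 0, then block 1 evicts it from the one-block cache:
trace = [0, 0, 0, 0, 1]
print(writes_to_memory(trace, "write-through"))  # 5
print(writes_to_memory(trace, "write-back"))     # 1
```

The repeated writes to block 0 cost one memory write each under write-through, but collapse into a single dirty-block flush under write-back.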

2 Textbook Readings

P&H 5.1-5.4, 5.8, 5.9, 5.13

3 Additional References

Amazing Illustrations by Ketrina (Yim) Thompson: CS Illustrated Cache Handouts

4 Exercises

Check your knowledge!

4.1 Short Exercises

Solution to Exercise 1

False.

Solution to Exercise 2

True. A direct-mapped cache needs to index every block of the cache, whereas a 4-way set associative cache needs to index every set of 4 blocks. For the same total cache size and block size, the 4-way set associative cache has one-quarter as many sets, and therefore 2 fewer index bits than the direct-mapped cache.
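The index-bit arithmetic can be checked numerically; the 4 KiB capacity and 64-byte block size below are hypothetical, picked only to make the counts concrete.

```python
import math

def index_bits(cache_bytes, block_bytes, ways):
    """Index bits = log2(number of sets), for a cache of the given geometry."""
    num_blocks = cache_bytes // block_bytes
    num_sets = num_blocks // ways          # each set holds `ways` blocks
    return int(math.log2(num_sets))

# Hypothetical 4 KiB cache with 64-byte blocks (64 blocks total):
print(index_bits(4096, 64, 1))  # direct-mapped: 64 sets -> 6 index bits
print(index_bits(4096, 64, 4))  # 4-way: 16 sets -> 4 index bits (2 fewer)
```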

Solution to Exercise 3

False. Similar to the previous question, the impact depends on the program. If a program iterates through contiguous memory (like an array), having larger block sizes with fewer blocks may be beneficial, as each block contains more contiguous data. For instance, if Cache A has 10 blocks and a block size of 8 bytes while Cache B has 20 blocks and a block size of 4 bytes, and we loop through an array of 80 characters, Cache A will experience 10 cache misses and 70 hits, while Cache B will have 20 misses and 60 hits.
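The miss counts in the worked example can be reproduced with a short sketch, assuming sequential byte accesses, a cold cache, and a cache large enough that no block is evicted during the loop.

```python
def misses_and_hits(num_bytes, block_size):
    """Sequential byte accesses over an array: a cold miss on the first byte
    of each block, a hit on every other byte (no evictions assumed)."""
    misses = sum(1 for addr in range(num_bytes) if addr % block_size == 0)
    return misses, num_bytes - misses

print(misses_and_hits(80, 8))  # Cache A (8-byte blocks): (10, 70)
print(misses_and_hits(80, 4))  # Cache B (4-byte blocks): (20, 60)
```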