1 Learning Outcomes
Define hit rate, hit time, miss rate, and miss penalty.
Use the average memory access time (AMAT) formula to compare multi-level cache designs.
Because performance is the major reason for a memory hierarchy, the time to service hits or misses is important. We therefore define the following terminology in Table 1:
Table 1: Key cache terminology
| Request Outcome | Rate | Time |
|---|---|---|
| Cache Hit | Hit rate: fraction of accesses that hit in the cache. | Hit time: time (latency) to access the cache, including the time needed to determine whether the access is a hit or a miss. |
| Cache Miss | Miss rate: 1 - hit rate. | Miss penalty: time to replace a line with the corresponding line from a lower level of the memory hierarchy. |
Because the cache is smaller and built using faster memory parts, the hit time will be much smaller than the miss penalty, which includes the time to access the next level in the hierarchy.
2 Average Memory Access Time
The time to access data for both hits and misses affects performance. Designers sometimes use average memory access time (AMAT) as a way to compare cache designs. From P&H 5.4:
> Average memory access time is the average time to access memory considering both hits and misses and the frequency of different accesses.
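Written out, the AMAT formula is:

```latex
\mathrm{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty}

% With an L2 cache, the L1 miss penalty is itself the time to consult L2,
% so the formula nests:
\mathrm{AMAT} = \mathrm{HT}_{L1} + \mathrm{MR}_{L1} \times
                \left( \mathrm{HT}_{L2} + \mathrm{MR}_{L2} \times \mathrm{MP}_{L2} \right)
```

Here MR_L2 is the local miss rate of L2: the fraction of L1 misses that also miss in L2.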
We will use the following assumptions in this course:

- On a cache miss, the total time to retrieve the data is the hit time plus the miss penalty.
- The miss rate of a lower-level cache (e.g., L2) is the fraction of misses from a higher-level cache (e.g., L1) that also miss in this lower-level cache.
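These assumptions make AMAT easy to compute. A minimal sketch in Python comparing an L1-only design against an L1 + L2 design; the latencies and miss rates are illustrative assumptions, not measurements:

```python
def amat(hit_time, miss_rate, miss_penalty):
    # AMAT = hit time + miss rate * miss penalty (all times in cycles)
    return hit_time + miss_rate * miss_penalty

# L1-only design: misses go straight to main memory.
l1_only = amat(hit_time=1, miss_rate=0.10, miss_penalty=100)   # 1 + 0.10*100 = 11

# Two-level design: the L1 miss penalty is the time to consult L2,
# where L2's miss rate is local (fraction of L1 misses that also miss in L2).
l2_time = amat(hit_time=10, miss_rate=0.50, miss_penalty=100)  # 10 + 0.50*100 = 60
two_level = amat(hit_time=1, miss_rate=0.10, miss_penalty=l2_time)  # 1 + 0.10*60 = 7

print(f"L1-only AMAT: {l1_only:.1f} cycles, L1+L2 AMAT: {two_level:.1f} cycles")
```

Even with these made-up numbers, adding an L2 cache shrinks the effective miss penalty seen by L1, which is where the AMAT savings come from.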
In the example worked through in lecture, the L1 + L2 cache design is 4 times as fast (in AMAT) as the L1-only cache design!
3 Reducing Miss Rate
We mentioned that AMAT is used to compare cache designs. Because miss penalties are large, the miss rate dominates AMAT. There are two main ways to reduce it:

- Build a larger cache. This is limited by cost and physical technology capabilities, and bigger caches are also slower. Ideally, the caches closest to the processor (like the L1 cache) have a hit time below the processor's cycle time.
- Place lines in the cache in a way that exploits the temporal and spatial locality of the average program.
The latter technique is the core of cache design. Up next!
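To build intuition for why placement and access patterns matter, here is a toy direct-mapped cache simulator; the cache geometry and the two access patterns are illustrative assumptions, not any particular machine:

```python
def miss_count(addresses, num_lines=64, line_size=4):
    """Count misses in a direct-mapped cache: each block maps to exactly
    one line (index = block % num_lines), identified by its tag."""
    cache = [None] * num_lines  # tag stored per line, None = empty
    misses = 0
    for addr in addresses:
        block = addr // line_size
        index = block % num_lines
        tag = block // num_lines
        if cache[index] != tag:
            misses += 1
            cache[index] = tag  # replace the line on a miss
    return misses

# Sequential scan: good spatial locality -- each fetched line serves
# line_size consecutive accesses before the next block is needed.
sequential = list(range(1024))

# Large power-of-two stride: every access maps to the same cache line
# with a different tag, so the line is evicted before it can be reused.
strided = [(i * 256) % 1024 for i in range(1024)]

print(miss_count(sequential), "misses (sequential)")
print(miss_count(strided), "misses (strided)")
```

The sequential pattern misses only once per line (256 misses for 1024 accesses), while the strided pattern misses on every access; placement policies with more associativity, covered next, are one way to soften such conflict behavior.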