You have a 32-bit OS with 2^12-byte (4KB) pages. This means the number of virtual pages is 2^32 / 2^12 = 2^20. Roughly, we're computing 4 billion / 4 thousand and getting about 1 million. Your virtual memory (SSD) runs at 250,000 ns per access. Your main memory is only 16KB (4 pages) and runs at 70 ns per access. Your cache is only 8KB (2 pages) and runs at 6 ns per access. Given a program that accesses the following pages (these are not addresses this time, but rather page numbers), give the cache hit ratio and the main memory hit ratio. In addition, compute the average access time. Lastly, compute the average access time when ignoring 5 cache misses and 5 main memory misses. Replace pages using a least-recently-used (LRU) policy. You may assume that each of these accesses is a read, so the dirty bit does not come into play for this program.

Page accesses by the program:

1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 3 3 3 3 3 5 5 5 5 5 4 4 4 4 4 1 1 1 1 1

There are 55 accesses in total. Discounting 5 misses means you're only considering 50 accesses.

Answer: For each access I will write H for a hit or M for a miss in the cache, followed by H for a hit or M for a miss in main memory. Thus, the first access is a miss in the cache and a miss in main memory and is written MM. The second access is a hit in the cache and is written just H, because main memory is never touched.

MM H H H H MM H H H H MM H H H H H H H H H H H H H H MM H H H H H H H H H H H H H H MM H H H H MH H H H H MM H H H H

Given this, there are 7 cache misses out of the 55 accesses, and 6 main memory misses out of the 7 main memory accesses.

To compute the average access time while discounting 5 cache misses and 5 main memory misses, we use a 48/50 hit ratio for the cache (so the cache miss ratio is 1 - 48/50 = 2/50) and a 1/2 hit ratio for main memory (so its miss ratio is also 1/2):

48/50 * 6 ns + 2/50 * (1/2 * 70 ns + 1/2 * 250,000 ns) = 5007.16 ns

(To do this without the discounts, use a 48/55 cache hit ratio and a 1/7 main memory hit ratio.)

If all of these accesses were writes, then main memory would also have to be accessed to write back the dirty blocks evicted from the cache. Thus, once the cache is full, every cache miss results in two main memory accesses: one to write out the evicted dirty block and one to bring in the new block. Each cache miss is now shown with three symbols: the cache result (M), the write-back of the evicted block to main memory (- if the cache still has a free frame and no write-back is needed, otherwise H or M), and the fetch of the new block from main memory (H or M).

M-M H H H H M-M H H H H MHM H H H H H H H H H H H H H H MHM H H H H H H H H H H H H H H MHM H H H H MHH H H H H MHM H H H H

(On the last miss, the write-back of page 5 hits because page 5 is still resident in main memory; only the fetch of page 1 misses, which evicts page 2.)

In this case there are more main memory accesses because the evicted dirty blocks must be written back to the next level down. There are 12 main memory accesses in total, with 6 hits and 6 misses. Discounting 5 misses gives a 6/7 main memory hit ratio. This makes our average access time lower (better), because a larger share of the main memory accesses are hits. However, to the user things will still be slower, because each cache miss now takes longer to resolve in order to deal with the dirty block. For example, take the best miss scenario, MHH: in the read case this access was MH, or 70 ns; in the write case it is MHH, or 70 ns + 70 ns = 140 ns (probably less in practice because of parallelization, but that is the serialized cost). We've been treating each of these as separate accesses, but to the user it is only one memory lookup. This analysis really shows how damaging the last run of accesses is to the average access time.
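The hit counts and the undiscounted average access time above can be cross-checked mechanically. Below is a small Python sketch (not part of the original answer; the LRU class and variable names are just illustrative) that replays the read-only access sequence through a 2-page LRU cache and a 4-page LRU main memory and charges each access the cost of the level that finally serves it, matching the structure of the weighted-average formula above.

```python
from collections import OrderedDict

# Page access sequence from the problem statement (page numbers, not addresses).
ACCESSES = [1]*5 + [2]*5 + [3]*5 + [2]*5 + [3]*5 + [4]*10 + [3]*5 + [5]*5 + [4]*5 + [1]*5

CACHE_T, MEM_T, DISK_T = 6, 70, 250_000  # access times in ns

class LRU:
    """A fixed number of page frames with least-recently-used replacement."""
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()

    def access(self, page):
        """Return True on a hit; on a miss, bring the page in, evicting the LRU page."""
        if page in self.pages:
            self.pages.move_to_end(page)
            return True
        if len(self.pages) >= self.frames:
            self.pages.popitem(last=False)  # evict the least recently used page
        self.pages[page] = None
        return False

cache, memory = LRU(2), LRU(4)  # 8KB cache and 16KB main memory with 4KB pages
cache_hits = mem_hits = mem_misses = 0
total_ns = 0

for page in ACCESSES:
    if cache.access(page):
        cache_hits += 1
        total_ns += CACHE_T
    elif memory.access(page):
        mem_hits += 1
        total_ns += MEM_T
    else:
        mem_misses += 1
        total_ns += DISK_T

n = len(ACCESSES)
print(f"cache hits: {cache_hits}/{n}")                          # expect 48/55
print(f"main memory: {mem_hits} hit(s), {mem_misses} misses")   # expect 1 hit, 6 misses
print(f"average access time (no discounting): {total_ns / n:.2f} ns")
```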
Main memory misses devastate the average access time because of the steep jump in cost between a main memory access (70 ns) and a secondary storage access (250,000 ns).
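To make that jump concrete, here is a short sketch (an illustration built only from the totals in the trace above, not part of the original answer) that recomputes the undiscounted average access time as the number of misses among the 7 main memory accesses is varied:

```python
CACHE_T, MEM_T, DISK_T = 6, 70, 250_000   # ns
N, CACHE_MISSES = 55, 7                   # totals from the read-only trace

for mm_misses in range(CACHE_MISSES + 1):
    mm_hits = CACHE_MISSES - mm_misses
    total_ns = (N - CACHE_MISSES) * CACHE_T + mm_hits * MEM_T + mm_misses * DISK_T
    print(f"{mm_misses} main memory miss(es) -> {total_ns / N:8.1f} ns average")
```

Going from 0 to 6 main memory misses moves the average access time from about 14 ns to about 27,000 ns, even though only 6 of the 55 accesses ever reach the SSD.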