buzzword [2015/03/02 19:15] kevincha
buzzword [2015/03/04 19:21] kevincha [Lecture 19 (03/02 Mon.)]
  * Classification of cache misses
===== Lecture 19 (03/02 Mon.) =====
  * Subblocks
  * Victim cache
    * Small, but fully assoc. cache behind the actual cache
    * Caches recently evicted (victim) cache blocks
    * Prevents ping-ponging between conflicting blocks
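A minimal C sketch of the victim-cache idea above (the entry count, FIFO replacement, and all names are assumptions for illustration, not the lecture's design): the small fully associative structure is probed on an L1 miss, and blocks that L1 evicts land in it instead of being dropped.

```c
#include <stdbool.h>
#include <stdint.h>

#define VC_ENTRIES 4   /* small and fully associative; size is an assumption */

static uint64_t vc_tag[VC_ENTRIES];
static bool     vc_valid[VC_ENTRIES];
static int      vc_next = 0;   /* FIFO replacement, a simplification */

/* On an L1 miss, probe every entry in parallel (fully associative).
 * A hit means the block was evicted recently: move it back into L1,
 * which is what prevents two conflicting blocks from ping-ponging. */
bool victim_probe(uint64_t block_addr) {
    for (int i = 0; i < VC_ENTRIES; i++)
        if (vc_valid[i] && vc_tag[i] == block_addr) {
            vc_valid[i] = false;    /* block returns to L1 */
            return true;
        }
    return false;
}

/* When L1 evicts a block, it is inserted here. */
void victim_insert(uint64_t block_addr) {
    vc_tag[vc_next] = block_addr;
    vc_valid[vc_next] = true;
    vc_next = (vc_next + 1) % VC_ENTRIES;
}
```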
  * Pseudo associativity
    * Simpler way to implement an associative cache
  * Skewed assoc. cache
    * Different hashing functions for each way
  * Restructure data access patterns
    * Order of loop traversal
    * Blocking
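The loop-traversal and blocking bullets can be illustrated with a short C sketch (array and tile sizes are arbitrary assumptions): the first function walks a row-major array in the cache-friendly order, and the second tiles a transpose so each tile is reused while it is still cache-resident.

```c
#include <stddef.h>

#define N 64   /* matrix dimension; arbitrary choice for illustration */
#define B 16   /* tile size, assumed to fit comfortably in cache */

/* Row-major traversal: the inner loop walks consecutive addresses,
 * so each fetched cache line is fully used before moving on. */
double sum_row_major(double a[N][N]) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Blocked (tiled) transpose: restructure the access pattern so each
 * B x B tile of both arrays stays cache-resident while it is reused. */
void transpose_blocked(double a[N][N], double t[N][N]) {
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t i = ii; i < ii + B; i++)
                for (size_t j = jj; j < jj + B; j++)
                    t[j][i] = a[i][j];
}
```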
  * Memory level parallelism
    * Cost per miss of a parallel cache miss is lower compared to serial misses
  * MSHR (miss status holding register)
    * Keeps track of pending cache misses
    * Think of this as the load/store buffer-ish for the cache
    * What information goes into the MSHR?
    * When do you access the MSHR?
  * Memory banks
  * Shared caches in multi-core processors
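One way to picture what goes into an MSHR and when it is accessed is the following C sketch (entry and waiter counts, field names, and the stall policy are illustrative assumptions): each entry records a pending miss's block address plus the instructions waiting on it, and a second miss to the same block merges into the existing entry instead of sending another memory request.

```c
#include <stdbool.h>
#include <stdint.h>

#define MSHR_ENTRIES 8   /* sizes are assumptions for illustration */
#define MAX_WAITERS  4

/* One waiting instruction: which register to fill, which bytes of the block. */
struct waiter { int dest_reg; int block_offset; };

/* One pending miss: the block address plus everyone waiting on it. */
struct mshr_entry {
    bool     valid;
    uint64_t block_addr;
    int      num_waiters;
    struct waiter waiters[MAX_WAITERS];
};

static struct mshr_entry mshr[MSHR_ENTRIES];

/* Accessed on a cache miss: merge into an existing entry for the same
 * block (a secondary miss), or allocate a new one (a primary miss).
 * Returns false if the MSHR is full and the pipeline must stall. */
bool mshr_on_miss(uint64_t block_addr, int dest_reg, int block_offset) {
    struct mshr_entry *free_e = 0;
    for (int i = 0; i < MSHR_ENTRIES; i++) {
        if (mshr[i].valid && mshr[i].block_addr == block_addr) {
            if (mshr[i].num_waiters == MAX_WAITERS) return false;
            mshr[i].waiters[mshr[i].num_waiters++] =
                (struct waiter){ dest_reg, block_offset };
            return true;                      /* merged, no new request */
        }
        if (!mshr[i].valid && !free_e) free_e = &mshr[i];
    }
    if (!free_e) return false;                /* structural stall */
    *free_e = (struct mshr_entry){ true, block_addr, 1,
                                   { { dest_reg, block_offset } } };
    return true;                              /* send request to memory */
}

/* Accessed when the fill returns: wake the waiters and free the entry.
 * Returns how many waiting instructions were serviced. */
int mshr_on_fill(uint64_t block_addr) {
    for (int i = 0; i < MSHR_ENTRIES; i++)
        if (mshr[i].valid && mshr[i].block_addr == block_addr) {
            int n = mshr[i].num_waiters;      /* forward data to each waiter */
            mshr[i].valid = false;
            return n;
        }
    return 0;
}
```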
===== Lecture 20 (03/04 Wed.) =====
  * Virtual vs. physical memory
    * The system's management of memory
    * Benefits
    * Problem: physical memory has limited size
    * Mechanisms: indirection, virtual addresses, and translation
  * Demand paging
    * Physical memory as a cache
  * Tasks of system SW for VM
  * Serving a page fault
  * Address translation
    * Page table
      * PTE (page table entry)
  * Page replacement algorithms
    * CLOCK algo.
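The CLOCK approximation of LRU can be sketched in a few lines of C (the frame count and names are assumptions): each frame has a reference bit set on access, and on a miss the hand sweeps around, giving each referenced frame a second chance (clearing its bit) until it finds one with the bit clear to evict.

```c
#include <stdbool.h>

#define NUM_FRAMES 4   /* tiny memory for illustration */

static int  frame_page[NUM_FRAMES];   /* which page each frame holds; -1 = free */
static bool ref_bit[NUM_FRAMES];      /* set on access, cleared by the hand */
static int  hand = 0;                 /* the clock hand */

void clock_init(void) {
    for (int i = 0; i < NUM_FRAMES; i++) { frame_page[i] = -1; ref_bit[i] = false; }
    hand = 0;
}

/* Access a page; returns the frame it ends up in, evicting if needed. */
int clock_access(int page) {
    for (int i = 0; i < NUM_FRAMES; i++)          /* hit: just set the ref bit */
        if (frame_page[i] == page) { ref_bit[i] = true; return i; }
    for (;;) {                                    /* miss: sweep the hand */
        if (frame_page[hand] == -1 || !ref_bit[hand]) {
            int f = hand;
            frame_page[f] = page;                 /* evict + install */
            ref_bit[f] = true;
            hand = (hand + 1) % NUM_FRAMES;
            return f;
        }
        ref_bit[hand] = false;                    /* second chance */
        hand = (hand + 1) % NUM_FRAMES;
    }
}
```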
  * Inverted page table
  * Page size trade-offs
  * Protection
  * Multi-level page tables
    * x86 implementation of the page table
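A simplified two-level walk in the style of 32-bit x86 paging (10-bit directory index, 10-bit table index, 12-bit page offset) might look like the C below; modeling physical memory as a plain word array, and the function names, are purely illustrative assumptions.

```c
#include <stdint.h>

#define PTE_PRESENT 0x1u
#define FRAME_MASK  0xFFFFF000u   /* 4 KB-aligned frame base address */

/* Simplified two-level page-table walk. 'physmem' models physical memory
 * as an array of 32-bit words (an illustrative assumption, not hardware).
 * Returns the physical address, or (uint64_t)-1 to signal a page fault. */
uint64_t walk(const uint32_t *physmem, uint32_t pgdir_base, uint32_t va) {
    uint32_t pde = physmem[(pgdir_base >> 2) + (va >> 22)];    /* level 1 */
    if (!(pde & PTE_PRESENT)) return (uint64_t)-1;             /* page fault */
    uint32_t pt_base = pde & FRAME_MASK;
    uint32_t pte = physmem[(pt_base >> 2) + ((va >> 12) & 0x3FFu)]; /* level 2 */
    if (!(pte & PTE_PRESENT)) return (uint64_t)-1;
    return (uint64_t)((pte & FRAME_MASK) | (va & 0xFFFu));     /* frame | offset */
}
```

The two present-bit checks are where a page fault would trap to the system software mentioned above.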
  * TLB
    * Handling misses
  * When to do address translation?
  * Homonyms and synonyms
    * Homonym: the same VA maps to different PAs in different processes
    * Synonym: multiple VAs map to the same PA
      * Shared libraries, shared data, copy-on-write
  * Virtually indexed vs. physically indexed
  * Virtually tagged vs. physically tagged
  * Virtually indexed, physically tagged (VIPT)
    * Can these create problems when we have a cache?
    * How to eliminate these problems?
      * Page coloring
  * Interaction between cache and TLB
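Whether a virtually indexed, physically tagged cache avoids the synonym problem comes down to a bit-count check: if the set-index and line-offset bits all fall within the page offset, the virtual and physical index are identical, so the cache and TLB can be accessed in parallel safely. A small C sketch of that check (function names are assumptions; sizes are assumed powers of two):

```c
#include <stdint.h>

/* Integer log2 for power-of-two sizes (an assumed helper). */
static int log2u(uint32_t x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

/* Returns 1 if a VIPT cache of the given geometry is free of the synonym
 * (aliasing) problem for the given page size: the bits below the tag
 * (index + line offset, i.e. log2 of one way's size) must fit inside
 * the page offset. Otherwise, page coloring or more ways are needed. */
int vipt_alias_free(uint32_t cache_bytes, uint32_t ways, uint32_t page_bytes) {
    int bits_below_tag = log2u(cache_bytes / ways);  /* one way's size */
    return bits_below_tag <= log2u(page_bytes);
}
```

For example, with 4 KB pages a 32 KB cache needs at least 8 ways for each way to span exactly one page.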