buzzword [2014/02/24 19:17] rachata
buzzword [2014/03/24 18:16] rachata
  * Intel SSE --> Modern version of MMX
===== Lecture 17 (2/26 Wed.) =====

  * GPU
    * Warp/Wavefront
      * A bunch of threads sharing the same PC
    * SIMT
    * Lanes
    * FGMT + massively parallel
      * Tolerate long latency
    * Warp-based SIMD vs. traditional SIMD
  * SPMD (programming model)
    * A single program operates on multiple data
      * Can have synchronization points
    * Many scientific applications are programmed in this manner
  * Control flow problem (branch divergence)
    * Masking (in a branch, mask off threads that should not execute that path)
      * Lowers SIMD efficiency
    * What if you have layers of branches?
    * Dynamic warp formation
      * Combining threads from different warps to increase SIMD utilization
      * This can cause memory divergence
  * VLIW
    * Wide fetch
    * IA-64
    * Tradeoffs
      * Simple hardware (no dynamic scheduling, no dependency checking within a VLIW instruction)
      * A lot of the load is placed on the compiler
  * Decoupled access/execute
    * Limited form of OoO
    * Tradeoffs
    * How to steer instructions (determine dependencies/stalling)?
  * Instruction scheduling techniques (static vs. dynamic)
  * Systolic arrays
    * Processing elements transform data in chains
    * Developed for image processing (for example, convolution)
    * Staged processing

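The masking idea above can be sketched in plain Python (a toy warp model with made-up lane values, not any real GPU's ISA): all lanes in a warp share one PC, so a divergent branch executes both paths in turn, each under the complementary active mask, and masked-off lanes sit idle.

```python
def execute_warp(values):
    """Toy SIMT execution of: if v is even: v //= 2  else: v = 3*v + 1."""
    taken_mask = [v % 2 == 0 for v in values]   # per-lane branch outcome
    results = list(values)
    # Taken path: only lanes whose mask bit is set actually write a result.
    for lane, active in enumerate(taken_mask):
        if active:
            results[lane] = values[lane] // 2
    # Not-taken path: the complementary mask; lanes idle here are the
    # source of the lowered SIMD efficiency noted above.
    for lane, active in enumerate(taken_mask):
        if not active:
            results[lane] = 3 * values[lane] + 1
    return results
```

With four lanes, `execute_warp([4, 3, 8, 5])` gives `[2, 10, 4, 16]`: the warp takes two passes over the branch even though each lane runs only one path.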
===== Lecture 18 (2/28 Fri.) =====

  * Tradeoffs of VLIW
    * Why does VLIW require static instruction scheduling?
      * Whose job is it?
    * The compiler can rearrange basic blocks/instructions
  * Basic block
    * Benefits of having large basic blocks
    * Entries/Exits
      * Handling entries/exits
  * Trace cache
    * How to ensure correctness?
    * Profiling
      * Fixing up the instruction order to ensure correctness
      * Dealing with multiple entries into the block
      * Dealing with multiple exits from the block
  * Superblock
    * How to form superblocks?
    * Benefits of superblocks
    * Tradeoff between forming and not forming a superblock
    * Ambiguous branch (after profiling, taken/not-taken are equally likely)
    * Cleaning up
    * What scenarios make trace caches/superblocks/profiling less effective?
  * List scheduling
    * Helps figure out which instructions the VLIW machine should fetch
    * Tries to maximize instruction throughput
    * How to assign priorities
    * What if some instructions take longer than others?
  * Block-structured ISA (BS-ISA)
    * Problems with trace scheduling?
    * What types of programs benefit from BS-ISA?
    * How to form blocks in BS-ISA?
      * Combining basic blocks
      * Multiples of merged basic blocks
    * How to deal with entries/exits in BS-ISA?
      * Undo the executed instructions from the entry point, then fetch the new block
    * Advantages over the trace cache
  * Benefits of VLIW + static instruction scheduling
    * Intel IA-64
      * Static instruction scheduling and VLIW

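List scheduling as described above can be sketched as follows (a minimal model with a hypothetical 2-wide VLIW machine; the dependence graph and latencies in the usage example are made up). Priority here is one common choice, the longest latency path from an instruction to any leaf; each cycle the scheduler issues the highest-priority ready instructions up to the issue width.

```python
def list_schedule(deps, latency, width=2):
    """deps: instr -> list of instrs it depends on; latency: instr -> cycles.
    Returns a list of per-cycle issue groups (a toy VLIW schedule)."""
    instrs = list(deps)
    succs = {i: [] for i in instrs}          # invert the dependence edges
    for i, preds in deps.items():
        for p in preds:
            succs[p].append(i)

    prio = {}                                # priority = height in the DAG
    def height(i):
        if i not in prio:
            prio[i] = latency[i] + max((height(s) for s in succs[i]), default=0)
        return prio[i]
    for i in instrs:
        height(i)

    done_at, schedule = {}, []               # done_at: cycle a result is ready
    remaining, cycle = set(instrs), 0
    while remaining:
        # Ready = all predecessors finished by the current cycle.
        ready = [i for i in remaining
                 if all(p in done_at and done_at[p] <= cycle for p in deps[i])]
        ready.sort(key=lambda i: -prio[i])   # highest priority first
        issued = ready[:width]               # fill up to the issue width
        for i in issued:
            done_at[i] = cycle + latency[i]
            remaining.discard(i)
        schedule.append(issued)              # empty slot = stall cycle
        cycle += 1
    return schedule
```

For example, with `deps = {'a': [], 'b': [], 'c': ['a'], 'd': ['a', 'b'], 'e': ['c', 'd']}` and `latency = {'a': 1, 'b': 1, 'c': 2, 'd': 1, 'e': 1}`, the schedule is `[['a', 'b'], ['c', 'd'], [], ['e']]`: the empty cycle is a stall waiting for the 2-cycle `c`, which is exactly the kind of slot a longer-latency-aware priority function tries to hide.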
===== Lecture 19 (3/19 Wed.) =====

  * Ideal cache
    * More capacity
    * Fast
    * Cheap
    * High bandwidth
  * DRAM cell
    * Cheap
    * Senses the perturbation through a sense amplifier
    * Slow and leaky
  * SRAM cell (cross-coupled inverters)
    * Expensive
    * Fast (easier to sense the value in the cell)
  * Memory bank
    * Read access sequence
      * DRAM: Activate -> Read -> Precharge (if needed)
    * What dominates the access latency for DRAM and SRAM?
  * Scaling issue
    * Hard to scale the cell down to smaller sizes
  * Memory hierarchy
    * Prefetching
    * Caching
  * Spatial and temporal locality
    * Caches can exploit these
      * Recently used data is likely to be accessed again
      * Nearby data is likely to be accessed
  * Caching in a pipelined design
  * Cache management
    * Manual
      * Data movement is managed manually
      * Embedded processors
      * GPU scratchpad
    * Automatic
      * HW manages data movement
  * Latency analysis
    * Based on the hit/miss status, the current level's access time, and the next level's access time (on a miss)
  * Cache basics
    * Set/block (line)/placement/replacement/direct-mapped vs. associative caches/etc.
  * Cache access
    * How to access tag and data (in parallel vs. serially)
    * How do the tag and index get used?
    * Modern processors access higher-level caches (L3, for example) serially to save power
  * Cost and benefit of more associativity
    * Given the associativity, which block should be replaced when the set is full?
  * Replacement policy
    * Random
    * Least recently used (LRU)
    * Least frequently used
    * Least costly to refetch
    * etc.
    * How to implement LRU
      * How to keep track of access ordering
        * Complexity increases rapidly
      * Approximate LRU
      * Victim and next-victim policy

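The tag/index/offset breakdown and the LRU bookkeeping above can be sketched in a few lines (a toy set-associative cache; the sizes in the usage example are illustrative, and true LRU is kept as an ordered list per set, most recently used last):

```python
class SetAssocCache:
    def __init__(self, num_sets=4, ways=2, block_bytes=16):
        self.num_sets, self.ways, self.block_bytes = num_sets, ways, block_bytes
        # Each set holds the tags currently resident, ordered oldest-first.
        self.sets = [[] for _ in range(num_sets)]

    def access(self, addr):
        """Returns True on a hit, False on a miss (filling the block)."""
        block = addr // self.block_bytes    # drop the block-offset bits
        index = block % self.num_sets       # index bits select the set
        tag = block // self.num_sets        # tag disambiguates blocks in a set
        resident = self.sets[index]
        if tag in resident:                 # hit: move tag to MRU position
            resident.remove(tag)
            resident.append(tag)
            return True
        if len(resident) == self.ways:      # set full: evict the LRU tag
            resident.pop(0)
        resident.append(tag)                # fill the missing block
        return False
```

With 16-byte blocks and 4 sets, addresses 0, 64, and 128 all map to set 0; after touching all three in a 2-way cache, the block for address 0 has been evicted, so re-accessing it misses. The per-set ordered list is exactly the "keep track of access ordering" cost that grows quickly with associativity and motivates approximate LRU.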
===== Lecture 20 (3/21 Fri.) =====

  * Set thrashing
    * Working set is bigger than the associativity
  * Belady's OPT
    * Is this optimal?
    * Complexity?
  * Similarities between cache and page replacement
    * Number of blocks vs. pages
    * Time to find the block/page to replace
  * Handling writes
    * Write-through
      * Simpler, no consistency issues between levels
    * Write-back
      * Needs a modified (dirty) bit to make sure accesses get the updated data
  * Sectored caches
    * Use subblocks
    * Lower bandwidth requirement
    * More complex
  * Instruction vs. data caches
    * Where to place instructions
      * Unified vs. separate
      * In the first-level cache
  * Cache access
    * First-level access
    * Second-level access
      * When to start the second-level access
      * Performance vs. energy
  * Address translation
  * Homonyms and synonyms
    * Homonym: the same VA maps to different PAs
      * With multiple processes
    * Synonym: multiple VAs map to the same PA
      * Shared libraries, shared data, copy-on-write
      * I/O
    * Can these create problems when we have caches?
    * How to eliminate these problems?
      * Page coloring
  * Interaction between the cache and the TLB
    * Virtually indexed vs. physically indexed
    * Virtually tagged vs. physically tagged
    * Virtually indexed, physically tagged (VIPT)
  * Virtual memory in DRAM
    * Control where data is mapped in channel/rank/bank
      * More parallelism
      * Reduced interference

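Belady's OPT from the list above can be sketched directly (a toy simulator for one fully-associative set; the trace in the usage example is made up): on a miss with a full cache, evict the block whose next use lies furthest in the future, breaking ties in favor of blocks never used again.

```python
def opt_misses(trace, capacity):
    """Count misses under Belady's OPT for a fully-associative cache."""
    cache, misses = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            continue                        # hit
        misses += 1
        if len(cache) == capacity:
            future = trace[i + 1:]
            # Victim = furthest next use; never-reused blocks rank furthest.
            victim = max(cache, key=lambda b: future.index(b)
                         if b in future else len(future) + 1)
            cache.discard(victim)
        cache.add(block)
    return misses
```

On the trace `[1, 2, 3, 1, 2, 4, 1, 2]` with capacity 2 this gives 6 misses, which is the minimum possible (4 compulsory plus 2 unavoidable evictions). The catch, as the "Complexity?" bullet hints, is that OPT needs the whole future of the trace, so real hardware can only approximate it.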
===== Lecture 21 (3/24 Mon.) =====

  * Different parameters that affect cache misses
    * Thrashing
  * Different types of cache misses
    * Compulsory misses
      * Can be mitigated with prefetching
    * Capacity misses
      * More associativity
      * Victim cache
    * Conflict misses
      * Hashing
  * Large blocks vs. small blocks
    * Subblocks
  * Victim cache
    * Small, fully associative cache behind the actual cache
    * Holds recently evicted cache blocks
    * Prevents ping-ponging
  * Pseudo-associativity
    * Simpler way to implement an associative cache
  * Skewed associative caches
    * Different hashing function for each way
  * Restructuring data access patterns
    * Order of loop traversal
    * Blocking
  * Memory-level parallelism
    * A parallel cache miss costs less per miss than serial misses
  * MSHR
    * Keeps track of pending cache misses
    * Think of it as the load/store-buffer-ish structure for the cache
    * What information goes into the MSHR?
    * When do you access the MSHR?
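The MSHR bullets above can be sketched as a small model (a hypothetical interface, with made-up entry counts and request names, not any real machine's MSHR format): a primary miss allocates an entry and sends one fetch; later misses to the same block merge into that entry instead of issuing duplicate fetches; a full MSHR file stalls the pipeline; when the fill returns, all merged requests wake up.

```python
class MSHRFile:
    def __init__(self, num_entries=4, block_bytes=64):
        self.num_entries, self.block_bytes = num_entries, block_bytes
        self.entries = {}   # block number -> list of requests waiting on it

    def on_miss(self, addr, req):
        block = addr // self.block_bytes
        if block in self.entries:           # secondary miss: merge, no fetch
            self.entries[block].append(req)
            return "merged"
        if len(self.entries) == self.num_entries:
            return "stall"                  # no free MSHR: must stall
        self.entries[block] = [req]         # primary miss: allocate + fetch
        return "fetch"

    def on_fill(self, block):
        """Data arrived: free the entry and wake every merged request."""
        return self.entries.pop(block, [])
```

This is also where the memory-level-parallelism bullet connects: the number of MSHR entries bounds how many misses can be outstanding in parallel, so the per-miss cost drops only as long as the MSHR file is not full.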