  * How much did the prefetcher cause misses in the demand misses?
    * Hard to quantify

===== Lecture 26 (4/3 Fri.) =====

  * Feedback directed prefetcher
    * Uses the prefetcher's own results as feedback to the prefetcher
      * Feedback includes accuracy, timeliness, and pollution information
  * Markov prefetcher
    * Prefetches based on the previous miss history
    * Uses a Markov model to predict the next miss address (see the sketch after this list)
    * Pros: can cover arbitrary patterns (easy for linked list or tree traversals)
    * Downside: high cost, and it cannot help with compulsory misses (no history exists yet)
  * Content directed prefetching
    * Identifies pointers in the content of fetched memory (the pointer value is used as the address to prefetch)
    * Not very efficient (hard to figure out which words in a block are pointers)
      * Software can give hints
  * Correlation table
    * Address correlation
  * Execution based prefetchers
    * Helper thread / speculative thread
    * Use another thread to pre-execute a slice of the program
    * Can be software based or hardware based
    * Discovers misses before the main program reaches them (to prefetch data in a timely manner)
    * How do you construct the helper thread?
      * Pre-execute instructions (one example of how to initialize a speculative thread), slide 9
      * Thread-based pre-execution
  * Error tolerance
    * Solutions to errors
      * Tolerate errors
        * New interfaces, new designs
      * Eliminate or minimize errors
        * New technology, system-wide rethinking
      * Embrace errors
        * Map data that can tolerate errors to error-prone areas
  * Hybrid memory systems
    * Combine multiple memory technologies in the same system
  * What can emerging technologies help with?
    * Scalability
    * Lower cost
    * Energy efficiency
  * Possible solutions to the scaling problem
    * Lower-leakage DRAM
    * Heterogeneous DRAM (TL-DRAM, etc.)
    * Adding more functionality to DRAM
    * Denser designs (3D stacking)
    * Different technologies
      * NVM
  * Charge vs. resistive memory
    * How is data written?
    * How is data read?
  * Non-volatile memory
    * Resistive memory
      * PCM
        * Injects current to change the phase of the cell material
        * Scales better than DRAM
        * Multiple bits per cell
          * Wider resistance range
        * No refresh is needed
        * Downsides: latency and write endurance
      * STT-MRAM
        * Injects current to change the polarity
      * Memristor
        * Injects current to change the structure
    * Pros and cons between the different technologies
    * Persistence: data stays there even without power
      * Unified memory and storage management (persistent data structures): single-level store
      * Improves energy and performance
      * Simplifies the programming model
  * Different design options for DRAM + NVM
    * DRAM as a cache in front of NVM
    * Place some data in DRAM and other data in PCM
      * Based on the access characteristics (a placement sketch follows this list)
        * Frequently accessed data that needs lower write latency goes in DRAM

===== Lecture 27 (4/6 Mon.) =====

  * Flynn's taxonomy
  * Parallelism
    * Reduces power consumption (P ~ CV^2F)
    * Better cost efficiency and easier to scale
    * Improves dependability (in case another core is faulty)
  * Different types of parallelism
    * Instruction level parallelism
    * Data level parallelism
    * Task level parallelism
  * Task level parallelism
    * Partition a single, potentially big, task into multiple parallel sub-tasks
      * Can be done explicitly (parallel programming by the programmer)
      * Or implicitly (hardware partitions a single thread speculatively)
    * Or, run multiple independent tasks (still improves throughput, but no single task gets a better speedup; also simpler to implement)
  * Loosely coupled multiprocessors
    * No shared global address space
    * Message passing is used to communicate between the different nodes
    * Simple to manage memory
  * Tightly coupled multiprocessors
    * Shared global address space
    * Need to ensure consistency of data
    * Programming issues
  * Hardware-based multithreading
    * Coarse grained
    * Fine grained
    * Simultaneous: dispatch instructions from multiple threads at the same time
  * Parallel speedup
    * Superlinear speedup
    * Utilization, redundancy, efficiency
  * Amdahl's law
    * Maximum speedup (see the sketch after this list)
    * The parallel portion is not perfect
      * Serial bottleneck
      * Synchronization cost
      * Load imbalance
        * Some threads have more work and need more time to reach the synchronization point
  * Critical sections
    * Enforce mutually exclusive access to shared data
  * Issues in parallel programming
    * Correctness
    * Synchronization
    * Consistency

===== Lecture 28 (4/8 Wed.) =====

  * Ordering of instructions
    * Maintaining memory consistency when there are multiple threads and shared memory
    * Need to ensure the semantics of the program are not changed
    * Making sure shared data is properly locked when used
      * Support mutual exclusion
    * The observed ordering depends on when each processor executes its instructions
    * Debugging is also difficult (non-deterministic behavior)
  * Dekker's algorithm (a runnable sketch follows this list)
    * Inconsistency: the two processors did NOT see the same order of operations to memory
  * Sequential consistency
    * Multiple correct global orders
    * Two issues:
      * Too conservative/strict
      * Performance limiting
  * Weak consistency: global ordering is enforced only at synchronization points
    * The programmer hints where the synchronization points are
    * Memory fences
    * More burden on the programmer
  * Cache coherence
    * Can be done at the software level or the hardware level
    * Snoop-based coherence
      * A simple protocol with two states, broadcasting reads/writes on a bus
    * Maintaining coherence
      * Needs to provide 1) write propagation and 2) write serialization
    * Update vs. invalidate
  * Two cache coherence methods
    * Snoopy bus
      * Bus based, single point of serialization
      * More efficient with a small number of processors
      * Processors snoop other caches' read/write requests to keep cache blocks coherent
    * Directory
      * Single point of serialization per block
      * The directory coordinates coherence
      * More scalable
      * The directory keeps track of where the copies of each block reside
        * Supplies data on a read
        * Invalidates the block on a write
        * Has an exclusive state

===== Lecture 29 (4/10 Fri.) =====

  * MSI coherence protocol
    * The problem: unnecessary broadcasts of invalidations
  * MESI coherence protocol
    * Adds the Exclusive state to MSI: this is the only cached copy and it is clean (see the state-transition sketch after this list)
    * Multiple-invalidation tradeoffs
    * Problem: memory can be unnecessarily updated
    * A possible Owner state (MOESI)
  * Tradeoffs between snooping and directory based coherence protocols
    * Slide 31 has a good summary
  * Directory: data structures
    * Bit vectors vs. linked lists
  * Scalability of directories
    * Size? Latency? Thousands of nodes? Best of both snooping and directory?

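A sketch of the MESI state transitions discussed above, for one block in one cache. Only the state changes are modeled (no data movement and no real bus), and the event names are simplified labels rather than the exact signals from the slides.

<code python>
# States: M (Modified), E (Exclusive), S (Shared), I (Invalid).
# Events seen by one cache for one block:
#   'PrRd'/'PrWr'    - read/write from this cache's own processor
#                      ('PrRd_shared' means another cache supplied the data)
#   'BusRd'/'BusRdX' - another processor's read / read-for-ownership snooped on the bus

MESI = {
    ("I", "PrRd"):        "E",   # no other copy exists: load exclusively, clean
    ("I", "PrRd_shared"): "S",   # another cache supplied the data
    ("I", "PrWr"):        "M",   # read-for-ownership, then write
    ("E", "PrWr"):        "M",   # silent upgrade: no invalidation broadcast needed
    ("E", "BusRd"):       "S",
    ("E", "BusRdX"):      "I",
    ("S", "PrWr"):        "M",   # must broadcast an invalidation to other sharers
    ("S", "BusRdX"):      "I",
    ("M", "BusRd"):       "S",   # supply the dirty data, keep a shared copy
    ("M", "BusRdX"):      "I",   # supply the dirty data, then invalidate
}

def next_state(state, event):
    # Events not listed (e.g. a read hit in S) leave the state unchanged.
    return MESI.get((state, event), state)

state = "I"
for ev in ["PrRd", "PrWr", "BusRd", "BusRdX"]:
    new = next_state(state, ev)
    print(f"{state} --{ev}--> {new}")
    state = new
</code>

The E state is what MESI adds over MSI: a write to an Exclusive block upgrades to M silently, avoiding the unnecessary invalidation broadcast mentioned in the first bullet.
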
===== Lecture 30 (4/13 Mon.) =====

  * In-memory computing
  * Design goals of DRAM
  * DRAM structures
    * Banks
    * Capacitors and sense amplifiers
    * Trade-offs between the number of sense amps and the number of cells
    * Width of the bank I/O vs. the row size
  * DRAM operations
    * ACTIVATE, READ/WRITE, and PRECHARGE (see the row-buffer sketch after this list)
  * Trade-offs
    * Latency
    * Bandwidth: chip vs. rank vs. bank
      * What is the benefit of having 8 chips?
    * Parallelism
  * RowClone
    * What problems does it address?
    * Copying between two rows that share the same sense amplifiers
    * System software support
  * Bitwise AND/OR in DRAM

===== Lecture 31 (4/15 Wed.) =====

  * Application slowdown
    * Interference between different applications
    * An application's performance depends on the other applications it runs with
  * Predictable performance
    * Why is it important?
      * Applications that need predictability
    * How to predict the performance?
      * What information is useful?
      * What needs to be guaranteed?
    * How to estimate the performance when running with others?
      * Easy: just measure the performance while it is running
    * How to estimate the performance when the application runs by itself?
      * Hard if there is no profiling
    * The relationship between memory service rate and performance
      * Key assumption: applications are memory bound
    * Behavior of memory-bound applications
      * With and without interference
      * Memory phase vs. compute phase
  * MISE
    * Estimates slowdown using request service rates (see the sketch after this list)
    * Inaccuracy when using the request service rate alone
      * Non-memory-bound applications
    * Can control slowdown and provide soft guarantees
  * Taking the shared cache into account
    * MISE model + cache resource management
    * Auxiliary tag store
      * Separate tag store for different cores
    * Cache access rate (alone vs. shared) as the metric to estimate slowdown
    * Cache partitioning
      * How to determine the partitioning
        * Utility-based cache partitioning (a greedy sketch follows this list)
        * Others
  * Maximum slowdown and fairness metrics

===== Lecture 32 (4/20 Mon.) =====

  * Heterogeneous systems
    * Asymmetric cores: different types of cores on the same chip
      * Each of these cores is optimized for a different workload/requirement/goal
    * Multiple special purpose processors
    * Flexible: can adapt to workload behavior
    * Disadvantages: complexity and high overhead
    * Examples: CPU-GPU systems, heterogeneity in execution models
  * Heterogeneous resources
    * Example: reliable and non-reliable DRAM in the same system
  * Key problems in modern systems
    * Memory system
    * Efficiency
    * Predictability
    * Asymmetric designs can help solve these problems
  * Serialized code sections
    * A bottleneck of multicore execution
    * Parallelizable vs. serial portion
    * Accelerating critical sections
    * Cache ping-ponging
    * Synchronization latency
  * Symmetric vs. asymmetric design
    * Large cores + small cores
    * Core asymmetry
    * Amdahl's law with heterogeneous cores (see the sketch after this list)
  * Parallel bottlenecks
    * Resource contention
      * Depends on what is running together
  * Accelerated critical sections (ACS)
    * Ship critical sections to the large core
    * Small modifications and low overhead
    * False serialization might become the bottleneck
      * Can reduce parallel throughput
    * Effect on private cache misses and shared cache misses

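For "Amdahl's law with heterogeneous cores", a sketch in the spirit of the usual area-budget analysis (Hill and Marty style): one large core built from r base-core units runs the serial part and joins the small cores for the parallel part. The perf(r) = sqrt(r) assumption and the numbers are illustrative, not from the slides.

<code python>
import math

def perf(r):
    """Assumed single-thread performance of a core built from r base-core units."""
    return math.sqrt(r)

def asymmetric_speedup(p, n, r):
    """Speedup on a chip with an area budget of n base cores:
    one large core of size r (runs the serial part and helps with the
    parallel part) plus (n - r) small cores."""
    serial = (1 - p) / perf(r)
    parallel = p / (perf(r) + (n - r))
    return 1.0 / (serial + parallel)

n, p = 64, 0.9
for r in (1, 4, 16, 64):
    print(f"large core size r={r:2d}: speedup={asymmetric_speedup(p, n, r):6.2f}")
# A moderately sized large core accelerates the serial bottleneck without
# giving up too many small cores for the parallel portion.
</code>
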