    * Advantages?

===== Lecture 12 (2/13 Fri.) =====

  * Renaming
  * Register renaming table
  * Predictor (branch predictor, cache line predictor ...)
  * Power budget (and its importance)
  * Architectural state, precise state
  * Memory dependence is known dynamically
  * Register state is not shared across threads/processors
  * Memory state is shared across threads/processors
  * How to maintain speculative memory states
  * Write buffers (help simplify the process of checking the reorder buffer)
  * Overall OoO mechanism
    * What are other ways of eliminating dispatch stalls?
    * Dispatch when the sources are ready
    * Retired instructions make their results available as sources
    * Register renaming (see the sketch after this list)
    * Reservation station
      * What goes into the reservation station
      * Tags required in the reservation station
    * Tomasulo's algorithm
    * Without precise exceptions, OoO is hard to debug
    * Arch. register ID
    * Examples in the slides
      * Slide 28 --> register renaming
      * Slides 30-35 --> Exercise (also on the board)
        * This will be useful for the midterm
    * Register alias table (RAT)
    * Broadcasting tags
    * Using dataflow

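A minimal sketch of register renaming with a register alias table (RAT). The 4-register machine, the tag counter, and the instruction sequence are illustrative assumptions, not the example from the slides.

<code c>
#include <stdio.h>

#define NUM_ARCH_REGS 4

/* RAT entry: which tag (reservation-station/ROB slot) will produce the
 * latest value of this architectural register; -1 = value is already
 * in the register file. */
static int rat[NUM_ARCH_REGS] = {-1, -1, -1, -1};
static int next_tag = 0;

/* Rename one instruction "rd <- rs1 op rs2" and print the result. */
static void rename(int rd, int rs1, int rs2) {
    int t1 = rat[rs1], t2 = rat[rs2];  /* source tags (-1 means ready) */
    int tag = next_tag++;              /* allocate a new destination tag */
    rat[rd] = tag;                     /* later readers wait on this tag */
    printf("r%d <- r%d, r%d : dest tag %d, src tags %d %d\n",
           rd, rs1, rs2, tag, t1, t2);
}

int main(void) {
    rename(1, 2, 3);  /* r1 <- r2 op r3 : r1 now maps to tag 0            */
    rename(2, 1, 3);  /* r2 <- r1 op r3 : true dependence, waits on tag 0 */
    rename(1, 3, 3);  /* r1 <- r3 op r3 : WAW on r1 removed via new tag   */
    return 0;
}
</code>

The third instruction gets a fresh tag for r1, so its output (WAW) dependence on the first instruction no longer stalls dispatch; only the true dependence on tag 0 remains.
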
===== Lecture 13 (2/16 Mon.) =====

  * OoO --> Restricted Dataflow
    * Extracting parallelism
    * What are the bottlenecks?
      * Issue width
      * Dispatch width
      * Parallelism in the program
    * What does it mean to be restricted dataflow?
      * Still visible as a von Neumann model
    * Where does the efficiency come from?
    * Size of the scheduling window/reorder buffer. Tradeoffs? What makes sense?
  * Load/store handling
    * Would like to schedule them out of order, but make them visible in-order
    * When do you schedule the load/store instructions?
    * Can we predict whether loads/stores are dependent?
    * This is one of the most complex structures in load/store handling
    * What information can be used to predict these load/store optimizations?
  * Centralized vs. distributed? What are the tradeoffs?
  * How to handle a misprediction/recovery
    * OoO + branch prediction?
    * Speculatively update the history register (see the sketch after this list)
      * When do you update the GHR?
  * Token dataflow arch.
    * What are tokens?
    * How to match tokens
    * Tagged token dataflow arch.
    * What are the tradeoffs?
    * Difficulties?

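A minimal sketch of speculatively updating the global history register (GHR) at predict time and restoring a checkpoint on a misprediction. The 8-bit history width and the function names are illustrative assumptions, not taken from the lecture.

<code c>
#include <stdio.h>
#include <stdint.h>

static uint8_t ghr = 0;  /* 8-bit global history: 1 = taken, 0 = not taken */

/* At predict time, speculatively shift the predicted direction into the
 * GHR so younger branches index the predictor with up-to-date history. */
static uint8_t predict_and_update(int predicted_taken) {
    uint8_t checkpoint = ghr;  /* save for possible recovery */
    ghr = (uint8_t)((ghr << 1) | (predicted_taken & 1));
    return checkpoint;
}

/* At resolve time, if the prediction was wrong, rebuild the history from
 * the checkpoint using the actual outcome. */
static void resolve(uint8_t checkpoint, int predicted_taken, int actual_taken) {
    if (predicted_taken != actual_taken)
        ghr = (uint8_t)((checkpoint << 1) | (actual_taken & 1));
}

int main(void) {
    uint8_t cp = predict_and_update(1);  /* predict taken */
    printf("speculative GHR: 0x%02x\n", ghr);
    resolve(cp, 1, 0);                   /* branch was actually not taken */
    printf("recovered  GHR: 0x%02x\n", ghr);
    return 0;
}
</code>
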
===== Lecture 14 (2/18 Wed.) =====

  * SISD/SIMD/MISD/MIMD
  * Array processor
  * Vector processor
  * Data parallelism
    * Where does the concurrency arise?
  * Differences between an array processor and a vector processor
  * VLIW
  * Compactness of an array processor
  * A vector processor operates on a vector of data (rather than a single datum (scalar))
    * Vector length (also applies to array processors)
    * No dependency within a vector --> can have a deep pipeline
    * Highly parallel (both instruction level (ILP) and memory level (MLP))
    * But the program needs to be very parallel
    * Memory can be the bottleneck (due to very high MLP)
    * What do the functional units look like? Deep pipelines and simpler control.
    * CRAY-1 is one example of a vector processor
    * Memory access pattern in a vector processor
      * How do the memory accesses benefit the memory bandwidth?
      * Memory level parallelism
      * Stride length vs. the number of banks
        * Stride length should be relatively prime to the number of banks (see the bank-conflict sketch after this list)
      * Tradeoffs between row major and column major --> How can the vector processor deal with the two?
    * How to calculate the efficiency and performance of vector processors
    * What if there are multiple memory ports?
    * Gather/scatter allows the vector processor to be a lot more programmable (i.e., gather data for parallelism); see the gather/scatter sketch after this list
      * Helps handle sparse matrices
    * Conditional operation
    * Structure of vector units
    * How to automatically parallelize code through the compiler?
      * This is a hard problem. The compiler does not know the memory addresses.
  * What do we need to ensure for both vector and array processors?
  * Sequential bottleneck
    * Amdahl's law (see the worked example after this list)
  * Intel MMX --> An example of Intel's approach to SIMD
    * No VLEN; the opcode defines the length
    * Stride is one in MMX
    * Intel SSE --> Modern version of MMX

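A minimal sketch of why the stride should be relatively prime to the number of banks, assuming a hypothetical 8-bank memory with element-granularity interleaving; it simply counts which banks a strided access stream touches.

<code c>
#include <stdio.h>

#define NUM_BANKS 8

/* Count how many distinct banks a stream of n strided element accesses
 * touches when consecutive elements are interleaved across banks. */
static int banks_touched(int stride, int n) {
    int used[NUM_BANKS] = {0};
    int count = 0;
    for (int i = 0; i < n; i++) {
        int bank = (i * stride) % NUM_BANKS;
        if (!used[bank]) { used[bank] = 1; count++; }
    }
    return count;
}

int main(void) {
    /* Stride 4 shares a factor with 8 banks: only 2 banks are used,
     * so back-to-back accesses conflict and serialize. */
    printf("stride 4: %d of %d banks\n", banks_touched(4, 16), NUM_BANKS);
    /* Stride 5 is relatively prime to 8: all 8 banks are used. */
    printf("stride 5: %d of %d banks\n", banks_touched(5, 16), NUM_BANKS);
    return 0;
}
</code>

With stride 4 the stream keeps hitting the same two banks and waits on bank busy time; with stride 5 it spreads over all eight banks and sustains full bandwidth.
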
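A minimal sketch of what a gather/scatter pair buys for sparse data, written as plain scalar loops; the array contents and index vector are made up, and a real vector unit would issue each loop as a single gather or scatter instruction.

<code c>
#include <stdio.h>

int main(void) {
    double value[8] = {1, 2, 3, 4, 5, 6, 7, 8};  /* dense storage          */
    int    index[3] = {0, 3, 7};                 /* positions of interest  */
    double gathered[3];

    /* Gather: pull the sparse elements into a contiguous vector register. */
    for (int i = 0; i < 3; i++)
        gathered[i] = value[index[i]];

    /* Dense vector operation on the gathered elements. */
    for (int i = 0; i < 3; i++)
        gathered[i] *= 2.0;

    /* Scatter: write the results back to their original locations. */
    for (int i = 0; i < 3; i++)
        value[index[i]] = gathered[i];

    for (int i = 0; i < 8; i++)
        printf("%.0f ", value[i]);
    printf("\n");
    return 0;
}
</code>
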
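As a quick, made-up illustration of the sequential bottleneck: Amdahl's law gives speedup = 1 / ((1 - p) + p/N). If p = 0.9 of the work is vectorizable and N = 16 lanes are available, speedup = 1 / (0.1 + 0.9/16) = 6.4, far short of 16, because the serial 10% dominates.
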
  