    * VLIW
    * SuperScalar


===== Lecture 11 (2/11 Wed.) =====

  * Geometric GHR lengths for branch prediction
  * Perceptron branch predictor (see the sketch after this list)
  * Multi-cycle execution (different functional units take different numbers of cycles)
    * Instructions can retire out of order
      * How to deal with this case? Stall? Throw exceptions if there are problems?
  * Exceptions and interrupts
    * When are they handled?
    * Why should some interrupts be handled right away?
  * Precise exceptions
    * Arch. state should be consistent before handling the exception/interrupt
      * Easier to debug (you see the sequential flow up to the point where the interrupt occurs)
        * Deterministic
      * Easier to recover from the exception
      * Easier to restart the process
    * How to ensure precise exceptions?
    * Tradeoffs between the methods
  * Reorder buffer (see the sketch after this list)
    * Reorders results before they become visible in the arch. state
      * Need to preserve the sequential semantics and data
    * What information is in a ROB entry?
    * Where to get a value from (forwarding path? reorder buffer?)
      * Extra logic to check where the youngest instruction/value is
      * Content-addressable (CAM) search
        * A lot of comparators
    * Different ways to simplify the reorder buffer
    * Register renaming
      * The same register refers to independent values (lack of registers)
    * Where does the exception happen? (after retirement)
  * History buffer
    * Update the register file when the instruction completes. Unroll if there is an exception.
  * Future file (commonly used, along with a reorder buffer)
    * Keep two copies of the register file
      * An updated (speculative) copy, called the future file
      * A backup copy (to restore the state quickly)
    * Doubles the cost of the regfile, but reduces area since you do not need a content-addressable memory (compared to the ROB alone)
  * Branch misprediction resembles an exception
    * The difference is that a branch misprediction is not visible to the software
      * Also much more common (say, divide by zero vs. a mispredicted branch)
    * Recovery is similar to exception handling
  * Latency of state recovery
  * What to do during state recovery
  * Checkpointing
    * Advantages?
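
A minimal sketch of the perceptron predictor idea above, in C: the prediction is the sign of a dot product between a per-branch weight vector and the global history, and training adjusts the weights when the prediction was wrong or the sum was small. The table size, history length, threshold, and indexing are illustrative assumptions, not the lecture's exact parameters.

<code c>
#include <stdint.h>
#include <stdlib.h>

#define HIST_LEN         16     /* global history bits (illustrative) */
#define NUM_PERCEPTRONS  1024   /* weight vectors, indexed by PC (illustrative) */
#define THETA  ((int)(1.93 * HIST_LEN + 14))   /* training threshold */

static int8_t weights[NUM_PERCEPTRONS][HIST_LEN + 1];  /* w[0] is the bias weight */
static int8_t ghr[HIST_LEN];                           /* global history as +1/-1 */

/* Predict: dot product of weights with history; taken if the sum is >= 0. */
static int perceptron_predict(uint64_t pc, int *sum_out)
{
    int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];
    int sum = w[0];                       /* bias term */
    for (int i = 0; i < HIST_LEN; i++)
        sum += w[i + 1] * ghr[i];
    *sum_out = sum;
    return sum >= 0;                      /* 1 = predict taken */
}

/* Train when mispredicted or when the sum is below the threshold
 * (weight saturation omitted for brevity), then update the history. */
static void perceptron_train(uint64_t pc, int taken, int sum)
{
    int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];
    int t = taken ? 1 : -1;
    if ((sum >= 0) != taken || abs(sum) <= THETA) {
        w[0] += t;
        for (int i = 0; i < HIST_LEN; i++)
            w[i + 1] += t * ghr[i];
    }
    for (int i = HIST_LEN - 1; i > 0; i--)  /* shift in the new outcome */
        ghr[i] = ghr[i - 1];
    ghr[0] = t;
}
</code>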
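A minimal sketch of a reorder buffer entry and in-order retirement, matching the "what information is in a ROB entry" bullet above. The field names, the 64-entry size, and the assumption that every instruction writes one register are illustrative, not a specific design from the lecture.

<code c>
#include <stdbool.h>
#include <stdint.h>

#define ROB_SIZE 64   /* illustrative */

/* One ROB entry: enough state to retire in order and to stop at an exception. */
typedef struct {
    bool     valid;        /* entry is allocated */
    bool     done;         /* result has been produced */
    bool     exception;    /* instruction raised an exception */
    uint8_t  dest_reg;     /* architectural destination register */
    uint64_t value;        /* result, written to arch. state at retirement */
    uint64_t pc;           /* needed to report a precise exception */
} rob_entry_t;

static rob_entry_t rob[ROB_SIZE];
static int rob_head, rob_tail;   /* retire from head, allocate at tail */

/* Retire completed instructions in program order; stop at the first
 * incomplete entry so the arch. state always reflects a sequential
 * execution prefix (this is what makes exceptions precise). */
static void rob_retire(uint64_t arch_regs[])
{
    while (rob[rob_head].valid && rob[rob_head].done) {
        rob_entry_t *e = &rob[rob_head];
        if (e->exception) {
            /* arch. state is consistent up to e->pc: flush the ROB and
             * vector to the handler (not shown) */
            break;
        }
        arch_regs[e->dest_reg] = e->value;   /* result becomes architectural */
        e->valid = false;
        rob_head = (rob_head + 1) % ROB_SIZE;
    }
}
</code>
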
===== Lecture 12 (2/13 Fri.) =====

  * Renaming
  * Register renaming table (see the renaming sketch after this list)
  * Predictors (branch predictor, cache line predictor, ...)
  * Power budget (and its importance)
  * Architectural state, precise state
  * Memory dependences are known only dynamically
  * Register state is not shared across threads/processors
  * Memory state is shared across threads/processors
  * How to maintain speculative memory state
  * Write buffers (help simplify the process of checking the reorder buffer)
  * Overall OoO mechanism
    * What are other ways of eliminating dispatch stalls?
    * Dispatch when the sources are ready
    * Retired instructions make their results available as sources
    * Register renaming
    * Reservation stations (see the reservation-station sketch after this list)
      * What goes into a reservation station?
      * Tags required in the reservation station
    * Tomasulo's algorithm
    * Without precise exceptions, OoO is hard to debug
    * Arch. register ID
    * Examples in the slides
      * Slide 28 --> register renaming
      * Slides 30-35 --> exercise (also on the board)
        * This will be useful for the midterm
    * Register alias table
    * Broadcasting tags
    * Using dataflow
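
A minimal sketch of renaming through a register alias table, as referenced above: each destination gets a fresh physical register (tag) from a free list and sources read the current mappings, which removes false WAR/WAW dependences. The table sizes and the free-list handling are illustrative assumptions.

<code c>
#include <stdint.h>

#define NUM_ARCH_REGS 32
#define NUM_PHYS_REGS 128   /* illustrative */

/* Register alias table: architectural register -> current physical register (tag). */
static uint8_t rat[NUM_ARCH_REGS];

/* Free list of physical registers, managed as a simple stack here. */
static uint8_t free_list[NUM_PHYS_REGS];
static int     free_top;

/* Start with arch reg a mapped to phys reg a; the rest are free. */
static void rename_init(void)
{
    for (int a = 0; a < NUM_ARCH_REGS; a++)
        rat[a] = (uint8_t)a;
    for (int p = NUM_ARCH_REGS; p < NUM_PHYS_REGS; p++)
        free_list[free_top++] = (uint8_t)p;
}

/* Rename one instruction "rd = rs1 op rs2": sources read the current
 * mappings, the destination gets a fresh tag, so later writers/readers
 * of rd no longer conflict with older uses of rd. */
typedef struct { uint8_t ps1, ps2, pd, old_pd; } renamed_t;

static renamed_t rename(uint8_t rd, uint8_t rs1, uint8_t rs2)
{
    renamed_t r;
    r.ps1    = rat[rs1];              /* read source mappings first */
    r.ps2    = rat[rs2];
    r.old_pd = rat[rd];               /* kept so the mapping can be recovered/freed */
    r.pd     = free_list[--free_top]; /* allocate a fresh physical register */
    rat[rd]  = r.pd;                  /* later readers of rd now see the new tag */
    return r;
}
</code>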
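A minimal sketch of a reservation station entry and the tag broadcast used in Tomasulo's algorithm, as referenced above: waiting entries compare the broadcast tag against their source tags and capture the value, which is where the CAM-style comparator cost comes from. The entry fields and the single result bus are illustrative assumptions.

<code c>
#include <stdbool.h>
#include <stdint.h>

#define RS_ENTRIES 16   /* illustrative */

/* A reservation station entry waits for its sources: each source is either
 * a value (ready) or the tag of the instruction that will produce it. */
typedef struct {
    bool     busy;
    uint8_t  op;
    bool     src1_ready, src2_ready;
    uint16_t src1_tag,   src2_tag;     /* who will produce the value */
    uint64_t src1_val,   src2_val;
    uint16_t dest_tag;                 /* tag broadcast when this result is done */
} rs_entry_t;

static rs_entry_t rs[RS_ENTRIES];

/* When a result (tag, value) is broadcast, every waiting entry compares the
 * tag against its source tags and captures the value on a match. */
static void broadcast_result(uint16_t tag, uint64_t value)
{
    for (int i = 0; i < RS_ENTRIES; i++) {
        if (!rs[i].busy) continue;
        if (!rs[i].src1_ready && rs[i].src1_tag == tag) {
            rs[i].src1_val = value;
            rs[i].src1_ready = true;
        }
        if (!rs[i].src2_ready && rs[i].src2_tag == tag) {
            rs[i].src2_val = value;
            rs[i].src2_ready = true;
        }
    }
}

/* An entry can be dispatched to its functional unit once both sources are ready. */
static bool can_dispatch(const rs_entry_t *e)
{
    return e->busy && e->src1_ready && e->src2_ready;
}
</code>
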

===== Lecture 13 (2/16 Mon.) =====

  * OoO --> restricted dataflow
    * Extracting parallelism
    * What are the bottlenecks?
      * Issue width
      * Dispatch width
      * Parallelism in the program
    * What does it mean to be restricted dataflow?
      * Still visible as a Von Neumann model
    * Where does the efficiency come from?
    * Size of the scheduling window/reorder buffer. Tradeoffs? What makes sense?
  * Load/store handling (see the load/store queue sketch at the end of this section)
    * Would like to schedule them out of order, but make them visible in order
    * When do you schedule the load/store instructions?
    * Can we predict if a load and a store are dependent?
    * This is one of the most complex parts of load/store handling
    * What information can be used to predict these load/store dependences?
  * Centralized vs. distributed? What are the tradeoffs?
  * How to handle a misprediction/recovery
    * OoO + branch prediction?
    * Speculatively update the history register (see the GHR sketch after this list)
      * When do you update the GHR?
  * Token dataflow arch.
    * What are tokens?
    * How to match tokens
    * Tagged token dataflow arch.
    * What are the tradeoffs?
    * Difficulties?
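
A minimal sketch of speculatively updating the GHR at prediction time and restoring it on a misprediction, matching the "when do you update the GHR?" bullet above. The one-checkpoint-per-in-flight-branch scheme shown here is one simple option, assumed for illustration.

<code c>
#include <stdint.h>

#define MAX_INFLIGHT_BRANCHES 64   /* illustrative */

static uint32_t ghr;                                   /* global history register */
static uint32_t ghr_checkpoint[MAX_INFLIGHT_BRANCHES]; /* one snapshot per in-flight branch */

/* At prediction time: snapshot the GHR, then speculatively shift in the
 * predicted direction so younger branches predict with up-to-date history. */
static void on_predict(int branch_id, int predicted_taken)
{
    ghr_checkpoint[branch_id] = ghr;
    ghr = (ghr << 1) | (predicted_taken ? 1u : 0u);
}

/* At resolution time: on a misprediction, restore the checkpointed history
 * and shift in the actual outcome; a correct prediction needs no repair. */
static void on_resolve(int branch_id, int actual_taken, int mispredicted)
{
    if (mispredicted)
        ghr = (ghr_checkpoint[branch_id] << 1) | (actual_taken ? 1u : 0u);
}
</code>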
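A minimal sketch of the load/store queue check implied by the load/store handling bullets above: a load searches older stores for a matching address, forwards from the youngest matching store, and conservatively waits if an older store address is still unknown (a dependence predictor could instead let the load go speculatively). The sizes and the wait-on-unknown policy are illustrative assumptions.

<code c>
#include <stdbool.h>
#include <stdint.h>

#define SQ_SIZE 32   /* illustrative store queue size */

typedef struct {
    bool     valid;
    bool     addr_known;   /* store address has been computed */
    uint64_t addr;
    uint64_t data;
} sq_entry_t;

static sq_entry_t store_queue[SQ_SIZE];  /* index 0 = oldest, entries in program order */

typedef enum { LOAD_FROM_MEMORY, LOAD_FORWARDED, LOAD_MUST_WAIT } load_result_t;

/* Check a load against all stores older than it in program order.
 * The youngest matching older store forwards its data; any older store
 * with an unknown address forces a conservative wait here. */
static load_result_t check_load(uint64_t load_addr, int older_store_count,
                                uint64_t *forwarded_data)
{
    load_result_t result = LOAD_FROM_MEMORY;
    for (int i = 0; i < older_store_count && i < SQ_SIZE; i++) {
        if (!store_queue[i].valid) continue;
        if (!store_queue[i].addr_known)
            result = LOAD_MUST_WAIT;            /* unknown older address: be conservative */
        else if (store_queue[i].addr == load_addr && result != LOAD_MUST_WAIT) {
            *forwarded_data = store_queue[i].data;
            result = LOAD_FORWARDED;            /* youngest matching older store so far */
        }
    }
    return result;
}
</code>
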
  