Buzzwords

Buzzwords are terms that are mentioned during lecture which are particularly important to understand thoroughly. This page tracks the buzzwords for each of the lectures and can be used as a reference for finding gaps in your understanding of course material.

Lecture 1 (1/12 Mon.)

  • Level of transformation
    • Algorithm
    • System software
    • Compiler
  • Cross abstraction layers
  • Tradeoffs
  • Caches
  • DRAM/memory controller
  • DRAM banks
  • Row buffer hit/miss
  • Row buffer locality
  • Unfairness
  • Memory performance hog
  • Shared DRAM memory system
  • Streaming access vs. random access
  • Memory scheduling policies
  • Scheduling priority
  • Retention time of DRAM
  • Process variation
  • Retention time profile
  • Power consumption
  • Bloom filter (see the sketch after this list)
  • Hamming code
  • Hamming distance
  • DRAM row hammer
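
A minimal sketch of the Bloom filter listed above: a compact set representation that can answer "definitely not present" or "probably present" (no false negatives, possible false positives). The 64-bit filter size and the two hash functions are made-up assumptions for illustration, not lecture material.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* 64-bit Bloom filter with two toy hash functions (both made up) */
static uint64_t filter = 0;

static unsigned h1(uint32_t x) { return (x * 2654435761u) % 64; }
static unsigned h2(uint32_t x) { return (x ^ (x >> 16)) % 64; }

void bloom_insert(uint32_t x) {
    filter |= 1ull << h1(x);    /* set both hashed bit positions */
    filter |= 1ull << h2(x);
}

/* 0 means definitely absent; 1 means "probably present" */
bool bloom_query(uint32_t x) {
    return ((filter >> h1(x)) & 1) && ((filter >> h2(x)) & 1);
}

int main(void) {
    bloom_insert(42);
    printf("42 -> %d, 7 -> %d\n", bloom_query(42), bloom_query(7));
    return 0;
}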

Lecture 2 (1/14 Wed.)

  • Moore's Law
  • Algorithm → step-by-step procedure to solve a problem
  • in-order execution
  • out-of-order execution
  • technologies that are available on cellphones
  • new applications that are made available through new computer architecture techniques
    • more data mining (genomics/medical areas)
    • lower power (cellphones)
    • smaller cores (cellphones/computers)
    • etc.
  • Performance bottlenecks in single-threaded/single-core processors
    • multi-core as an alternative
  • Memory wall (a part of the scaling issue)
  • Scaling issue
    • Transistors are getting smaller
  • Key components of a computer
  • Design points
    • Design processors to meet the design points
  • Software stack
  • Design decisions
  • Datacenters
  • Reliability problems that cause errors
  • Analogies from Kuhn's “The Structure of Scientific Revolutions” (Recommended book)
    • Pre-paradigm science
    • Normal science
    • Revolutionary science
  • Components of a computer
    • Computation
    • Communication
    • Storage
      • DRAM
      • NVRAM (Non-volatile memory): PCM, STT-MRAM
      • Storage (Flash/hard drive)
  • Von Neumann Model (Control flow model)
    • Stored program computer
      • Properties of Von Neumann Model: Stored program, sequential instruction processing
      • Unified memory
        • When is an instruction interpreted as an instruction (as opposed to a datum)?
      • Program counter
      • Examples: x86, ARM, Alpha, IBM Power series, SPARC, MIPS
  • Data flow model
    • Data flow machine (see the sketch after this list)
      • Data flow graph
    • Operands
    • Live-outs/Live-ins
      • Different types of data flow nodes (conditional/relational/barrier)
    • How to perform transactions in dataflow?
      • Example: bank transactions
  • Tradeoffs between control-driven and data-driven
    • Which is easier to program?
      • Which is easier to compile?
      • Which is more parallel? (Does that mean it is faster?)
      • Which machines are more complex to design?
    • In control flow, when a program stops, there is a pointer to the current state (a precise state).
  • ISA vs. Microarchitecture
    • Semantics in the ISA
      • uArch should obey the ISA
      • Changing ISA is costly, can affect compatibility.
  • Instruction pointers
  • uArch techniques: common and powerful techniques that break the Von Neumann model if done at the ISA level
    • Conceptual techniques
      • Pipelining
      • Multiple instructions at a time
      • Out-of-order execution
      • etc.
    • Design techniques
      • Adder implementation (bit-serial, ripple-carry, carry-lookahead)
      • Connection Machine (an example of a machine that uses bit-serial arithmetic to trade off latency for more parallelism)
  • Microprocessor: ISA + uArch + circuits
  • What is part of the ISA? Instructions, memory, etc.
    • Things that are visible to the programmer/software
  • What is not part of the ISA? (what goes inside: uArch techniques)
    • Things that are not supposed to be visible to the programmer/software but typically make the processor faster, consume less power, and/or reduce complexity
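
The sketch referenced at the "Data flow machine" bullet above: a minimal C illustration of the dataflow firing rule, where a node executes as soon as all of its input tokens have arrived, in contrast to the Von Neumann model's program-counter-driven sequencing. The Node structure and token-delivery function are invented for illustration.

#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    bool has_left, has_right;   /* token-presence flags */
    int left, right;            /* operand values */
} Node;

/* Deliver a token to one input; the node fires once both are present */
void deliver(Node *n, int which, int value) {
    if (which == 0) { n->left = value; n->has_left = true; }
    else            { n->right = value; n->has_right = true; }
    if (n->has_left && n->has_right)    /* firing rule: all operands ready */
        printf("%s fires: %d + %d = %d\n",
               n->name, n->left, n->right, n->left + n->right);
}

int main(void) {
    Node add = { "ADD", false, false, 0, 0 };
    deliver(&add, 1, 7);    /* operands may arrive in any order */
    deliver(&add, 0, 35);   /* second token arrives: the node fires */
    return 0;
}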

Lecture 3 (1/16 Fri.)

  • Microarchitecture
  • Three major tradeoffs of computer architecture
  • Macro-architecture
  • LC-3b ISA
  • Unused instructions
  • Bit steering
  • Instruction processing style
  • 0,1,2,3 address machines
  • Stack machine
  • Accumulator machine
  • 2-operand machine
  • 3-operand machine
  • Tradeoffs between 0,1,2,3 address machines
  • Postfix notation (see the stack-machine sketch after this list)
  • Instructions/Opcodes/Operand specifiers (i.e., addressing modes)
  • Simple vs. complex data types (and their tradeoffs)
  • Semantic gap and level
  • Translation layer
  • Addressability
  • Byte/bit addressable machines
  • Virtual memory
  • Big/little endian
  • Benefits of having registers (data locality)
  • Programmer visible (Architectural) state
    • Programmers can access this directly
    • What are the benefits?
  • Microarchitectural state
    • Programmers cannot access this directly
  • Evolution of registers (from accumulators to registers)
  • Different types of instructions
    • Control instructions
    • Data instructions
    • Operation instructions
  • Addressing modes
    • Tradeoffs (complexity, flexibility, etc.)
  • Orthogonal ISA
    • Addressing modes that are orthogonal to instruction types
  • I/O devices
    • Vectored vs. non-vectored interrupts
  • Complex vs. simple instructions
    • Tradeoffs
  • RISC vs. CISC
    • Tradeoffs
    • Backward compatibility
    • Performance
    • Optimization opportunity
    • Translation
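
To ground the stack machine and postfix notation bullets above, a minimal C sketch of a 0-address machine evaluating "3 4 + 5 *", the postfix form of (3 + 4) * 5; operands are pushed, and each operator pops two values and pushes the result. The push/pop helpers are illustrative assumptions, not any real ISA.

#include <stdio.h>

static int stack[16];
static int sp = 0;

static void push(int v) { stack[sp++] = v; }
static int  pop(void)   { return stack[--sp]; }

int main(void) {
    /* (3 + 4) * 5 in postfix: 3 4 + 5 * */
    push(3);
    push(4);
    { int b = pop(), a = pop(); push(a + b); }  /* ADD names no operands */
    push(5);
    { int b = pop(), a = pop(); push(a * b); }  /* MUL */
    printf("result = %d\n", pop());             /* prints 35 */
    return 0;
}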

Lecture 4 (1/21 Wed.)

  • Fixed vs. variable length instruction
  • Huffman encoding
  • Uniform vs. non-uniform decode
  • Registers
    • Tradeoffs between number of registers
  • Alignment
    • How does MIPS load words across the alignment boundary? (see the sketch after this list)
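
A sketch for the alignment bullet above: an unaligned 32-bit load rebuilt from the two aligned words that contain it, in the spirit of MIPS lwl/lwr. This little-endian C emulation is an illustrative assumption, not exact MIPS semantics.

#include <stdio.h>
#include <stdint.h>

/* Hardware reads only naturally aligned words (little-endian byte order) */
static uint32_t aligned_word(const uint8_t *mem, uint32_t addr) {
    addr &= ~3u;
    return (uint32_t)mem[addr]
         | (uint32_t)mem[addr + 1] << 8
         | (uint32_t)mem[addr + 2] << 16
         | (uint32_t)mem[addr + 3] << 24;
}

static uint32_t load_unaligned(const uint8_t *mem, uint32_t addr) {
    uint32_t off = addr & 3u;
    if (off == 0)
        return aligned_word(mem, addr);         /* aligned: one access */
    uint32_t lo = aligned_word(mem, addr);      /* first aligned word */
    uint32_t hi = aligned_word(mem, addr + 4);  /* second aligned word */
    return (lo >> (8 * off)) | (hi << (8 * (4 - off)));
}

int main(void) {
    uint8_t mem[8] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 };
    printf("0x%08x\n", (unsigned)load_unaligned(mem, 2)); /* 0x55443322 */
    return 0;
}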

Lecture 5 (1/26 Mon.)

  • Tradeoffs in ISA: Instruction length
    • Uniform vs. non-uniform
  • Design point/Use cases
    • What dictates the design point?
  • Architectural states
  • uArch
    • How to implement the ISA in the uArch
  • Different stages in the uArch
  • Clock cycles
  • Multi-cycle machine
  • Datapath and control logic
    • Control signals
  • Execution time of instructions/program
    • Metrics and what they mean (worked example after this list)
  • Instruction processing
    • Fetch
    • Decode
    • Execute
    • Memory fetch
    • Writeback
  • Encoding and semantics
  • Different types of instructions (I-type, R-type, etc.)
  • Control flow instructions
  • Non-control flow instructions
  • Delay slot/Delayed branch
  • Single cycle control logic
  • Lockstep
  • Critical path analysis
    • Critical path of a single cycle processor
  • What is in the control signals?
    • Combinational logic & Sequential logic
  • Control store
  • Tradeoffs of a single cycle uarch
  • Design principles
    • Common case design
    • Critical path design
    • Balanced designs
    • Dynamic power/Static power
      • Increases in power due to frequency
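
The worked example referenced at the metrics bullet above, using the basic performance equation: execution time = instruction count × CPI × clock period. All numbers are made up for illustration.

#include <stdio.h>

int main(void) {
    double insts = 1e9;     /* dynamic instruction count (assumed) */
    double cpi   = 1.5;     /* average cycles per instruction (assumed) */
    double f_hz  = 2e9;     /* 2 GHz clock, so clock period = 0.5 ns */
    double t     = insts * cpi / f_hz;
    printf("execution time = %.3f s\n", t);   /* prints 0.750 s */
    return 0;
}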

Lecture 6 (1/28 Wed.)

  • Design principles
    • Common case design
    • Critical path design
    • Balanced designs
  • Multi cycle design
  • Microcoded/Microprogrammed machines
    • States
    • Translation from one state to another
    • Microinstructions
    • Microsequencing
    • Control store - Produces the control signals (see the sketch after this list)
    • Microsequencer
    • Control signal
      • What do they have to control?
  • Instruction processing cycle
  • Latch signals
  • State machine
  • State variables
  • Condition code
  • Steering bits
  • Branch enable logic
  • Difference between gating and loading? (write enable vs. driving the bus)
  • Memory mapped I/O
  • Hardwired logic
    • What control signals come from hardwired logic?
  • Variable latency memory
  • Handling interrupts
  • Difference between interrupts and exceptions
  • Emulator (i.e., uCode allows a minimal datapath to emulate the ISA)
  • Updating machine behavior
  • Horizontal microcode
  • Vertical microcode
  • Primitives
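
A minimal C sketch of the control store and microsequencer referenced above: each state indexes a microinstruction whose fields are the control signals plus the next state. The encodings and the four-state loop are invented for illustration; a real control store is much wider.

#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint16_t control_signals;   /* would drive latches, muxes, ALU op, ... */
    uint8_t  next_state;        /* microsequencer's next control-store index */
} MicroInst;

/* Control store: one microinstruction per state (contents made up) */
static const MicroInst control_store[4] = {
    { 0x0013, 1 },  /* state 0: FETCH     */
    { 0x0028, 2 },  /* state 1: DECODE    */
    { 0x0107, 3 },  /* state 2: EXECUTE   */
    { 0x0040, 0 },  /* state 3: WRITEBACK, then back to FETCH */
};

int main(void) {
    uint8_t state = 0;
    for (int cycle = 0; cycle < 8; cycle++) {
        MicroInst u = control_store[state];         /* read control store */
        printf("cycle %d: state %u, signals 0x%04x\n",
               cycle, (unsigned)state, (unsigned)u.control_signals);
        state = u.next_state;                       /* microsequencing */
    }
    return 0;
}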

Lecture 7 (1/30 Fri.)

  • Emulator (i.e., uCode allows a minimal datapath to emulate the ISA)
  • Updating machine behavior
  • Horizontal microcode
  • Vertical microcode
  • Primitives
  • nanocode and millicode
    • what are the differences between nano/milli/microcode
  • Microprogrammed vs. hardwired control
  • Pipelining
  • Limitations of the multi-cycle design
    • Idle resources
  • Throughput of a pipelined design
    • What dictates the throughput of a pipelined design? (worked example after this list)
  • Latency of the pipelined design
  • Dependency
  • Overhead of pipelining
    • Latch cost?
  • Data forwarding/bypassing
  • What is an ideal pipeline?
  • External fragmentation
  • Issues in pipeline designs
    • Stalling
      • Dependency (Hazard)
        • Flow dependence
        • Output dependence
        • Anti dependence
        • How to handle them?
    • Resource contention
    • Keeping the pipeline full
    • Handling exception/interrupts
    • Pipeline flush
    • Speculation
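
The worked example referenced at the throughput bullet above (all stage delays assumed): the pipelined cycle time is set by the slowest stage plus latch overhead, so throughput improves while the latency of a single instruction gets slightly worse.

#include <stdio.h>

int main(void) {
    double stage_ns[5] = { 200, 150, 250, 200, 150 };  /* assumed delays */
    double latch_ns = 20;                              /* latch overhead */
    double slowest = 0, total = 0;
    for (int i = 0; i < 5; i++) {
        total += stage_ns[i];
        if (stage_ns[i] > slowest) slowest = stage_ns[i];
    }
    double cycle = slowest + latch_ns;  /* one instruction completes per cycle */
    printf("single-cycle design : %.0f ns per instruction\n", total);
    printf("pipelined cycle time: %.0f ns (throughput limit)\n", cycle);
    printf("pipelined latency   : %.0f ns per instruction\n", 5 * cycle);
    return 0;
}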

Lecture 8 (2/2 Mon.)

  • Interlocking
  • Multipath execution
  • Fine grain multithreading
  • No-op (Bubbles in the pipeline)
  • Valid bits in the instructions
  • Branch prediction
  • Different types of data dependence
  • Pipeline stalls
    • bubbles
    • How to handle stalls
    • Stall conditions
    • Stall signals
    • Dependences
      • Distance between dependences
    • Data forwarding/bypassing
    • Maintaining the correct dataflow
  • Different ways to design data forwarding path/logic
  • Different techniques to handle interlockings
    • SW based
    • HW based
  • Profiling
    • Static profiling
    • Helps from the software (compiler)
      • Superblock optimization
      • Analyzing basic blocks
  • How to deal with branches?
    • Branch prediction
    • Delayed branching (branch delay slot)
    • Forward control flow/backward control flow
    • Branch prediction accuracy
  • Profile guided code positioning
    • Position the code based on the profiling info
    • Try to make the next sequential instruction be the next inst. to be executed
  • Predicate combining (combine predicates for a branch instruction)
  • Predicated execution (control dependence becomes data dependence)
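
A minimal sketch of predicated execution as listed just above: the branchy version carries a control dependence, while the predicated version computes both sides and selects with a predicate, turning it into a data dependence. The multiply-and-add selection idiom is an illustrative stand-in for a conditional-move instruction.

#include <stdio.h>

int with_branch(int cond, int a, int b) {
    if (cond) return a;         /* control dependence: the branch decides */
    else      return b;
}

int predicated(int cond, int a, int b) {
    int p = (cond != 0);        /* predicate "register" */
    return p * a + (1 - p) * b; /* both sides execute; p selects the result */
}

int main(void) {
    printf("%d %d\n", with_branch(1, 10, 20), predicated(1, 10, 20));
    printf("%d %d\n", with_branch(0, 10, 20), predicated(0, 10, 20));
    return 0;
}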

Lecture 9 (2/4 Wed.)

  • Predicate combining (combine predicates for a branch instruction)
  • Predicated execution (control dependence becomes data dependence)
  • Definition of basic blocks
  • Control flow graph
  • Delayed branching
    • Benefit?
    • What does it eliminate?
    • Downside?
    • Delayed branching in SPARC (with squashing)
    • Backward compatibility with the delay slot
    • What should be filled in the delay slot?
    • How to ensure correctness
  • Fine-grained multithreading
    • Fetch from different threads
    • What are the issues? (What if the program doesn't have many threads?)
    • CDC 6000
    • Denelcor HEP
    • No dependency checking
    • Inst. from different threads can fill in the bubbles
    • Cost?
  • Simultaneous multithreading
  • Branch prediction
    • Guess what to fetch next
    • Misprediction penalty
    • Need to guess the direction and target
    • How to perform the performance analysis?
      • Given the branch prediction accuracy and penalty cost, how to compute the cost of a branch misprediction
      • Given the program/number of instructions, percent of branches, branch prediction accuracy, and penalty cost, how to compute the cost coming from branch mispredictions (worked example after this list)
        • How many extra instructions are being fetched?
        • What is the performance degradation?
    • How to reduce the miss penalty?
    • Predicting the next address (non-PC+4 address)
    • Branch target buffer (BTB)
      • Predicting the target address of the branch
    • Global branch history - for directions
    • Can use the compiler to profile and get more info
      • Input set dictates the accuracy
      • Adds time to compilation
    • Heuristics that are common and don't require profiling
      • Might be inaccurate
      • Does not require profiling
    • Static branch prediction
      • Programmer provides pragmas, hinting the likelihood of a taken/not-taken branch
      • For example, x86 has the hint bit
    • Dynamic branch prediction
      • Last time predictor
      • Two-bit counter based prediction (see the sketch after this list)
        • One more bit for hysteresis
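
The worked example referenced at the misprediction-cost bullet above (all numbers assumed): the extra CPI contributed by branch mispredictions is the branch fraction times the misprediction rate times the penalty.

#include <stdio.h>

int main(void) {
    double branch_frac = 0.20;  /* 20% of instructions are branches (assumed) */
    double accuracy    = 0.95;  /* predictor accuracy (assumed) */
    double penalty     = 20.0;  /* cycles flushed per misprediction (assumed) */
    double extra_cpi   = branch_frac * (1.0 - accuracy) * penalty;
    printf("extra CPI from mispredictions = %.2f\n", extra_cpi); /* 0.20 */
    return 0;
}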
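
And a minimal sketch of the two-bit counter predictor with hysteresis flagged above: one surprise nudges the saturating counter, but it takes two mispredictions in a row to flip the prediction. The table size and PC-indexing scheme are illustrative assumptions.

#include <stdio.h>
#include <stdint.h>

#define TABLE_SIZE 1024
static uint8_t counters[TABLE_SIZE];    /* 0,1 = not taken; 2,3 = taken */

int predict(uint32_t pc) {
    return counters[pc % TABLE_SIZE] >= 2;      /* predict taken? */
}

void update(uint32_t pc, int taken) {
    uint8_t *c = &counters[pc % TABLE_SIZE];
    if (taken  && *c < 3) (*c)++;               /* saturate at 3 */
    if (!taken && *c > 0) (*c)--;               /* saturate at 0 */
}

int main(void) {
    uint32_t pc = 0x400080;
    int outcomes[6] = { 1, 1, 1, 0, 1, 1 };     /* a mostly-taken branch */
    for (int i = 0; i < 6; i++) {
        printf("predict %d, actual %d\n", predict(pc), outcomes[i]);
        update(pc, outcomes[i]);
    }
    return 0;
}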