====== Buzzwords ======
  
Buzzwords are terms mentioned during lecture that are particularly important to understand thoroughly. This page tracks the buzzwords for each lecture and can be used as a reference for finding gaps in your understanding of the course material.
  
===== Lecture 1 (1/13 Mon.) =====
  * Levels of transformation
    * Algorithm
    * System software
    * Compiler
  * Cross abstraction layers
    * Expose an interface
  * Tradeoffs
  * Caches
  * Multi-thread
  * Multi-core
  * Unfairness
  * DRAM controller / Memory controller
  * Memory hog
  * Row buffer hit/miss
  * Row buffer locality
  * Streaming access / Random access
  * DRAM refresh
  * Retention time
  * Profiling DRAM retention time
  * Power consumption
  * Wimpy cores
  * Bloom filter (see the sketch after this list)
    * Pros/Cons
    * False positives
  * Simulation
  * Memory performance attacks
  * RTL design
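For the Bloom filter bullet above, here is a minimal C sketch (a hypothetical example, not course code; the hash functions and the 1024-bit array size are chosen arbitrarily). It shows why a query can return a false positive but never a false negative: insertion always sets an element's bits, while an element that was never inserted may still find all of its bits set by other elements.

<code c>
/* Minimal Bloom filter sketch -- hypothetical example, not course code. */
#include <stdint.h>
#include <stdio.h>

#define BLOOM_BITS 1024
static uint8_t bloom[BLOOM_BITS / 8];   /* bit array, initially all zero */

/* Two simple hash functions; real designs pick better (and more) hashes. */
static uint32_t hash1(const char *s) {
    uint32_t h = 5381;
    while (*s) h = h * 33 + (uint8_t)*s++;
    return h % BLOOM_BITS;
}
static uint32_t hash2(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h % BLOOM_BITS;
}

static void bloom_insert(const char *s) {
    bloom[hash1(s) / 8] |= 1u << (hash1(s) % 8);
    bloom[hash2(s) / 8] |= 1u << (hash2(s) % 8);
}

/* 1 = possibly in the set (could be a false positive),
   0 = definitely not in the set (no false negatives). */
static int bloom_query(const char *s) {
    return ((bloom[hash1(s) / 8] >> (hash1(s) % 8)) & 1) &&
           ((bloom[hash2(s) / 8] >> (hash2(s) % 8)) & 1);
}

int main(void) {
    bloom_insert("row17");
    bloom_insert("row42");
    printf("row17 -> %d (inserted, always 1)\n", bloom_query("row17"));
    printf("row99 -> %d (usually 0, 1 only on a false positive)\n", bloom_query("row99"));
    return 0;
}
</code>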
===== Lecture 2 (1/15 Wed.) =====
  * Optimizing for energy / Optimizing for performance
    * Generally you should optimize for the users
  * State-of-the-art
  * RTL simulation
    * Long, slow, and can be costly
  * High-level simulation
    * What should be employed?
    * Important to get an idea of how the techniques would be implemented in RTL
    * Allows the designer to filter out techniques that do not work well
  * Design points
    * Design processors to meet the design points
  * Software stack
  * Design decisions
  * Datacenters
  * MIPS R2000
    * What architectural techniques improve the performance of a processor over the MIPS R2000?
  * Moore's Law
  * In-order execution
  * Out-of-order execution
  * Technologies that are available on cellphones
  * New applications that are made available through new computer architecture techniques
    * More data mining (genomics/medical areas)
    * Lower power (cellphones)
    * Smaller cores (cellphones/computers)
    * etc.
  * Performance bottlenecks in single-thread/single-core processors
    * Multi-core as an alternative
  * Memory wall (a part of the scaling issue)
  * Scaling issue
    * Transistors are getting smaller
  * Reliability problems that cause errors
  * Analogies from Kuhn's "The Structure of Scientific Revolutions" (recommended book)
    * Pre-paradigm science
    * Normal science
    * Revolutionary science
  * Components of a computer
    * Computation
    * Communication
    * Storage
      * DRAM
      * NVRAM (non-volatile memory): PCM, STT-MRAM
      * Storage (Flash/Hard drive)
  * Von Neumann model (control flow model); see the fetch/execute sketch after this list
    * Stored program computer
      * Properties of the Von Neumann model: stored program, sequential instruction processing
      * Unified memory
        * When is a word interpreted as an instruction (as opposed to a datum)?
      * Program counter
      * Examples: x86, ARM, Alpha, IBM Power series, SPARC, MIPS
  * Data flow model
    * Data flow machine
      * Data flow graph
    * Operands
    * Live-outs/Live-ins
    * Different types of data flow nodes (conditional/relational/barrier)
    * How to do transactions in dataflow?
      * Example: bank transactions
  * Tradeoffs between control-driven and data-driven execution
    * Which is easier to program?
    * Which is easier to compile?
    * Which is more parallel (and does that mean it is faster?)
    * Which machines are more complex to design?
    * In control flow, when a program is stopped, there is a pointer to the current state (precise state).
  * ISA vs. microarchitecture
    * Semantics in the ISA
    * uArch should obey the ISA
    * Changing the ISA is costly and can affect compatibility.
  * Instruction pointers
  * uArch techniques: common and powerful techniques break the Von Neumann model if done at the ISA level
    * Conceptual techniques
      * Pipelining
      * Multiple instructions at a time
      * Out-of-order execution
      * etc.
    * Design techniques
      * Adder implementation (bit serial, ripple carry, carry lookahead); see the ripple-carry sketch after this list
      * Connection Machine (an example of a machine that uses bit-serial arithmetic to trade off latency for more parallelism)
  * Microprocessor: ISA + uArch + circuits
  * What is part of the ISA? Instructions, memory, etc.
    * Things that are visible to the programmer/software
  * What is not part of the ISA? (what goes inside: uArch techniques)
    * Things that are not supposed to be visible to the programmer/software but typically make the processor faster and/or consume less power and/or reduce complexity
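To make the Von Neumann bullets above concrete, below is a fetch/execute loop for a made-up toy ISA, sketched in C (the opcodes and encoding are hypothetical, not any real machine). Instructions and data live in the same array (unified memory), instructions are processed sequentially, and a word is treated as an instruction only because the program counter points at it.

<code c>
/* Stored-program (Von Neumann) sketch with a hypothetical toy ISA. */
#include <stdint.h>
#include <stdio.h>

enum { HALT = 0, LOADI = 1, ADD = 2, PRINT = 3 };   /* made-up opcodes */

int main(void) {
    /* Unified memory: instructions and data share the same storage.
       Each instruction is two words: opcode, operand. */
    uint32_t mem[16] = {
        LOADI, 5,    /* r0 = 5   */
        ADD,   7,    /* r0 += 7  */
        PRINT, 0,    /* print r0 */
        HALT,  0,
    };
    uint32_t pc = 0;   /* program counter */
    uint32_t r0 = 0;   /* single register */

    for (;;) {         /* sequential instruction processing */
        uint32_t op  = mem[pc];
        uint32_t arg = mem[pc + 1];
        pc += 2;
        if (op == HALT)       break;
        else if (op == LOADI) r0 = arg;
        else if (op == ADD)   r0 += arg;
        else if (op == PRINT) printf("%u\n", r0);   /* prints 12 */
    }
    return 0;
}
</code>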
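For the adder-implementation bullet, here is a ripple-carry adder simulated bit by bit in C (illustrative only; a real adder is combinational logic, not software). Each sum bit needs the carry-out of the previous position, so the delay of the chain grows with the operand width; a carry-lookahead adder removes that serial dependence by computing the carries in parallel.

<code c>
/* Ripple-carry addition simulated in C -- illustrative sketch only. */
#include <stdint.h>
#include <stdio.h>

static uint32_t ripple_add(uint32_t a, uint32_t b) {
    uint32_t sum = 0, carry = 0;
    for (int i = 0; i < 32; i++) {               /* one full adder per bit */
        uint32_t ai = (a >> i) & 1;
        uint32_t bi = (b >> i) & 1;
        sum  |= (ai ^ bi ^ carry) << i;          /* sum bit = a ^ b ^ cin    */
        carry = (ai & bi) | (carry & (ai ^ bi)); /* carry-out feeds next bit */
    }
    return sum;
}

int main(void) {
    printf("%u\n", ripple_add(1234u, 5678u));    /* prints 6912 */
    return 0;
}
</code>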
===== Lecture 3 (1/17 Fri.) =====
  * Design tradeoffs
  * Macro architectures
  * Reconfigurability vs. specialized designs
  * Parallelism (instruction-level, data-level)
  * Uniform decode (example: Alpha)
  * Steering bits (sub-opcode)
  * 0-, 1-, 2-, 3-address machines
    * Stack machine
    * Accumulator machine
    * 2-operand machine
    * 3-operand machine
    * Tradeoffs between 0-, 1-, 2-, 3-address machines
  * Instructions / Opcodes / Operand specifiers (i.e., addressing modes)
  * Simple vs. complex data types (and their tradeoffs)
  * Semantic gap
  * Translation layer
  * Addressability
  * Byte/bit addressable machines
  * Virtual memory
  * Big/little endian (see the sketch after this list)
  * Benefits of having registers (data locality)
  * Programmer-visible (architectural) state
    * Programmers can access this directly
    * What are the benefits?
  * Microarchitectural state
    * Programmers cannot access this directly
  * Evolution of registers (from accumulators to registers)
  * Different types of instructions
    * Control instructions
    * Data instructions
    * Operation instructions
  * Addressing modes
    * Tradeoffs (complexity, flexibility, etc.)
  * Orthogonal ISA
    * Addressing modes that are orthogonal to instruction types
  * Vectored vs. non-vectored interrupts
  * Complex vs. simple instructions
    * Tradeoffs
  * RISC vs. CISC
    * Tradeoffs
    * Backward compatibility
    * Performance
    * Optimization opportunities
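For the big/little endian bullet above, a short C sketch (illustrative, not course code): it inspects the byte layout of a 32-bit value on whatever machine it runs on.

<code c>
/* Endianness check -- illustrative sketch only. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t x = 0x11223344;
    const uint8_t *p = (const uint8_t *)&x;   /* view the word byte by byte */
    /* Little endian stores the least significant byte first: 44 33 22 11.
       Big endian stores the most significant byte first:     11 22 33 44. */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    printf("%s\n", p[0] == 0x44 ? "little endian" : "big endian");
    return 0;
}
</code>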
  