Course Intro and Computer Performance
Computer Architectures 521480S
Overview
• Instructor: Janne Haverinen
  – office: TS349
  – email: [email protected]
  – phone: 553 2801
• Exercises: Fad Seydou
  – office: TS350
  – email: [email protected]
• Lectures: Mon 12-15 and Tue 14-16 in TS101
• Book: Patterson D., Hennessy J. (1996), Computer Architecture: A Quantitative Approach (2nd ed.)
• Webpage: http://www.ee.oulu.fi/research/tklab/courses/521480S/
Grading
• Exam 70-100%
• Voluntary home assignments 0-30%
  – Points from the home assignments are added to the exam points only if the student gets 40% of the maximum exam points.
  – Every week students get home assignments. Solutions are given during the exercises on Mondays 15-17.
  – Home assignments are returned by email to the instructor.
    » Subject field: “TKA-X Olli Opiskelija”, where X is the number of the assignment and “Olli Opiskelija” is your name.
    » Use .DOC or .PDF format for the attachment.
  – Academic honesty: do not cheat! If cheating is discovered, all participants will receive NO credits from the home assignments.
Course Objective
Understanding the design techniques, machine structures, technology factors, and evaluation methods that will determine the form of computers in the 21st century.
[Diagram: Computer Architecture — Instruction Set Design, Organization, Hardware — at the center, surrounded by the forces that shape it: Technology, Programming Languages, Operating Systems, History, Applications, Interface Design (ISA), Measurement & Evaluation, and Parallelism.]
Topic Coverage
Textbook: Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 2nd Ed., 1996.
• Fundamentals of Computer Architecture (Chapter 1)
• Instruction Set Architecture (Chapter 2, Appendix D)
• Pipelining (Chapter 3)
• Instruction Level Parallelism (Chapter 4)
• Memory Hierarchy (Chapter 5)
Computer Architecture’s Changing Definition
• 1950s to 1960s: Computer Architecture Course: Computer Arithmetic
• 1970s to mid 1980s: Computer Architecture Course: Instruction Set Design, especially ISAs appropriate for compilers
• 1990s: Computer Architecture Course: Design of CPU, memory system, I/O system, Multiprocessors, Networks
• 2010s: Computer Architecture Course: Self-adapting systems? Self-organizing structures? DNA Systems/Quantum Computing?
What is Computer Architecture?
[Diagram: the Computer Architect works at the interfaces between Applications and Technology, balancing the ISA, APIs, Machine Organization (registers, IR, I/O channels, links), and Measurement & Evaluation.]
Computer Architecture Topics
[Diagram: the layers of a computer system and the topics attached to each.]
• Instruction Set Architecture: addressing modes, instruction formats
• Processor Design: pipelining, hazard resolution, superscalar, reordering, ILP, branch prediction, speculation, VLSI
• Memory Hierarchy: cache design (block size, associativity), L1 cache, L2 cache, DRAM, coherence, bandwidth, latency, interleaving, emerging technologies
• Input/Output and Storage: disks and tape, RAID
• Application Area
  – Special Purpose / General Purpose
  – Scientific (FP intensive) / Commercial (Integer intensive)
• Level of Software Compatibility Required
  – Object Code / Binary (instruction format) Compatible
    » Direct support for older programs
  – Assembly Language (instruction set compatible)
    » A new compiler needed
  – Programming Language
    » Support for high-level language structures
Context for Designing New Architectures
• Operating System Requirements
  – Size of Address Space
  – Memory Management / Protection
    » virtual memory
    » MMU (Memory Management Unit)
  – Context Switch Capability
  – Interrupts and Traps
• Standards:
  – IEEE 754 Floating Point
  – I/O Bus
  – Networks
• Technology Improvements
  – Increased Processor Performance
  – Larger Memory and I/O devices
  – Software/Compiler Innovations
Technology Trends: Microprocessor Capacity
Memory Capacity (Single Chip DRAM)
Moore’s Law for Memory: Memory capacity increases by 4x every 3 years
Technology Improvements

          Capacity         Speed
Logic     2x in 3 years    2x in 3 years
DRAM      4x in 3 years    1.4x in 10 years
Disk      2x in 3 years    1.4x in 10 years

Speed increases of memory and I/O have not kept pace with processor speed increases.
Processor Performance Trends
[Chart: processor performance growth — 1.35X per year before, 1.55X per year now.]
Why Such Change in 10 Years?
• Performance
  – Technology Advances
    » CMOS VLSI dominates older technologies (TTL, ECL) in cost AND performance
  – Computer architecture advances improve the low end
    » RISC, superscalar, RAID, …
• Price: Lower costs due to …
  – Simpler development
    » CMOS VLSI: smaller systems, fewer components
  – Higher volumes
    » CMOS VLSI: same development cost, 10,000 vs. 10,000,000 units
  – Lower margins by class of computer, due to fewer services
• Function
  – Rise of networking/local interconnection technology
1988 Computer Food Chain
2001 Computer Food Chain
Processor Perspective
• Putting performance growth in perspective:

          Pentium 3 (Coppermine)   Cray YMP
Type      Desktop                  Supercomputer
Year      2000                     1988
Clock     1130 MHz                 167 MHz
MIPS      > 1000 MIPS              < 50 MIPS
Cost      $2,000                   $1,000,000
Cache     256 KB                   0.25 KB
Memory    512 MB                   256 MB
Where Has This Performance Improvement Come From?
• Technology
  – More transistors per chip
  – Faster logic
• Machine Organization/Implementation
  – Deeper pipelines
  – More instructions executed in parallel
• Instruction Set Architecture
  – Reduced Instruction Set Computers (RISC)
  – Multimedia extensions
  – Explicit parallelism
• Compiler technology
  – Finding more parallelism in code
  – Greater levels of optimization
What is Ahead?
• Greater instruction level parallelism (ILP)
• Bigger caches, and more levels of cache
• Multiple processors per chip
• Complete systems on a chip
• High performance interconnect
Computers Today
• Technology
  – Very large dynamic RAM: 1 GB and beyond
  – Large fast static RAM: 1 MB, 10 ns
  – Very large disks: approaching 100 GB
• Complete systems on a chip
  – 20+ million transistors
• Parallelism
  – Superscalar, VLIW (Very Long Instruction Word computer architecture)
  – Superpipelined
  – Explicitly parallel (e.g. VLIW)
  – Multiprocessors
  – Distributed systems
• Low Power
  – 60% of PCs portable by 2002
  – Performance per watt is now of interest
• Parallel I/O
  – Many applications are I/O limited, not computation limited
  – Processor speeds increase faster than memory and I/O speeds
• Multimedia
  – New interface technologies
  – Video, speech, handwriting, virtual reality, …
• Embedded systems extremely important
  – 90% of computers (processors) manufactured and 50% of processor revenue is in the embedded market (e.g., microcontrollers, DSPs, graphics processors, etc.)
Hardware Technology

               1980       1990        2001
Memory chips   64 KB      4 MB        256 MB
Clock rate     1-2 MHz    20-40 MHz   700-1200 MHz
Hard disks     40 MB      1 GB        40 GB
Floppies       0.256 MB   1.5 MB      0.5-2 GB
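The "memory quadruples every 3 years" rule can be checked against the DRAM row of the table above with a short sketch (the function and its name are my own, not from the course):

```python
# Project DRAM capacity forward assuming it quadruples every 3 years,
# starting from the 64 KB chips of 1980 (figures from the table above).

def projected_capacity_kb(start_kb, years, factor=4.0, period=3.0):
    """Capacity after `years`, growing by `factor` every `period` years."""
    return start_kb * factor ** (years / period)

print(projected_capacity_kb(64, 1990 - 1980) / 1024)       # ~6.3 MB (actual: 4 MB)
print(projected_capacity_kb(64, 2001 - 1980) / 1024 ** 2)  # 1.0 GB (actual: 256 MB)
```

The rule roughly matches the 1980-1990 decade but overshoots by 2001, so the table's later figures imply slightly slower growth than 4x per 3 years.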
Computing in the 21st century
• Continue quadrupling memory about every 3 years
• Single-chip multiprocessor systems
• High-speed communication networks
• These improvements will create the need for new and innovative computer systems.
Measurement and Evaluation
Architecture is an iterative process:
• Search the possible design space
• Make selections
• Evaluate the selections made
Good measurement tools are required to accurately evaluate the selections.
Measurement Tools
• Benchmarks
  – e.g. SPEC CPU2000
    » CINT2000: testing integer arithmetic, with programs such as compilers, interpreters, word processors, chess programs, etc.
    » CFP2000: testing floating point performance, with physical simulations, 3D graphics, image processing, computational chemistry, etc.
• Simulation (many levels)
  – ISA, RTL (Register Transfer Language), Gate, Transistor
• Cost, Delay, Area, and Power Estimates
• Queuing Theory
  – The study of the waiting times, lengths, and other properties of queues
  – Used to analyze the response time and throughput of an I/O system.
• Rules of Thumb
  – e.g. “Make the common case fast”
• Fundamental Laws
  – e.g. Amdahl’s Law
The Bottom Line: Performance (and Cost)
• Time to run the task (Execution Time)
  – Execution time, response time, latency
• Tasks per day, hour, week, sec, ns … (Performance)
  – Throughput, bandwidth
Plane               Speed      DC to Paris   Passengers   Throughput (pmph)
Boeing 747          610 mph    6.5 hours     470          286,700
BAC/Sud Concorde    1350 mph   3 hours       132          178,200
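The throughput column of the airliner analogy above is simply speed times passengers; a minimal sketch (the helper name is my own) reproduces it:

```python
# Throughput in passenger-miles per hour (pmph) = speed x passengers.

def passenger_mph(speed_mph, passengers):
    return speed_mph * passengers

print(passenger_mph(610, 470))    # 286700 -- Boeing 747: higher throughput
print(passenger_mph(1350, 132))   # 178200 -- Concorde: lower latency (3 h vs 6.5 h)
```

The point of the analogy: which plane is "faster" depends on whether you measure latency (time per task) or throughput (tasks per unit time).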
Performance and Execution Time

Execution time and performance are reciprocals:

  ExTime(Y)     Performance(X)
  ---------  =  --------------
  ExTime(X)     Performance(Y)
• Speed of Concorde vs. Boeing 747
• Throughput of Boeing 747 vs. Concorde
Performance Terminology
“X is n% faster than Y” means:

  ExTime(Y)     Performance(X)          n
  ---------  =  --------------  =  1 + ---
  ExTime(X)     Performance(Y)         100

  n = 100 × (Performance(X) - Performance(Y)) / Performance(Y)

  n = 100 × (ExTime(Y) - ExTime(X)) / ExTime(X)

Example: Y takes 15 seconds to complete a task, X takes 10 seconds. What % faster is X?
Example
Example: Y takes 15 seconds to complete a task, X takes 10 seconds. What % faster is X?

  n = 100 × (ExTime(Y) - ExTime(X)) / ExTime(X)
  n = 100 × (15 - 10) / 10
  n = 50%
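The same arithmetic as a small Python helper (the function name is my own):

```python
# "X is n% faster than Y": n = 100 * (ExTime(Y) - ExTime(X)) / ExTime(X)

def percent_faster(ex_time_y, ex_time_x):
    """Percent by which X is faster than Y, given their execution times."""
    return 100.0 * (ex_time_y - ex_time_x) / ex_time_x

print(percent_faster(15, 10))   # 50.0
```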
Amdahl's Law

Speedup due to enhancement E:

                ExTime w/o E     Performance w/ E
  Speedup(E) =  ------------  =  -----------------
                ExTime w/ E      Performance w/o E

Suppose that enhancement E accelerates a fraction Fraction_enhanced of the task by a factor Speedup_enhanced, and the remainder of the task is unaffected.
What are the new execution time and the overall speedup due to the enhancement?
Amdahl’s Law
  ExTime_new = ExTime_old × ( (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced )

  Speedup_overall = ExTime_old / ExTime_new
                  = 1 / ( (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced )
Example of Amdahl’s Law
• Floating point instructions improved to run 2X; but only 10% of the time was spent on these instructions.
  ExTime_new = ExTime_old × (0.9 + 0.1 / 2) = 0.95 × ExTime_old

  Speedup_overall = ExTime_old / ExTime_new = 1 / 0.95 = 1.053

• The new machine is 5.3% faster for this mix of instructions.
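Amdahl's Law and the floating-point example above can be sketched in a few lines (the function name is mine):

```python
# Overall speedup when a fraction of the task is sped up by some factor
# (Amdahl's Law, as stated above).

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / speedup_enhanced)

# FP instructions run 2x faster but occupy only 10% of the time:
print(round(amdahl_speedup(0.10, 2.0), 3))   # 1.053
```

Note how the unenhanced 90% dominates: even an infinite FP speedup would give at most 1/0.9 ≈ 1.11x overall.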
Make The Common Case Fast
• All instructions require an instruction fetch; only a fraction require a data fetch/store.
  – Optimize instruction access over data access
• Programs exhibit locality
  – 90% of time in 10% of code
  – Temporal Locality (items referenced recently)
  – Spatial Locality (items referenced nearby)
• Access to small memories is faster
  – Provide a storage hierarchy such that the most frequent accesses are to the smallest (closest) memories.

[Diagram: storage hierarchy — Registers → Cache → Memory → Disk / Tape.]
Hardware/Software Partitioning
• The simple case is usually the most frequent and the easiest to optimize!
• Do simple, fast things in hardware and be sure the rest can be handled correctly in software.

Would you handle these in hardware or software?
• Integer addition?
• Accessing data from disk?
• Floating point square root?
Performance Factors
• The number of instructions/program is called the instruction count (IC).
• The average number of cycles per instructionis called the CPI.
• The number of seconds per cycle is the clock period .
• The clock rate is the multiplicative inverse of the clock period and is given in cycles per second (or MHz).
• For example, if a processor has a clock period of 5 ns, what is its clock rate? (1 / 5 ns = 200 MHz.)

  CPU time = Seconds / Program
           = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
Aspects of CPU Performance

  CPU time = Seconds / Program
           = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)

               Inst Count   CPI   Clock Rate
  Program          X         X
  Compiler         X         X
  Inst. Set        X         X
  Organization               X        X
  Technology                          X
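The CPU-time equation translates directly into code; the operand values below are a hypothetical example, not from the slides:

```python
# CPU time = instruction count x CPI x clock period
#          = instruction count x CPI / clock rate

def cpu_time_seconds(instruction_count, cpi, clock_rate_hz):
    return instruction_count * cpi / clock_rate_hz

# Hypothetical: 10^9 instructions at CPI 1.5 on a 300 MHz clock.
print(cpu_time_seconds(1e9, 1.5, 300e6))   # 5.0
```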
Marketing Metrics
  MIPS = Instruction Count / (Time × 10^6) = Clock Rate / (CPI × 10^6)

• Not effective for machines with different instruction sets
• Not effective for programs with different instruction mixes
• Uncorrelated with performance

  MFLOPS = FP Operations / (Time × 10^6)

• Machine dependent
• Often not where time is spent
• Peak - maximum able to achieve
• Native - average for a set of benchmarks
• Relative - compared to another platform
Normalized MFLOPS (operations weighted by complexity):
  add, sub, compare, mult   1
  divide, sqrt              4
  exp, sin, …               8
Problem with MIPS
Typical Mix

Base Machine (Reg / Reg)
Op       Freq   Cycles   CPI(i)   (% Time)
ALU      50%    1        0.5      (33%)
Load     20%    2        0.4      (27%)
Store    10%    2        0.2      (13%)
Branch   20%    2        0.4      (27%)
                Total CPI: 1.5

• Clock rate: 300 MHz ⇒ MIPS = 200.
• Optimize away 50% of ALU operations
  – New frequencies of ALU, Load, Store and Branch?
  – Average CPI of the optimized code?
  – MIPS rating of the optimized code? (MIPS = 180.)
• The MIPS rating could drop for the optimized code.
New Mix (50% of ALU operations optimized away)

Base Machine (Reg / Reg)
Op       Freq   Cycles   CPI(i)   (% Time)
ALU      33%    1        0.33     (20%)
Load     27%    2        0.54     (32%)
Store    13%    2        0.26     (16%)
Branch   27%    2        0.54     (32%)
                Total CPI: 1.67

• Clock rate: 300 MHz ⇒ MIPS = 180.
  ⇒ The MIPS rating dropped for the optimized code.
  ⇒ MIPS can be tailored using compiler tricks (e.g. compiling without optimization in the above example).
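The drop in the MIPS rating can be verified with a short sketch (the helper names are mine) using the two instruction mixes above:

```python
# CPI from an instruction mix, and the resulting native MIPS rating.

def cpi_from_mix(mix):
    """mix: (frequency, cycles) pairs whose frequencies sum to 1."""
    return sum(freq * cycles for freq, cycles in mix)

def mips(clock_rate_hz, cpi):
    return clock_rate_hz / (cpi * 1e6)

base = [(0.50, 1), (0.20, 2), (0.10, 2), (0.20, 2)]   # ALU, Load, Store, Branch
print(round(mips(300e6, cpi_from_mix(base))))         # 200

# Half the ALU ops removed: 75 of every 100 instructions remain.
opt = [(25/75, 1), (20/75, 2), (10/75, 2), (20/75, 2)]
print(round(mips(300e6, cpi_from_mix(opt))))          # 180
```

The optimized program runs faster (fewer instructions, same clock), yet its MIPS rating is lower because the surviving instruction mix has a higher average CPI.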
Programs to Evaluate Processor Performance
• (Toy) Benchmarks
  – 10-100 line programs
  – e.g.: sieve, puzzle, quicksort
• Synthetic Benchmarks
  – Attempt to match average frequencies of real workloads
  – e.g., Whetstone, Dhrystone
• Kernels
  – Time-critical blocks of code
• Real Benchmarks
  – Real programs
Benchmarking?
• How do we evaluate differences?
• Provide a target (benchmarks, large classes of important programs)
  – Could help other programs
• Good benchmarks accelerate progress
• Programs to evaluate processor performance:
  – Real programs: gcc, jpeg, etc.
  – Synthetic benchmarks: Dhrystone (no FP), Whetstone (FP), SPEC.
  – Kernels
  – Toy benchmarks: small programs.
SPEC: System Performance Evaluation Cooperative
• First Round: SPEC CPU89
  – 10 programs yielding a single number
• Second Round: SPEC CPU92
  – SPEC CINT92 (6 integer programs) and SPEC CFP92 (14 floating point programs)
  – Compiler flags can be set differently for different programs
• Third Round: SPEC CPU95
  – New set of programs: SPEC CINT95 (8 integer programs) and SPEC CFP95 (10 floating point)
  – Single flag setting for all programs
• Fourth Round: SPEC CPU2000
  – New set of programs: SPEC CINT2000 (12 integer programs) and SPEC CFP2000 (14 floating point)
  – Single flag setting for all programs
  – Programs in C, C++, Fortran 77, and Fortran 90
SPEC 2000
• 12 integer programs:
  – 2 Compression
  – 2 Circuit Placement and Routing
  – C Programming Language Compiler
  – Combinatorial Optimization
  – Chess, Word Processing
  – Computer Visualization
  – PERL Programming Language
  – Group Theory Interpreter
  – Object-oriented Database
  – Written in C (11) and C++ (1)
• 14 floating point programs:
  – Quantum Physics
  – Shallow Water Modeling
  – Multi-grid Solver
  – 3D Potential Field
  – Parabolic / Elliptic PDEs
  – 3-D Graphics Library
  – Computational Fluid Dynamics
  – Image Recognition
  – Seismic Wave Simulation
  – Image Processing
  – Computational Chemistry
  – Number Theory / Primality Testing
  – Finite-element Crash Simulation
  – High Energy Nuclear Physics
  – Pollutant Distribution
  – Written in Fortran (10) and C (4)
Other SPEC Benchmarks
• JVM98:
  – Measures performance of Java Virtual Machines
• SFS97:
  – Measures performance of network file server (NFS) protocols
• Web99:
  – Measures performance of World Wide Web applications
• HPC96:
  – Measures performance of large, industrial applications
• APC, MEDIA, OPC:
  – Measure performance of graphics applications
• For more information about the SPEC benchmarks see: http://www.spec.org.
Performance Evaluation
• For better or worse, benchmarks shape a field
• Good products are created when we have:
  – Good benchmarks
  – Good ways to summarize performance
• If the benchmarks/summary are inadequate, then we choose between improving the product for real programs vs. improving the product (and MIPS) to get more sales; sales almost always wins!
• Execution time is the measure of computer performance!
Conclusions on Performance
• A fundamental rule in computer architecture is to make the common case fast.
• The most accurate measure of performance is the execution time of representative real programs (benchmarks).
• Execution time is dependent on the number of instructions per program, the number of cycles per instruction, and the clock rate.
• When designing computer systems, both cost and performance need to be taken into account.