Architecture & Technology Implications for Future Computing Dr Fayé A. Briggs, Intel Fellow



Page 1:

Architecture & Technology

Implications for Future Computing

Dr Fayé A. Briggs, Intel Fellow

Page 2:

Agenda

• Demands for exascale computing: what is it, and what problems does it solve?
• Trends in the Top 500
• Programmability & tools for IA exascale: analytical intelligence framework
• Six pillars of technology to drive scalable computing: micro-architecture performance drivers
• Technology drivers
• Applying Moore's Law to future datacenters: inflection points and sustaining performance CAGR
• Summary

Page 3:

Future Computing Enables Better Fidelity

Goal: 1 ExaFLOPS (10^18 FLOPS) within 20-40 MW by the end of the decade

• Enables solutions to currently impossible, life-changing problems
• Makes PFLOPS-class HPC computing affordable and ubiquitous

[Chart: Top 500 performance trend and forecast, 1993-2029, on a log scale from 100 MFLOPS to 1 ZFLOPS; example applications include weather prediction, medical imaging, and genomics research.]

Page 4:

Trends to Exascale Performance

• Roughly 10x performance every 4 years
• The trend predicts that we will hit exascale performance in 2018-19
• Current Top 500 leader: Tianhe-2 at 33.9 PF
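As a rough check of this trend, the sketch below extrapolates the 10x-per-4-years rule from Tianhe-2's 33.9 PF entry (June 2013 list) to 1 EF; it is a simple log extrapolation for illustration, not a forecast from the slide.

```python
import math

# Slide's rule of thumb: ~10x peak performance every 4 years.
growth_per_year = 10 ** (1 / 4)

# Starting point: Tianhe-2 at 33.9 PFLOPS on the June 2013 Top 500 list.
start_year, start_pflops = 2013, 33.9
target_pflops = 1000.0  # 1 EFLOPS = 1000 PFLOPS

# Solve start * growth^t = target  =>  t = log(target/start) / log(growth)
years_needed = math.log(target_pflops / start_pflops) / math.log(growth_per_year)
print(f"~{years_needed:.1f} years -> exascale around {start_year + years_needed:.0f}")
# Prints roughly 5.9 years, i.e. ~2019, consistent with the slide's 2018-19 estimate.
```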

Page 5:

Six Pillars of Maintaining Scalability of Computing: Needs a Multi-disciplinary Approach

• Power management
• Microprocessor
• Parallel software
• Interconnect
• Reliability and resiliency
• Memory

Page 6:

Micro-architecture Performance Parameters

• Core capability (IPC)
• Vector FLOP density
• Core count
• On-die interconnect
• Frequency
• Cache hierarchy (densities & latencies)
• Memory latency, bandwidth & size
• Memory technologies
• Considerations for efficient power/performance scalability

Many knobs for optimizing performance within power & cost constraints:

Performance = Frequency × IPC (Instructions Per Cycle)

Power ∝ V² × Frequency

Frequency ∝ Voltage, so a frequency reduction coupled with the corresponding voltage reduction yields a roughly cubic reduction in power.
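To make the cubic relationship concrete, here is a minimal sketch of the classic dynamic-power model P ∝ C·V²·f with frequency assumed proportional to voltage; the numbers are illustrative units, not measured data.

```python
def dynamic_power(voltage, frequency, capacitance=1.0):
    """Classic CMOS dynamic power model: P = C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

# Baseline operating point (arbitrary units).
v0, f0 = 1.0, 1.0
p0 = dynamic_power(v0, f0)

# Scale frequency down 30% and, since f is roughly proportional to V,
# scale voltage down by the same factor.
scale = 0.7
p1 = dynamic_power(v0 * scale, f0 * scale)

print(f"Power ratio: {p1 / p0:.3f}  (expected {scale**3:.3f}, i.e. cubic)")
```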

Page 7:

Cores & Uncores: Performance Drivers

[Block diagram: Xeon 2600 (Sandy Bridge) architecture, 2-socket EP/WS platform. Each socket pairs its cores with an "uncore" containing the last-level cache, the memory controller with its DDR3 channels, QPI links to the other socket, PCIe and ESI I/O lanes, and power & clock logic; each socket attaches to local DRAM.]

Page 8:

Cumulative Performance Drivers

[Charts: cumulative performance gains on SPECint_rate and TPC-C, broken down by contributing factor.]

Core count drives the cumulative increase in performance more than any other factor.

Page 9:

MPKI in HPC Workloads

• Most HPC workloads benefit from caches
• Less than 20 misses per kilo-instruction (MPKI) for 1 MB-4 MB caches

[Charts: MPKI vs. cache size for HPC workloads.]
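As a back-of-the-envelope illustration of why MPKI matters, the sketch below converts a miss rate into off-chip bandwidth demand; the instruction rate and cache-line size are assumed values, not figures from the slide.

```python
def memory_bandwidth_gbs(mpki, instr_per_sec, line_bytes=64):
    """Approximate DRAM traffic implied by a miss rate given in misses per kilo-instruction."""
    misses_per_sec = mpki / 1000.0 * instr_per_sec
    return misses_per_sec * line_bytes / 1e9

# Assumed: one core retiring 2e9 instructions/s with 64-byte cache lines.
for mpki in (5, 20):
    print(f"{mpki} MPKI -> ~{memory_bandwidth_gbs(mpki, 2e9):.1f} GB/s per core")
# 20 MPKI at these assumed rates already implies ~2.6 GB/s of memory traffic per core.
```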

Page 10:

Caches Save Power

• Caches save power because memory communication is avoided
• Caches are 8x-45x better in BW/Watt than memory
• Power break-even point is around an 11% hit rate (L2 cache)

[Chart: relative bandwidth and relative BW/Watt for memory, L2 cache, and L1 cache.]
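The break-even claim can be reasoned about with a simple energy model; the per-access energies below are hypothetical (chosen only so that an L2 lookup costs ~11% of a DRAM access) to show how the break-even hit rate falls out.

```python
def breakeven_hit_rate(e_cache_pj, e_mem_pj):
    """Hit rate at which (cache lookup + misses to memory) equals going straight to memory.

    Energy per access with a cache:  e_cache + (1 - h) * e_mem
    Energy per access without it:    e_mem
    Setting them equal gives h = e_cache / e_mem.
    """
    return e_cache_pj / e_mem_pj

# Hypothetical per-access energies (picojoules, illustrative only).
e_l2, e_dram = 70.0, 640.0
h = breakeven_hit_rate(e_l2, e_dram)
print(f"Break-even hit rate ~{h:.0%}")  # ~11%: above this, the cache saves power
```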

Page 11:

Memory Bandwidth Does Not Scale With Moore’s Law

• Significant increase in CPU computing power

• Memory BW not keeping up

• Gap must be closed to support scalability of future computing

Source: Exascale Computing Study: Technology Challenges in achieving Exascale Systems (2008)

Source: Intel Forecast

[Charts: per-socket memory bandwidth (GB/s), 2008-2017, showing the DDR3 trend and projected DDR4 bandwidth against traditional CPU and HE-WS/HPC bandwidth demand; and relative growth, 2006-2018, in core count and flops/cycle/socket versus memory bandwidth, with the required memory bandwidth growing exponentially.]

Page 12:

Technology Scaling Outlook

[Charts: relative scaling per process generation versus the ideal curve: transistor density continues at 1.75-2x per generation; frequency is almost flat; supply voltage is almost flat; energy per operation shows some scaling but falls short of ideal.]

Page 13:

Energy per Compute Operation

[Chart: energy (pJ) versus process node (45nm through 7nm) for a floating-point operation, operand movement, on-die communication, and DRAM access; per-bit transfer energies of roughly 10, 25, 75, and 100 pJ/bit are annotated.]

Source: Intel
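To see why data movement dominates, the sketch below compares the energy of one floating-point operation with the energy of fetching a single 64-bit operand from DRAM. The ~10 pJ figure for the FP op is an assumed order-of-magnitude value, and 100 pJ/bit is simply the largest per-bit figure annotated on the chart; the exact assignment of the chart's values is not recoverable from the extraction.

```python
FP_OP_PJ = 10.0        # assumed energy of one FP op (order-of-magnitude placeholder)
DRAM_PJ_PER_BIT = 100  # highest per-bit figure annotated on the chart

operand_bits = 64      # one double-precision operand
fetch_pj = DRAM_PJ_PER_BIT * operand_bits
print(f"Fetching one 64-bit operand from DRAM: {fetch_pj:.0f} pJ "
      f"(~{fetch_pj / FP_OP_PJ:.0f}x an FP op)")
# Moving the word (~6400 pJ) costs hundreds of times more than computing on it.
```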

Page 14:

Wide Dynamic Range

[Chart: energy efficiency versus supply voltage, from zero to maximum; near-threshold voltage (NTV) operation sits between the subthreshold region and the normal operating range, with a demonstrated ~5x efficiency gain.]

Demonstrated operating points:

• Ultra-low power (subthreshold): 280 mV, 3 MHz, 2 mW, 1500 MIPS/W
• Energy efficient (NTV): 0.45 V, 60 MHz, 10 mW, 5830 MIPS/W
• High performance: 1.2 V, 915 MHz, 737 mW, 1240 MIPS/W
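The MIPS/W column follows from the measured frequency and power if the demonstration core retires roughly one instruction per cycle (an assumption made only for this check); a quick verification:

```python
# Operating points from the slide: (label, frequency in MHz, power in mW).
points = [
    ("subthreshold, 280 mV", 3, 2),
    ("NTV, 0.45 V", 60, 10),
    ("full voltage, 1.2 V", 915, 737),
]

# Assuming ~1 instruction per cycle, MIPS ~= MHz, so MIPS/W = MHz / W.
for label, mhz, mw in points:
    mips_per_watt = mhz / (mw / 1000.0)
    print(f"{label:>22}: ~{mips_per_watt:5.0f} MIPS/W")
# Prints ~1500, ~6000, ~1241 MIPS/W - matching the slide's 1500 / 5830 / 1240
# to within a few percent, with the efficiency peak at NTV.
```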

Page 15:

A Number of Challenges to Overcome

[Charts: (1) DRAM data pins per memory (at 10 Gb/s) and memory BW/socket (GB/s) versus TeraFLOPS/socket, showing the trade-off between "pin count low, but power high" and "power low, but pin count high", with annotated data-movement energies of ~10 pJ per byte, ~100 pJ per byte chip to chip, and ~150 pJ per byte to memory; (2) top-system concurrency trend, 1993-2009; (3) peak performance (TF) versus memory capacity (TB) for some of the most recent Top 10 machines.]

Key challenges: cost/area, power, extreme parallelism, byte/flop ratio.

Source: Exascale Computing Study: Technology Challenges in Achieving Exascale Systems (2008)

Page 16:

Consider New Levels of Memory Hierarchy

                On-CPU SRAM   DRAM      NVM Storage   Storage (HDD)
Latency         1x            ~20x      ~20,000x      ~10,000,000x
Cost per bit    1x            ~0.05x    ~0.005x       ~0.0005x

Scaling challenges with DRAM and NAND

Source: Memory Technology Trend and Future Challenges - Sungjoo Hong, Hynix Semiconductor

Page 17:

Memory Power Challenges

• Need to balance bandwidth with capacity and power
• A 1-ExaFlop machine needs roughly 200-300 PB/s of memory bandwidth at ~2-3 pJ/bit
• GDDR will run out of steam
• Periphery-connected solutions will run into pin-count issues

Memory bandwidth is critical to HPC performance. Existing technology trends leave a 3-4x gap on pJ/bit.
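A quick calculation shows why the pJ/bit target matters at these bandwidths; the sketch below simply multiplies the slide's bandwidth and energy ranges to get memory-interface power for a 1-ExaFlop machine (the 20 pJ/bit comparison point is the figure quoted on the next slide).

```python
def memory_power_mw(bandwidth_pb_per_s, pj_per_bit):
    """Memory interface power in MW for a given bandwidth (PB/s) and energy per bit."""
    bits_per_s = bandwidth_pb_per_s * 1e15 * 8
    return bits_per_s * pj_per_bit * 1e-12 / 1e6

# Slide's target: 200-300 PB/s at 2-3 pJ/bit ...
print(f"at 2 pJ/bit, 200 PB/s: ~{memory_power_mw(200, 2):.1f} MW")
print(f"at 3 pJ/bit, 300 PB/s: ~{memory_power_mw(300, 3):.1f} MW")
# ... versus today's ~20 pJ/bit, which would blow the power budget on memory alone:
print(f"at 20 pJ/bit, 250 PB/s: ~{memory_power_mw(250, 20):.0f} MW")
```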

Page 18:

3D Integration of DRAM and Logic

[Diagram: a DRAM stack mounted on a logic buffer chip within the package.]

Logic buffer chip, with technology optimized for:
• High-speed signaling
• Energy-efficient logic circuits
• Implementing intelligence

DRAM stack, with technology optimized for:
• Memory density
• Lower cost

3D integration provides the best of both worlds.

Page 19:

1Tb/s HMC DRAM Prototype

• 3D integration technology
• 1 Gb DRAM array
• 512 MB total DRAM per cube
• 128 GB/s bandwidth
• <10 pJ/bit energy

                      Bandwidth      Energy efficiency
DDR3 (today)          10.66 GB/s     50-75 pJ/bit
Hybrid Memory Cube    128 GB/s       8 pJ/bit

10x higher bandwidth, 10x lower energy. Source: Micron
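Combining the bandwidth and pJ/bit columns gives the interface power at full rate, which shows why the cube's energy efficiency matters as much as its raw bandwidth; the DDR3 energy is taken at the midpoint of the 50-75 pJ/bit range as an assumption.

```python
def interface_power_w(gb_per_s, pj_per_bit):
    """Power (W) to move data at the given rate and per-bit energy."""
    return gb_per_s * 1e9 * 8 * pj_per_bit * 1e-12

ddr3_w = interface_power_w(10.66, 62.5)  # midpoint of the 50-75 pJ/bit range
hmc_w = interface_power_w(128, 8)

print(f"DDR3: {ddr3_w:.1f} W for 10.66 GB/s")
print(f"HMC:  {hmc_w:.1f} W for 128 GB/s")
print(f"-> ~{128 / 10.66:.0f}x the bandwidth for ~{hmc_w / ddr3_w:.1f}x the power")
```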

Page 20:

Memory: Approach Forward

• Significant power is consumed in memory: need to drive ~20 pJ/bit down to 2-3 pJ/bit
• Balancing bandwidth, capacity, and power is a hard problem
• More hierarchical memories
• Progressively more integration

[Diagrams: three integration options, from direct-attach usage through multi-chip package usage to multi-package usage, showing the CPU die, logic die, package, and memory.]

Page 21:

Develop Innovative Packaging & IO Solutions

[Diagrams: a CPU with 100s of cores stacked with DRAM or NVRAM die on an interconnect substrate, connected by through-silicon vias, with heat extraction on top.]

• Pins required plus I/O power limit the use of traditional packaging
• Tighter integration between memory and CPU
• High bandwidth and low latency by exploiting memory locality

Source: Exascale Computing Study: Technology Challenges in Achieving Exascale Systems (2008)

"More cores per chip will slow some programs [red] unless there's a big boost in memory bandwidth [yellow]." - "Multicore Is Bad News For Supercomputers", IEEE Spectrum, Nov 2008

Page 22:

Reliability Challenges

From peta to exa, the reliability issues are:

• 1,000x parallelism: more hardware for something to go wrong; >1,000x intermittent faults due to soft errors
• Aggressive Vcc scaling to reduce power/energy: gradual faults due to increased variations; more susceptibility to Vcc droops (noise) and to dynamic temperature variations; exacerbates intermittent faults (soft errors)
• Deeply scaled technologies: aging-related faults; lack of burn-in?; variability increases dramatically

Resiliency will be the cornerstone.

Page 23:

Soft Errors and Reliability

[Chart: normalized soft error rate per cell (sea level) versus supply voltage for 250nm through 65nm technologies; the soft error rate per bit decreases each generation, and the impact of NTV operation on soft error rate is nominal.]

[Chart: memory and latch soft error rates, 180nm through 32nm, relative to 130nm; assuming a 2x increase in bit/latch count per generation, the soft error rate at the system level will continue to increase.]

Positive impact of NTV on reliability:
• Lower voltage means lower electric fields; lower power means lower temperature
• Device aging effects will be less of a concern
• Fewer electromigration-related defects

N. Seifert et al, "Radiation-Induced Soft Error Rates of Advanced CMOS Bulk Devices", 44th Reliability Physics Symposium, 2006

Page 24:

Platform Reliability

[Charts: projected mean time to interrupt (hours) and failures per socket per year, 2004-2016, and application utilization (%) from 1997 to 2015 for per-socket failure rates of 0.001, 0.01, and 0.1 failures per year.]

Source: Exascale Computing Study: Technology Challenges in achieving Exascale Systems (2008)

Page 25:

Today’s HW System Architecture

[Diagram: a node of four sockets, each with 4 GB of DRAM, forming one coherent domain; nodes connect to each other in a non-coherent domain. Today's programming model comprehends this system architecture.]

Node level:
• 1.5 TF peak
• 660 pJ/F, i.e. 660 MW per exaflop
• 10 mB of DRAM per flop

Processor (per socket):
• 16 cores, each with x8 FP, a small register file, 32K I$, 32K D$, 128K I$, 128K D$, sharing a 16 MB L3
• ~3 GHz, ~100 W
• 384 GF/s peak
• 260 pJ/F, i.e. 260 MW per exaflop
• 55 mB of local memory per flop
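The "MW/Exa" figures above are the pJ/flop numbers scaled to an exaflop; a minimal check of the conversion:

```python
def mw_per_exaflop(pj_per_flop):
    """Power in MW to sustain 1 EFLOPS (1e18 flop/s) at the given energy per flop."""
    return 1e18 * pj_per_flop * 1e-12 / 1e6

for label, pj in (("node level", 660), ("processor only", 260)):
    print(f"{label}: {pj} pJ/flop -> {mw_per_exaflop(pj):.0f} MW per exaflop")
# 660 pJ/flop -> 660 MW and 260 pJ/flop -> 260 MW, as on the slide.
```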

Page 26:

Straw-man Exascale Processor

[Diagram: a hierarchical straw-man processor. The simplest core (~600K transistors) pairs a register file with FP logic. At the first level of the hierarchy, groups of cores share a cache; at the next level, clusters of eight processing elements (PEs) plus a service core share a 1 MB L2; clusters are tied together by an interconnect and a next-level cache, and the full chip adds a last-level cache and a chip-level interconnect.]

Straw-man parameters:
• Technology: 7nm, 2018
• Die area: 500 mm2
• Cores: 2048
• Frequency: 4.2 GHz
• TFLOPs: 17.2
• Power: 600 Watts
• Energy efficiency: 34 pJ/Flop

Computation alone consumes 34 MW at exascale.
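The straw-man numbers are internally consistent; the sketch below rederives the peak TFLOPS and energy efficiency from the core count, frequency, and power. The flops-per-cycle-per-core figure of 2 is inferred from those numbers rather than stated on the slide.

```python
cores = 2048
freq_ghz = 4.2
flops_per_cycle_per_core = 2   # inferred: 17.2 TF / (2048 cores * 4.2 GHz) = 2
power_w = 600

peak_tflops = cores * freq_ghz * flops_per_cycle_per_core / 1000.0
pj_per_flop = power_w / peak_tflops   # W per TFLOPS is numerically pJ per flop

print(f"Peak: {peak_tflops:.1f} TFLOPS, efficiency: {pj_per_flop:.1f} pJ/flop")
# ~17.2 TFLOPS and ~34.9 pJ/flop (the slide rounds to 34);
# at 1 EFLOPS that is ~34-35 MW for the computation alone.
```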

Page 27:

Straw-man Architecture at NTV

[Diagram: the same hierarchical straw-man processor as on the previous slide, operated at reduced voltage: frequency and flops drop, but power drops further and energy efficiency improves.]

                 Full Vdd       50% Vdd
Technology       7nm, 2018      7nm, 2018
Die area         500 mm2        500 mm2
Cores            2048           2048
Frequency        4.2 GHz        600 MHz
TFLOPs           17.2           2.5
Power            600 Watts      37 Watts
E Efficiency     34 pJ/Flop     15 pJ/Flop

Compute energy efficiency comes close to the exascale goal.
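The same arithmetic applied to the 50% Vdd column shows where the 2.5 TFLOPS and 15 pJ/flop figures come from (again using the inferred 2 flops/cycle/core):

```python
cores, flops_per_cycle = 2048, 2    # as in the full-Vdd straw-man
freq_ghz, power_w = 0.6, 37         # 600 MHz and 37 W at ~50% Vdd

peak_tflops = cores * freq_ghz * flops_per_cycle / 1000.0
pj_per_flop = power_w / peak_tflops  # W per TFLOPS equals pJ per flop

print(f"NTV point: {peak_tflops:.2f} TFLOPS at {pj_per_flop:.1f} pJ/flop")
# ~2.46 TFLOPS and ~15 pJ/flop: a ~2.3x efficiency gain for ~7x lower throughput.
```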

Page 28:

Over-provisioning, Introspection, Self-awareness

Addressing variations:
1. Provide more compute HW
2. Law of large numbers
3. Static profile

System SW implements an introspective execution model:
1. Schedule threads based on objectives and resources
2. Dynamically control and manage resources
3. Identify sensors and functions in HW for implementation

Dynamic reconfiguration for:
1. Energy efficiency
2. Latency
3. Dynamic resource management (fine-grain resource management)

Sensors for introspection:
1. Energy consumption
2. Instantaneous power
3. Computations
4. Data movement

[Diagram: a processor chip of 16 clusters; each cluster has eight PEs plus a service core with a 1 MB L2 and an 8 MB shared LLC, tied together by an interconnect and a 64 MB shared LLC. Cores marked F, M, and S illustrate scheduling threads across resources of varying speed.]

Page 29:

Over-provisioned Introspectively Resource Managed System

• Over-provisioned in design
• Dynamically tuned for the given objective

[Charts: projected system power (MW) versus process node (45nm through 5nm), with and without data-movement power, against the 20 MW system goal; and data-movement power broken down across system, cabinet, boards, die, clusters, and islands, totalling 3.23 MW with 40 Gbps photonic links at 10 pJ/b.]
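The per-link power follows from the stated link rate and energy per bit; how the 3.23 MW total is composed is not given on the slide, so the sketch below only derives the per-link cost and the link count it would imply, purely as an illustration.

```python
link_gbps = 40    # per-link rate from the slide
pj_per_bit = 10   # photonic link energy from the slide

watts_per_link = link_gbps * 1e9 * pj_per_bit * 1e-12   # = 0.4 W per link
total_mw = 3.23
implied_links = total_mw * 1e6 / watts_per_link

print(f"{watts_per_link:.2f} W per 40 Gbps link; "
      f"{total_mw} MW would correspond to ~{implied_links / 1e6:.1f} million such links")
```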

Page 30:

Summary

1. Many challenges remain in continuing to scale computing. These are exciting times, and it is important that innovations have broader applicability.
2. Computing efficiencies will continue to improve with IA cores, without changing the programming model or ISA.
3. There is potential for more integration over time, which reduces power and increases reliability and scalability.

Page 31:

Q&A


Page 32:

Legal Disclaimer

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

• A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

• Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

• The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

• Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: http://www.intel.com/products/processor_number.

• Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

• Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm

• [Add any code names from previous pages] and other code names featured are used internally within Intel to identify products that are in development and not yet publicly announced for release. Customers, licensees and other third parties are not authorized by Intel to use code names in advertising, promotion or marketing of any product or services, and any such use of Intel's internal code names is at the sole risk of the user.

• Intel, [Add words with TM or R from previous pages..ie Xeon, Core, etc] and the Intel logo are trademarks of Intel Corporation in the United States and other countries.

• *Other names and brands may be claimed as the property of others.

• Copyright ©2012 Intel Corporation.


Page 33:

Risk Factors


The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from Intel's expectations due to factors including changes in business and economic conditions, including supply constraints and other disruptions affecting customers; customer acceptance of Intel’s and competitors’ products; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological developments and to incorporate new features into its products. Intel is in the process of transitioning to its next generation of products on 22nm process technology, and there could be execution and timing issues associated with these changes, including products defects and errata and lower than anticipated manufacturing yields. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; product mix and pricing; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. The majority of Intel’s non-marketable equity investment portfolio balance is concentrated in companies in the flash memory market segment, and declines in this market segment or changes in management’s plans with respect to Intel’s investments in this market segment could result in significant impairment charges, impacting restructuring charges as well as gains/losses on equity investments and interest and other. 
Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures. Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust and other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting us from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the report on Form 10-Q for the quarter ended Oct. 1, 2011.

Rev. 1/19/12

Page 34:

Back-Up Materials

Page 35:


Page 36:

Architecture & Technology

Implications for Future Computing

Dr Fayé A. Briggs, Intel Fellow