
Page 1: How SSDs Fit in Different Data Center Applications

How SSDs Fit in Different Data Center Applications

Tahmid Rahman Senior Technical Marketing Engineer

NVM Solutions Group

Flash Memory Summit 2012, Santa Clara, CA

Tuesday, August 21, 12

Page 2: How SSDs Fit in Different Data Center Applications

Agenda

§ SSD market momentum and drivers
§ Placement in server/storage applications
§ Application-specific requirements and workload characteristics
§ Proof points with SSDs in transaction processing, IT, virtualization
§ Call to action

Page 3: How SSDs Fit in Different Data Center Applications

Strong SSD Momentum in the Market

[Chart: Worldwide SSD Unit Sales, 2008–2015 (millions); plotted values include 49,616.46; 82,984.85; 122,760.70; 165,664.30; a "We are here" marker sits at 2012]

Source: Gartner Q1'12


Page 7: How SSDs Fit in Different Data Center Applications

Drivers Behind Data Center Storage

§ Architectural changes
  § Big data
  § Cloud
  § Software innovation for caching, tiering

§ Server-side innovations
  § De-duplication, compression
  § Thin provisioning
  § Virtualization

§ Interface transitions
  § SATA/SAS to PCIe
  § AHCI-based to NVMe

§ SSD endurance and performance grades
  § Endurance classes: high, medium, standard
  § Optimization for access: read-intensive, write-intensive, mixed workload
  § Different "out of the factory" spare-area levels

Page 8: How SSDs Fit in Different Data Center Applications

SSD Placement in Server/Storage Applications

Cache (low, deterministic latency; $/IOPS)
  Applications: IPDC, Web 2.0, volume HPC, virtualized, SAN/NAS, FSI
  Compute (servers): persistent cache (block cache, user-space buffer cache, NFS v4 cache)
  External storage: persistent cache (e.g., OS block, metadata, de-dupe)

Performance ($/IOPS/GB)
  Applications: IPDC, Web 2.0, volume HPC, OLTP, OLAP, CDN, VOD
  Compute (servers): hot application data (web, database, email, search, videos, IPDC)
  External storage: hot application data (database, email)

Capacity ($/TB, Watt/TB)
  Applications: data warehouse, backups, archives
  Compute (servers): luke-warm application data (web, email, videos)
  External storage: cold/luke-warm application data

Boot ($/GB)
  Applications: all server applications
  Compute (servers): local boot data (operating system, hypervisor, SWAP, VM, application image)
  External storage: local boot data (SAN/NAS image)

Page 9: How SSDs Fit in Different Data Center Applications

Highest Requirements for Data Center SSDs

§ Data integrity
  § True end-to-end data protection
  § Power-loss protection
  § Power-loss capacitor self-test
  § Protection of internal memory with ECC and parity

§ Predictable performance
  § IOPS variation needs to stay within a narrow range
  § Latency outliers should stay below a maximum value

§ High endurance requirement
  § Two primary endurance classes are evolving:
    § Standard endurance: 0.1–1 drive writes per day (DWPD)
    § High endurance: 10 DWPD
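The "predictable performance" requirement above is usually verified by measuring high percentiles of the latency distribution, where rare outliers hide from the median. A minimal sketch with synthetic latencies and a nearest-rank quantile (all numbers hypothetical, not from the slides):

```python
import random

def latency_percentile(samples_ms, q):
    """Nearest-rank q-quantile (0 < q < 1) of a list of latencies in msec."""
    s = sorted(samples_ms)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

# Synthetic drive latencies: ~0.1 ms baseline, plus ~0.1% outliers at 5 ms.
random.seed(0)
lat = [0.10 + random.random() * 0.05 for _ in range(10_000)]
for i in range(0, len(lat), 1000):   # inject 10 outliers
    lat[i] = 5.0

print(latency_percentile(lat, 0.50))    # median stays near the 0.1 ms baseline
print(latency_percentile(lat, 0.9999))  # the 99.99th percentile exposes the 5 ms outliers
```

This is why the deck keeps returning to 99.99th-percentile figures: a drive can look identical to a competitor at the median and still fail the outlier budget.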

Page 10: How SSDs Fit in Different Data Center Applications

Data Center Application Workload Characteristics

Applications Transfer Size % Random % Read Write.

EnduranceQuality of

service

Media Streaming 64KB Low High Med Med

Web-server Logging 8KB Low Low Med Med

Search Engine 4KB/8KB/16KB High High Low High

Video-On-Demand 128KB High High Low High

Caching 512KB High High Low Med

Decision Support 64KB High High Low High

Content Delivery Network 16KB/32KB High Mixed High High

Database OLTP(On Line

Transaction Processing)

4KB/8KB High Mixed High High

7

Sequential

Random Read

Mixed Random

Source: Industry Standard Benchmarks and Customer Engagement DataPatterns will vary for unique customers

Tuesday, August 21, 12
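As one sketch of how a table row translates into an IO stream, here is a toy generator for the OLTP profile (4KB/8KB transfers, highly random, mixed read/write). The 70/30 read/write split and the 1 GiB target span are illustrative assumptions, not values from the slide:

```python
import random

def oltp_io_pattern(n_ops, span_bytes=1 << 30, seed=1):
    """Toy OLTP-like IO stream: 4KB/8KB transfers at random 4KB-aligned offsets, mixed read/write."""
    rng = random.Random(seed)
    ops = []
    for _ in range(n_ops):
        size = rng.choice([4096, 8192])                  # 4KB/8KB transfer sizes (table row)
        op = "read" if rng.random() < 0.7 else "write"   # assumed 70/30 read/write mix
        offset = rng.randrange(0, span_bytes, 4096)      # high randomness: uniform aligned offsets
        ops.append((op, offset, size))
    return ops

ops = oltp_io_pattern(1000)
```

In practice a tool like fio would drive such a pattern against a real device; the point here is only how the table's columns (size, randomness, read mix) parameterize a workload.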

Page 11: How SSDs Fit in Different Data Center Applications

SSDs For Virtual Storage in the Cloud

Challenges
§ Reversed server-to-datastore ratio (multiple VMs running on a single array)
§ Adding storage and cache is cost-prohibitive

Solution
§ High-performance SSDs in a 3x8 RAID 5 configuration meeting multiple-VM random IOPS of ~100K with a software SAN solution

Impact
§ Expanded performance at lower cost: >75% TCO reduction
  – 450 15K RPM HDDs vs. 24 Intel 710 SSDs
  – An IT professional would spend $43K instead of $200K+
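The TCO claim follows directly from the slide's own figures; a quick arithmetic check:

```python
hdd_solution_cost = 200_000   # 450 x 15K RPM HDDs (the slide's "$200K+" figure)
ssd_solution_cost = 43_000    # 24 x Intel 710 SSDs
reduction = 1 - ssd_solution_cost / hdd_solution_cost
print(f"{reduction:.1%}")     # prints "78.5%", consistent with the ">75%" claim
```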

Page 12: How SSDs Fit in Different Data Center Applications

SSDs for Transaction Processing

§ TPoX* (Transaction Processing over XML*) is an application-based benchmark that mimics a storage-bound online transaction processing workload over XML data for brokerage

§ The Intel® SSD 910 Series reveals a replacement ratio of 1 to 180 against standard magnetic-drive solutions

§ A 1 TB database can be compressed onto a single PCIe card and still match the performance of 180 15K RPM SAS magnetic drives

                              | PCIe SSD | Magnetic storage (15K RPM SAS drives)
Number of drives              | 1        | 180
TPS score (steady-state run)  | 13,516   | 13,742
Latency (msec)                | 5.93     | 7.15
Drive cost ($)                | 3,859    | 59,000
Storage subsystem cost ($)    | X        | 14,000

Configuration: Intel® Xeon® Processor X5680 (3.33 GHz, 6.40 GT/s Intel® QPI) platforms with Intel® 7500 Chipset, 72GB (18x4GB) 800MHz DDR3 memory, SUSE SLES 11 SP1 operating system, DB2 9.7, and TPoX 2.0 using "M" scale factor (1 TB data size). Hitachi* HUS151P1 CLAR146 146GB SAS 15K RPM drives.

[Diagram: HDD-based solution — a server exercising the application load, connected over 4 x 4Gb/s Fibre Channel to the SAS drive array; PCIe SSD-based solution — the same server with database storage on 1 PCIe SSD]

http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/ssd-910-tpox-brief.pdf

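The replacement economics fall out of the comparison table; a small arithmetic sketch (costs as listed on the slide — the SSD subsystem cost is elided there, so only the HDD side is totaled):

```python
replacement_ratio = 180 // 1          # 180 SAS drives replaced by 1 PCIe SSD
hdd_total = 59_000 + 14_000           # HDD solution: drive cost + storage-subsystem cost
ssd_drive_cost = 3_859
print(replacement_ratio, hdd_total, round(hdd_total / ssd_drive_cost, 1))
```

Even before subsystem savings, the per-dollar comparison favors the single PCIe card by more than an order of magnitude at equal TPS.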

Page 13: How SSDs Fit in Different Data Center Applications

Transaction Processing And Importance of Latency QoS

§ Transaction processing requires dense IO (higher IOPS/GB)

§ Systems are tuned to have no "storage bottleneck"

§ No mercy for latency outliers or occasional drops in IOPS

[Chart: % of transactions per unit time completed within 90 msec vs. frequency of latency outliers — at the 99.99th percentile: 72.30%; 99.999th: 94.71%; 99.9999th: 99.12%]

[Chart: % of transactions per unit time vs. 99.99th-percentile latency — 10 msec: 98.99%; 20 msec: 96.77%; 30 msec: 92.72%; 40 msec: 88.51%; 50 msec: 84.53%; 90 msec: 72.30%]

Acknowledgement: Terry Pricket, Jeff Smit, Intel SSG Lab
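The 27% and 24% gains called out on the build slides that follow are differences between the charted completion rates. Reproducing them from the chart values (the pairing of latencies to rates follows the order implied by the 24% callout):

```python
# Completion rates from the two charts (% of transactions per unit time).
by_outlier_freq = {"99.99%": 72.30, "99.999%": 94.71, "99.9999%": 99.12}   # within 90 msec
by_p9999_latency_ms = {10: 98.99, 20: 96.77, 30: 92.72, 40: 88.51, 50: 84.53, 90: 72.30}

gain_rarer_outliers = by_outlier_freq["99.9999%"] - by_outlier_freq["99.99%"]
gain_lower_latency = by_p9999_latency_ms[20] - by_p9999_latency_ms[90]
print(round(gain_rarer_outliers, 2), round(gain_lower_latency, 2))  # ~27 and ~24 points
```

Note that both gains are percentage-point differences in throughput of completed transactions, which is how the deck arrives at "27%" and "24%".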


Page 16: How SSDs Fit in Different Data Center Applications

Transaction Processing and Importance of Latency QoS

§ Gain transactions by 27% by moving to less frequent outlier events (99.99% to 99.9999%)

§ Gain transactions by 24% by moving the 99.99th-percentile latency from 90 msec to 20 msec

Acknowledgement: Terry Pricket, Jeff Smit, Intel SSG Lab

Page 17: How SSDs Fit in Different Data Center Applications

SSDs for IT Management Services

• Automatic updates for IT security patches

• Managing design-simulation databases

• Swap operations for memory overflow

• Benchmarking and proof points

Page 18: How SSDs Fit in Different Data Center Applications

Enterprise Patching and Security Compliance Performance Comparison

[Chart: IO queue depth (0–300) sampled over ~4,400 intervals during an enterprise patching run.
With 15K RPM drives: the queue depth builds up quickly and the IO demand cannot be met.
With SSDs deployed: the queue depth never gets deep; the drives stay IO-saturated without a backlog.]

Acknowledgement: Christian Black, Intel IT Architect
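The queue-depth contrast has a simple explanation: when offered IO exceeds a device's service rate, the backlog grows without bound; when service capacity exceeds demand, the queue stays empty. A toy discrete-time model (all rates hypothetical, chosen only to illustrate the two regimes):

```python
def queue_growth(arrival_per_tick, service_per_tick, ticks):
    """Backlog after each tick when 'arrival_per_tick' IOs arrive and at most 'service_per_tick' complete."""
    qd, history = 0, []
    for _ in range(ticks):
        qd = max(0, qd + arrival_per_tick - service_per_tick)
        history.append(qd)
    return history

hdd_qd = queue_growth(arrival_per_tick=500, service_per_tick=200, ticks=10)     # grows every tick
ssd_qd = queue_growth(arrival_per_tick=500, service_per_tick=50_000, ticks=10)  # never builds
print(hdd_qd[-1], ssd_qd[-1])
```

This is the shape of the two charts above: an under-provisioned HDD array accumulates queue depth linearly under sustained patching load, while the SSD deployment absorbs the same load with no visible backlog.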


Page 23: How SSDs Fit in Different Data Center Applications

Are All SSDs Ready for Transaction Processing?

[Chart: IOPS trace for "SSD B" showing drops to "zero" IOPS and a 1-second maximum latency]

§ RAID array stalls and timeouts
§ Higher drive counts needed to meet IO needs
§ Negative SLA impacts
§ Catastrophic for certain applications

Page 24: How SSDs Fit in Different Data Center Applications

Call to Action

§ Ample opportunity for SSD proliferation within the data center

§ Innovate around application needs

§ Use faster interfaces and technology to unleash NAND back-end bandwidth