
HP 3PAR StoreServ eLearning 1 2Q15 Full Deck


© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

HP 3PAR SA Enablement

eLearning 1: Presenting 3PAR Core Technologies

A technical overview of 3PAR StoreServ Storage, the world's most agile and efficient storage arrays

Sponsored by Intel

Q2 FY2015


Architecture overview
System hardware architecture matters: legacy architectures force tradeoffs

Traditional modular storage
• Cost-efficient, but usually an active-passive dual-controller design limited in scalability and resiliency

Traditional monolithic storage
• Scalable, resilient, and active-active, but costly
• Might not meet multi-tenant requirements efficiently

3PAR architecture (distributed controllers and functions: host ports, data cache, and disk ports across meshed controllers)
• Cost-effective, scalable, resilient, meshed, active-active architecture
• Meets cloud-computing requirements for efficiency, multi-tenancy, and autonomic management

The heart of every 3PAR storage system
HP 3PAR ASIC
• Fast RAID 10, 50, and 60; rapid RAID rebuild
• Integrated XOR engine
• Tightly coupled clustering: high-bandwidth, low-latency interconnect
• Mixed workload and CPU offload
• Independent metadata and data processing
• Built-in zero detection
• All reads and writes go through the ASIC
• CRC-32 and XOR used for inline deduplication

HP 3PAR virtualization concept (1 of 2)

Example: Four-node 7400 (nodes 0-3) with eight drive enclosures
• This example shows a four-node configuration with eight drive enclosures in total
• Nodes are added in pairs for cache redundancy (note: the nodes are installed in the back of the first drive enclosures)
• A particular physical drive is owned by one node
• HP 3PAR StoreServ arrays with four or more nodes support Cache Persistence

Example: Four-node 7400 with eight drive enclosures

HP 3PAR virtualization concept (2 of 2)

(Diagram: chunklets bound into logical disks (LDs) across all nodes, e.g. RAID 5 (3+1))

Process steps and phase states
• Disk initialization: physical drives are automatically formatted into 1 GB chunklets
• CPG: defines RAID level, step size, set size, and redundancy; chunklets are bound together to form logical disks in the format defined in the CPG policies (RAID level, step size)
• Virtual volume: virtual volumes are built striped across all LDs of all nodes from all drives defined in a particular CPG (autonomic wide striping across all logical disks)
• Exported LUN: virtual volumes can now be exported as LUNs to servers
• Server: present and access LUNs across multiple active-active paths (HBAs, fabrics, nodes) with active-active multipathing
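The layering above (chunklets, logical disks, wide-striped virtual volumes) can be sketched in miniature. This is an illustrative model only; the names, chunklet counts, and simple round-robin policy are assumptions, not 3PAR OS internals.

```python
CHUNKLET_GB = 1  # physical drives are formatted into 1 GB chunklets


def format_drive(drive_id, size_gb):
    """Split one physical drive into 1 GB chunklets."""
    return [f"pd{drive_id}:ck{i}" for i in range(size_gb // CHUNKLET_GB)]


def build_logical_disks(drives, set_size):
    """Bind chunklets from different drives into RAID sets (logical disks),
    standing in for the CPG policy (RAID level, step size, set size)."""
    lds = []
    pools = [format_drive(d, 4) for d in drives]  # 4 GB drives for brevity
    while all(pools):
        # one chunklet from each drive in the set forms one LD row
        lds.append([pool.pop(0) for pool in pools[:set_size]])
    return lds


def create_virtual_volume(lds):
    """A virtual volume is striped across all logical disks, then exported."""
    return {"stripes": lds, "exported_as_lun": True}


vv = create_virtual_volume(build_logical_disks(range(4), set_size=4))
```

Each stripe in `vv` draws chunklets from every drive, which is the essence of autonomic wide striping.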

Which array is more efficient and easier to use?
(Comparison: traditional storage array vs. 3PAR array)

Simplify provisioning: HP 3PAR autonomic sets

Traditional storage (individual volumes V1-V10 exported to a cluster of VMware vSphere servers)
• Initial provisioning of the cluster: requires 50 provisioning actions (1 per host-volume relationship)
• Add another host/server: requires 10 provisioning actions (1 per volume)
• Add another volume: requires 5 provisioning actions (1 per host)

Autonomic HP 3PAR storage (autonomic volume set V1-V10 exported to an autonomic host set)
• Initial provisioning of the cluster: add hosts to the host set, add volumes to the volume set, export the volume set to the host set
• Add another host/server: just add the host to the host set
• Add another volume: just add the volume to the volume set
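The action counts above follow from simple arithmetic: per-host-per-volume exports scale as hosts × volumes, while autonomic sets scale as hosts + volumes + one export. A small sketch, assuming the slide's 5-host, 10-volume cluster:

```python
def traditional_actions(hosts, volumes):
    # one export action per host-volume relationship
    return hosts * volumes


def autonomic_actions(hosts, volumes):
    # add each host to the host set, each volume to the volume set,
    # plus a single export of the volume set to the host set
    return hosts + volumes + 1


# 5 vSphere hosts, 10 volumes, as in the slide
initial_traditional = traditional_actions(5, 10)  # 50 actions
initial_autonomic = autonomic_actions(5, 10)      # 16 actions
add_host_traditional = traditional_actions(1, 10)  # 10 actions (1 per volume)
add_volume_traditional = traditional_actions(5, 1)  # 5 actions (1 per host)
```

With autonomic sets, adding a host or a volume afterward is always a single action, regardless of cluster size.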

HP 3PAR StoreServ is eliminating boundaries
• When value matters: starting at $25K
• When performance matters: up to 900K IOPS @ 0.7 ms latency
• When scale matters: up to 3.2 PB
Models: 7200c, 7200c all-flash starter kit, 7400c, 7440c, 7450c all-flash array

Polymorphic simplicity: ONE architecture
• ONE operating system
• ONE interface
• ONE feature set

HP 3PAR software titles
Same functions and features for 7000 and 10000

HP 3PAR Operating System SW Suite
• Virtual SP (7000 only)
• SmartStart (7000 only)
• Online Import license (180 days)
• System Tuner
• Host Explorer
• Multipath I/O SW
• VSS Provider
• Scheduler
• Rapid Provisioning
• Autonomic Groups
• Autonomic Replication Groups
• Autonomic Rebalance
• LDAP Support
• Access Guard
• Host Personas
• Adaptive Flash Cache
• Persistent Cache
• Persistent Ports
• Management Console
• Web Services API
• SMI-S
• Real Time Performance Monitor
• Full Copy
• Thin Provisioning
• Thin Copy Reclamation
• Thin Persistence
• Thin Conversion
• Thin Deduplication for SSD
• 3PAR OS Administration Tools
• CLI client
• SNMP

Replication SW Suite
• Virtual Copy (VC)
• Remote Copy (RC)
• Peer Persistence

Reporting SW Suite
• System Reporter
• 3PARInfo

Security SW Suite
• Virtual Domains
• Virtual Lock

Data Encryption

Data Optimization SW Suite v2
• Dynamic Optimization
• Adaptive Optimization
• Peer Motion
• Priority Optimization

Application SW Suite for VMware vSphere
• Recovery Manager for vSphere
• VASA, vCenter plug-in

Application SW Suite for Oracle
• Recovery Manager for Oracle

Application SW Suite for Microsoft SQL
• Recovery Manager for Microsoft SQL

Application SW Suite for MS Exchange
• Recovery Manager for MS Exchange

Application SW Suite for MS Hyper-V
• Recovery Manager for MS Hyper-V

Optional Integration Solutions
• Storage Plug-in for SAP LVM
• Policy Manager
• StoreFront Mobile Access
• Management Plug-in for MS SCOM
• OpenStack Integration
• StoreFront VMware vCOPS Integration

Thin Provisioning with EMC VNX
From the EMC whitepaper "Virtual Provisioning for the New VNX Series"

• It is important to understand your application requirements and select the approach that meets your needs
• If conditions change, you can use VNX LUN migration to migrate among thin, thick, and classic LUNs
• Use pool-based thin LUNs for:
− Applications with moderate performance requirements
− Taking advantage of advanced data services such as FAST VP, VNX snapshots, compression, and deduplication
− Ease of setup and management, best storage efficiency, energy and capital savings
− Applications where space consumption is difficult to forecast
• Use pool-based thick LUNs for:
− Applications that require good performance
− Taking advantage of advanced data services such as FAST VP and VNX snapshots
− Storage assigned to VNX for File
− Ease of setup and management
• Use classic LUNs for:
− Applications that require extreme performance
− The most predictable performance, precise data placement on physical drives and logical data objects
− Physical separation of data


2014 Gartner Magic Quadrant for general-purpose disk arrays


Learning check

1. Why are nodes added in pairs to the enclosure?
______________________________________________________________________________________


Learning check answer

1. Why are nodes added in pairs to the enclosure?
To provide cache redundancy


AFA and flash optimization

Flash does not change storage requirements
Reducing risk with a comprehensive approach to data integrity
• Reliability: proven architecture with guaranteed high availability
• Ease of use: self-configuring, optimizing, and tuning
• Drive efficiency: extend life and utilization of flash
• Scalability: scale-out architecture with multiple active-active nodes
• Application integration: VMware, Oracle, and SQL integrations
• High performance: flash-optimized architecture
• Disaster recovery: data protection with synchronous and asynchronous replication across multiple sites
• Data mobility: federate across systems and sites

HP 3PAR flash strategy enables seamless transition
(Chart: performance vs. cost ($/GB), three tiers)
• HDD storage (3PAR StoreServ 7000, 10000): cost-optimized, 2-10 ms
• Hybrid storage (3PAR StoreServ 7000, 10000 with SSDs + Adaptive Optimization + Flash Cache): balances cost and performance, <1 ms
• All-flash storage (3PAR StoreServ 7450, 7400, 7200): consistent low latency with a single flash tier, <100 µs

Polymorphic simplicity: ONE architecture, ONE operating system, ONE interface, ONE feature set

Making flash mainstream
85% lower $/GB in the last 12 months
(Chart: $/GB stepping down from the industry usable $/GB of eMLC SSD ($13) through 3PAR SSD $/GB with Adaptive Sparing ($5), block-zero dedupe ($4), cMLC SSD, and thin deduplication/thin clones ($2), toward the industry raw $/GB of a 15K SAS HDD)

Saving money and capacity with the most complete set of data compaction technologies available
• 4:1 to 10:1 depending on workload
• Negligible performance impact due to unique hardware acceleration

3PAR approach to working with flash
Flash optimized = more than just being fast
• Cache management: adaptive read, adaptive write, autonomic cache offload, multi-tenant I/O processing
• Performance scalability: 3PAR ASIC, Express Writes, system-wide striping, quality of service
• Efficiency and wear handling: 3PAR thin technologies, zero detect, Adaptive Sparing, step size optimization
• Failure handling: system-wide striping, system-wide sparing

Read optimization from flash to cache
Adaptive read
• 3PAR architecture adapts its reads from flash media to match host I/O sizes
(Diagram: host read I/Os of 4 KB, 8 KB, and 16 KB on the front end become back-end flash reads of 4.2 KB, 8.4 KB, and 16.8 KB; the extra bytes account for DIF)

Benefits
• Reduced latency by avoiding unnecessary data reads
• Optimized back-end throughput handling

Write optimization to cache
Adaptive write
• 3PAR architecture supports a granular cache page size of 16 KB
• However, if a sub-16 KB write I/O occurs, the 3PAR array performs a sub-16 KB write to cache
• The 3PAR array keeps a bitmap for each page and destages only the dirty part of the page
(Diagram: a 4 KB host write dirties only 4 KB of a 16 KB cache page, and only that dirty 4 KB is written to flash)

Benefits
• Reduces latency and back-end throughput and extends flash life by avoiding unnecessary data writes
• For RAID 10 volumes, adapting writes to match I/Os avoids latency penalties associated with read-modify-write sequences
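A minimal sketch of the dirty-bitmap idea behind adaptive write. Only the "16 KB page, destage just the dirty bytes" behavior comes from the slide; the 512-byte tracking granularity and class names are assumptions for illustration.

```python
PAGE_SIZE = 16 * 1024
SECTOR = 512  # assumed bitmap granularity


class CachePage:
    def __init__(self):
        self.data = bytearray(PAGE_SIZE)
        self.dirty = [False] * (PAGE_SIZE // SECTOR)  # one bit per sector

    def host_write(self, offset, payload):
        """A sub-16 KB host write marks only the touched sectors dirty."""
        self.data[offset:offset + len(payload)] = payload
        first = offset // SECTOR
        last = -(-(offset + len(payload)) // SECTOR)  # ceiling division
        for s in range(first, last):
            self.dirty[s] = True

    def destage(self):
        """Return only the dirty (offset, bytes) extents to write to flash."""
        return [
            (s * SECTOR, bytes(self.data[s * SECTOR:(s + 1) * SECTOR]))
            for s, d in enumerate(self.dirty) if d
        ]


page = CachePage()
page.host_write(0, b"x" * 4096)  # 4 KB host write into a 16 KB page
written = sum(len(chunk) for _, chunk in page.destage())
# only 4 KB of the 16 KB page goes to flash
```

Avoiding the other 12 KB is exactly what reduces back-end throughput and flash wear in the slide's description.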

Maintaining service levels under mixed workloads
Adaptive I/O processing
• Front end: the cache-to-media write process is multi-threaded, allowing each I/O to start its own thread in parallel without waiting for threads to be free
• Back end: the 3PAR architecture splits large read/write I/Os into 32 KB sub-I/Os before sending them to flash media, ensuring that smaller read I/Os do not suffer from higher response times
(Diagram: a 128 KB read I/O from host 1 (DSS) is split into 32 KB sub-I/Os that interleave with 4 KB read I/Os from host 2 (OLTP))

Benefits
• Allows 3PAR arrays to serve sequential workloads without paying a latency penalty on OLTP workloads
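The back-end split can be sketched as follows. Only the 32 KB sub-I/O size comes from the slide; the interleaved queue is an illustrative stand-in for the real scheduler.

```python
SUB_IO = 32 * 1024


def split_io(offset, length):
    """Split one large I/O into 32 KB sub-I/Os (last one may be shorter)."""
    subs = []
    while length > 0:
        chunk = min(SUB_IO, length)
        subs.append((offset, chunk))
        offset += chunk
        length -= chunk
    return subs


dss_read = split_io(0, 128 * 1024)                 # 128 KB DSS read
oltp_reads = [(i * 4096, 4096) for i in range(4)]  # small 4 KB OLTP reads

# Interleave: the OLTP reads no longer wait behind one monolithic 128 KB I/O
queue = [io for pair in zip(dss_read, oltp_reads) for io in pair]
```

Each 4 KB read now sits at most one 32 KB sub-I/O behind the head of the queue instead of a full 128 KB transfer.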

HP 3PAR Thin Deduplication
Accelerated by the ASIC and Express Indexing
1. Host write
2. The ASIC computes a hash of the incoming data
3. Fast metadata lookup with Express Indexing (the LBA walks the hash L1, L2, and L3 tables)
4. On a match, the data is compared against the existing potential deduped page; the ASIC performs a bit-for-bit compare using an inline XOR operation
5. A dedupe match results in the XOR outcome being a page of zeros (XOR = 0000000), which is detected inline by the ASIC
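The steps above can be mimicked in a toy form: hash the incoming page, look the hash up, and confirm a hit with a bit-for-bit XOR compare, where a true duplicate XORs to all zeros. Python's hashlib stands in for the ASIC's hardware hashing, and a single dict stands in for the L1/L2/L3 metadata tables.

```python
import hashlib

store = {}  # hash -> stored page (stand-in for the L1/L2/L3 tables)


def write_page(page: bytes) -> str:
    digest = hashlib.sha256(page).hexdigest()  # step 2: compute hash
    existing = store.get(digest)               # step 3: metadata lookup
    if existing is not None:
        # steps 4-5: bit-for-bit compare via XOR; all zeros = dedupe hit
        xor = bytes(a ^ b for a, b in zip(existing, page))
        if not any(xor):
            return "dedupe-hit"
    store[digest] = page
    return "stored"


first = write_page(b"\x01" * 16)   # new data is stored
second = write_page(b"\x01" * 16)  # identical data dedupes
third = write_page(b"\x02" * 16)   # different data is stored
```

The XOR confirmation matters because a hash match alone could, in principle, be a collision; the zero-page result proves the pages are byte-identical.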

SSD layout: Making flash affordable
• Every SSD has internal over-provisioning (OP), used for garbage collection and for minimizing write amplification
• The internal OP reduces the raw capacity available to users
• The 3PAR wide-striped architecture also reserves chunklets in each drive for sparing
• Spare space is necessary to protect against drive failure scenarios
(Diagram: user space, over-provisioned flash, spares; high endurance at high $/GB)

Rethinking over-provisioned capacity: Making flash affordable
(Diagram: shrinking the internal OP and relying on system-wide spare space yields a 20% gain in user space; lower OP = lower endurance, at lower $/GB)
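The 20% figure can be reproduced with back-of-the-envelope arithmetic. The 20%-to-4% OP split below is an assumption chosen to match the slide's number, not a published spec.

```python
def user_space(raw_gb, op_fraction):
    """Usable capacity after the drive keeps op_fraction back as internal OP."""
    return raw_gb * (1 - op_fraction)


# hypothetical 1000 GB raw SSD
before = user_space(1000, 0.20)   # heavy internal OP: 800 GB usable
after = user_space(1000, 0.04)    # most OP repurposed: 960 GB usable
gain = (after - before) / before  # relative gain in user space, about 0.20
```

The trade-off the slide notes still applies: the lower-OP drive has lower endurance, which is why system-wide sparing has to pick up the protection role.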

What is data compaction?
Data compaction is the reduction of the number of data elements, bandwidth, cost, and time for the generation, transmission, and storage of data without loss of information, by eliminating unnecessary redundancy, removing irrelevancy, or using special coding

Data compaction on HP 3PAR StoreServ
• Thin technologies
• Virtual Copy
• System-wide striping and sparing
• Advanced caching algorithms

Compaction: a holistic approach to lowering cost

Minimize
• Duplicate writes: zero-page inline deduplication; thin deduplication and thin clones
• Reservations/pools: reservation-less thin volumes and snapshots
• Allocation: 16 KB write allocation unit
• Hot spares: system-wide sparing

Maximize
• Raw capacity: Adaptive Sparing
• Reclamation: with 16 KB granularity
• Wear management: adaptive write; wear gauge for every SSD

Enabling a smooth transition to flash
HP 3PAR Flash Advisor
• Adaptive Optimization I/O density reports: powerful I/O reporting to determine the exact amount of flash needed for hot data
• Adaptive Flash Cache simulation: helps determine the benefits and the amount of flash required in a system for random read acceleration
• Thin deduplication estimation and dynamic optimization:
1. Estimate savings (thin/HDD vs. dedupe/SSD)
2. Online dynamic optimization to dedupe status

How to calculate a blended dedupe ratio (1 of 4)
• The first ratio to be determined is the thin efficiency, in percent of savings, based on the measured benefit of thin provisioning
− This can be measured using the host capacity scan in NinjaStars
− Assume that this was completed and 75% of the exported capacity was written, which results in a 25% savings
• The second ratio to be determined is the blended dedupe ratio for the applications/data of the customer environment
− Assume that the environment is 39% database, 7% images, 39% virtual servers, and 15% file server volumes, with expected representative dedupe ratios of 1:1 for the database, 1:1 for the images, 4:1 for the virtual servers, and 5:1 for the file server volumes
− Dedupe ratios for each application or data class in the customer environment should be based on a discussion with the customer
− As a result of the discussion in this case, the next slide shows the example calculation to determine the blended dedupe ratio for use in NinjaStars

How to calculate a blended dedupe ratio (2 of 4)
Database         0.39 x 256 TB x 1/1 = 99.84 TB
Images           0.07 x 256 TB x 1/1 = 17.92 TB
Virtual servers  0.39 x 256 TB x 1/4 = 24.96 TB
File server      0.15 x 256 TB x 1/5 =  7.68 TB
                               Total = 150.40 TB
Blended dedupe ratio = 256 / 150.40 = 1.7

How to calculate a blended dedupe ratio (3 of 4)
This is a presentation format that could be used to represent the requirements and summarize the way the total compaction ratio was determined

How to calculate a blended dedupe ratio (4 of 4)
This is a NinjaStars sizing that would meet the 256 TB requirement when it is at 85% of capacity with the total compaction ratio factored in
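The calculation above drops directly into a small function; the workload mix and per-class dedupe ratios are the slide's own example values.

```python
def blended_dedupe_ratio(total_tb, mix):
    """mix: list of (capacity_fraction, dedupe_ratio) pairs per data class."""
    stored = sum(frac * total_tb / ratio for frac, ratio in mix)
    return total_tb / stored, stored


mix = [
    (0.39, 1),  # database, 1:1
    (0.07, 1),  # images, 1:1
    (0.39, 4),  # virtual servers, 4:1
    (0.15, 5),  # file server, 5:1
]
ratio, stored_tb = blended_dedupe_ratio(256, mix)
# stored_tb works out to 150.40 TB and ratio rounds to 1.7, as on the slide
```

Changing the mix or the per-class ratios after a customer discussion only means editing the `mix` list and re-running.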

Learning check

1. List at least four benefits of using flash
______________________________________________________________________________________

Learning check answer

1. List at least four benefits of using flash
• High performance
• Ease of use
• Cost benefits
• Scalability
• Maximizes capacity
• Greater reliability

Learning check

2. What is deduplication, and why is it important in thin provisioning?
______________________________________________________________________________________

Learning check answer

2. What is deduplication, and why is it important in thin provisioning?
Deduplication is the process of eliminating duplicate copies of data to optimize space. Estimating deduplication allows you to ...


7000 hardware overview

HP 3PAR StoreServ 7000

                                   7200c    7400c           7440c           7450c
Controller nodes                   2        2       4       2       4       2       4
Max drives                         240      288     576     480     960     120     240
Cache per node-pair / max          40 GB    48 GB   96 GB   96 GB   192 GB  64 GB   128 GB
Max Adaptive Flash Cache (GB)      768      768     1500    96      192     96      192
Built-in 8 Gbit/s FC ports         4        4       8       4       8       4       8
Optional 8 Gbit/s FC ports         8        8       16      8       16      8       16
Optional 16 Gbit/s FC ports        4        4       8       4       8       4       8
Optional 10 Gbit/s iSCSI ports     4        4       8       4       8       4       8
Optional 10 Gbit/s FCoE ports      4        4       8       4       8       4       8
Built-in IP Remote Copy ports      2        2       4       2       4       2       4
Controller enclosures
(2U with 24 SFF drive slots each)  1        1       2       1       2       1       2
Add-on drive enclosures (2U 24 SFF
and/or 4U 24 LFF slots each)       0-9      0-11    0-22    0-19    0-38    0-9     0-18

HP 3PAR StoreServ 7440c hardware details
• Number of controller nodes: 2 or 4
• HP 3PAR Gen4 ASICs: 2 or 4
• CPU (per controller node): 8-core, 2.3 GHz
• Total cache: 1.6 - 3.2 TB
• Total flash cache: 1.5 - 3 TB
• Total on-node cache: 96 - 192 GB
• Number of disk drives: 8 - 960
• Number of solid state drives: 8 - 240
• Raw capacity: 1.2 TB - 2000 TB
• Drive enclosures: SFF, 24 slots in 2U; LFF, 24 slots in 4U
• Number of drive enclosures: 0 - 38
• Host adapters: four-port 8 Gb/s FC, two-port 16 Gb/s FC, two-port 10 Gb/s iSCSI/FCoE
• Maximum host ports: 24
• 8 Gb/s FC host ports: 4 - 24
• 16 Gb/s FC host ports: 0 - 8
• 10 Gb/s iSCSI host ports: 0 - 8
• Maximum initiators: 1024 or 2048

HP 3PAR StoreServ 7000 hardware building blocks
• Base storage systems:
− HP 3PAR StoreServ 7200 (2 nodes, 4 FC ports, 24 SFF slots)
− HP 3PAR StoreServ 74x0 (2-node, 4 FC ports, 24 SFF slots)
− HP 3PAR StoreServ 74x0 (4-node, 8 FC ports, 48 SFF slots)
• Expansion drive enclosures: HP M6710 2.5 in 2U SAS; HP M6720 3.5 in 4U SAS
• Drives: SFF SAS HDDs and SSDs; LFF SAS HDDs and SSDs; choice of encrypted and non-encrypted drives
• Host adapters: 4-port 8 Gb/s FC HBA; 2-port 16 Gb/s FC HBA; 2-port 10 Gb/s iSCSI/FCoE CNA
• Racks: HP G3 rack, or customer-supplied rack (4-post, square-hole, EIA-standard 19 in. rack from HP or other suppliers)
• Service processor: virtual (default) or physical (optional)

Configuration options
HP 3PAR StoreServ 7000 controller enclosure
• 4 x 8 Gb FC base configuration
• 12 x 8 Gb FC configuration
• 4 x 8 Gb FC and 4 x 10 Gb Eth (CNA) or 4 x 16 Gb FC configuration
Rear-view callouts (node 1 above node 0; nodes 3 and 2 in a four-node system):
1. Built-in 1 GbE Remote Copy port
2. 1 GbE management port
3. Built-in 8 Gb FC ports
4. 4-lane 6 Gbit/s SAS for drive chassis connections
5. 74x0 controller interconnects
6. Optional PCIe card slot: (6a) 4-port 8 Gb FC adapter; (6b) 2-port 10 Gb CNA (iSCSI/FCoE) or 16 Gb FC adapter

HP 3PAR StoreServ 74x0 four-node system
(Diagram: rear view of a four-node system showing nodes 0-3, the controller interconnect, and 764W power cooling modules; node ports are labeled DP-1, DP-2, RC-1, MGMT, FC-1, and FC-2)

HP 3PAR StoreServ 7000 controller nodes
Two to four nodes per system, installed in pairs
(Diagram: multifunction controller with an Intel Sandy Bridge processor, control and data cache, a 3PAR Gen4 ASIC linked to the other nodes, a PCIe switch, an internal FC adapter with FC ports, an optional PCIe slot, a SAS IOC with SAS ports, a SAS expander to the internal SFF drives, Ethernet Remote Copy and management ports, a serial node console port, and a SATA boot SSD)

Per-node configuration
• One Intel Sandy Bridge processor
− 7200c: 6-core 1.8 GHz
− 7400c: 6-core 1.8 GHz
− 7440c: 8-core 2.3 GHz
− 7450c: 8-core 2.3 GHz
• Data cache
− 7200c: 4 GB
− 7400c: 8 GB
− 7440c: 16 GB
− 7450c: 16 GB
• Control cache
− 7200c: 16 GB
− 7400c: 16 GB
− 7440c: 32 GB
− 7450c: 32 GB
• One Thin Built In Gen4 ASIC
• Two built-in 8 Gb/s FC ports
• One optional PCIe adapter
− Four-port 8 Gb FC, or
− Two-port 16 Gb FC, or
− Two-port 10 Gb/s CNA
• Two SAS back-end ports (four-lane 6 Gb SAS)

HP 3PAR StoreServ 7000 disk chassis
Mix and match drives and enclosures as required

2U with 24 SFF drive slots

4U with 24 LFF drive slots

HP 3PAR StoreServ 7000 drive overview

RAID levels: RAID 0, 10, 50, 60
RAID 5 data-to-parity ratios: 2:1 to 8:1
RAID 6 data-to-parity ratios: 4:2, 6:2, 8:2, 10:2, 14:2

SFF 2.5" drives
• MLC SSD: 480 GB, 920 GB (7200, 7400, 7450)
• cMLC SSD: 480 GB, 1.92 TB (7200, 7400, 7450)
• SAS 15 krpm: 300 GB (7200, 7400; NA on 7450)
• SAS 10 krpm: 450 GB, 600 GB, 900 GB, 1200 GB (7200, 7400; NA on 7450)
• NL SAS 7.2 krpm: 1 TB (7200, 7400; NA on 7450)

SFF 2.5" encrypted drives*
• MLC SSD: 920 GB (7200, 7400, 7450)
• SAS 10 krpm: 450 GB, 900 GB (7200, 7400; NA on 7450)
• NL SAS 7.2 krpm: 1 TB (7200, 7400; NA on 7450)

LFF 3.5" drives
• MLC SSD: 480 GB, 920 GB (7200, 7400, 7450)
• SAS 15 krpm: NA
• SAS 10 krpm: NA
• NL SAS 7.2 krpm: 2 TB, 3 TB, 4 TB (7200, 7400; NA on 7450)

LFF 3.5" encrypted drives*
• NL SAS 7.2 krpm: 2 TB, 4 TB (7200, 7400; NA on 7450)

* Array Encryption License required; encrypted drives cannot be mixed with standard drives in the same array

HP 3PAR SSD drive options
• MLC (multi-level cell): available sizes 480 GB, 920 GB; five-year warranty¹
• cMLC (commercial multi-level cell): available sizes 480 GB, 1.92 TB; five-year warranty¹
¹ Within the warranty period, worn-out drives will be replaced by HP
• Remaining SSD drive life can be checked by the user
• The 3PAR array alerts the user when the wear-out level (maximum program/erase cycles) reaches 95%
• After the five-year warranty expires, the customer must purchase worn-out drive replacements
• HP 3PAR Adaptive Sparing decreases wear-out and extends drive life dramatically

Recommended hardware configuration rules
Record your requirements
1. Choose base configuration (defines scalability and cost)
• 7200 2-node, or
• 74x0 2-node, or
• 74x0 4-node
2. Define availability needs (defines required number of enclosures, possible RAID levels, and set sizes)
• HA drive (magazine), or
• HA enclosure (cage)
3. Choose drive types and quantity (defines capacity and performance)
• SSD
• FC: Fast Class (10 k or 15 k rpm)
• NL: Near Line (7.2 k rpm)
4. Choose connectivity (defines optional PCIe adapters)
• Number of host Fibre Channel ports required
• Number of host iSCSI/FCoE ports required
• Number of Remote Copy FC ports required

Recommended configuration rules
Two controllers, HA drive
• Base enclosure includes two controllers and supports SAS Fast Class or SSD SFF drives
• Install an even number of drives of the same drive class, from left to right:
− Eight SSD* and/or
− Eight FC (Fast Class 15 K or 10 K SFF) and/or
− 12 NL (Near Line drives, RAID 6)
• Upgrades: minimum four drives of the same class
• Adding a new drive class: minimum 8 (12) drives of the new drive class
• An add-on enclosure requires a minimum of four drives installed
* 4 SSD for use as Adaptive Flash Cache only

Learning check

1. The HP 3PAR StoreServ is available with a single controller node. True / False

Learning check answer

1. The HP 3PAR StoreServ is available with a single controller node. True / False
False: HP 3PAR StoreServ is available with two or four controller nodes

Learning check

2. What are the two important considerations when choosing an HP 3PAR StoreServ series 7000 base configuration?

Learning check answer

2. What are the two important considerations when choosing an HP 3PAR StoreServ series 7000 base configuration?
• Scalability
• Cost


10000 hardware overview

HP 3PAR StoreServ 10400 components
First rack contains controllers and drives; expansion racks contain drives only

Full-mesh backplane
• Post-switch architecture
• High performance, tightly coupled
• Completely passive

Drive chassis (4U) and drive magazines
• Up to six in the first rack, eight in each expansion rack
• Capacity building block: 2 to 10 drive magazines
• Add non-disruptively
• Industry-leading density

Service processor (1U)
• Remote error detection
• Supports diagnostics and maintenance
• Reporting to HP 3PAR Central

Controller node chassis (15U) and nodes
• Performance and connectivity building block: 8 Gb FC and/or 10 Gb CNA cards
• Add non-disruptively
• Runs an independent operating system instance

HP 3PAR StoreServ 10800 components
First rack contains controllers and drives; expansion racks contain drives only

Full-mesh backplane
• Post-switch architecture
• High performance, tightly coupled
• Completely passive

Drive chassis (4U)
• Two in the first rack, up to eight in each expansion rack
• Capacity building block: two to 10 drive magazines
• Add non-disruptively
• Industry-leading density

Service processor (1U)
• Remote error detection
• Supports diagnostics and maintenance
• Reporting to HP 3PAR Central

Controller node chassis (28U) and nodes
• Performance and connectivity building block: 8 Gb FC and/or 10 Gb CNA cards
• Add non-disruptively
• Runs an independent operating system instance

The 3PAR StoreServ 10000 evolution
Bus to switch to full-mesh progression

10000 full-mesh backplane
• High performance, low latency
• 112 GB/s backplane bandwidth
• Passive circuit board
• Slots for controller nodes
• Links every controller (full mesh): 2.0 GB/s ASIC to ASIC
• Single hop

Fully configured 3PAR 10800 (maximum configuration with eight nodes and 1,920 drives)
• Eight controller nodes
• 16 Gen4 ASICs (two per node)
• 16 Intel Quad-Core processors
• 256 GB control cache
• 512 GB total data cache
• 136 GB/s peak memory bandwidth
• 450,213 SPC-1 IOPS

HP 3PAR StoreServ 10000 controller nodes
Two to eight nodes per system, installed in pairs
• Intel Quad-Core processors
• Dedicated control and data cache
• Two Gen4 ASICs per node: data movement, ThP, and XOR RAID processing
• Internal SSD drive for:
− 3PAR OS
− Cache destaging in case of power failure
• Scalable connectivity per node
• Three PCIe buses with 9 PCIe slots
− Four-port 8 Gb/s FC adapter
− Two-port 16 Gb/s FC adapter
− Two-port 10 Gb/s iSCSI/FCoE CNA
• Flexibility enhancement: host FC and Remote Copy FC ports can be configured on different ports of the same 8 Gb/s FC adapter

Recommended PCIe card installation order (slots 0-8)
• Drive chassis FC connections: 6, 3, 0
• Host connections (FC, iSCSI, FCoE): 2, 5, 8, 1, 4, 7
• Remote Copy FC connections: 1, 4, 2, 3
(Diagram: node rear with PCIe slots 0-8, built-in Remote Copy Ethernet port RCIP E1, serial ports, and management Ethernet port E0)

HP 3PAR StoreServ 10000 controller nodes
Two to eight nodes per system, installed in pairs

Per-node configuration
• 2 x Thin Built In Gen4 ASICs
− 2.0 GB/s dedicated ASIC-to-ASIC bandwidth
− 112 GB/s total backplane bandwidth
− Inline fat-to-thin processing in the DMA engine
• 2 x Intel 2.83 GHz Quad-Core processors
• 96 GB cache (32 GB control cache, 64 GB data cache)
• 9 PCIe slots with warm-plug adapters
− 8 Gb/s FC host/drive adapter
− 10 Gb/s iSCSI/FCoE host adapter
(Diagram: multifunction controller with two Intel XEON processors, two Gen4 ASICs linked to the other nodes, PCIe switches and slots, Ethernet Remote Copy and management ports, a serial node console port, and a SATA boot SSD)

HP 3PAR StoreServ 10000 PCIe card options

                                        Four-port 8 Gb FC    Two-port 16 Gb FC    Two-port CNA
Ports per card                          4                    2                    2
Max cards per node                      9                    6                    6
Port speeds                             8 Gb/s (2, 4 Gb/s)   16 Gb/s (4, 8 Gb/s)  10 Gb/s
FC host connection (max ports/node)     Y (24)               Y (12)               N
iSCSI host connection (max ports/node)  N                    N                    Y (4)
FCoE host connection (max ports/node)   N                    N                    Y (12)
Drive cage FC connection                Y                    N                    N

HP 3PAR StoreServ 10000 drive chassis
• Holds 2 to 10 drive magazines
• (1+1) redundant power supplies
• Redundant dual Fibre Channel paths
• Redundant dual Fibre Channel switches
• Each magazine always holds four drives of the same drive type
• Each magazine in a chassis can be a different drive type
• Available magazines: 2.5" SFF magazine and 3.5" LFF magazine

HP 3PAR StoreServ 10000 drive overview

RAID levels: RAID 0, 10, 50, 60
RAID 5 data-to-parity ratios: 2:1 to 8:1
RAID 6 data-to-parity ratios: 4:2, 6:2, 8:2, 10:2, 14:2

Drives (10400 and 10800)
• MLC SSD: 480 GB, 920 GB, 1.92 TB
• 15 k rpm FC: 300 GB, 600 GB
• 10 k rpm FC: 450 GB, 900 GB, 1200 GB
• 7.2 k rpm NL: 2 TB, 4 TB

Encrypted drives* (10400 and 10800)
• MLC SSD: 400 GB, 920 GB
• 10 k rpm FC: 450 GB, 900 GB
• 7.2 k rpm NL: 2 TB, 4 TB

Density: 40 drives per 4U drive chassis
Number of chassis: 4 to 24 (10400), 4 to 48 (10800)
Number of drives: 16 to 960 (10400), 16 to 1920 (10800)
Max SSDs per StoreServ array: 256 (10400), 512 (10800)
* Array Encryption license required; encrypted drives cannot be mixed with standard drives in the same array


HP 3PAR StoreServ 10000 racking options (1 of 2)

Legacy 3PAR racks until February 2013
• The StoreServ 10400 (former V400) could be ordered in either a 3PAR rack or field-rackable
• The StoreServ 10800 (former V800) could be ordered only in a non-standard 3PAR rack
• The 3PAR racks are available only with 0U 4 x single-phase PDUs

HP 3PAR racking options after February 2013
• All StoreServ 10000 models can now also be ordered in redesigned HP racks with user-selectable power options
  − QW978A - HP 3PAR StoreServ 10400 16 GB Control/32 GB Data Cache Rack Config Base
  − QW979A - HP 3PAR StoreServ 10800 32 GB Control/64 GB Data Cache Rack Config Base
  − QW982A - HP 3PAR StoreServ 10000 2-Meter Expansion Rack
• PDUs can be selected as required
  − 252663-D74 - Single-phase NEMA (24 A)
  − 252663-B33 - Single-phase IEC (32 A)
  − AF511A - Three-phase NEMA (48 A)
  − AF518A - Three-phase IEC (32 A)
• The total maximum numbers of drive chassis and drives remain unchanged
• The number of drive chassis in the base rack is reduced by two
  − 10400 base rack: max 4 drive chassis
  − 10800 base rack: 0 drive chassis
• The new racks can be used to extend legacy configurations; any combination is supported

HP 3PAR StoreServ 10000 racking options (2 of 2)

• Before February 2013: legacy V-class/StoreServ 10000 racking with four integrated, vertically mounted 0U 3PAR single-phase PDUs
• After February 2013: choose between HP single-phase and three-phase PDUs
  − Two horizontally mounted HP three-phase IEC or NEMA PDUs, or
  − Four horizontally mounted HP single-phase IEC or NEMA PDUs

The disk racks can be up to 100 m apart from the first rack with the controllers

HP 3PAR StoreServ 10000 dispersed rack installation

[Figure: one controller rack with disk racks 1, 2, and 3 installed at a distance]

Learning check

1. How many drives does a drive magazine hold?

_______________________________________________

Learning check answer

1. How many drives does a drive magazine hold?

Each drive magazine always holds four drives of the same drive type

© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Capacity efficiency

Copy technologies and thin technologies

Copy technologies

Thin technologies

What is data compaction?

Data compaction reduces the number of data elements, and the bandwidth, cost, and time needed to generate, transmit, and store data, without loss of information, by eliminating unnecessary redundancy, removing irrelevancy, or using special coding

Data compaction on HP 3PAR StoreServ

Compaction combines:
• Thin technologies
• Virtual Copy
• System-wide striping and sparing
• Advanced caching algorithms

Part of the base 3PAR OS

HP 3PAR Full Copy V1—Restorable copy

• Full physical point-in-time copy
• Provisionable after copy ends
• Independent of base volume’s RAID and physical layout properties
• Fast resynchronization capability
• Thin Provisioning–aware
  − Full copies can consume the same physical capacity as a thinly provisioned base volume

[Figure: base volume, intermediate snapshot, and full copy]

Part of the base 3PAR OS

HP 3PAR Full Copy V2—Instantly accessible copy

• Share data quickly and easily
• Full physical point-in-time copy
• Immediately provisionable to hosts
• Independent of base volume’s RAID and physical layout properties
• No resynchronization capability
• Thin Provisioning–aware
  − Full copies can consume the same physical capacity as a thinly provisioned base volume

[Figure: base volume, intermediate snapshot, and full copy]

HP 3PAR Virtual Copy—Snapshot at its best (1 of 2)
• Smart
  − Individually erasable and promotable
  − Scheduled creation/deletion
  − Consistency groups
• Thin
  − No reservation, non-duplicative
  − Variable QoS
• Ready
  − Instantaneously readable and/or writeable
  − Snapshots of snapshots of …
  − Virtual Lock for retention of read-only snaps
  − Automated erase option
• Integrated
  − Microsoft Hyper-V, SQL, Exchange
  − vSphere
  − Oracle
  − Backup apps from HP, Symantec, VEEAM, CommVault
  − SMI-S

Up to 64,000 virtual volumes and snapshots

[Chart: top arrays worldwide by number of virtual copies, as of May 2014; the leading systems (10800, 10400, T800, T400, 7400, 7200, and S800 models) each held between roughly 6,000 and 32,000 snapshots]

Hundreds of snaps per base volume… but only one CoW required

HP 3PAR Virtual Copy—Snapshot at its best (2 of 2)
• Virtual copies can be mapped to CPGs different from their base volumes
  − This means they can have different quality-of-service characteristics
  − For example, the base volume space can be derived from a RAID 1 CPG on FC disks and the Virtual Copy space from a RAID 5 CPG on NL disks
• The base volume space and the Virtual Copy space can grow independently without impacting each other
  − Each space has its own allocation warning and limit
• Dynamic Optimization can tune the base volume space and the Virtual Copy space independently

One week based on hourly snaps and an average daily change rate of ~10%

HP 3PAR Virtual Copy for backup use case

• Base volume of 2 TB, snapped hourly Monday through Sunday
• Each day adds 24 copies and ~200 GB of changed data: Monday 24 copies, Tuesday 48, Wednesday 72, Thursday 96, Friday 120, Saturday 144, Sunday 168
• Results in 168 virtual copies and only ~1.4 TB of snapshot space needed
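The arithmetic behind this use case can be checked directly; every figure below comes from the slide (2 TB base volume, ~10% average daily change, hourly snapshots kept for one week):

```python
# Worked example for the Virtual Copy backup use case
base_tb = 2.0          # base volume size
daily_change = 0.10    # ~10% average daily change rate
days = 7               # one week of retention
snaps_per_day = 24     # hourly snapshots

total_snaps = days * snaps_per_day                  # virtual copies kept
snapshot_space_tb = base_tb * daily_change * days   # CoW space consumed

print(total_snaps, round(snapshot_space_tb, 1))     # 168 copies, ~1.4 TB
```

Because copy-on-write stores only changed data once per day’s worth of change, the 168 copies cost ~1.4 TB rather than 168 full 2 TB images.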

TCO and space efficiency without compromise

HP 3PAR Thin Technologies leadership overview

• Start thin with Thin Provisioning: buy up to 50% less storage capacity 1)
• Get thin with Thin Conversion: reduce tech refresh costs by up to 50%
• Stay thin with Thin Persistence: thin 3PAR volumes stay thin over time

[Figure: a 16 TB legacy volume converts to 8 TB; a 2 TB volume stays at 1 TB]

1) See the HP Get Thin Guarantee at http://www.hp.com/storage/getthin
2) Currently available on SSD only

[Figure: before and after conversion on a Linux host; presented 24 GB, consumed 3 TB + buffer before thinning]

Get even thinner with Inline Data Deduplication: reduce your storage footprint by 50 to 90% 2)

[Figure: the ASIC detects duplicate bit patterns inline and stores each unique pattern only once]

HP 3PAR Thin Technologies benefits

• Built-in
  − Utility Storage supports ThP and Thin Deduplication without the diminished performance and functional limitations that plague bolt-on thin provisioning and dedupe solutions
• In-band
  − The 3PAR ASIC detects sequences of zeroes and identical data patterns in 16 kB chunks and does not write them to disk
  − Third-party ThP and dedupe implementations reclaim space as a post-process, creating space and performance overhead
• Reservation-less
  − ThP draws fine-grained increments from a single free-space reservoir without pre-dedication
  − Third-party ThP implementations require a separate, pre-dedicated pool for each data service level
• Integrated
  − API for direct thin provisioning and thin dedupe integration in Symantec File System, VMware vSphere, Oracle ASM, Windows Server 2012, and others
  − Guaranteed efficiency
  − Save 50%+ storage capacity using ThP when migrating from legacy storage *
  − Save another 50%+ capacity on SSD thanks to thin deduplication
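As an illustration of the in-band behavior described above, the sketch below models zero detection and deduplication on 16 kB pages. The real work happens inside the 3PAR ASIC with its own fingerprinting and verification; the SHA-256 keying here is purely an assumption for the sketch:

```python
import hashlib

PAGE = 16 * 1024  # 16 KiB chunks, matching the granularity named on the slide

def ingest(data, store):
    """Return one reference per page; `store` keeps each unique non-zero page once.

    None means "all-zero page, nothing written to disk" (zero detection);
    repeated keys mean the page body is stored only once (deduplication).
    """
    refs = []
    for off in range(0, len(data), PAGE):
        page = data[off:off + PAGE]
        if not any(page):                      # zero detection
            refs.append(None)
            continue
        key = hashlib.sha256(page).hexdigest() # illustrative fingerprint
        store.setdefault(key, page)            # keep one copy per unique page
        refs.append(key)
    return refs
```

Ingesting a zero page followed by two identical data pages yields one stored page and one `None` reference, so only 16 KiB of the 48 KiB written actually lands on media.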

* As compared to a legacy storage array.  See the HP Get Thin Guarantee at http://www.hp.com/storage/getthin

HP 3PAR Thin Provisioning—Start thin

• Traditional array (dedicate on allocation): the required net array capacities, and thus the physically installed disks, must match the server-presented capacities/LUNs up front
• HP 3PAR array (dedicate on write only): only actually written data consumes physical capacity; the remainder stays in the free chunklet pool

Thin online SAN storage up to 75%
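A minimal model of "dedicate on write only": volumes present far more capacity than is physically installed, and pages are drawn from a single shared free pool only on first write. The class and pool names are invented for illustration; they are not 3PAR objects:

```python
class FreePool:
    """Single shared free-space reservoir (chunklets), no pre-dedicated pools."""
    def __init__(self, pages):
        self.free = pages

    def allocate(self):
        if self.free == 0:
            raise RuntimeError("free pool exhausted")
        self.free -= 1


class ThinVolume:
    """Presents a large virtual size; physical pages are dedicated on write only."""
    def __init__(self, virtual_pages, pool):
        self.virtual_pages = virtual_pages
        self.pool = pool
        self.mapped = set()              # virtual pages that are physically backed

    def write(self, page_no):
        if page_no not in self.mapped:   # first write to this page: allocate
            self.pool.allocate()
            self.mapped.add(page_no)


pool = FreePool(pages=100)
# Two volumes each *present* 1,000 pages, far more than physically installed
vols = [ThinVolume(1000, pool), ThinVolume(1000, pool)]
vols[0].write(0); vols[0].write(0); vols[1].write(7)
print(pool.free)   # 98: only two distinct pages were ever written
```

Rewriting an already-mapped page consumes nothing further, which is why presented capacity can safely exceed installed capacity.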

HP 3PAR Thin Conversion—Get thin

• A practical and effective solution to eliminate costs associated with:
  – Storage arrays and capacity
  – Software licensing and support
  – Power, cooling, and floor space
• Unique 3PAR ASIC built-in zero detection delivers:
  – Elimination of the time and complexity of getting thin
  – Open and heterogeneous any-to-3PAR migrations
  – Preserved service levels at high performance during migrations

[Figure: before and after migration; the ASIC strips blocks of zeroes so the volume shrinks]

Keep the array thin over time

HP 3PAR Thin Persistence—Stay thin

• Provides non-disruptive and application-transparent “re-thinning” of thin provisioned volumes
• Returns space to thin provisioned volumes and to the free pool for reuse
• Delivers simplicity through the unique 3PAR ASIC with built-in zero detection
  – No special host software required
  – Leverage standard file system tools/scripts to write zero blocks
• Preserves service levels: zeroes are detected and unmapped at line speeds
• Intelligently reclaims 16 KB pages
• Integrates automated reclamation with:
  – T10 SCSI UNMAP/TRIM (Windows Server 2012, vSphere [manual], Linux)
  – VAAI (write same zero)
  – Symantec file system
  – Oracle ASM Storage Reclamation Utility

[Figure: before and after reclamation; zero blocks written by the host are detected by the ASIC and returned to the free pool]

Built-in, not bolted on

HP 3PAR Thin Technologies positioning

• No up-front allocation of storage for thin volumes
• No performance impact when using thin and thin-deduped volumes, unlike competing storage products
• No restrictions on 3PAR thin volume use, unlike many other storage arrays
• Allocation size of 16 k, much smaller than most competitors’ thin implementations
• Thin volumes can be created in less than 30 seconds, without any disk layout or configuration planning required
• Thin volumes are autonomically wide-striped over all drives within a given tier of storage

Host assisted by vSphere and Hyper-V

HP 3PAR Thin Clones

• Integration with HP 3PAR Inline Deduplication
• Works on VMware vSphere and Hyper-V
• Leverages HP 3PAR reservation-less snapshot technology
• Clones are created quickly and easily without pre-allocating any storage
• New data is deduplicated leveraging the inline dedupe solution

[Figure: hypervisor with multiple VMs cloned from a shared image]

VM cloning leverages Virtual Copy together with xCOPY and ODX

Learning check

1. What is the key difference between provisioning on a traditional array and provisioning on an HP 3PAR StoreServ array?

Learning check answer

1. What is the key difference between provisioning on a traditional array and provisioning on an HP 3PAR StoreServ array?

• On a traditional array, you dedicate on allocation
• With an HP 3PAR StoreServ array, you dedicate on write only

Learning check

2. List at least three benefits of keeping an array thin over time

_______________________________________________

Learning check answer

2. List at least three benefits of keeping an array thin over time

• Provides non-disruptive and application-transparent “re-thinning” of thin provisioned volumes
• Returns space to thin provisioned volumes and to the free pool for reuse
• Delivers simplicity through the unique 3PAR ASIC with built-in zero detection
• Preserves service levels: zeroes detected and unmapped at line speeds
• Intelligently reclaims 16 KB pages
• Integrates automated reclamation


Performance

Same operating system, management console, and software features

HP 3PAR StoreServ Storage

                                 7200c     7400c     7450c     7440c     10400     10800
Controller nodes                 2         2-4       2-4       2-4       2-4       2-8
Fibre Channel host ports         4-12      4-24      4-24      4-24      0-96      0-192
10 Gb iSCSI/FCoE ports           0-4       0-8       0-8       0-8       0-48      0-96
Built-in IP Remote Copy ports    2         2-4       2-4       2-4       2-4       2-8
GB cache per node pair/max       40/40     48/96     96/192    96/192    192/384   192/768
Drives per StoreServ             8-240     8-576     8-240     8-960     16-960    16-1920
Max SSD per StoreServ            120       240       240       240       256       512
Raw capacity (TB)                500       1600      460       2000      1600      3200
SPC-1 benchmark IOPS             NA        258,078   Planned   NA        NA        450,213
Max front-end IOPS, read         300,000   600,000   900,000   900,000   240,000   480,000
Max front-end MB/s, 256 k read   2,700     5,000     4,250     5,000     10,800    14,900

GB flash cache per node pair/max: 768/768 (7200c), 768/1500 (7400c), NA (7450c), 1500/3000 (7440c)
Available drive types: SSD, 15 k SAS, 10 k SAS, and 7.2 k NL SAS on all models except the 7450c, which supports SSD only

Which array is more efficient and easier to use?

Traditional storage array HP 3PAR array

Adaptive Flash Cache

Read cache extension using SSD
• Leverages a portion of SSD capacity as flash cache
• Provides a second-level caching layer between DRAM and HDDs
  − Caches the most frequently accessed data
  − Redirects host I/O to flash cache to provide low-latency access
• Is included with the base software

Advantages and use cases
• Lowers latency by ~20% for random read-intensive I/O workloads
• Faster response time for periodic read bursts on cold data on HDDs
• Faster response time for read bursts on cold data on tiered volumes
• No dedicated SSDs required
• Simple system-wide configuration
• Available on all HP 3PAR StoreServ arrays
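The second-level caching layer can be sketched as a two-tier LRU hierarchy in which pages evicted from DRAM are demoted into the flash cache. This is only an illustrative model; the slide does not document AFC's actual replacement policy, so the LRU choice is an assumption:

```python
from collections import OrderedDict

class TwoLevelReadCache:
    """Illustrative LRU sketch of a DRAM + flash read-cache hierarchy."""
    def __init__(self, dram_pages, flash_pages):
        self.dram = OrderedDict()   # hot tier
        self.flash = OrderedDict()  # second-level tier (flash cache)
        self.dram_pages, self.flash_pages = dram_pages, flash_pages

    def read(self, page):
        if page in self.dram:
            self.dram.move_to_end(page)
            return "dram"
        if page in self.flash:
            self.flash.move_to_end(page)
            return "flash"               # low-latency hit instead of an HDD seek
        self._fill(self.dram, self.dram_pages, page)  # miss: fetch from HDD
        return "hdd"

    def _fill(self, level, cap, page):
        if len(level) >= cap:
            evicted, _ = level.popitem(last=False)    # evict LRU entry
            if level is self.dram:   # demote DRAM evictions into flash cache
                self._fill(self.flash, self.flash_pages, evicted)
        level[page] = True
```

A page pushed out of DRAM by newer reads is still served from flash on its next access, which is the re-read burst case the slide highlights.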

[Figure: without Adaptive Flash Cache, reads that miss the controller’s DRAM cache go to HDD; with it, a flash cache of 16 KB pages sits between the DRAM cache and the HDD/SSD tiers]

Adaptive Flash Cache specs

                                   7200      7400      10400 old          10400 new          10800
Minimum drives per node pair       4         4         2xDMAG (8 drives)  2xDMAG (8 drives)  2xDMAG (8 drives)
Maximum flash cache per system     768 GB    1.5 TB    3 TB               4 TB               8 TB
Maximum flash cache per node pair  768 GB    768 GB    1.5 TB             2 TB               2 TB
Total system cache (DRAM + AFC)    792 GB    1,564 GB  3,384 GB           4,384 GB           8,768 GB

Adaptive Flash Cache provides performance acceleration for random reads

• Included as part of the base OS suite
• Enable/disable on the entire system or on selected vvsets
• Minimum flash cache per node pair is 64 GB

Notes:
• The minimum SSD drive counts above apply to Adaptive Flash Cache only; for provisioning and AO the minimum remains 8 per node pair
• All SSDs are supported for AFC, with one exception: the 480 GB cMLC SSD (E7Y55A/E7Y56A) does not support creation of Adaptive Flash Cache
• Adaptive Flash Cache is not applicable to all-flash arrays; it does not accelerate data that is already stored within the SSD tier

HP 3PAR StoreServ 7440c hardware details

Item                          HP 3PAR StoreServ 7440c
Number of controller nodes    2 or 4
HP 3PAR Gen4 ASICs            2 or 4
CPU (per controller node)     8-core, 2.3 GHz
Total cache                   1.6 - 3.2 TB
Total flash cache             1.5 - 3 TB
Total on-node cache           96 - 192 GB
Number of disk drives         8 - 960
Number of solid state drives  8 - 240
Raw capacity                  1.2 TB - 2000 TB
Drive enclosure               SFF: 24 slots in 2U; LFF: 24 slots in 4U
Number of drive enclosures    0 - 38
Host adapters                 Four-port 8 Gb/s FC; four-port 16 Gb/s FC; two-port 10 Gb/s iSCSI/FCoE
Maximum host ports            24
  8 Gb/s FC host ports        4 - 24
  16 Gb/s FC host ports       0 - 8
  10 Gb/s iSCSI host ports    0 - 8
Maximum initiators            1024 or 2048

New functionality in 3PAR OS 3.2.1

HP 3PAR Express Writes

• Fibre Channel host write processing has been optimized to deliver significantly lower latencies
• The main improvement (10 - 30%) is seen for small-block random writes at low workload intensity
• Ships as part of the base OS and is enabled by default after an upgrade to 3.2.1
  – Best practice is to leave it enabled
• A new Target Mode Write Optimization column is added to the showport output

Performance improvement with 3PAR Express Writes

[Chart: host response time (ms) vs host IOPS, with express writes enabled and disabled]

Measured configuration:
• 7450 four-node with 48 SSDs
• Six virtual volumes in RAID 1

HP 3PAR RAID 6 layout optimization

• RAID 6 handling has been improved to reduce the number of required back-end I/Os for writes
• Applies to RAID 6 set sizes of 6, 10, and 16 (4+2, 8+2, and 14+2) only
• Applies to HDD and SSD
• After an upgrade to 3.2.1, a tuneld or tunevv command can convert an existing layout to the optimal one
• Use the showblock command to see the difference between optimal and non-optimal layouts (example in the backup slides)
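The back-end savings come from avoiding read-modify-write where possible. A rough count of back-end I/Os per write, using standard RAID write-penalty arithmetic (illustrative; this is not 3PAR's internal accounting):

```python
def rmw_backend_ios(parity_drives):
    """Small (sub-stripe) write via read-modify-write:
    read old data + old parity chunks, then write new data + new parity."""
    return 2 * (1 + parity_drives)

def full_stripe_backend_ios(data_drives, parity_drives):
    """A full-stripe write needs no reads: one write per member drive."""
    return data_drives + parity_drives

# A small RAID 6 write costs 6 back-end I/Os...
print(rmw_backend_ios(2))                      # 6
# ...while a 14+2 full-stripe write amortizes 16 I/Os over 14 data chunks
print(full_stripe_backend_ios(14, 2) / 14)     # ~1.14 I/Os per data chunk
```

The more writes a layout can gather into full stripes, the fewer back-end I/Os each host write costs, which is the effect the optimization targets.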

Performance improvement of optimized RAID 6 with 100% 16 kB writes

[Chart: write IOPS, legacy 3PAR RAID 6 vs new HP RAID 6]

TPVV grow optimization

• Adaptive VV grow ties the grow size of each TPVV to its virtual size
  − Fixed growing by 256 MB per node is no longer optimal, considering larger disks and faster hosts
  − Small growing increments cause too many growth requests, producing more “system busy” events
  − Adaptive grow increments are between 256 MB and 4 GB per node
• Multi-node VV growth has been enhanced
  − In certain cases, VVs were not optimally and symmetrically grown across nodes
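A sketch of what an adaptive grow increment might look like. The 256 MB and 4 GB clamp bounds come from the slide; the proportional 1% factor is an invented placeholder for 3PAR's internal heuristic:

```python
MIN_MB, MAX_MB = 256, 4 * 1024   # clamp bounds per node, from the slide

def grow_increment_mb(virtual_size_gb, fraction=0.01):
    """Illustrative adaptive grow: scale the increment with the TPVV's
    virtual size (the 1% fraction is an assumption, not 3PAR's actual
    heuristic), clamped to the slide's 256 MB - 4 GB per-node range."""
    proportional = int(virtual_size_gb * 1024 * fraction)
    return max(MIN_MB, min(MAX_MB, proportional))

print(grow_increment_mb(10))     # small VV  -> floor of 256 MB
print(grow_increment_mb(2000))   # large VV  -> ceiling of 4096 MB
```

A large TPVV thus grows in fewer, bigger steps, which cuts the number of "system busy" growth events.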

AO block-less move—Performance benefit

[Charts: IOPS (0 to 70,000) over time during an AO region switch, comparing 3PAR OS 3.1.2 and 3.1.3; the 3.1.3 behavior is annotated as a ~70% improvement]

Optimizations in 3.1.3—Performance comparison

[Chart: time in seconds to create 100 base VVs, run creategroupsv for 100 VVs, and remove 100 VVs with the -pat option, comparing 3.1.2 and 3.1.3; for example, one operation drops from 38.3 seconds in 3.1.2 to 1.8 seconds in 3.1.3, a reduction of more than 95%]

Learning check

1. All the following statements about Adaptive Flash Cache are true except which one?
• Provides performance acceleration for random reads
• Available as an add-on to the base OS suite
• Enabled/disabled on the entire system or on selected vvsets
• Requires no dedicated SSDs

Learning check answer

1. All the following statements about Adaptive Flash Cache are true except which one?
• Provides performance acceleration for random reads
• Available as an add-on to the base OS suite (this is the exception: Adaptive Flash Cache is included as part of the base OS suite, not an add-on)
• Enabled/disabled on the entire system or on selected vvsets
• Requires no dedicated SSDs


Availability

HP 3PAR high availability (1 of 3)

Spare disk drives compared to distributed sparing
• Traditional arrays: few-to-one rebuild to a dedicated spare drive, causing hotspots and long rebuild exposure
• 3PAR StoreServ: many-to-many rebuild to distributed spare chunklets, with parallel rebuilds in less time
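The benefit of many-to-many rebuild can be seen with a toy model: if the failed drive's data is reconstructed in parallel across N participating drives instead of one dedicated spare, the rebuild window shrinks by roughly a factor of N. The per-drive throughput figure below is an illustrative assumption, not a 3PAR specification:

```python
def rebuild_hours(drive_tb, writers, per_drive_mb_s=50.0):
    """Toy model: time to reconstruct a failed drive's data when `writers`
    drives absorb the rebuild in parallel (50 MB/s per drive is assumed)."""
    total_mb = drive_tb * 1024 * 1024
    return total_mb / (writers * per_drive_mb_s) / 3600

one_spare = rebuild_hours(4, writers=1)     # few-to-one rebuild to a hot spare
many_many = rebuild_hours(4, writers=40)    # chunklet rebuild across 40 drives
print(round(one_spare, 1), round(many_many, 1))  # hours for each case
```

A shorter rebuild window also shrinks the time during which a second failure would be dangerous, which is the resiliency point behind distributed sparing.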

Guaranteed drive enclosure (drive cage) availability if desired

HP 3PAR high availability (2 of 3)

• Traditional arrays: enclosure-dependent RAID, where an enclosure (cage) failure might mean no access to data
• 3PAR StoreServ: enclosure-independent RAID
  − Raidlet groups for any RAID level
  − Data access preserved with HA enclosure (cage) protection
  − User selectable per CPG

[Figure: RAID 10 and RAID 50 raidlet groups (members A1-D6) striped across enclosures, so that each RAID set’s members land in different drive enclosures]

Write cache re-mirroring

HP 3PAR high availability (3 of 3)

• Traditional mid-range arrays: traditional write cache mirroring; losing one controller results in poor performance due to write-through mode, or a risk of write data loss
• 3PAR StoreServ: persistent write cache mirroring
  − No write-through mode, so performance stays consistent
  − Works with all 4-, 6-, and 8-node systems
  − The write cache stays on thanks to redistribution across the remaining nodes

Online firmware update

• Non-disruptive to business applications
• One node after the other is updated
• Can be performed under I/O load
• Tests performed by ESG on the following environment:
  − VMware vSphere 5.1 running on four HP BL460 servers
  − 3PAR StoreServ 7450 four-node array
  − OLTP workload of 144,000 IOPS generated with IOMETER
• The actual firmware update:
  − Initially, each of the four nodes served 36,000 IOPS
  − The nodes were then updated one after another

Note: While one node was being updated, the three remaining nodes served 48,000 IOPS each, and array performance stayed at 144,000 IOPS the entire time

Fast and reliable

[Chart: performance during the firmware update]

HP 3PAR Persistent Ports

Path loss, controller maintenance, or controller loss behavior of 3PAR arrays
• No user intervention required
• In Fibre Channel SAN environments, all paths stay online in case of loss of signal on a Fibre Channel path, during node maintenance, and in case of a node failure
• For Fibre Channel, iSCSI, and FCoE deployments, all paths stay online during node maintenance and in case of a node failure
• The server does not “see” the swap of the 3PAR port ID, so no MPIO path failover is required

[Figures: a Fibre Channel path loss is handled by 3PAR Persistent Ports and all server paths stay online; a controller maintenance or loss is handled by Persistent Ports for all protocols, with the surviving controller taking over the partner controller’s port IDs (0:0:1, 0:0:2, 1:0:1, 1:0:2) so that MPIO still sees every path]

Read more in the Persistent Ports whitepaper

99.9999% data availability—guaranteed*

HP 3PAR Get 6-Nines Guarantee

Industry-first 6-Nines guarantee across midrange, enterprise, and all-flash storage

Products covered:
• All four-node 7000 systems
• All 10000 systems with more than four nodes

Program details*
• 6-Nines availability guarantee on covered systems
• Remedy: HP will work with the customer to resolve their issue and fund three additional months on the customer’s mission-critical support contract
• Length of guarantee: the first 12 months the 3PAR storage system is deployed

* Complete program terms and conditions on the Get 6-Nines Portal Page
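Six nines translates into very little permitted downtime; the arithmetic is straightforward:

```python
availability = 0.999999                # "six nines" of data availability
seconds_per_year = 365 * 24 * 3600     # 31,536,000 seconds in a non-leap year
downtime_seconds = (1 - availability) * seconds_per_year
print(round(downtime_seconds, 1))      # ~31.5 seconds of downtime per year
```

For comparison, five nines (99.999%) would allow about ten times as much, roughly 5.3 minutes per year.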

Learning check

1. Compare the key benefits of persistent write cache mirroring over traditional write cache mirroring

Learning check answer

1. Compare the key benefits of persistent write cache mirroring over traditional write cache mirroring

With traditional write cache mirroring, losing one controller results in poor performance due to write-through mode, or a risk of write data loss.

With persistent write cache mirroring, there is no write-through mode, so performance stays consistent, and it works with all 4-, 6-, and 8-node systems.


File support

File and object offerings for HP 3PAR StoreServ

Primary storage
• 3PAR StoreServ + File Persona
  − Straightforward user shares and home directories
  − AD, local, and LDAP environments
  − Unified GUI and CLI management
• 3PAR StoreServ + File Controller
  − Sophisticated file serving for AD-based environments
  − Connected StoreEasy remote sites
  − Configurable performance and capacity

Archive storage
• StoreAll 8200 (integrated via SMI-S Provider)
  − Retention, WORM, integrity validation
  − Metadata analytics and search
  − Scale-out performance and capacity

HP 3PAR StoreServ primary file storage products

There are two 3PAR StoreServ products that provide primary file storage, with significantly different architectural approaches:
• HP 3PAR File Persona Software Suite: a feature within the 3PAR OS; truly converged primary storage for block and file
• HP 3PAR StoreServ File Controller: add-on hardware usable with any 3PAR array; a gateway with integrated management

What is it?

HP 3PAR File Persona Software Suite

• A licensed native feature of the HP 3PAR OS− No additional hardware required

• Includes:− A rich set of file protocols

(SMB, NFS, HTTP)− An Object Access API (REST)− File data services

• File storage feature using hardware within the array itself

• Available in the 7200c, 7400c ,7440c, and 7450c

HP 3PAR StoreServ withHP 3PAR File Persona

Efficient Effortless

Bulletproof

File Persona limits

• File storage capacity
  − 64 TB per node pair
  − 128 TB (7440c)
• File systems/file provisioning groups (FPG)
  − 32 TB per FPG
  − 16 FPGs per node pair
  − 32 VVs allowed per FPG (min 1 TB, max 16 TB)
• Virtual file servers (VFS)
  − 16 VFS per node pair (1 VFS per FPG)
  − 4 VLANs per VFS
  − 4 IP addresses per VFS
• File stores
  − 256 per node pair
• Snapshots
  − 262,144 file snapshots per node pair
• Users
  − 1,500 users per node pair
  − 3,000 users (7440c)
• File shares
  − 4,000 SMB shares per node pair
  − 1,024 NFS shares per node pair
• Files
  − 2 TB max file size
  − 128 K files per directory
  − 100 million files and directories per FPG
• Quotas
  − 20,000 user/group quotas per node pair
  − 256 capacity quotas per node pair

HP 3PAR StoreServ File Controller

What is HP 3PAR StoreServ File Controller?

Add-on hardware that provides file services
• Direct- or fabric-attached via Fibre Channel
• Uses the Fibre Channel ports of the array
• Provides its own network interfaces

Integrated management
• End-to-end file storage and file-share provisioning
• Monitoring dashboard for file services

Significantly scalable
• Two to eight file controllers per cluster
• Multiple file tenants per cluster
• Multiple clusters per 3PAR array

Windows Storage Server 2012 R2
• Full Windows environment compatibility
• SMB protocol updated to SMB 3.02
• NFS v2, v3, and v4.1

Clustered for high availability

3PAR StoreServ File Controller limits

• Capacity
  − 352 TB per File Controller cluster (22 drive letters x max volume size)
• File system (volume)
  − 16 TB per volume (3PAR LUN limit)
  − 1 VV per volume
  − 22 basic volumes per cluster (drive-letter limited)
• Cluster
  − 2 - 8 file controllers per cluster
  − 150 VLANs per cluster (tested limit); in practice limited by system memory and NIC driver
  − 32 physical network interfaces per cluster
• Multi-tenancy
  − Up to 24 tenants per cluster
• Snapshots
  − 64 shared-folder VSS snapshots
  − Hardware snapshots limited by the array
• Users
  − 20,000 users per file controller
  − 40,000 users with a file controller pair + 7440
• File shares
  − Undefined, but thousands per file controller
  − The number of shares on a server affects server boot time; on a server with typical hardware and thousands of shares, boot can be delayed by minutes (exact delays depend on server hardware)
  − Recommended max values: 5,000 SMB shares per file controller; 2,048 NFS shares per file controller
• File
  − 16 TB max file size
  − 350,000 files per directory
  − If directory enumeration performance is important, files should be stored in a file system hierarchy
• Quotas
  − 20,000 user/group quotas per file controller

3PAR File Persona and 3PAR StoreServ File Controller features

                  3PAR File Persona                        3PAR StoreServ File Controller
Product type      Software feature                         Discrete add-on hardware
Scalability       2 or 4 3PAR converged controllers;       2 to 8 file controllers per cluster;
                  up to 3,000 concurrent users*;           up to 40,000 concurrent users***;
                  128 TB aggregate file capacity*;         352 TB per file controller cluster;
                  32 TB per file system                    16 TB per file system (3PAR LUN limit)
Protocols         SMB 1.0, 2.0, 2.1, 3.0**;                SMB 1.0, 2.0, 2.1, 3.0, 3.02;
                  NFS v3, v4; NDMP                         NFS v2, v3, v4.1
Authentication    Active Directory, OpenLDAP, Local        Active Directory, Local
Management        Truly unified SSMC and 3PAR OS CLI       Semi-integrated
Remote support    Truly unified STaTS                      Discrete Insight Remote Support
Advanced          Object Access API for custom cloud apps  Screening; classification, access policies,
features                                                   rights management; access auditing;
                                                           multi-tenancy (24 per cluster)

* 74x0c-4N   ** Select SMB 3.0 features only   *** Per file controller pair

Considerations when sizing for file workloads

Clients
• Type of client
• Number of concurrent clients
• Client applications
• Overall performance of the client: CPU, memory/cache, client network interface

Connectivity
• Protocol
  – SMB (1.0, 1.1, 2, 2.1, 3)
  – NFS (v3, v4)
• Network infrastructure
  – LAN, WAN
  – 1 GbE/10 GbE
  – Connectivity between switches
  – Congestion
  – Network load balancing

File serving node
• CPU
• Memory/cache
• Overall performance of the server
• Server network configuration
  – 1/10 GbE
  – Bond mode, if any
  – Number of links in the bond
• Storage
  – HBA used
  – Media type (HDD, SSD)
  – RAID level

Operations
• Backup
• Restore
• Snapshots
• Quotas
• Anti-virus
• Replication

Logical view of managed objects

File Share ("home")
• Share permissions

File Store (“sales“)
• Holder of policies, some of which can be inherited from the VFS
• Snapshot entity for up to 1,024 snapshots

Virtual File Server (enterprise.hp.com)
• Virtual IP interfaces and authentication service
• User quotas and antivirus configuration

File Provisioning Group (fpg1)
• Replication and disaster recovery entity
• Built from an autonomic group (virtual volume set)

[Figure: SMB, NFS, and REST API access at the top; up to 32 VFS/FPG pairs, each VFS holding up to 16 File Stores with n shares each; FPGs built on CPGs and wide-striped logical disks]

Antivirus scanning overview

• Policy-based antivirus scanning over SMB (CIFS), NFS, and HTTP (used by the Object Access API) protocols
  − Exclusion AV policies at the VFS level and override policies at the File Store level
  − Supports multiple virus scan servers (max 50) for redundancy and improved throughput performance
• ICAP 1.0–based Virus Scan Engine (VSE) software supported (single vendor at a time)
  − Symantec Protection Engine 7.5
  − McAfee VirusScan Enterprise 8.8 and VirusScan Enterprise for Storage 1.0.2
  − Supports on-access (real-time) and scheduled (on-demand) scanning
• Supports automatic and manual start/stop of the AV service on addition/removal of a VSE to/from the cluster
• AV statistics (files scanned, files infected, files quarantined) at the VFS level

On-access scan

Antivirus scanning process

1. Client requests an open (read)/close (write) of an SMB file, or a read of an NFS/HTTP file
2. The storage system determines whether the file needs to be scanned based on the policies, and notifies the AV scan server
3. The VSE server scans the file and reports the scan results back
4. If no virus is found, access to the file is allowed
   – If a virus is found, the client receives “Access Denied” (SMB), “Permission Denied” (NFS), or “transfer closed” (HTTP); the file is quarantined, and scan messages are logged in /var/log/ade
5. If the VSE server is unavailable and the policy is set to “Deny Access,” the SMB client receives “Access Denied” and an event is generated noting that the VSE server is unavailable

[Figure: client PCs, the array, and the antivirus scan servers exchanging steps 1-5; access is denied when no AV servers are available]

Using File Store snapshots

User-driven file recovery
• File Store snapshots are different from block volume Virtual Copy snapshots
• Restoring individual files from File Store snapshots is more efficient than administrator-driven recovery
  − Users can restore their own files

How it works
• Windows clients
  − Snapshots integrate with the Previous Versions tab in Windows Explorer
• Linux/UNIX clients
  − Previous versions of the files appear in the .snapshot directory

Replication and disaster recovery

Replication
• Remote Copy is used for files just as it is for block
• Both Sync and Async Periodic Remote Copy supported
• All VVs* in a file provisioning group must be in a single Remote Copy group
• Both uni-directional and bi-directional Remote Copy supported to different volumes
• 1:1, M:1 (many-to-one), 1:N (one-to-many), and M:N topologies supported for failover only, not for distribution**

Disaster recovery
• Requires preconfiguration of the target array for node networking, DNS configuration, AD config, and AV services
• Target/backup array must have the same number of File Persona nodes as the source/primary
• Scheduled tasks must be manually migrated or will be lost

* Max 32 VVs of minimum 1 TB each in a node for the first release
** M,N is a max of 4

[Diagram: two 3PAR StoreServ arrays; VV1–VV4 in a Remote Copy group replicate to VV1'–VV4' in the remote Remote Copy group over Sync/Async RC links]

Backup options

Share-based backup
• Network share-based backup over SMB or NFS
• The recommended mode of backup; use NDMP when needed

NDMP backup over iSCSI
• Supports the software iSCSI initiator for NDMP backup
• NDMP v2, v3, v4 (default is v4)
• Shares the same network ports with file I/O

[Diagram: backup software (HP Data Protector, Commvault Simpana, Symantec NetBackup, IBM Tivoli Storage Manager) moving data from 3PAR StoreServ to a backup target]

3PAR StoreServ Management Console

• Replacement for the existing management console
• Converged management for the entire HP 3PAR product line
• Intuitive web-based UI with dashboard overviews
• Modern and consistent look and feel
• Redesign of hundreds of IMC screens
• Better usability
• Replacement for the existing System Reporter
• Standards-based, with HP OneView integration

Dashboard
• Health, performance, and capacity at a glance

Mega Menu
• An express-driven interface to all points within the SSMC and the objects monitored
− From the Mega Menu, a user is linked directly to any of the listed context areas
• Converged File and Block management and reporting

Learning check

1. What are the two options for backup, and when are they recommended?

___________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

1. What are the two options for backup, and when are they recommended?

• Share-based backup—the recommended mode of backup; use NDMP when needed

• NDMP backup over iSCSI


Replication and recovery

Protect and share data

HP 3PAR Remote Copy

• Smart
– Initial setup in minutes
– Simple and intuitive commands
– No consulting services
• Complete
– Native IP LAN or FC SAN-based
– No extra copies or infrastructure needed
– Thin Provisioning and Thin Conversion aware
– Mirror 1:1 between any 3PAR arrays (F-Class, T-Class, 7000, and 10000)
– For StoreServ 7000 and 10000 at 3.1.3 and later, all configurations from 1:1 to 4:4 are supported
– Any combination of Sync and Async Periodic RC
– VMware vSphere Site Recovery Manager and vSphere Metro Storage Cluster certified
• Scalable
– One RCIP link per node—up to eight per 3PAR array
– Up to four RCFC links per node—up to 32 per 3PAR array
– Up to 6,000 replicated volumes per 3PAR array

1:1 configuration: Sync RC and/or Async Periodic RC between any 3PAR arrays

4:4 configuration: Sync RC and/or Async Periodic RC between any 7000 or 10000 arrays

See the demo video at: http://h20324.www2.hp.com/SDP/Content/ContentListing.aspx?PortalID=1&booth=66&tag=534&content=3431

A specialized 1:2 disaster recovery solution

HP 3PAR Synchronous Long Distance configuration

• Combines the ability to maintain concurrent metro-distance synchronous remote copies (RTO = 0) with continental-distance asynchronous remote copies for disaster tolerance

[Diagram: Synchronous Long Distance 1:2 configuration—the Primary (P) replicates over Sync RC at metropolitan distance to the Secondary (S1) and over an active Async Periodic RC link at continental distance to the Tertiary (S2); a standby Async Periodic RC link connects S1 to S2]

Find four demo videos at: https://www.youtube.com/playlist?list=PL9UfCHCZQuNDmU8WRXT_RU7yG_sL-eweV

Continuous operation and synchronization

HP 3PAR Remote Copy, Synchronous mode
• Real-time mirror
− Highest I/O currency
− Lock-step data consistency
• Space efficient
− Thin Provisioning aware
• Targeted use
− Campus-wide business continuity
• Guaranteed consistency
− Enabled by volume groups

Write sequence (P = primary volume, S = secondary volume):
1. Host server writes I/O to primary write cache
2. Primary array writes I/O to secondary write cache
3. Remote array acknowledges the receipt of the I/O
4. Host I/O acknowledged to host
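The four-step sequence can be sketched as a minimal function, showing that the host acknowledgement is issued only after the remote array has acknowledged the mirrored write. This is an illustrative sketch under that assumption, not HP code:

```python
# Minimal sketch of the synchronous Remote Copy write path (steps 1-4 above).
# Caches are modeled as plain lists; names are hypothetical.

def sync_write(data, primary_cache, secondary_cache):
    primary_cache.append(data)              # 1. host write lands in primary write cache
    secondary_cache.append(data)            # 2. primary mirrors the I/O to the secondary
    remote_ack = data in secondary_cache    # 3. remote array acknowledges receipt
    return remote_ack                       # 4. only now is the host I/O acknowledged

p_cache, s_cache = [], []
assert sync_write("blk42", p_cache, s_cache) is True
assert p_cache == s_cache == ["blk42"]      # lock-step consistency: both caches match
```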

Initial setup and synchronization

HP 3PAR Remote Copy, Asynchronous Periodic mode
• Efficient even with high-latency links
− Local write acknowledgement
• Bandwidth friendly
− Delta replication only
• Space efficient
− Thin aware
• Guaranteed consistency
− Enabled by volume groups
− Based on snapshots

Initial synchronization sequence (P = primary volume, S = secondary volume, with local snapshots on the primary):
1. Secondary volume created
2. Local snapshot created
3. Initial synchronization started
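The snapshot-based delta replication that makes this mode bandwidth friendly can be sketched as follows; each resync ships only the blocks changed since the last snapshot. A hedged sketch with hypothetical names, not the actual 3PAR algorithm:

```python
# Sketch of snapshot-based delta replication: compare the volume against the
# last snapshot, ship only the changed blocks, then take a new snapshot.

def periodic_resync(primary, last_snapshot, secondary):
    """Copy only changed blocks (the delta) to the secondary, then snapshot."""
    delta = {blk: data for blk, data in primary.items()
             if last_snapshot.get(blk) != data}   # blocks changed since last sync
    secondary.update(delta)                       # bandwidth friendly: delta only
    return dict(primary), len(delta)              # new snapshot + blocks shipped

primary = {0: "a", 1: "b", 2: "c"}
snap, secondary = dict(primary), dict(primary)    # initial full synchronization
primary[1] = "B"                                  # host writes between intervals
snap, shipped = periodic_resync(primary, snap, secondary)
assert shipped == 1 and secondary == primary      # only one block crossed the link
```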

Assured data integrity

HP 3PAR Remote Copy

Single volume
• All writes to the secondary volume are completed in the same order as they were written on the primary volume

Autonomic multi-volume group
• Volumes can be grouped together to maintain write ordering across sets of volumes
• Useful for databases or other applications that make dependent writes to more than one volume
• Secondary groups and volumes are autonomically created or reconfigured, and credentials are inherited

[Diagram: a new source volume added to a replicated provisioning group on the primary 3PAR storage gets its target volume created autonomically on the secondary 3PAR storage]

HP 3PAR Remote Copy—Supported topologies and maximum latencies

Remote Copy type               Max supported latency
Synchronous RC FC              2.6 ms RTT*
Synchronous RCIP               2.6 ms RTT*
Asynchronous Periodic RC FC    2.6 ms RTT*
Asynchronous Periodic RCIP     150 ms RTT*
Asynchronous Periodic RC FCIP  120 ms RTT*

* RTT = round trip time. Optical fiber networks typically have a delay of ~5 µs/km (0.005 ms/km); thus 2.6 ms allows fiber link distances of up to 260 km (2 × 260 km × 0.005 ms/km = 2.6 ms)
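The footnote's distance arithmetic can be expressed directly: a round trip traverses the link twice, so the maximum one-way distance is the RTT budget divided by twice the per-kilometer fiber delay. A small sketch (helper name is ours):

```python
# The table's distance math: optical fiber adds ~0.005 ms per km each way,
# so a round trip covers twice the link length.

FIBER_DELAY_MS_PER_KM = 0.005          # ~5 microseconds per km

def max_link_distance_km(rtt_budget_ms):
    """Largest one-way fiber distance whose round trip fits the RTT budget."""
    return rtt_budget_ms / (2 * FIBER_DELAY_MS_PER_KM)

print(round(max_link_distance_km(2.6)))    # synchronous RC budget -> 260
print(round(max_link_distance_km(150)))    # async periodic RCIP budget -> 15000
```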

Clustering solution protecting against server and storage failure

Cluster Extension for Windows

• What does it provide?
− Manual or automated site failover for server and storage resources
− Transparent Hyper-V live migration between sites
• Supported environments
− Windows Server 2003, 2008, 2012
− HP StoreEasy (Windows Storage Server)
− Max supported distances
  • Remote Copy sync supported up to 2.6 ms RTT (~260 km)
  • Up to the Microsoft Cluster heartbeat maximum of 20 ms RTT
− 1:1 and SLD configuration
− Sync or async Remote Copy
• Requirements
− 3PAR disk arrays
− 3PAR Remote Copy
− Windows cluster
− HP Cluster Extension (CLX)
− Max 20 ms cluster IP network RTT
• Licensing options
− Option 1: per cluster node
  • 1 LTU per Windows cluster node (4 LTUs for the configuration shown)
− Option 2: per 3PAR array
  • 1 LTU per 3PAR array (2 LTUs for the configuration shown)

Also see the HP CLX resources

[Diagram: two data centers connected by LAN/WAN, each with cluster nodes and an HP 3PAR array, plus a file share witness; synchronous or asynchronous Remote Copy between the arrays is managed by CLX]

Clustering solution protecting against server and storage failure

Cluster Extension for Windows on vSphere

• What does it provide?
– Manual or automated site failover for Windows VMs and storage resources
• Supported environments
– Windows Server 2003, 2008, 2012 VMs on VMware vSphere
– Max supported distances
  • Up to the Remote Copy sync supported max of 2.6 ms RTT (~260 km)
  • Up to the Microsoft Cluster heartbeat max of 20 ms RTT
– 1:1 and SLD configuration
– Sync or async Remote Copy
• Requirements
– 3PAR disk arrays
– 3PAR Remote Copy
– Windows cluster
– HP Cluster Extension (CLX) on each VM in the cluster
– Max 20 ms cluster IP network RTT
• Licensing options
– Option 1: per Windows VM
  • 1 LTU per Windows VM in the cluster
– Option 2: per 3PAR array
  • 1 LTU per 3PAR array (2 LTUs independent of the number of VMs)

Also see the HP CLX resources

[Diagram: a four-node Windows cluster (Cluster 1–4) running in VMs on vSphere hosts across Data Center 1 and Data Center 2 over LAN/WAN, with a file share witness in DC 3; synchronous or asynchronous Remote Copy between the HP 3PAR arrays is managed by CLX]

End-to-end clustering solution to protect against server and storage failure

HP Serviceguard Metrocluster for HP-UX and Linux

• What does it provide?
– Manual or automated site failover for server and storage resources
• Supported environments
– HP-UX 11i v2 and v3 with Serviceguard
– RHEL 5 and 6 with HP Serviceguard 11.20.10
– SLES 11 with HP Serviceguard 11.20.10
– Max supported distances
  • Up to the Remote Copy sync max of 2.6 ms RTT (~260 km)
  • Up to the Remote Copy async max of 150 ms RTT
• Requirements
– HP 3PAR disk arrays
– 3PAR Remote Copy
– HP Serviceguard and HP Metrocluster
• Licensing for Linux
– 1 LTU SGLX per CPU core and 1 LTU MCLX per CPU core
• Licensing options for HP-UX
– Option 1: per CPU socket for SGUX and MCUX
– Option 2: per cluster with up to 16 nodes for SGUX and MCUX

Also see the Metrocluster 3PAR manuals

[Diagram: Data Center 1 and Data Center 2 connected by LAN/WAN, each with an HP 3PAR array, and a quorum service in DC 3; synchronous or asynchronous Remote Copy between the arrays is managed by CLX]

Peer Persistence overview
• Peer Persistence is a high-availability storage configuration between two sites/data centers with the ability to transparently redirect host I/O from the primary to the secondary storage system
− “Switchover” is a manual process allowing the facilitation of service optimization and storage system maintenance activities within a high-availability data storage solution
− “Failover” is an automatic process that redirects host I/O from a failed source system to the target storage system
  • Failover uses the HP 3PAR Quorum Witness to monitor for HP 3PAR storage system failure and determine whether a failover of host services is required
• The volumes must be synchronously replicated and must have the same WWNs
• For vSphere, host persona 11 is required; for Windows, host persona 15 is required

Currently supported environments as of September 2014:

VMware vSphere   HP 3PAR OS     Host connectivity
5.0, 5.1*        ≥ 3.1.2 MU2    FC
5.5*             ≥ 3.1.3        FC
5.5*             ≥ 3.2.1        FC, iSCSI, FCoE
* Stand-alone, cluster, and vMSC configurations

Windows Server   HP 3PAR OS     Host connectivity
2008 R2*         ≥ 3.2.1        FC, iSCSI, FCoE
2012 R2*         ≥ 3.2.1        FC, iSCSI, FCoE
* Stand-alone, cluster, and Hyper-V configurations

RC link          HP 3PAR OS
RCFC             ≥ 3.1.2 MU2
RCIP             ≥ 3.1.3

Certified for vSphere Metro Storage Cluster

Peer Persistence for VMware vSphere

• What does it provide?
− High availability across data centers
− Automatic or manual transparent LUN swap
− Transparent VM vMotion between data centers
• How does it work?
− Based on 3PAR Remote Copy and vSphere ALUA
  • Primary RC volume presented with active paths
  • Secondary RC volume presented with passive paths
− Automated LUN swap arbitrated by a Quorum Witness (QW Linux ESX VM on a third site)
• Supported environments
− ESX vSphere 5.0, 5.1, 5.5, including HA, Failsafe, and uniform vSphere Metro Storage Cluster
− Up to the RC sync supported max of 2.6 ms RTT (~260 km)
• Requirements
− Two 3PAR disk arrays
− FC, iSCSI, or FCoE cross-site server SAN
− Two RC sync links (RCFC or RCIP*)
− 3PAR Remote Copy and Peer Persistence licenses
− 3PAR OS ≥ 3.1.2 MU2

Also see the VMware KB "Implementing vMSC using 3PAR Peer Persistence" and the HP white paper "Implementing vMSC using HP 3PAR Peer Persistence"
* RCFC strongly recommended; VMware vMSC certification is based on RCFC

[Diagram: a stretched vSphere cluster spanning Data Center 1 and Data Center 2 over LAN/WAN; each HP 3PAR array presents its primary (P) RC volumes with active paths and its secondary (S) RC volumes with passive paths; synchronous Remote Copy + Peer Persistence runs between the arrays, with the Quorum Witness (QW) in DC 3]

Available with 3PAR OS 3.2.1

Peer Persistence for Windows

• What does it provide?
− High availability across data centers
− Automatic or manual transparent LUN swap
− Transparent live migration between data centers
• How does it work?
− Based on 3PAR Remote Copy and MS MPIO
  • Primary RC volume presented with active paths
  • Secondary RC volume presented with passive paths
− Automated LUN swap arbitrated by a Quorum Witness (QW Linux Hyper-V VM on a third site)
• Supported environments
− Windows Server 2008 R2 and 2012 R2
− Stand-alone servers and Windows cluster
− Hyper-V
− Up to the RC supported max of 2.6 ms RTT (~260 km)
• Requirements
− Two 3PAR disk arrays
− FC, iSCSI, or FCoE cross-site server SAN
− Two RC sync links (RCFC or RCIP)
− 3PAR Remote Copy and Peer Persistence licenses
− 3PAR OS ≥ 3.2.1

[Diagram: a Windows Failover Cluster of Hyper-V hosts running a clustered application in a Hyper-V VM, spanning Data Center 1 and Data Center 2 over LAN/WAN; each HP 3PAR array presents primary (P) RC volumes with active paths and secondary (S) RC volumes with passive paths; synchronous Remote Copy + Peer Persistence runs between the arrays, with the Quorum Witness (QW) in DC 3]

3PAR HA/DT options and comparison

Criterion: 3PAR Peer Persistence | 3PAR CLX / HP Serviceguard Metrocluster
Primary use case: Application-transparent storage failover | Cluster service failover (downtime while services are restarted on the other site)
Integration method: Agentless and transparent, based on MPIO (ALUA) | Agent in the cluster stack (Windows cluster, HP Serviceguard)
Configurations: Uniform access (hosts in both sites need connectivity to the arrays in both sites); 1:1 Remote Copy | Non-uniform access (hosts in each site are connected to the local array only); 1:1 Remote Copy, SLD
Supported replication: Synchronous only, RCFC or RCIP | Synchronous or asynchronous, RCFC or RCIP
Supported configurations: VMware stand-alone or clustered; Windows stand-alone, Failover Cluster, Hyper-V | Windows Failover Cluster (CLX); HP-UX and Linux Serviceguard Metrocluster
Trigger: Manual or automated failover | Manual or automated failover
Manual granularity: Remote Copy group | Remote Copy group
Automated granularity: Full array | Clustered service/Remote Copy group
License: Replication Suite | Remote Copy and CLX software license

[Diagram: recovery site with servers, VMware infrastructure, virtual machines, vCenter Site Recovery Manager, and an HP 3PAR array]

Automated ESX disaster recovery

vSphere disaster recovery with Site Recovery Manager

• What does it do?
− Simplifies disaster recovery and increases reliability
− Integrates VMware vSphere infrastructure with HP 3PAR Remote Copy and Virtual Copy
− Makes disaster recovery protection a property of the VM
− Allows you to pre-program your disaster response
− Enables non-disruptive disaster recovery testing
• Requirements
− VMware vSphere
− VMware vCenter
− VMware vCenter Site Recovery Manager
− HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
− HP 3PAR Remote Copy Software
− HP 3PAR Virtual Copy Software (for disaster recovery failover testing)

Also see the 3PAR vSphere white paper

[Diagram: production site with servers, VMware infrastructure, virtual machines, and vCenter Site Recovery Manager; production LUNs are replicated via Remote Copy to DR LUNs at the recovery site, with Virtual Copy test LUNs for failover testing]

HP 3PAR Peer Persistence versus VMware SRM

Functionality: HP 3PAR Peer Persistence | HP 3PAR integrated with SRM
Concept: Dual-site active-active data centers | Dual-site active-standby data centers
Use case: High availability and disaster avoidance | Disaster recovery
Disaster on primary site: Transparent, non-disruptive failover of active 3PAR volumes; if vSphere Metro Storage Cluster (vMSC) is deployed, VMs can fail over automatically | Manually triggered storage failover and restart of selected VMs in the disaster recovery site
Additional use: Allows balancing load over the two data centers; active LUNs can be swapped transparently | Provides extensive failover test capabilities on the remote site on copies of production data
vMotion/Storage vMotion: Yes; one cluster over two data centers | No; one cluster in each data center
Granularity: HP 3PAR Remote Copy group | HP 3PAR Remote Copy group
Arbitration: Automated by the Quorum Witness on a third site | Human
Requirements: 3PAR Remote Copy and Peer Persistence licenses; Fibre Channel SAN across both sites; synchronous RCFC or RCIP* and max 2.6 ms RTT | 3PAR Remote Copy, Virtual Copy, and VMware SRM licenses; FC or IP replication connectivity between sites; synchronous RCFC or RCIP and max 2.6 ms RTT

* RCFC strongly recommended; VMware vMSC certification is based on RCFC

3PAR Recovery Manager for VMware vSphere
• Solution composed of:
− 3PAR Recovery Manager for vSphere
− 3PAR Virtual Copy
− VMware vCenter
• Use cases
− Expedite provisioning of new virtual machines from VM copies
− Rapid online recovery of files
− Snapshot copies for testing and development
• Benefits
− Hundreds of VM snapshots
− Granular, rapid online recovery
− Reservation-less, non-duplicative, without agents
− vCenter integration for superior ease of use

Array-based snapshots for rapid online recovery

Find product documentation at http://h18006.www1.hp.com/storage/software/3par/rms-vsphere/index.html
See the demo video at 3PAR Management plug-in and Recovery Manager for VMware

Recovery managers for Microsoft Exchange Server and Microsoft SQL Server
• RM MS Exchange Server and RM MS SQL Server
− Automatic discovery of Exchange and SQL Server servers and their associated databases
− VSS integration for application-consistent snapshots
− Support for Exchange Server 2003, 2007, and 2010
− Support for SQL Server 2005, 2008, and 2012
− Support for SQL Server running in a vSphere Windows VM
− Database verification using Microsoft tools
• Built on 3PAR Thin Virtual Copy technology
− Fast point-in-time snapshot backups of Exchange and SQL Server databases
− Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation
− Automatic recovery from snapshot
− 3PAR Remote Copy integration
− Exporting of database backups to other hosts
• Backup integration
− HP Data Protector
− Symantec NetBackup and Backup Exec
− Microsoft System Center Data Protection Manager

Find product documentation at:
http://h18006.www1.hp.com/storage/software/3par/rms-exchange/index.html
http://h18006.www1.hp.com/storage/software/3par/rms-sql/index.html

See the demo video at: 3PAR Recovery Manager for SQL

Recovery Manager for Microsoft Hyper-V
• Built on 3PAR Thin Virtual Copy technology

• Supports hundreds of snapshots with just-in-time, granular snapshot space allocation

• Create crash- and application-consistent virtual copies of Hyper-V environment

• VM restore from snapshot to original location

• Mount/unmount of virtual copy of any VM

• Time-based VC policy per VM

• Web GUI scheduler to create/analyze VC

• PowerShell cmdlets (CLI and scripting)

• Supported with:
− Windows Server 2008 R2 and 2012
− Stand-alone Hyper-V servers and Hyper-V Failover Cluster (CSV)
− F-Class, StoreServ 7000 and 10000

RME and RMS architecture

[Diagram: a production DB server (Exchange or SQL Server) writing to 3PAR production volumes; snapshots taken at 9:00, 13:00, and 17:00 are presented to the RM client and backup server, with an optional tape or D2D library]

• Off-host backup
• Direct restore from tape
• Direct mount of snapshot
• Restore from snapshot with file copy restore

RME & RMS & RMH VSS integration

1. Backup server requests RM agent to create 3PAR VC

2. RM agent requests MS Volume Shadow Copy Service (VSS) for database metadata details

3. RM agent calls MS VSS to create virtual copies for specific database volumes

4. VSS queries 3PAR VSS provider if 3PAR VC can be created

5. VSS sets database/VHD to quiesce mode

6. VSS calls 3PAR VSS provider to create virtual copies of volumes

7. 3PAR VSS provider sends commands to 3PAR array to create virtual copies of volumes

8. 3PAR VSS provider acknowledges VSS VC creation completed

9. VSS sets database /VHD back to normal operation

10. VSS acknowledges RM agent creation of virtual copies completed

11. RM agent sends virtual copies and application metadata info to backup server

[Diagram: numbered VSS flow (steps 1–11) between the backup server, the Recovery Manager agent, MS VSS, the Exchange/SQL database or VHD, the 3PAR VSS provider, and the 3PAR array]

Extended possibilities

RM Exchange and SQL Server in a CLX environment

• RM backup server at the remote secondary site (Site B) can actively manage Virtual Copy

• That means all the operations, including recovery, can be performed at the remote site

[Diagram: a single-copy cluster / SQL extended cluster (using CLX) spanning Site A and Site B with Exchange/SQL nodes 1–4; database Remote Copy keeps Virtual Copies at both sites; a local RM backup server at Site A and a remote RM backup server at Site B, which can recover at Site B using RM]

Concurrent database validations

Recovery Manager for Microsoft Exchange
• Validations can take hours to complete for large (TB-sized) databases
• Queuing and sequentially validating many databases can take a long time (hours to days)
• This enhancement ensures that the validations occur in parallel, mitigating the issue

Earlier versions (sequential): DB1 (2 TB), DB2 (2 TB), and DB3 (2 TB) validated one after another at ~3 hrs each; total ~9 hrs to complete

Since v4.4 (concurrent): DB1, DB2, and DB3 (2 TB each) validated in parallel, ~3 hrs each; total approx. 3 hrs to complete
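The sequential-versus-concurrent timing above can be illustrated with a thread pool: wall-clock time collapses from the sum of the runs to the longest single run. An illustrative sketch with simulated durations, not the RM implementation:

```python
# Timing sketch of the v4.4 change: three 2 TB databases, ~3 hrs each.

from concurrent.futures import ThreadPoolExecutor

def validate(db, hours=3):
    """Stand-in for a database validation run; returns a summary string."""
    return f"{db}: validated in ~{hours} hrs"

databases = ["DB1", "DB2", "DB3"]

# Earlier versions: queued one after another -> 3 + 3 + 3 = 9 hrs total
sequential_total = sum(3 for _ in databases)

# Since v4.4: submitted in parallel -> wall clock is the longest single run
with ThreadPoolExecutor(max_workers=len(databases)) as pool:
    results = list(pool.map(validate, databases))
concurrent_total = max(3 for _ in databases)

print(sequential_total, concurrent_total)   # 9 vs 3
```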

Recovery Manager Diagnostic Tool
• A tool that validates all the RM configuration parameters and generates reports indicating non-compliance
• Runs on the backup server
• Automatically probes all the servers registered in the recovery manager, including the backup server itself
• Checks all parameters required for a successful RM operation, such as:
− Database status
− VSS HWP configuration
− StoreServ connectivity
• Generates a report indicating success, warning, and error
• Advises the user of corrective action
• Displays high-level dashboard status
• Currently supported:
− RM Exchange Server
− RM SQL Server
− RM Hyper-V pending

Rapid, off-host backup recovery solution for Oracle databases

Recovery Manager for Oracle

Highlights
• Back up using HP 3PAR Virtual Copy
• Eliminate the backup performance impact on the production database by exporting and backing up the snapshot from a backup server
• Substantially reduce the time a database is in backup mode, hence reducing the media recovery time
• Rapid recovery from the Virtual Copy itself
• Integrated with HP 3PAR Remote Copy to provide a disaster recovery solution
• Integrated with popular third-party backup software
• Support for single datafile or tablespace restore

HP 3PAR Recovery Manager for Oracle
• Allows point-in-time copies of Oracle databases
− Non-disruptive, eliminating production downtime
− Uses 3PAR Virtual Copy technology
• Allows rapid recovery of Oracle databases
− Increases efficiency of recoveries
− Allows cloning and exporting of new databases
• Integrated high availability with disaster recovery sites
− Integrated 3PAR replication/Remote Copy for array-to-array disaster recovery
• Supported operating systems
− Oracle 10g and 11g
− RHEL 4, 5, and 6
− OEL 5 and 6
− Solaris 10 SPARC
− HP-UX 11.31
− IBM AIX 6.1, 7.1
• Supported backup applications
− HP Data Protector
− Oracle RMAN
− Symantec NetBackup

See also: http://h18006.www1.hp.com/storage/software/3par/rms-oracle/index.html

[Diagram: Oracle production volumes with snapshots at 9:00, 13:00, and 17:00 presented to a backup server and an optional tape or D2D library; decision support and test hosts work from the copies while user access continues uninterrupted]

1. Fast automated restores
2. Up-to-date DSS data
3. Test with current data
4. DB images presented to backup server
5. Full, non-disruptive Oracle backups

What is Recovery Manager Central? (1 of 3)
Snapshot-based data protection platform

Two elements
• Recovery Manager Central for VMware
− Managed via vCenter plug-in; for VM backups only (application-consistent)
• Recovery Manager Central Express Protect
− Managed via web browser; for all other snap backups (crash-consistent)

Fosters integration of 3PAR and StoreOnce
• Near-instant recovery
• Longer-term data retention
• Catalyst integration as backup target

Flat Backup: data streams from 3PAR to StoreOnce (in v1.0, the data path goes through the RMC VM until 1.1 or 2.0, depending on when RMC is embedded in StoreOnce)

Requires:
• 3PAR StoreServ system (any currently supported model*)
• StoreOnce (software v3.12.x to support Backup Protect**)
• StoreOnce Recovery Manager Central 1.0
• VMware 5.1 and 5.5

* 7000 series and 10000 series will have full functionality; F-Class and T-Class will be limited
** Catalyst over FC supported in controlled release in 3.11.x

What is Recovery Manager Central? (2 of 3)

[Diagram: StoreOnce Recovery Manager Central streaming flat backups of 2.0 TB volumes from 3PAR StoreServ to StoreOnce]

What is Recovery Manager Central? (3 of 3)
It is not a replacement for an existing backup application

RMC v1.0:
• 3PAR only—cannot protect other storage platforms
• No Oracle, SQL Server, Exchange Server, or Hyper-V support (unless on a VM)
• No Hyper-V or KVM VMs
• No “bare-metal” recovery (unless on a VM)
• No granular-recovery capability

It is intended to be a complementary piece alongside a backup app
• Faster and cheaper alternative to a backup app for non-granular protection

RMC Value Proposition
• Converged availability and backup service for VMware
− Flat backup alternative to traditional backup apps
  • Performance of Virtual Copy snaps
  • Reliability and retention of StoreOnce
  • Speed of backups and restores via SnapDiff
• Control of VMware protection passes to VMware admins
− Managed from within vSphere
• Extension of primary storage
− Snapshots key to the entire data protection process
• Common integration and API point for backup applications, reporting, and security

Learning check

1. Complete the following table by filling in the maximum round trip time in milliseconds for each supported topology

Remote Copy type Max supported latencySynchronous RC FC

Synchronous RCIP

Asynchronous Periodic RC FC

Asynchronous Periodic RCIP

Asynchronous Periodic RC FCIP

Learning check answer

1. Complete the following table by filling in the maximum round trip time in milliseconds for each supported topology

Remote Copy type Max supported latencySynchronous RC FC 2.6 ms RTT*

Synchronous RCIP 2.6 ms RTT*

Asynchronous Periodic RC FC 2.6 ms RTT*

Asynchronous Periodic RCIP 150 ms RTT*

Asynchronous Periodic RC FCIP 120 ms RTT*

Learning check

2. What are the two elements of Recovery Manager Central, and how are they managed?
________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

2. What are the two elements of Recovery Manager Central, and how are they managed?

• Recovery Manager Central for VMware
− Managed via vCenter plug-in; used for VM backups only; application-consistent

• Recovery Manager Central Express Protect
− Managed via web browser; used for all other snap backups; crash-consistent


Federation and data mobility

What’s the benefit?

Federated storage

Storage federation

• Provides peer-to-peer rather than hierarchical functionality, as with compute federation

• Distributed volume management across self-governing, homogeneous peer systems allows resource management at the data center or metro level rather than at the device-by-device level

• Provides secure, non-disruptive data mobility at the array level, not the host level

• Eliminates the risk of over-provisioning a single array

To federate means to cause to join into a union or similar association; thus, federated means to be united under a central government
(Dictionary)

The transparent, dynamic, and non-disruptive distribution of storage resources across self-governing, discrete, peer storage systems
(Marc Farley, StorageRap, April 2010)

There are more complex and less complex solutions

SAN virtualization
• Traditional SAN virtualization appliances introduce more layers in the I/O stack, and thus more dependencies and more to manage:
– EMC VPLEX
– IBM SVC
– FalconStor NSS
– DataCore SANsymphony
• 3PAR Peer Persistence provides transparent storage presentation without the burden of an additional virtualization layer

[Diagram: Traditional SAN virtualization across DC 1 and DC 2—Layer 1 server, Layer 2 server SAN, Layer 3 storage virtualization appliance, Layer 4 storage SAN (FC), Layer 5 storage. HP 3PAR federation with Peer Persistence—Layer 1 server, Layer 2 SAN (FC), Layer 3 federated 3PAR storage]

Federated storage vs. SAN virtualization

Requirement: SAN virtualization | Federated storage
Flexibility: Yes: supports changing workloads | Yes: supports changing workloads
Scalability: Some: limited by most designs | Yes: high levels of scale possible
Efficiency: Some: improves utilization but adds to cost | Yes: improves utilization with limited incremental cost
Simplicity: No: complex; storage capacity additions may be disruptive | Yes: uses capabilities in the underlying storage without complexity
Reliability: Some: failover, but adds network and management failure points | Yes: failover without additional management or layers

Source: Evaluator Group, "Storage Federation – IT Without Limits," Russ Fellows

HP 3PAR features

Priority Optimization

Dynamic Optimization

Adaptive Optimization

HP 3PAR and VMware VVOLs Online Import

HP 3PAR Priority Optimization—3.1.2 MU2
Introduced in June 2013 via 3PAR OS 3.1.2 MU2
• Allows customers to ensure quality of service and better use of storage resources
• Enables setting a maximum performance threshold for front-end IOPS and/or bandwidth
• Configured via HP 3PAR VVSETs
• Can be enabled, disabled, or modified in real time from the GUI or CLI
• Host agnostic
− No host agents are required
• No physical partitioning of resources within the storage array is required
• Supports multi-tenant environments

[Chart: array performance for App A, App B, App C, and all other apps, each held under its max limit]

HP 3PAR Priority Optimization—3.1.3

HP 3PAR OS 3.1.3 extends Priority Optimization with:
• Priority levels (High, Normal, Low)
• Min goal: minimum floor below which QoS will not throttle a volume
• Max limit: maximum threshold for front-end IOPS and/or bandwidth (available since 3.1.2 MU2)
• Latency goal: service-time target the system will try to achieve for a given workload
• System busy level: dynamic caps adjusted based on real-time overall system workload and latency goals
• Virtual domain: enables QoS rules on different virtual domains
• SR-on-Node alerts: create alerts for latency goals

[Chart: array performance for App A, App B, App C, and all other apps at high, normal, and low priority, bounded by max limits, min goals, and a latency goal]

HP 3PAR Priority Optimization
• Performance caps are dynamically adjusted based on the System Busy level
• The System Busy level is adjusted based on real-time latency and the latency goal

[Chart: IOPS cap as a function of System Busy level (10% to 100%)—a high-priority workload scales between Max 10 k and Min 5 k, medium between Max 8 k and Min 4 k, low between Max 6 k and Min 3 k]
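One plausible way to read the chart is a cap that slides linearly from the Max limit (idle system) down to the Min goal (fully busy). The sketch below is an assumed model for illustration, not the actual 3PAR algorithm:

```python
# Assumed linear model of a per-app IOPS cap between its Min goal and Max
# limit as the System Busy level rises. Illustrative only.

def iops_cap(min_goal, max_limit, busy_level):
    """Scale the cap from max_limit (busy=0.0) down to min_goal (busy=1.0)."""
    busy_level = min(max(busy_level, 0.0), 1.0)      # clamp to [0, 1]
    return max_limit - (max_limit - min_goal) * busy_level

# High-priority workload from the chart: Min 5 k, Max 10 k
print(iops_cap(5000, 10000, 0.0))   # idle system -> 10000.0 (max limit)
print(iops_cap(5000, 10000, 0.5))   # half busy   -> 7500.0
print(iops_cap(5000, 10000, 1.0))   # saturated   -> 5000.0 (never below min goal)
```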

Manual or automatic tiering

HP 3PAR Dynamic and Adaptive Optimization
• 3PAR Dynamic Optimization: LUN movement between tiers (for example, from CPG A to CPG B)
• 3PAR Adaptive Optimization: region-level (sub-LUN) block movements between tiers (Tier 0, Tier 1, Tier 2) based on policies

Storage tiers—HP 3PAR Dynamic Optimization

[Chart: performance versus cost per usable TB for SSD, Fast Class, and Near Line drives across RAID 1, RAID 5, and RAID 6]

In a single command, non-disruptively optimize and adapt:
• Cost
• Performance
• Efficiency
• Resiliency

HP 3PAR Dynamic Optimization—Use cases
Deliver the required service levels for the lowest possible cost throughout the data lifecycle:
• 10 TB net on RAID 10 with 300 GB FC drives → 10 TB net on RAID 50 (3+1) with 600 GB FC drives (~50% savings) → 10 TB net on RAID 50 (7+1) with 2 TB SATA-Class drives (~80% savings)

Accommodate rapid or unexpected application growth on demand by freeing raw capacity:
• 20 TB raw at RAID 10 yields 10 TB net; converting to RAID 50 keeps the 10 TB net and frees 7.5 TB of net capacity on demand

Tuning example with Dynamic Optimization
• Tune a virtual volume from a four-drive NL R5 CPG to a 16-drive FC R1 CPG

[Chart: Iometer throughput before the tune, when the tune started, when it finished, and after the tune]

Part of Dynamic Optimization

Online virtual volume conversion

Non-disruptively migrate VVs:
• From fat to thin provisioned (TPVV) and vice versa
• From fat to thin dedupe (TDVV) and vice versa
• From thin provisioned to thin dedupe and vice versa

The source volume can be:
• Discarded
• Kept
• Kept and renamed

Addressing I/O density with 3PAR architecture

[Figure: cumulative access rate % versus cumulative space % for a set of real-world CPGs — Exchange database and log, Oracle, Oracle staging, VMware, Windows, and UNIX workloads — showing that a small fraction of the space serves most of the I/O]

These SSD I/O densities can be achieved with 3PAR arrays based on practical field information:

% of total SSD net capacity    % of total I/O
 1                             33
 2.5                           50
 5                             66
 10                            80
 20                            90
 35                            99
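The field-observed skew above can be turned into a rough sizing aid by interpolating between the table's points. The linear interpolation between samples is an assumption for illustration; the table values themselves come from the slide.

```python
# (fraction of SSD net capacity, fraction of total I/O) from the table above
SKEW = [(0.0, 0.0), (0.01, 0.33), (0.025, 0.50), (0.05, 0.66),
        (0.10, 0.80), (0.20, 0.90), (0.35, 0.99), (1.0, 1.0)]

def io_fraction(capacity_fraction):
    """Estimate the fraction of total I/O a given fraction of capacity on
    the fastest tier could absorb, by linear interpolation of SKEW."""
    for (x0, y0), (x1, y1) in zip(SKEW, SKEW[1:]):
        if x0 <= capacity_fraction <= x1:
            return y0 + (y1 - y0) * (capacity_fraction - x0) / (x1 - x0)
    raise ValueError("capacity_fraction must be in [0, 1]")
```

For example, `io_fraction(0.15)` estimates that 15% of the capacity on SSD would absorb about 85% of the I/O, interpolated between the 10% and 20% rows.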

Improve storage utilization

HP 3PAR Adaptive Optimization (1 of 4)

Traditional deployment:
• Single pool of the same disk drive type, speed and capacity, and RAID level
• Number and type of disks are dictated by the max IOPS + capacity requirements
• Result: a single pool of high-speed media with wasted space above the I/O distribution curve

Deployment with HP 3PAR AO:
• An AO virtual volume draws space from two or three different tiers
• Each tier can be built on different CPGs, disk types, RAID levels, and numbers of disks
• Result: high-, medium-, and low-speed media pools sized to the actual I/O distribution

Efficient to own and manage

HP 3PAR Adaptive Optimization (2 of 4)

• Defined in policies by tiers and schedules
• Optimizes performance and cost by moving regions between tiers
• Up to 128 individual policies per 3PAR array
• Each policy can be scheduled individually
• A policy can run automatically or be manually triggered
• Part of 3PAR OS with an in-node SQLite database
• No installation required
• Enabled by a license key
• An AO mode is cost-based, balanced, or performance-based
− Cost: More data is kept in lower tiers
− Performance: More data is kept in higher tiers
− Balanced (default): Balanced between performance and cost
• Two or three tiers per policy can be defined
• Each tier is defined as a CPG
• A CPG defines drive type, RAID level, redundancy level, and step size

Read more in the 3PAR StoreServ Adaptive Optimization white paper.

Configuring AO tiers

HP 3PAR Adaptive Optimization (3 of 4)

Scheduling AO

HP 3PAR Adaptive Optimization (4 of 4)

• Tier movement is based on analyzing these parameters:
− Average tier service times
− Average tier access rate densities
− Space available in the tiers
• Tier movement can be started either:
− Manually
− Based on a schedule
• The measurement interval can be defined between one hour and seven days
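The decision logic above can be sketched as a classifier over access-rate density (IOPS per GiB). The thresholds and the pure density-based rule are assumptions for illustration; the real AO algorithm also weighs tier service times and available space, as the slide notes.

```python
def plan_moves(regions, busy_threshold, idle_threshold):
    """Classify sub-LUN regions for tier movement by access-rate density.
    regions: iterable of (name, measured IOPS, size in GiB).
    Hypothetical thresholds; sketch of the idea only."""
    promote, demote = [], []
    for name, iops, size_gib in regions:
        density = iops / size_gib
        if density >= busy_threshold:
            promote.append(name)   # hot region -> candidate for a higher tier
        elif density <= idle_threshold:
            demote.append(name)    # cold region -> candidate for a lower tier
    return promote, demote

hot, cold = plan_moves([("r1", 900, 1), ("r2", 5, 10), ("r3", 50, 2)],
                       busy_threshold=100, idle_threshold=1)
```

Regions between the two thresholds (like `r3` here) stay where they are, which is why AO only moves data that clearly benefits from a different tier.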

Mid-range evolution with the lowest-risk upgrade

HP 3PAR StoreServ Online Import

• HP EVA
− Trusted—100,000 arrays installed worldwide
− Recognized for simplicity
− Leading hardware efficiency
• EMC arrays
− CX4 and VNX
− Using the Peer Motion utility
• HP 3PAR
− Tier 1 architecture and features
− Clustered, scalable controller architecture
− Industry-leading efficiency technologies
− Multi-tenancy for mixed workloads

Online Import moves data from EVA and EMC arrays to HP 3PAR StoreServ, a uniquely agile Tier 1 storage platform.

HP 3PAR Online Import Utility for EMC Storage:
• Is an orchestration platform that enables data migration from a source EMC storage system to a destination 3PAR StoreServ storage array using a scriptable CLI
• Allows virtual disks to be migrated from an EMC CLARiiON CX4, VNX, or VMAX source system to a 3PAR StoreServ destination storage system with minimal disruption to data access
• Orchestrates the movement of data from the source while servicing I/O requests from the hosts; data remains online during migration
• Integrates as a plug-in for the 3PAR Online Import Utility framework to import data from a source EMC storage array
• Requires 3PAR OS 3.1.3 MU1 or later for VNX/CX4 and 3PAR OS 3.2.1 or later for EMC VMAX
• Uses the 3PAR Online Import license that is built into the 3PAR OS Suite; Online Import Software is also available as an add-on 180-day license for the 3PAR StoreServ 7000, 7000 Converged Controllers, and 10000 platforms, including the 3PAR StoreServ 7450 all-flash array
• Can be used by storage administrators for scripting migrations
• Supports the REST API

What’s new?

• Support for EMC VMAX arrays: VMAX, VMAX SE, VMAX 10K, 20K, and 40K with Enginuity 5876
• Expanded host operating system support: Windows Server 2003, R2 (stand-alone and clusters); Windows Server 2008 (stand-alone and clusters); RHEL 6 cluster support; RHEL 5 (stand-alone and clusters)
• Support for 3PAR StoreServ 7000 Converged Controllers and 3PAR OS 3.2.1
• Reduced outage window for Linux migrations using the online migration method—Linux migration no longer requires a reboot of the host
• Updated Online Import for EMC white paper and data migration guide

HP 3PAR Online Import Utility for EMC Storage deployment

How does it work?

[Figure: the Online Import Utility runs on a Windows client—a physical Windows server or a VM—and orchestrates migration from an EMC VNX, CX4, or VMAX source to a 3PAR StoreServ destination]

Supported environments—at General Availability

3PAR Online Import for EMC Storage

Source arrays (EMC family/models/FW):
• EMC CLARiiON CX4 family—Models CX4-120, 240, 480, 960; supported FW: FLARE 4.30.xx
• EMC VNX family—Models VNX 5100, 5300, 5500, 5700, 7500; supported FW: VNX OE for Block 5.32.xx
• EMC VMAX family (Gen 1 and 2)—Models VMAX, VMAX SE, VMAX 10K, 20K, 40K; supported FW: Enginuity 5876

Host operating system platforms:
• Windows Server 2003, R2 (stand-alone, clusters)
• Windows Server 2008, R2 (stand-alone, clusters)
• Windows Server 2012 (stand-alone, clusters)
• RHEL 6 (stand-alone, clusters)
• RHEL 5 (stand-alone, clusters)
• Hyper-V 2008 R2* (stand-alone)

Multipath support:
• Native MPIO/DSM
• No PowerPath (PP) support (clients must remove PP from the host before data migration begins)

Destination arrays (HP 3PAR StoreServ models/3PAR OS support):
• HP 3PAR StoreServ 7200, 7400, 7450, 10400, 10800, 7200c, 7400c, 7440c, 7450c
• VNX, CX4: 3PAR OS 3.1.3 MU1 or later
• VMAX: 3PAR OS 3.2.1 or later

What is required for 3PAR Online Import to work

For data migration from EMC Storage to 3PAR StoreServ:

EMC Solutions Enabler (to enable SMI-S access to EMC arrays)
• Installed on a server
• Free downloadable utility from the EMC website (user account required)

HP 3PAR Online Import Utility (OIU) for EMC Storage
• Client/server-based application
• Available as a free download from HP Software Depot
• Server component installed on a server (physical Windows server or within a VM)
• Client components can be installed on the same server as the OIU or on a Windows client

HP 3PAR Online Import Software
• No new SKUs added for Online Import for EMC Storage
• 180-day Online Import license ships with the 3PAR OS Suite
• Supported (at GA) on destination 3PAR StoreServ models 7200, 7400, 7450, 10400, and 10800; also supported with 3PAR StoreServ 7000 Converged Controllers
• Supported with minimum 3PAR OS versions:
− VNX/CX4: 3.1.3 MU1 or later
− VMAX: 3.2.1 or later

Limitations

HP 3PAR Online Import for EMC Storage

Supported migration methods:
• Minimally disruptive method (MDM) for Windows and Linux hosts—a single outage with two successive reboots
• Online method for Linux hosts—no reboot required

PowerPath must be removed from the host before data migration.*

* If PowerPath is installed, the volumes will have to be re-presented for data migration after PP has been removed. This requires a short application outage.

Five simple stages of the migration process

3PAR Online Import for EMC Storage

1. In the Online Import Utility, create the migration: add source, add destination, create migration
2. Zone the host to the 3PAR
3. Configure host multipathing
4. Shut down the host, unzone from the source, and start the migration
5. Start the host and validate the application

Learning check

1. Fill in the fields in both columns as they apply to SAN virtualization compared to Federated storage

Learning check answer

1. Fill in the fields in both columns as they apply to SAN virtualization compared to Federated storage

Learning check

2. List the four key features of HP 3PAR data management____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

2. List the four key features of HP 3PAR data management• Priority Optimization• Dynamic Optimization• Adaptive Optimization• Online Import


Management and support

HP 3PAR StoreServ management vision

Polymorphic simplicity (adj.): existence in several forms, shapes, and sizes

• One management platform
• Consolidated management tools
• Modern look and feel
• Web-based
• Consistent experience for file and block

[Figure: one converged storage management experience spanning entry-level to high-end systems, plus system reporting and service and support]

3PAR StoreServ Management Console

• Replacement for the existing management console
• Converged management for the entire HP 3PAR product line
• Intuitive web-based UI with dashboard overviews
• Modern and consistent look and feel
• Redesign of hundreds of IMC screens
• Better usability
• Replacement for the existing System Reporter
• Standards-based, with integration with HP OneView

Dashboard

Health, performance, and capacity at a glance

Mega Menu

• A quick-access interface to all points within the SSMC and the objects monitored
− From the Mega Menu, a user is linked directly to any of the listed context areas
• Converged file and block management and reporting

Search

• Global search allows users to find objects in seconds
• Search can be global to all systems and objects or confined to a given object
• Intelligent search remembers previous queries

Examples: a user is notified that the array is experiencing delays on a host identified by the name “ATC”; a user has a server attached to the array identified only as “WIN”

Map views (1 of 2)—Physical map views

System—map view

Map views (2 of 2)—Logical map views

VVset—map view

System Reporter in SSMC (1 of 2)

• Modern look and feel
• Focused on ease of use
• Based on SR-on-Node data (no external DB setup)
• Zoom in online from daily data to high resolution

System Reporter in SSMC (2 of 2)

• In-depth capacity information
• Point-and-click “at time” detailed reports

One console does it all—easy and straightforward

HP 3PAR Management Console

[Screenshot: the management window with the main menu bar, main tool bar, manager pane, common actions panel, alerts/tasks/connections area, and status bar]

HP 3PAR—Management options

• 3PAR Management Client (GUI)
− Fat-client GUI for Windows and Red Hat Linux
− Storage management GUI
• Command-line interface
− 3PAR CLI or ssh
− Storage management interface to the storage server—very rich, complete command set
• SMI-S
− Management from third-party management tools
• Web API
− RESTful interface
• External key manager (ESKM)
− HP Enterprise Secure Key Manager or SafeNet KeySecure
• Service Processor (SP)
− Physical or virtual machine (vSphere or Hyper-V VM)
− Health checks by collecting configuration and performance data
− Reporting to HP 3PAR Central
− Anomalies reported back to the customer via OSSA
− Array maintenance

[Figure: the management LAN carries GUI, CLI/SSH, SMI-S, and Web API access plus the ESKM connection; the SP instance connects over Ethernet to the 3PAR node management ports (physical SP on the 10000, virtual or optional physical SP on the 7000)]

Fine-grained privilege assignment

3PAR direct manageability

Via the 3PAR OS CLI and the HP 3PAR Management Console:
▸ Simple, comprehensive, consolidated administration
▸ Powerful, fine-grained control
▸ Scriptable, with highly consistent syntax
▸ LDAP support, IPv6
▸ Multiple assignable roles

Roles:
• Super—Access to all operations
• Edit—Access to most operations
• Basic Edit—Create and unmount, cannot delete
• Create—Create volumes but cannot delete
• My Snapshot—Create/refresh snapshots (for test/dev)
• Service—Limited operations for servicing
• Browse—Allows read-only access
• Recovery Manager—Only for RM operations
• Adaptive Optimization—AO-only operations (pre-3.1.2)

Provides a well-defined API for performing storage management tasks

HP 3PAR Web Services API

The developer’s guide and a sample client can be downloaded from HP Software Depot at http://software.hp.com

Initial functionality with 3.1.2:
• Creation and removal of virtual volumes, virtual copies, CPGs, and VLUNs
• Query of all volumes, CPGs, and VLUNs and their properties

New with 3.1.2 MU2:
• Modification of virtual volumes, virtual copies, and CPG parameters
• Creation, removal, and modification of hosts
• Query of a single item (as opposed to querying an entire collection)
• Query of available space and general system information

New with 3.1.3—functions equivalent to the following CLI commands:
• createvvset, setvvset, removevvset
• createhostset, sethostset, removehostset
• createvvcopy for a single VV and a VV set
• createsv for a VV set
• createvlun with VV set and host set support
• setqos (supports the new 3.1.3 QoS features)
• showportdev -fcswitch, showportdev all
• showtask

New with 3.2.1 MU1:
• Support for Remote Copy
• New query facilities
• List user privileges
• Create a thin deduplicated volume (TDVV)
• Convert a TPVV to a TDVV
• Create a TDVV physical copy
• New fields in the Spacereporter, Volume, and Capacity objects to show compaction and deduplication capacity efficiency numbers
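As a sketch of how a client consumes the Web Services API, the snippet below builds the login and volume-creation requests as URL/JSON pairs. The endpoint paths (`/api/v1/credentials`, `/api/v1/volumes`), field names, and base URL are assumptions based on common WSAPI conventions—verify them against the developer's guide before use.

```python
import json

# Hypothetical management address; the WSAPI listens on the array itself.
BASE = "https://array.example.com:8080/api/v1"

def session_request(user, password):
    """Build the login request; a successful POST returns a session key
    that is passed in a header on subsequent calls."""
    return BASE + "/credentials", json.dumps({"user": user, "password": password})

def create_volume_request(name, cpg, size_mib, thin=True):
    """Build the request body for creating a (thin) virtual volume
    drawn from the given CPG."""
    body = {"name": name, "cpg": cpg, "sizeMiB": size_mib, "tpvv": thin}
    return BASE + "/volumes", json.dumps(body)

url, body = create_volume_request("vol01", "FC_r5_cpg", 102400)
```

Separating request construction from transport like this keeps the payloads testable without an array; any HTTP client can then POST them.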

Converged storage management

HP StoreFront Mobile Access for 3PAR StoreServ

• Access: 24x7 access from virtually any location
− Remote access to 3PAR StoreServ using Android and now also iOS-based devices
• Insight: Monitor storage system statistics and properties
− Capacity utilization, CPGs, virtual volumes, device types, and more
• Automation: Receive critical alerts in real time to reduce risk
− Instant notification of error conditions or issues that need immediate action
• Security: Encrypted login for secure remote access
− Browse-only access enforcement integrated with 3PAR role-based security

See more: hp.com/go/storefrontmobile

HP 3PAR Remote Support (1 of 2)

• Allows remote service connections for assistance and troubleshooting to deliver faster, more reliable response and quicker resolution time
• Transmits only diagnostic data
• Uses secure service communication over the Secure Sockets Layer (HTTPS) protocol between HP 3PAR Storage systems and HP 3PAR Central
• The optional Secure Service Policy Manager allows the customer to individually enable or disable remote support capabilities and logs all remote support activities
• If customer security rules do not allow a secure Internet connection, a support file can be generated on the Service Processor and sent to HP by mail or FTP on a regular basis

For more details, read the ”HP 3PAR Secure Service Architecture” white paper in the HP Enterprise Library at http://www.hp.com/go/enterpriselibrary

HP 3PAR Remote Support (2 of 2)

[Figure: on the customer side, 3PAR arrays and the 3PAR Management Console connect through DNS/proxy—with an optional Secure Service Policy Manager and optional mail server—over the Internet (HTTPS, port 443) to the HP 3PAR Secure Service collector server at HP 3PAR Central, where OSSA, the HP mail server, and HP Global Services and Support representatives consume the data]

HP 3PAR Central—What’s in it for you?

Proactive, remote error detection:
• Proactive system scans and health checks
• Over-Subscribed System Alerts

World-class support:
• HP 3PAR Central support hub staffed with experts
• 24x7x365 monitoring
• Automated parts dispatch

Security and control:
• Secure, encrypted communication
• Exclusive control of remote access policy configuration
• Viewable audit log
• Simple SW upgrades

Find more details in the 3PAR Central data sheet.

HP 3PAR Central—Get connected and back to business

• Protect storage QoS—Protect your business with proactive fault detection and error resolution
• Faster support with less downtime—35%* higher availability and 64%* faster time to resolution when onsite support is required; days become minutes
• Stay informed and in control—Complete control of connectivity, and software upgrades on your schedule

* Source: HP measurement of the installed base of HP’s StoreServ remote support customers as of Q2 2014

Part of HP 3PAR Remote Support

HP 3PAR Over-Subscribed System Alert tool

• OSSA performs periodic proactive utilization checks on key system elements
• Customers with active service contracts receive email messages when systems seem to be oversubscribed in one of these areas:
− Active VLUNs
− Balanced drives
− CPU utilization
− Disk IOPS
− Disk port bandwidth per node pair
− Initiator distribution per node
− Initiators per port
− Initiators per system
− PCI bus bandwidth
− Port bandwidth
− Raw capacity
• Data is collected periodically from the HP 3PAR StoreServ Storage server using the HP 3PAR Secure Service Architecture
• The array must be signed up for Remote Support

Dear HP 3PAR StoreServ Client,

You are receiving this email because HP Technical Services has identified HP 3PAR StoreServ SN: xxxxxxx as displaying the exception listed below via the automated Over Subscribed System Alert (OSSA) tool.

The OSSA tool utilizes data that is collected periodically from HP 3PAR StoreServ and sent to HP 3PAR Central to perform proactive checks on key system utilization elements. The intent is to provide clients with valuable information to keep the storage array running optimally.

Data collected by the OSSA tool is scanned to identify exceptions in the following eleven critical areas: Active VLUNs, Balanced Drive, CPU Utilization, Disk IOPS, Disk Port Bandwidth per node pair, Initiator Distribution per Node, Initiators Per Port, Initiators Per System, PCI Bus Bandwidth, Port Bandwidth and Raw Capacity.

A report is generated each time an exception is detected. A new report will be generated weekly while the exception persists.

OSSA Report Details:

Serial Number: xxxxxxx Model: 7400 HP 3PAR OS: 3.1.2.484 Nodes: 2

The Disk IOPS is checked based on performance files collected once every four hours. An OSSA report is generated when one or more disks have exceeded the Disk IOPS threshold in 3 out of 6 files collected. This relates to a rolling 24-hour period. Refer to the following table for the IOPS threshold based on Disk Type:

Physical disk type                  Defined threshold value (IOPS)
Nearline (NL)                       75
Fibre Channel 10K RPM (FC 10)       150
Fibre Channel 15K RPM (FC 15)       200

PD Id   Disk type   IOPS   Date         Time
24      NL          129    07/29/2014   10:31:49
24      NL          213    07/29/2014   15:17:44
24      NL          90     07/29/2014   05:45:54
25      NL          169    07/29/2014   10:31:49
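The Disk IOPS check described in the report—threshold exceeded in 3 of the 6 four-hourly performance files—can be sketched directly from those rules:

```python
# IOPS thresholds per physical disk type, from the table above
THRESHOLDS = {"NL": 75, "FC10": 150, "FC15": 200}

def ossa_report_needed(disk_type, samples, required=3, window=6):
    """An OSSA report fires when a disk exceeds its IOPS threshold in at
    least `required` of the last `window` performance files (collected
    every four hours, so six files cover a rolling 24-hour period)."""
    recent = samples[-window:]
    breaches = sum(1 for iops in recent if iops > THRESHOLDS[disk_type])
    return breaches >= required
```

For NL disk 24 in the sample report, three or more samples above 75 IOPS over 24 hours would trigger a report, regenerated weekly while the exception persists.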

Email-alert example

Secure remote support

HP 3PAR Virtual Service Processor

• Cost-efficient, secure gateway for remote connectivity for the StoreServ 7000 arrays
• Effortless, one-click configuration
• Supported on:
− VMware vSphere (4.x, 5.x)
− Microsoft Hyper-V (Windows Server 2008, R2, or 2012)
• Enables:
− Remote, online SW upgrade
− Proactive fault detection with remote call-home diagnostics
− Remote serviceability
− Alert notifications
• Optional HW Service Processor available

HP 3PAR Policy Manager

Provides:
• Centralized audit log to facilitate security audits
• Centralized policy administration for all HP 3PAR Storage systems
• Complete control over policy administration, including:
− File Upload—File uploads of diagnostic data to HP 3PAR Central are allowed or disallowed
− File Download—File downloads from HP 3PAR Central are allowed or disallowed
− Remote Session—Remote sessions for remote serviceability can or cannot be established with HP 3PAR Central
− Always Allow—All remote connection requests are allowed
− Always Deny—All remote connection requests are denied
− Ask—Approval is needed via email within a configured timeout window from the configured customer administrator

Policy Manager is software installed on a separate, customer-provided Windows server.

Learning check

1. Which HP 3PAR applications are found in the Management Console?_________________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

1. Which HP 3PAR applications are found in the Management Console?3PAR Thin Provisioning, Virtual Copy, Dynamic Optimization, Virtual Domains, and Remote Copy


Virtualization integration

• VMware
• Hyper-V

HP 3PAR Utility Storage is the perfect fit for virtualized environments

3PAR VMware integration

• Efficient integration of HP 3PAR Thin Technologies
• Simplified storage administration with:
− vCenter Server integration
− vCenter Operations Manager integration
− VAAI and VASA support
− vCenter Site Recovery Manager integration
• High availability and disaster tolerance thanks to vSphere Metro Storage Cluster certification
• Greater virtual machine density thanks to:
− Inherent wide striping
− Mixed workload support
• Easy recovery and replication using HP 3PAR Recovery Manager Software for VMware vSphere

See also: http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-4023ENW.pdf

HP OneView for VMware vCenter Server

An integrated application with three modules (formerly HP Insight Control):

• Core module—Provides the framework required by the Server Module and the Storage Module
• Server module—Provides server hardware management capabilities, including comprehensive monitoring, firmware update, ESX/ESXi image deployment, remote control, end-to-end monitoring for Virtual Connect, and power optimization for HP servers in the VMware environment
• StoreFront module (formerly the Storage Module)—Provides storage provisioning, configuration, and status information for mapping VMs, datastores, and hosts to LUNs on HP storage arrays in the VMware environment

The modules integrate with the VMware vCenter Server and are accessible from both the legacy vSphere client and the vSphere web client.

Single pane of glass from VMware vCenter to your HP Converged Infrastructure

HP OneView StoreFront Module for vCenter• The StoreFront module enables you to:

− Map the VMware virtual environment to HP storage and provide detailed contextual storage information

− Create/expand/delete VMware data stores − Create virtual machines from a template − Clone virtual machines from an existing virtual machine − Delete an unassigned volume − Integrated with vSphere client and the new vSphere

Web Client− Visualize complex relationships between VMs and

storage− Easily manage Peer Persistence for HP 3PAR StoreServ

• Supports HP MSA, EVA/P6000, StoreVirtual, XP/P9500, 3PAR

Network

Servers

Storage

Management software

New functionality added to the StoreFront module (formerly the Storage Module)

HP OneView for VMware vCenter Server 7.4

• Full interoperability with vSphere 5.5
• Recovery Manager 2.5 for HP 3PAR StoreServ integration in the plug-in
• HP 3PAR StoreServ VASA integration in the plug-in
• Switch Peer Persistence support in storage provisioning actions for HP 3PAR StoreServ systems
• New storage provisioning wizards in vSphere Web Client 5.5
• New storage provisioning wizards now support Peer Persistence for HP 3PAR StoreServ systems—the “auto_failover” and “path_management” parameters are set
• Peer Persistence configuration diagram
• Graphical view of VM-to-volume information

Example: Peer Persistence manual transparent storage failover

HP OneView for VMware vCenter

HP StoreFront Analytics Pack for VMware vCOps

• Provides detailed 3PAR and VM reporting for I/O ports, physical drives, CPGs, volumes, capacities, performance, response times, I/O sizes, health, and more
• Free features: health information for the array
• Licensed features: capacity information and performance information (key performance metrics: bandwidth, IOPS)

Effortless management thanks to the 3PAR vCenter Operations Manager integration

See the demo video at: http://www.youtube.com/watch?v=GF3ZQF_k5ME&feature=player_detailpage

HP 3PAR Recovery Manager for VMware vSphere

Array-based snapshots for rapid online recovery

• Solution composed of:
− 3PAR Recovery Manager for VMware vSphere
− 3PAR Virtual Copy
− VMware vCenter
• Use cases:
− Expedite provisioning of new virtual machines from VM copies
− Rapid online recovery of files
− Snapshot copies for testing and development
• Benefits:
− Hundreds of VM snapshots for granular, rapid online recovery
− Reservation-less, non-duplicative, without agents
− vCenter integration—superior ease of use

Find product documentation at http://h18006.www1.hp.com/storage/software/3par/rms-vsphere/index.html and see the demo video “3PAR Management plug-in and Recovery Manager for VMware”

HP 3PAR Management Plug-in for VMware vCenter

The Peer Persistence topology:
• Is visible in the vCenter Client via the HP 3PAR Management Plug-in for VMware vCenter
• Is fully supported with Recovery Manager for VMware vSphere

Peer Persistence status information is shown in the vCenter Client.

Peer Persistence for VMware vSphere

What does it provide?
• High availability across data centers
• Automatic or manual transparent LUN swap
• Transparent VM vMotion between data centers

How does it work?
• Based on 3PAR Remote Copy and vSphere ALUA
− Primary RC volume presented with active paths
− Secondary RC volume presented with passive paths
• Automated LUN swap arbitrated by a Quorum Witness (a QW Linux VM on an ESX host at a third site)

Supported environments:
• ESX vSphere 5.0, 5.1, 5.5, including HA, failsafe, and uniform vSphere Metro Storage Cluster
• Synchronous Remote Copy supported up to a maximum of 2.6 ms RTT (~260 km)

Requirements:
• Two 3PAR disk arrays
• FC, iSCSI, or FCoE cross-site server SAN
• Two RC sync links (RCFC or RCIP*)
• 3PAR Remote Copy and Peer Persistence licenses
• 3PAR OS 3.1.2 MU2 or later

Certified for vSphere Metro Storage Cluster

Also see the VMware KB "Implementing vMSC using 3PAR Peer Persistence" and the HP white paper ”Implementing vMSC using HP 3PAR Peer Persistence”
* RCFC strongly recommended; VMware vMSC certification is based on RCFC

[Figure: two data centers running a stretched vSphere cluster over LAN/WAN; each site hosts an HP 3PAR array linked by synchronous Remote Copy with Peer Persistence, the primary RC volume presented with active paths and the secondary RC volume with passive paths, a Quorum Witness at a third site (DC 3), and up to 2.6 ms RTT latency between sites]

Never lose access to your volumes

Peer Persistence for VMware

[Figure: a VMware vSphere 5.x Metro Storage Cluster (single subnet) with 3PAR arrays at Site A and Site B connected through redundant Fabrics A and B; volume A is primary at Site A with its secondary at Site B, volume B is primary at Site B with its secondary at Site A, and the Quorum Witness sits at Site C; active paths lead to the primary copies, passive (standby) paths to the secondaries]

• Each host is connected to each array on both sites via redundant fabrics (FC, iSCSI, or FCoE)
• A synchronous copy of the volume is kept on the partner array/site (RCFC or RCIP)
• Each volume is exported in R/W mode with the same WWN from both arrays on both sites
• Volume paths for a given volume are “Active” only on the array where the “Primary” copy of the volume resides
− Other volume paths are marked “Standby”
• Both arrays can host active and passive volumes
• The Quorum Witness on the third site acts as arbitrator in case of failures

Peer Persistence for VMware—ALUA path view

[Figure: in the same vMSC configuration, the vCenter path management view shows active paths to the array holding the primary copy of the volume and standby paths to the secondary; the 3PAR Management Console Remote Copy view shows the corresponding RC relationship]

Easy setup

Peer Persistence—3PAR Storage and vSphere

Steps to configure the QW VM:
− Install the canned HP QW Red Hat VM, thinly provisioned, on a vSphere server located at a (preferably third) site
− Set up the QW network
− Define the QW hostname
− Set the QW password
− From the 3PAR Management Console or the CLI, configure the communication between the 3PAR StoreServs and the QW

Now you can set up Peer Persistence:
− Zone the ESX hosts to the 3PAR arrays (SAN)
− Create hosts with persona 11, VVs, and LUNs on the primary 3PAR
− Create datastores in vSphere
− Create hosts with persona 11 and VVs on the secondary 3PAR
− Set up and sync Remote Copy
− Add the Remote Copy group to the auto-failover-enabled Remote Copy groups in Peer Persistence
− Create LUNs on the secondary 3PAR
− Test your setup

3PAR Recovery Manager for VMware vSphere

How is Peer Persistence (transparent failover) integrated with the VMware vSphere Metro Storage Cluster?

1. RMV collects all HP 3PAR storage device information from vCenter Server
2. If a storage device contains multiple paths from different StoreServ arrays, it participates in a Peer Persistence setup
3. The active path determines which StoreServ is the primary site
4. The user can create crash-level or application-consistent virtual copies on both the local and remote sites
5. A Virtual Copy can be recovered from any site

vStorage API for Array Integration

VMware VAAI primitives overview

vSphere 4.1 (3PAR support introduced with 3PAR OS 2.3.1 MU2+):
• ATS—Atomic Test and Set; stop locking the entire LUN and lock only blocks
• XCOPY—Also known as Fast or Full Copy; leverages the array's ability to mass-copy and move blocks within the array
• WRITE SAME—Eliminates redundant and repetitive write commands
• TP Stun—Reports the array's TP state to ESX so a VM can gracefully pause if out of space

vSphere 5.x (3PAR support introduced with 3PAR OS 3.1.1):
• UNMAP*—Used for space reclamation rather than WRITE_SAME; reclaims space after a VMDK is deleted within the VMFS environment using the vmkfstools -y command
• TP LUN Reporting—A TP LUN is identified via the TP-enabled (TPE) bit from the READ CAPACITY (16) response, as described in section 5.16.2 of SBC-3 r27
• Out of Space Condition—Uses CHECK CONDITION status with either NOT READY or DATA PROTECT sense condition
• Quota Exceeded Behavior—Done through THIN PROVISIONING SOFT THRESHOLD REACHED (described in 4.6.3.6 of SBC-3 r22)

* The initial vSphere 5.0 implementation automatically reclaimed space. However, VMware detected a flaw that can cause major performance issues with certain third-party arrays and therefore disabled the automatic T10 UNMAP; see the VMware KB article. vSphere 5.5 introduced a new, simpler VAAI UNMAP/reclaim command: # esxcli storage vmfs unmap

Hardware-assisted Full Copy

vStorage API for Array Integration

• Optimized data movement within the SAN for:
− Storage vMotion
− Deploying from template
− VM cloning
• Fully leverages 3PAR Thin Technologies
• Significantly lower CPU and network overhead
− Much quicker migration
− No host I/O—the copy is done by the array

HP 3PAR VMware VAAI support example

[Figure: VMware Storage vMotion with VAAI disabled (DataMover.HardwareAcceleratedMove=0) drives both front-end and back-end disk I/O through the host, while with VAAI enabled (DataMover.HardwareAcceleratedMove=1) the front-end I/O disappears and the array performs the copy on the back end]

Hardware-assisted locking

vStorage API for Array Integration

• Increases I/O performance and scalability by offloading the block-locking mechanism during:
− Moving a VM with vMotion
− Creating a new VM or deploying a VM from a template
− Powering a VM on or off
− Creating a template
− Creating or deleting a file, including snapshots

Without VAAI, an ESX SCSI reservation locks the entire LUN; with VAAI, the reservation locks at the block level.

Hardware-assisted block zero

vStorage API for Array Integration

• Offloads large, block-level write operations of zeroes to the storage hardware
• Reduces the ESX server workload—without VAAI the host streams the zeroes itself; with VAAI it issues WRITE SAME commands instead

VMware Write Same—Block Zero in action

VM creation: 100 GB EagerZeroedThick-formatted VM on a 3PAR RAID 1 ThP volume:

VAAI                Off                       Off               On
3PAR zero detect    Off                       On                On
Creation time       5:29 min                  4:14 min          14 sec
Traffic             Back-end and front-end    Front-end only    Minimal front-end only

With vSphere 5 and 3PAR OS 3.1.x
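Whether a given device supports the VAAI primitives used in this test can be checked from the ESXi host. A hedged sketch (the naa device identifier is a placeholder):

```shell
# Show which VAAI primitives (ATS, Clone, Zero, Delete) a device supports
esxcli storage core device vaai status get -d naa.50002ac0001a0123

# Or list VAAI status for all devices
esxcli storage core device vaai status get
```

A 3PAR volume with Zero Status "supported" is eligible for the hardware-assisted block-zero offload measured above.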

VMware space reclamation

• Transparent
− Thin Persistence allows manual reclaiming of VMware space with T10 UNMAP support in vSphere 5.0 and 3PAR OS 3.1.x using the vmkfstools -y command *

• Granular
− Reclamation granularity is as low as 16 KB, compared to 768 KB with EMC VMAX or 42 MB with HDS VSP
− Freed blocks of 16 KB of contiguous space are returned to the source volume
− Freed blocks of 128 MB of contiguous space are returned to the CPG for use by other volumes
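The two reclaim methods mentioned here can be sketched as follows (illustrative only; the datastore name is a placeholder, and the vmkfstools percentage argument controls how much free space is processed):

```shell
# vSphere 5.0 with 3PAR OS 3.1.x: manual reclaim on a mounted VMFS datastore
cd /vmfs/volumes/Datastore1
vmkfstools -y 60    # reclaim up to 60% of the free space

# vSphere 5.5 and later: the simpler esxcli UNMAP command
esxcli storage vmfs unmap -l Datastore1
```

Either way, the array's inline zero-detect and UNMAP handling returns the freed 16 KB blocks to the source volume and 128 MB regions to the CPG.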

Time

20 GB VMDKs finally only consume ~20 GB rather than 100 GB

HP 3PAR with Thin Persistence

(Diagram: two 100 GB ThP volumes in a datastore over time. Deleted 20 GB and 10 GB VMDKs are reclaimed via T10 UNMAP — vmkfstools -y, 16 KB granularity — and rapid, inline ASIC zero detect with 3PAR scalable Thin Provisioning, so allocated space shrinks from 55 GB back toward actual usage.)

* The initial vSphere 5.0 implementation automatically reclaimed space. However, VMware detected a flaw that can cause major performance issues with certain third-party arrays. VMware therefore disabled automatic T10 UNMAP; see the VMware KB article.

Are there any caveats to be aware of?

VMware vStorage VAAI

The VMFS data mover does not leverage hardware offloads and instead uses software data movement if:
• The source and destination VMFS volumes have different block sizes
• The source file type is RDM and the destination file type is non-RDM (regular file)
• The source VMDK type is EagerZeroedThick and the destination VMDK type is thin
• The source or destination VMDK is any sort of sparse or hosted format
• The source virtual machine has a snapshot
• The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere client are aligned automatically)
• The VMFS has multiple LUNs/extents and they are on different arrays

Hardware cloning between arrays (even within the same VMFS volume) does not work.

You can find vStorage APIs for Array Integration FAQ at:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976

HP 3PAR StoreServ and VVols: Hypervisor-aware storage on HP 3PAR StoreServ

Distinguish between individual VMs

3PAR Snapshot enables thousands of recovery points on a per-VM basis

3PAR Priority Optimization provides array-based QoS guarantees at the application and/or VM level for most efficient and dynamic/real-time control of VM performance

3PAR StoreServ is the first array to provide space reclamation via the zero-detect engine in the 3PAR ASIC with VVols on a per-VM basis

Virtual volumes: Granular, policy-based VM storage

What are VMware virtual volumes?

• VVols are part of the VASA 2.0 specification defined by VMware as a new architecture for VM storage array abstraction

• VASA 2.0 introduces interfaces to query storage abstractions such as storage containers and the capabilities they support

• This information helps VMware Storage Policy–Based Management (SPBM) make decisions about virtual disk placement and compliance

• VASA 2.0 also introduces interfaces to provision and manage the lifecycle of VVols
− Multiple VVols are created for a single virtual machine

Datastore model

• This diagram illustrates how a storage array is provisioned for VMs today

• The storage array provides a single secure datastore for a number of VMs and their associated data sets 

• The positive aspect of this is that a large number of VMs and their data sets can be represented on the fabric with a small number of storage entities, which is good for the scalability of the deployment 

• The negative impact of multiplexing a large number of VMs into a monolithic datastore is that the granularity of service-provisioning management of VMs is limited

(Diagram: vCenter with VM1, VM2, and VM3 all mapped to a single volume on HP 3PAR StoreServ.)

VVol model

• The VASA 2.0 specification describes the use of virtual volumes to provide ease of access and ease of manageability to each VM datastore

• Each VMDK, VMConfig, and SWAP disk is provisioned as a separate VVol within the array

• A single point of access on the fabric is provisioned via the protocol endpoint

• These PEs are discoverable using regular LUN discovery commands
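On an ESXi 6.0 host, protocol endpoints can be seen through the standard device-discovery commands mentioned above. A hedged sketch (assumes a VVol-capable array and the vSphere 6.0 esxcli namespaces):

```shell
# PEs appear as regular SCSI devices, flagged as VVol protocol endpoints
esxcli storage core device list | grep -i "VVOL PE"

# vSphere 6.0 also adds a dedicated namespace for listing protocol endpoints
esxcli storage vvol protocolendpoint list
```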

(Diagram: vCenter with VM1, VM2, and VM3, each backed by its own VVol on HP 3PAR StoreServ. Data access flows through a protocol endpoint on the fabric; management flows out-of-band through the VASA provider.)

Goals of the VVol architecture

• Support VMware SPBM
• Granularity of service provisioning
− Previous versions of vSphere stored several VMDKs in a single volume
− As a result, the storage services provisioned to all VMs could not be changed easily and were not on a per-VM (or per-application) basis
• Scalability of VMs and VMDKs
− Because a number of VMDKs were stored in a single volume, scalability of the vSphere deployment was not a major issue
− When each VMDK is stored in a single volume, the number of LUNs that can be attached to ESXi becomes a problem, solved through the use of protocol endpoints to export multiple VVols from a single point on the fabric
• Improvement in operational recovery and disaster recovery
− Management information stored alongside each VVol

VVol timeline

• 06/30: VMware announced the VVol beta
• 09/29: VVol beta functionality to be included with 3PAR OS 3.2.1; a handful of 3PAR customers will be allowed to enable VVols on non-production arrays
• VVol functionality will be enabled by default in 3.2.1 MU; all customers can use VVols after VMware vSphere 2015 is GA

Key Windows Server 2012 storage features

MSFT Thin Provisioning
• MSFT solution
• Detects/identifies thinly provisioned virtual disks
• Notifies the administrator when storage thresholds are met
• UNMAP—returns storage when no longer needed
• MSFT requires UNMAP and SBC-3 (T10 standard) enablement to pass ThP certification
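Whether Windows actually sends UNMAP to a thinly provisioned LUN is controlled by the delete-notification setting. A minimal sketch (run from an elevated command prompt on Windows Server 2012):

```shell
# Query whether delete notifications (TRIM/UNMAP) are enabled for NTFS
# DisableDeleteNotify = 0 means deletes are passed to the array as UNMAP
fsutil behavior query DisableDeleteNotify

# Re-enable delete notifications if they were turned off
fsutil behavior set DisableDeleteNotify 0
```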

ODX (Offloaded Data Transfer)

• Enables protocol- and transport-agnostic, storage subsystem-assisted data transfer
• HPSD targeting "lead with" enterprise arrays for ODX support

Storage management via the operating system
• Integrated in the operating system; optimal for small to medium businesses
• SNIA standards-based (SMI-S) providers need to support Lifecycle Indications (which auto-update the "device info" cache in the OS); without Lifecycle Indications, the cache can become stale and must be refreshed either manually or via the 24-hour auto cycle

Failover Clustering
• Scale: the operating system supports 64 nodes
• Support level dependent on array market

SMB 3 (Server Message Block)

• Hyper-V support for SMB file storage; transparent failover, bandwidth improvements, support for RDMA NICs

Storage Spaces
• Optional certification for JBODs (SATA, SAS, USB; not supported for Fibre Channel or iSCSI)
• Introduces the storage pools concept; supports multi-tenancy, mirroring or parity, clustering, and ThP

A perfect fit

HP 3PAR Thin Persistence in Microsoft environments

With Windows Server 2003 and 2008:
• Zero unused space in a volume with:
− sdelete
− fsutil

Introduced with Windows Server 2012:
• Active Thin Reclamation with T10 UNMAP
− Detects/identifies thinly provisioned virtual disks
− Notifies the administrator when storage thresholds are met
− UNMAP—returns storage when no longer needed

Just-in-time allocation and the ability to reclaim unused space automatically
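On the older operating systems, zeroing free space lets the 3PAR ASIC zero-detect engine reclaim it. A hedged sketch (drive letter, file name, and length are placeholders; SDelete is the Sysinternals tool):

```shell
# Sysinternals SDelete: zero the free space on volume E:
sdelete -z E:

# fsutil alternative: zero a byte range inside an existing file
fsutil file setzerodata offset=0 length=1073741824 E:\zerofill.tmp
```

Once the blocks contain zeroes, the array detects them inline and returns the space to the thin volume — no UNMAP support is required from the OS.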

Thin Provisioning support

• Identification
− Provides mechanisms for identifying thinly provisioned LUNs throughout the operating system (VPD 00h, B2h, PT=010b, LBPU=1)
− Ability to query the mapped/unmapped state of LUN extents
• Notification
− Exposes events to indicate when LUNs cross threshold boundaries (temporary/permanent resource exhaustion handling)
− Events are consumable by management applications
• Optimization
− Provides end-to-end transparency of application and file system allocations, all the way from the app layer (including Hyper-V guests on VHDX) through the storage hardware
− UNMAP (space reclaim) requests issued both in real time and on a scheduled basis
• Compatibility
− Windows Logo test required

Identification

• Seen in the Optimize Drives and File and Storage Services sub-screens and wizards
− Volume view
− Pool view

Automatically reclaim space with UNMAP
• Scheduled UNMAP
− Runs at times of low server I/O or CPU utilization and at scheduled times (such as defrag)
− Runs at the time of file deletion
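The scheduled UNMAP pass can also be triggered on demand from PowerShell. A minimal sketch (the drive letter is a placeholder):

```shell
# Windows Server 2012 PowerShell: send UNMAP/retrim for the free space
# of a thinly provisioned volume immediately, rather than waiting for
# the maintenance schedule
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```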

ODX allows Hyper-V and operating system to move storage faster and more efficiently

Offloaded Data Transfer—ODX

• Enables protocol- and transport-agnostic, cross-machine, storage subsystem-assisted data transfer
− Practically eliminates load on the server, significantly reduces load on the transport network, and presents an opportunity for innovation in the storage subsystem
• Used for live storage migration, VHD creation, bulk data movement, and so forth

(Diagram: without ODX, file copy requests between volumes flow through the host; with ODX, the copies are offloaded to the HP 3PAR array.)

Without ODX―Hyper-V live storage migration, 3PAR host port throughput: up to 260 MB/s

With ODX―Hyper-V live storage migration, 3PAR host port throughput: virtually no I/O activity


Hyper-V ODX support

• Secure Offloaded Data Transfer
− Fixed VHD/VHDX creation
− Dynamic VHD/VHDX expansion
− VHD/VHDX merge
− Live storage migration
• Just another example…

Boost your performance

(Chart: creation of a 10 GB fixed disk takes ~3 minutes on an average desktop vs. ~10 seconds using ODX.)

Capacity efficiency with deduplication

• Variable-size, chunk-based deduplication
− 32 KB–128 KB chunks found using a sliding-window hash (Rabin fingerprint)
− Chunks are compressed when saved
• Scope and scale
− Runs per volume, with multiple volumes processed simultaneously
− CPU and memory throttling can minimize performance impact
− Metadata is kept redundant to protect against data corruption
• Performance (source: Microsoft)
− No noticeable effect on typical office workloads (home directories)
− 10% reduction in the number of users supported over SMB 3.0 using FSCT
− Optimization/deduplication at 20–35 MB/s using a single core with 1 GB of free RAM
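Enabling and monitoring Windows Server 2012 deduplication is a few PowerShell cmdlets. A minimal sketch (volume E: is a placeholder):

```shell
# Enable deduplication on a data volume, start an optimization job,
# and check the resulting savings
Import-Module Deduplication
Enable-DedupVolume -Volume E:
Start-DedupJob -Volume E: -Type Optimization
Get-DedupStatus -Volume E:
```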

Capacity savings reported by Microsoft IT:
• Home folders: 30% savings
• General file shares: 64% savings
• Virtual hard disk library: 82% savings
• Software development shares: 67% savings

Windows Server 2012 unified management concepts

(Diagram: the Windows Server 2012 storage stack — SMB and NFS shares, FSRM and DFS, iSCSI, deduplication, and file systems sit on Windows cluster, volumes, and disks; underneath, physical disks are grouped into storage pools that expose virtual disks (Spaces), managed as disk/pool/space objects via SMP or SMI-S against Windows and/or external storage LUNs on a Windows file server.)

After registering the SMI-S Provider

Storage Manager—Storage Pool View

Storage Manager—Array View (Storage Pool)
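The registration step behind these views can be scripted. A hedged sketch (Windows Server 2012 R2 cmdlet names assumed; the provider URI is a placeholder for the 3PAR SMI-S provider):

```shell
# Register the array's SMI-S provider with the Windows Storage service
Register-SmisProvider -ConnectionUri https://3par-smis.example.com:5989

# Refresh the cached device info — needed manually when the provider
# does not support Lifecycle Indications
Update-StorageProviderCache -DiscoveryLevel Full
```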

HP Storage Management Pack for SCOM

HP Storage management in SCOM
• Automated installation and discovery
− Supports both SCOM 2007 and SCOM 2012
• Monitoring of HP Storage directly from SCOM
− Events and alerts for HP Storage hardware
− Diagram/topology views
• Not included with HP Insight Control for System Center
− Download for free from: http://h18006.www1.hp.com/storage/SCOM_managementpack.html


HP Storage SMI-S integration in SCVMM

Manage and provision storage directly out of SCVMM 2012
• Import multiple SMI-S-capable storage devices
• Categorize logical units via classifications and pools
• Provision logical units directly out of SCVMM
• Allocate capacity for Hyper-V hosts or clusters

View the list of HP SMI-S-capable storage devices at:
http://h18004.www1.hp.com/storage/smis-matrix.html

SCVMM 2012

HP 3PAR Storage overview and details

Capacity information
• Provisioned, allocated, and available capacity for HP 3PAR

Volume information
• RAID, CPG, WWN
• Health status

Provision infrastructure >15x faster*
Add Hyper-V hosts in five easy steps

With traditional tools (14 steps):
1. Update server FW
2. Configure BIOS
3. Configure iLO
4. Configure Smart Array
5. Select credentials
6. Set discovery scope
7. Select servers
8. Select host groups and profiles
9. Perform deep server discovery
10. Configure server names and network
11. Deploy Hyper-V to bare-metal server
12. Configure SCVMM virtual switches
13. Assign IP addresses
14. Complete host networking and storage configuration

With HP OneView SCVMM integration (five steps):
1. In SCVMM, launch the HP Add Capacity wizard
2. Select SCVMM and HP OneView profiles and servers to deploy
3. Match SCVMM network adapters with HP OneView uplink ports
4. Input computer names and optional network configuration
5. Confirm settings―and start provisioning

Automated steps:
1. HP OneView profile is applied (BIOS, VC, Storage)
2. Hyper-V is deployed
3. Host networking and shared SAN storage are configured

* Based on HP internal testing as of April 2014 comparing HP OneView v1.10 vs. traditional HP and Microsoft management tools, each deploying 16 servers. The test was to configure the networks, enclosure, template, and profiles. HP OneView SCVMM integration takes 10 minutes of an admin's time vs. 159 minutes with traditional HP and Microsoft management tools.

Clustering solution protecting against server and storage failure

Cluster Extension for Windows

• What does it provide?
− Manual or automated site failover for server and storage resources
− Transparent Hyper-V live migration between sites
• Supported environments
− Windows Server 2003, 2008, 2012
− HP StoreEasy (Windows Storage Server)
− Maximum supported distances:
  • Remote Copy sync supported up to 2.6 ms RTT (~260 km)
  • Up to the Microsoft Cluster heartbeat maximum of 20 ms RTT
− 1:1 and SLD configurations
− Sync or async Remote Copy
• Requirements
− 3PAR disk arrays
− 3PAR Remote Copy
− Windows cluster
− HP Cluster Extension (CLX)
− Max 20 ms cluster IP network RTT
• Licensing options
− Option 1: per cluster node (1 LTU per Windows cluster node; 4 LTUs for the configuration shown)
− Option 2: per 3PAR array (1 LTU per 3PAR array; 2 LTUs for the configuration shown)

Also see the HP CLX resources

(Diagram: a stretched Windows cluster spanning two data centers, each with an HP 3PAR array, plus a file share witness; synchronous or asynchronous Remote Copy between the arrays is managed by CLX over the LAN/WAN.)

Available with 3PAR OS 3.2.1

Peer Persistence for Windows

• What does it provide?
− High availability across data centers
− Automatic or manual transparent LUN swap
− Transparent live migration between data centers
• How does it work?
− Based on 3PAR Remote Copy and MS MPIO
  • Primary RC volume presented with active paths
  • Secondary RC volume presented with passive paths
− Automated LUN swap arbitrated by a Quorum Witness (QW; a Linux Hyper-V VM on a third site)
• Supported environments
− Windows Server 2008 R2 and 2012 R2
− Stand-alone servers and Windows clusters
− Hyper-V
− Up to the RC supported maximum of 2.6 ms RTT (~260 km)
• Requirements
− Two 3PAR disk arrays
− FC, iSCSI, or FCoE cross-site server SAN
− 2 RC sync links (RCFC or RCIP)
− 3PAR Remote Copy and Peer Persistence licenses
− 3PAR OS ≥ 3.2.1

(Diagram: a Windows Failover Cluster of Hyper-V hosts running a clustered application, spanning Data Center 1 and Data Center 2 with up to 2.6 ms RTT latency; synchronous Remote Copy plus Peer Persistence links two HP 3PAR arrays over the LAN/WAN, with a Quorum Witness in DC 3. The primary RC volume is presented with active paths, the secondary RC volume with passive paths.)

Never lose access to your volumes

Peer Persistence for Windows

(Diagram: a Windows Server Failover Cluster connected through redundant Fabric A and Fabric B to 3PAR arrays at Site A and Site B, with a Quorum Witness at Site C. Vol A and Vol B each have a primary copy on one array and a secondary copy on the other; active paths lead to the primary copy, passive/standby paths to the secondary.)

• Each host is connected to each array on both sites via redundant fabrics (FC or iSCSI or FCoE)

• Synchronous copy of the volume is kept on the partner array/site (RCFC or RCIP)

• Each volume is exported in R/W mode with same WWN from both arrays on both sites

• Volume paths for a given volume are “Active” only on the array where the “Primary” copy of the volume resides. Other volume paths are marked “Standby”

• Both arrays can host active and passive volumes

• Quorum Witness on 3rd site acts as arbitrator in case of failures


Peer Persistence for Windows—MPIO path view

(Screenshots: the Windows MPIO path view and the 3PAR Management Console Remote Copy view of the same volume pair — primary on the Site A array, secondary on the Site B array — for a Hyper-V host in a Windows Server Failover Cluster, with the Quorum Witness at Site C.)
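The Windows side of this path view can be reproduced from the command line on an MPIO host. A minimal sketch (the disk number is a placeholder):

```shell
# List MPIO disks and their path states (Active/Optimized vs. Standby),
# matching the Peer Persistence presentation described above
mpclaim -s -d

# Drill into the paths of a specific MPIO disk
mpclaim -s -d 2
```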

Recovery managers for Microsoft Exchange Server and Microsoft SQL Server

• RM MS Exchange Server and RM MS SQL Server
− Automatic discovery of Exchange and SQL Server servers and their associated databases
− VSS integration for application-consistent snapshots
− Support for Exchange Server 2003, 2007, and 2010
− Support for SQL Server 2005, 2008, and 2012
− Support for SQL Server running in a vSphere Windows VM
− Database verification using Microsoft tools
• Built on 3PAR Thin Virtual Copy technology
− Fast point-in-time snapshot backups of Exchange and SQL Server databases
− Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation
− Automatic recovery from snapshot
− 3PAR Remote Copy integration
− Exporting of database backups to other hosts
• Backup integration
− HP Data Protector
− Symantec NetBackup and Backup Exec
− Microsoft System Center Data Protection Manager

Find product documentation at:
http://h18006.www1.hp.com/storage/software/3par/rms-exchange/index.html
http://h18006.www1.hp.com/storage/software/3par/rms-sql/index.html

See the demo video at: 3PAR Recovery Manager for SQL

Recovery Manager for Microsoft Hyper-V
• Built on 3PAR Thin Virtual Copy technology
• Supports hundreds of snapshots with just-in-time, granular snapshot space allocation
• Creates crash- and application-consistent virtual copies of the Hyper-V environment
• VM restore from snapshot to the original location
• Mount/unmount of a virtual copy of any VM
• Time-based VC policy per VM
• Web GUI scheduler to create/analyze VCs
• PowerShell cmdlets (CLI and scripting)
• Supported with:
− Windows Server 2008 R2 and 2012
− Stand-alone Hyper-V servers and Hyper-V Failover Cluster (CSV)
− F-Class, StoreServ 7000 and 10000

RMS and RME architecture

(Diagram: an Exchange or SQL Server production server with 3PAR production volumes; snapshots taken at 9:00, 13:00, and 17:00; an RM client and backup server with an optional tape or D2D library.)

1. Off-host backup
2. Direct restore from tape
3. Direct mount of snapshot
4. Restore from snapshot with file copy restore

RME and RMS and RMH VSS integration

1. Backup server requests RM agent to create 3PAR VC

2. RM agent requests MS Volume Shadow Copy Service (VSS) for DB metadata details

3. RM agent calls MS VSS to create virtual copies (VC) for specific DB volumes

4. MS VSS queries 3PAR VSS provider if 3PAR VC can be created

5. MS VSS sets DB/VHD to quiesce mode

6. MS VSS calls 3PAR VSS provider to create VC of volumes

7. 3PAR VSS provider sends commands to 3PAR array to create VC of volumes

8. 3PAR VSS provider acknowledges VSS VC creation completed

9. MS VSS sets DB/VHD back to normal operation

10. MS VSS acknowledges to the RM agent that VC creation is completed

11. RM agent sends VC and application metadata info to the backup server

(Diagram: the backup server, Recovery Manager RM agent, MS VSS, 3PAR VSS provider, Exchange/SQL Server DB or VHD, and 3PAR array exchanging the numbered calls above.)

Extended possibilities

RM Exchange and SQL Server in a CLX environment
• The RM backup server at the remote secondary site (Site B) can actively manage Virtual Copy
• That means all operations, including recovery, can be performed at the remote site

(Diagram: a single copy cluster / SQL extended cluster using CLX across Site A and Site B, with Exchange/SQL nodes 1–4, local and remote RM backup servers, database Remote Copy, and Virtual Copies at both sites; recovery is possible at Site B using RM.)

Concurrent database validations

Recovery Manager for Microsoft Exchange
• Validations can take hours to complete for large (TB-sized) databases
• Queuing and sequentially validating many databases can take a long time (hours to days)
• This enhancement runs the validations in parallel, mitigating the issue

Earlier versions (sequential): DB1 (2 TB) 3 hrs, then DB2 (2 TB) 3 hrs, then DB3 (2 TB) 3 hrs — total: 9 hrs to complete

Since v4.4 (concurrent): DB1, DB2, and DB3 (2 TB each) validated in parallel, 3 hrs each — total: approx. 3 hrs to complete

Recovery Manager Diagnostic Tool
• Validates all the RM configuration parameters and generates reports indicating non-compliance
• Runs on the backup server
• Automatically probes all the servers registered in the Recovery Manager, including the backup server itself
• Checks all parameters required for a successful RM operation, such as:
− Database status
− VSS HWP configuration
− StoreServ connectivity
• Generates a report indicating success, warning, and error
• Advises the user of corrective action
• Displays high-level dashboard status
• Currently supported:
− RM Exchange Server
− RM SQL Server
− RM Hyper-V pending

Learning check

1. On which two technologies is Peer Persistence for Windows based? (Select two)
a. 3PAR Remote Copy
b. Microsoft MPIO
c. Microsoft Active Directory
d. VMware Storage Policy–Based Management (SPBM)

Learning check answer

1. On which two technologies is Peer Persistence for Windows based? (Select two)
a. 3PAR Remote Copy (correct)
b. Microsoft MPIO (correct)
c. Microsoft Active Directory
d. VMware Storage Policy–Based Management (SPBM)

Learning check

2. Peer Persistence for VMs requires four 3PAR disk arrays. True or false?

Learning check answer

2. Peer Persistence for VMs requires four 3PAR disk arrays. True or false?

False — Peer Persistence requires two 3PAR disk arrays

Learning check

3. What are VVols?______________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

3. What are VVols?

VVols are part of the VASA 2.0 specification defined by VMware as a new architecture for VM storage array abstraction

Learning check

4. List three benefits of HP 3PAR RM that apply to any server environment_________________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

4. List three benefits of HP 3PAR RM that apply to any server environment• Off-host backup

• Direct mount of snapshot

• Restore from snapshot with file copy restore

• Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation

© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Security and multi-tenancy

HP 3PAR StoreServ is now a complete FIPS 140-2 compliant encryption solution

3PAR DAR encryption―What’s new

• A 3PAR array uses these FIPS 140-2 validated encryption modules:
− FIPS 140-2 Level 2 self-encrypting drives (SEDs)
  • New 920 GB FIPS-encrypted MLC SSD
− FIPS 140-2 Level 2/3 enterprise key managers (EKMs)
− FIPS 140-2 Level 1 software

• A 3PAR array now supports these optional EKMs:
− HP Enterprise Secure Key Manager v4.0
− SafeNet KeySecure k450 and k150

Encryption is simple—Key management is not

Why is key management important?

Data is not protected just by encrypting it

• Keys are the little secrets that protect the big secrets

• Keys must be securely preserved, protected, and accessible for the life of the data

• 3PAR now supports the following enterprise key managers:

• HP Enterprise Secure Key Manager v4.0• SafeNet KeySecure k450 and k150

Event or threat → risk and impact:
• Exposing keys, unauthorized access → exposure of protected data, noncompliance
• Loss of authorized access to keys → loss of data access, business interruption
• Loss or accidental destruction of keys → loss of keys, data loss, business failure
• Failure to control/monitor/log access → audit failures, increased liability

Other recent 3PAR security enhancements
• Common Criteria Certification commences on the 7000 and 10000 with 3PAR OS 3.2.1
− Takes two to three months to complete
• Maximum password length increases from 8 characters to 32
• Password hash length was 31, now is 107
− Hash: a one-way cryptographic function that allows you to store a password without knowing its contents
• CA-signed certificates
− Allow an admin to import a certificate signed by an external authority
− This enables TPDTCL to prove the authenticity of the StoreServ array to the remote CLI
• New audit user account
− A new class of user that enables Retina Network Security Scanner and Nessus Vulnerability Scanner to perform a credentialed, or local, scan of the 3PAR OS Linux file system

What are 3PAR virtual domains?

Multi-tenancy with traditional storage: separate, physically secured storage — Admin/App/Dept/Customer A, B, and C each on their own array.

Multi-tenancy with 3PAR domains: shared, logically secured 3PAR storage — Domain A, Domain B, and Domain C on one array, each containing its own admin, application, department, and customer.

See also http://h18006.www1.hp.com/storage/software/3par/vds/index.html

3PAR virtual domains for multi-tenancy security

(Diagram: virtual domains A through n, each with its own hosts, access rights, volumes, CPG parameters, and QoS.)

• Provides fine-grained access control for users and hosts to achieve greater storage service levels
• Securely separates data and eliminates unauthorized or accidental access
• Up to 1024 domains with individual settings per 3PAR array
• Optionally, Priority Optimization allows assigning QoS to virtual domains

What are the benefits of virtual domains?

Centralized storage administration with traditional storage: consolidated physical storage; a centralized admin performs full setup, provisioning, and monitoring; users are consumers of provisioned storage only.

Self-service storage administration with 3PAR virtual domains: consolidated HP 3PAR storage divided into secure virtual arrays (virtual domains); the centralized admin performs virtual domain setup and monitoring only; users self-provision within their domains.

LDAP login

Authentication and authorization (management workstation ↔ 3PAR array ↔ LDAP server)

Step 1: The user initiates login to 3PAR via the 3PAR CLI/GUI or SSH
Step 2: 3PAR OS searches local user entries first; upon mismatch, the configured LDAP server is checked
Step 3: The LDAP server authenticates the user
Step 4: The LDAP server provides LDAP group information for the user
Step 5: 3PAR OS authorizes the user for a privilege level based on the user's group-to-role mapping
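The LDAP configuration behind these steps is set with the 3PAR CLI. A hedged sketch — the server address, group DN, and parameter names are placeholders/assumptions and should be checked against the 3PAR CLI reference for your OS version:

```shell
# Point the array at the LDAP server and choose a binding type
setauthparam -f ldap-server 192.0.2.10
setauthparam -f binding simple

# Map an LDAP group to the 3PAR "super" role (group-to-role mapping, step 5)
setauthparam -f super-map "CN=3PAR-Admins,OU=Groups,DC=corp,DC=example,DC=com"

# Validate the whole login flow end to end for a given user
checkpassword jsmith
```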

HP 3PAR Virtual Lock

• HP 3PAR Virtual Lock Software prevents deletion of selected virtual volumes for a specified period of time
• Locked virtual volumes cannot be deleted, even by a 3PAR storage system administrator with the highest level of privileges
Note: Mounted servers can still read, write, and delete files and folders
• Locked RO virtual copies cannot be deleted or overwritten (for compliance reasons)
• Because it is tamper-proof, it is also a way to avoid administrative mistakes
• Supported with:
− Fat and thin virtual volumes
− Full Copy, Virtual Copy, and Remote Copy

Also see the Virtual Lock overview
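Retention and expiration times are applied from the 3PAR CLI. A hedged sketch — volume and snapshot names are placeholders, and the exact option spelling should be checked against the 3PAR CLI reference:

```shell
# Create a read-only virtual copy that expires in 30 days and cannot be
# removed, even by an administrator, for the first 14 days
createsv -ro -exp 30d -retain 14d snap_db_ro db_volume

# Apply a retention time to an existing virtual volume
setvv -retain 14d db_volume
```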

Learning check

1. List at least five recent HP 3PAR StoreServ security enhancements_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

Learning check answer

1. List at least five recent HP 3PAR StoreServ security enhancements
• Is now a complete FIPS 140-2 compliant encryption solution
• Common Criteria Certification commences on the 7000 and 10000 with 3PAR OS 3.2.1
• Maximum password length increases from 8 characters to 32
• Password hash length was 31, now is 107
• Admins can import a certificate signed by an external authority
• New audit user account
• Virtual domains provide secure virtual arrays
• HP 3PAR Virtual Lock Software prevents deletion of selected volumes for a specified period of time

© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Competitive

Why HP 3PAR versus EMC, NetApp, HDS, and IBM

• Only 3PAR has a single architecture that spans mid-range, high-end, and all-flash

• Only 3PAR has ASIC-enabled thin technology providing no performance drop-off for zero-reclaim and built-in thin provisioning

• Only 3PAR allows different RAID levels on the same disks, thus eliminating wasted space
Note: NetApp offers only a single RAID level
• Only 3PAR has true ASIC-enabled, line-speed scale-out in the mid-range
Note: Both NetApp and IBM mid-range scale-out is based on slower Ethernet, with all inter-node I/O traffic driven by the same Intel processors that drive front- and back-end IOPS
• Only 3PAR has true federation built into the storage controllers
• Only 3PAR offers a full-featured, dedupe-enabled, all-flash native block array
Note: NetApp just introduced a FAS AFA with performance numbers for files; with NetApp, block is an emulation, not native

Reasons to buy 3PAR over VMAX

• 3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
  VMAX: Requires VNX, VMAX, and XtremIO
• 3PAR: Better HA―the 3PAR node-pair architecture is more resilient
  VMAX: The VMAX engine architecture has two clear SPOFs
• 3PAR: Industry's best Tier-1 ease of management―autonomic
  VMAX: According to customers, managing VMAX is 'like taking a beating'
• 3PAR: Industry's most advanced thin technology
  VMAX: Weak, bolted on, and slow―see the Edison Group report
• 3PAR: Only charges SW licenses for up to 1/3 the capacity of the array
  VMAX: License fees are typically based strictly on each TB in the array

Top reasons to buy 3PAR over VNX-2

• 3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
  VNX-2: Requires VNX, VMAX, and XtremIO
• 3PAR: Solid, proven operating system
  VNX-2: Claims millions of lines of new VNX-2 code
• 3PAR: Industry's best ease of management―autonomic
  VNX-2: Unisphere is easy, but VNX requires a lot of pre-planning and decisions
• 3PAR: Industry's most advanced thin technology
  VNX-2: Thin technology is weak, bolted on, and slow―see the Edison Group report
• 3PAR: Industry's best 'real-world, day-to-day' performance
  VNX-2: Might be fast, but requires constant retuning to stay fast
• 3PAR: Four 'mesh-active' controllers in the mid-range
  VNX-2: Two controllers max

Top reasons to buy 3PAR 7450 over XtremIO

• 3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
  XtremIO: Requires VNX, VMAX, and XtremIO to match 3PAR's range
• 3PAR: Full sync and async replication
  XtremIO: No remote replication
• 3PAR: Four-controller scale-out with flexible configurations and scale-up
  XtremIO: Fixed configurations―no scale-up after purchase
• 3PAR: Scales to 240 x 1.9 TB SSDs
  XtremIO: Limited to 150 SSDs in the max-brick/12-controller configuration
• 3PAR: < $2 per usable GB
  XtremIO: ~$5 per usable GB
• 3PAR: ASIC-enabled deduplication
  XtremIO: Dedupe driven by the same processors as system IOPS―performance numbers given with dedupe turned off

Top reasons to buy 3PAR over NetApp

• 3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
  NetApp: A mid-range architecture; FAS all-flash is a stop-gap with highly suspect performance numbers
• 3PAR: Industry's best ease of management―autonomic
  NetApp: Cumbersome and complex management, particularly in block environments and anti-virus
• 3PAR: Industry's most advanced thin technology
  NetApp: Thin provisioning is risky; zero reclaim is untestable―see the Edison Group report
• 3PAR: Industry's best 'real-world, day-to-day' performance
  NetApp: NetApp's own SPC results show that it requires 50% more controllers than 3PAR to reach similar performance
• 3PAR: Four 'mesh-active' controllers in the mid-range
  NetApp: SPECsfs benchmark results show a huge efficiency drop when expanding a cluster from two controllers to four

Top reasons to buy 3PAR over HDS

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
HDS: VSP is high-end. HUS VM is similar and mid-range but has only two controllers. AFA VM has limited scalability and relatively weak performance. HUS is a different platform.

3PAR: Industry’s most advanced thin technology
HDS: Zero reclaim has a big performance and latency hit―see Edison Group report

3PAR: Industry’s best ease-of-management―autonomic
HDS: In both the high-end and mid-range, HDS is more difficult to manage than 3PAR

3PAR: Easy NAS with good value
HDS: HUS and HUS VM have expensive and cumbersome NAS based on BlueArc

3PAR: HP also sells XP7―and offers a total solution plus end-to-end support
HDS: Storage vendor only

Top reasons to buy 3PAR over IBM

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
IBM: High-end (DS8800), mid-range (Storwize), and all-flash (FlashSystem) are totally different platforms

3PAR: Industry’s best ‘real-world, day-to-day’ performance
IBM: DS8800 matches the 3PAR SPC result; however, 3PAR mid-range handily beats Storwize, XIV, and FlashSystem

3PAR: Industry’s best ease-of-management―autonomic
IBM: Traditional LUN management

3PAR: Industry’s most advanced thin technology
IBM: Traditional thin provisioning; zero reclaim runs on system processors

3PAR: 3PAR growing rapidly; HP business is stable
IBM: IBM’s hardware businesses are being either sold off or de-emphasized; storage revenues are dropping rapidly

Learning check

1. Name three unique features of HP 3PAR that are missing in competitors’ products
____________________________________________________________________________

Learning check answer

1. Name three unique features of HP 3PAR that are missing in competitors’ products
• Single architecture that spans mid-range, high-end, and all-flash
• ASIC-enabled thin technology providing no performance drop-off for zero-reclaim and built-in thin provisioning
• Support for different RAID levels on the same disks, thus eliminating wasted space
• True ASIC-enabled, line-speed scale-out in the mid-range
• True federation built into the storage controllers
• A full-featured, dedupe-enabled all-flash native block array


Converged Storage

HP Converged Storage promise: Simplify
Modern storage architectures designed for the cloud, optimized for big data, and built on converged infrastructure

Converged management orchestration
Choreograph across servers, networks, and storage

Scale-out and federated software
Non-disruptive data growth and mobility

Standard x86-based platforms
Increase storage performance and density

Polymorphic

Autonomic

Efficient

Multi-tenant

Federated

[Diagram: architectural attributes spanning Entry to High-End―Primary Storage, HDD <> Flash, and Information Protection, Retention, and Analytics; Block | Object | File]

Innovations for converged management productivity—Successor to HP SIM

Converged management—HP OneView

• Simple: Consumer-inspired user experience
− Everyday tasks in seconds
− Architected for team productivity

• Fast: Software-defined process templates
− Push-button precision, consistency, reliability
− Automate storage provisioning using server profiles

• Extensible: Enterprise software integration
− Infinite possibilities to automate and customize
− VMware vCenter, Microsoft System Center, Red Hat RHEV, HP CloudSystem, HP Orchestration, user customizations

Automated storage provisioning
• Import 3PAR storage systems and storage pools
• Carve 3PAR volumes on the fly
• Attach/export 3PAR volumes to Server Profiles

Automated SAN zoning
• Import Brocade fabrics for automated zoning
• Zoning is fully automated via Server Profile volume attachments

Integration of Storage with Server Profiles
• Attach private/shared stand-alone volumes to server profiles
• Create ephemeral volumes in the Server Profile―like adding a vdisk to a VM, but with real hardware
• Automated boot target configuration using port groups
• Flat SAN profile mobility across enclosures

What’s new in storage management? 3PAR provisioning
HP OneView 1.1―Automation for 3PAR StoreServ Storage, traditional FC fabrics, and Flat SAN

A Storage license is included as part of the HP OneView 1.1 release

The FASTEST way to virtualize with breakthrough TCO on a cloud-compatible platform

Introducing: HP ConvergedSystem for Virtualization

Simple to buy

Simple to deploy

Simple to manage

Order to operations in as few as 20 days

Managed as ONE

50 - 1000+ VMs
Starting at $2,250/month

Simple to support

One company: HP

HP advantages versus VCE Vblock 300*

HP ConvergedSystem 700

*Comparison between HP ConvergedSystem 700 and the Vblock 300; may vary by deployment. VCE data as of 11/1/2013 per http://www.vce.com/asset/documents/vblock-320-gen3-1-architecture-overview.pdf. Hypervisor choice based on CS700x

Storage: up to 55% lower SAN network latency―HP CS700 Flat SAN 2.05 µs vs. Vblock 300 4.5 µs (FEX>FI>MDS)

Performance: up to 35% more throughput―HP BL460 Gen8 NetPerf vs. Cisco UCS B200 M3 @ MTU 1500/Default IRQ

VMs: 3x hypervisor choice―HP CS700x VMware, Hyper-V, Red Hat KVM vs. Vblock VMware

Deployment: up to 33% faster deployment―HP CS700 30 days to deploy vs. 45 days for Vblock

Cost: ~15% lower price―based on estimated list prices

Integrated design • Managed as one • Supported by one point of contact

Best-in-class products, solutions, and services for hybrid IT

HP Helion portfolio overview

PartnerOne for Cloud
Cloud builders • Cloud resellers • Cloud service providers

Integrated cloud solutions
• HP CloudSystem Enterprise
• HP CloudSystem Foundation

Cloud software and infrastructure
• HP Automation & Orchestration
• HP Hybrid Cloud Management
• HP Converged Infrastructure

OpenStack software
• HP Helion OpenStack Community

Managed services
• HP Helion Managed Virtual Private Cloud & Managed Private Cloud
• HP Helion Managed & Workplace Applications

OpenStack professional services
• Advisory • Apps transform • Implementation • Design • Strategy • Operations • Education

Public cloud and SaaS
• HP Helion Public Cloud
• HP SaaS applications

What is Recovery Manager Central? (1 of 3)
Snapshot-based data protection platform

Two elements
• Recovery Manager Central for VMware
− Managed via vCenter plug-in; for VM backups only (application-consistent)
• Recovery Manager Central Express Protect
− Managed via web browser; for all other snap backups (crash-consistent)

Fosters integration of 3PAR and StoreOnce
• Near-instant recovery
• Longer-term data retention
• Catalyst integration as backup target

Flat Backup―data streams from 3PAR to StoreOnce (in v1.0, the data path goes through the RMC VM until 1.1 or 2.0, depending on when RMC is embedded in StoreOnce)

• 3PAR StoreServ system (any currently supported model*)
• StoreOnce (software 3.12.x to support Backup Protect**)
• StoreOnce Recovery Manager Central 1.0
• VMware 5.1 and 5.5

*7000 series and 10000 series will have full functionality; F-Class and T-Class will be limited
**Catalyst over FC supported in controlled release in 3.11.x

StoreOnce Recovery Manager Central

[Diagram: multiple 2.0 TB volumes on 3PAR StoreServ streaming flat backups to StoreOnce]

What is Recovery Manager Central? (2 of 3)

What is Recovery Manager Central? (3 of 3)
It is not a replacement for an existing backup application

RMC 1.0:
• 3PAR only―cannot protect other storage platforms
• No Oracle, SQL Server, Exchange Server, Hyper-V (unless on a VM)
• No Hyper-V or KVM VMs
• No bare-metal recovery (unless on a VM)
• No granular-recovery capability

It is intended to be a complementary piece alongside a backup app
• Faster and cheaper alternative to a backup app for non-granular protection

RMC value proposition

• Converged availability and backup service for VMware
− Flat backup alternative to traditional backup apps
• Performance of Virtual Copy snaps
• Reliability and retention of StoreOnce
• Speed of backups and restores via SnapDiff
• Control of VMware protection passes to VMware admins
− Managed from within vSphere
• Extension of primary storage
− Snapshots key to the entire data protection process
• Common integration and API point for backup applications, reporting, and security

Learning check

1. How does HP OneView innovate storage management?
______________________________________________________________________

Learning check answer

1. How does HP OneView innovate storage management?
• By automating 3PAR StoreServ Storage, traditional FC fabrics, and Flat SAN
• By automating SAN zoning
• By integrating storage with server profiles


Customer resources

HP product information
• The HP Marketing Document Library allows you to:
− Access and search the QuickSpecs online
− Download the offline QuickSpecs application
− Create a quick quote for your desired product
− Look up individual product list prices

• The QuickSpecs provide technical info for:
− HP products
− HP services
− HP solutions

• Go to http://www.hp.com/go/qs

Find just what you are looking for

HP Storage Information Library

• Find up-to-date information including:
− Installation Guides
− Configuration Guides
− User Guides
− References such as Release Notes and Planning Manuals
− Service and Maintenance Guides

• Available for 3PAR and some other storage systems

• Visit www.hp.com/go/docs and click the Storage Information tab

HP SAN Design Reference Guide

HP SAN certification and support

• Main goals and contents
− Architectural guidance
− HP support matrices
− Implementation best practices
− Incorporation of new technologies
− HP Storage implementations such as iSCSI, NAS/SAN Fusion, FC-IP, FCoE, DCB

• Provides the benefit of HP engineering when building a scalable, highly available enterprise storage network

• Documents HP Services SAN integration, planning, and support services

• Visit www.hp.com/go/sandesign

Single Point of Connectivity Knowledge for HP Storage products

HP Storage interoperability

• SPOCK provides the information to determine interoperability for:
− Integration of new products and features
− Maintaining active installations

• SPOCK can be accessed by: − HP internal users

− HP customers

− HP partners

HP internal access: http://spock.corp.hp.com/default.aspx
External access (requires an HP Passport): http://www.hp.com/storage/spock

Available for HP and HP partners

HP 3PAR assessment tools

NinjaSTARS―Allows capturing customer-installed storage base capacities, configuring HP StoreServ 7000, and projecting thin savings

NinjaThin―Allows capturing customer-installed storage base capacities and projecting thin savings with HP StoreServ 10000

NinjaVirtual―Allows capturing customer-installed vSphere configuration and projecting VM density increase using HP StoreServ

Learning check

1. Which 3PAR assessment tools are available for HP partners?
a. NinjaSTARS
b. NinjaThin
c. NinjaOptimize
d. NinjaVirtual
e. NinjaSystems

Learning check answer

1. Which 3PAR assessment tools are available for HP partners?
a. NinjaSTARS
b. NinjaThin
d. NinjaVirtual


Thank you

www.hp.com/go/3PAR storeserv