
FUNDAMENTALS OF HP 3PAR

By SUBHAM DUTTA

Storage Administrator


Executive Summary

3PAR is the only storage platform in the HP stable that lets you build an architecture from scratch and scale it up as and when your business needs grow. It supports true convergence of block and file architecture and offers the flexibility you need to accelerate new application deployments and support server virtualization. The platform is easy to manage, and the array's performance is a stand-out feature among the mid-range and high-end arrays in the market, because data IOPS are handled by intelligent on-board ASICs.

The HP 3PAR family consists of the F-, S-, and T-series arrays and the latest generation of StoreServ 10000-series (10400, 10800) and 7000-series (7200, 7400) arrays. In this whitepaper, we discuss the fundamental architecture of 3PAR, then walk through the steps to provision a LUN to a VMware host and the migration of an older EVA to a 3PAR array.

Introduction

HP 3PAR StoreServ Storage supports true convergence of block, file, and object access while offering the performance and flexibility that you need to accelerate new application deployments and support server virtualization. It is a storage platform that allows you to spend less time on management, gives you technically advanced features for less money, and eliminates trade-offs that require you to sacrifice critical capabilities such as performance and scalability. With HP 3PAR StoreServ Storage, you can serve unpredictable and mixed workloads, support unstructured and structured data growth, and meet block and file needs from a single capacity store. The newest in the lot, the HP 3PAR StoreServ 7450c Storage system, is the All-Flash Array (AFA) in the HP 3PAR StoreServ family and delivers over 900,000 IOPS with sub-millisecond latency while supporting the full range of HP 3PAR data services. The HP 3PAR StoreServ 7440c Flash Storage system is a truly versatile Converged Flash Array that combines the outright performance of an AFA with the scalability and flexibility that come with support for hard disk drives. The HP 3PAR StoreServ 7200 and 7400 Storage systems bring Tier-1 resiliency and data services to the mid-range market. The HP 3PAR StoreServ 10400 and 10800 Storage systems are the most scalable members of the HP 3PAR StoreServ family and provide the ideal Tier-1 storage to meet the needs of today's cloud and ITaaS environments.

HP 3PAR Hardware architecture overview

Every 3PAR storage array features a high-speed, full-mesh backplane that joins multiple controller nodes (the data/IO movement engines) to form a Mesh-Active, cache-aware cluster. In every HP 3PAR StoreServ Storage system, each controller node has a dedicated link to each of


the other nodes that operates at 2 GB/s in each direction. In addition, each controller node may

have one or more paths to hosts—either directly or over a storage area network (SAN). The

clustering of controller nodes enables the system to present hosts with a single, highly available,

high-performance storage system. This means that servers can access volumes over any host-

connected port—even if the physical storage for the data is connected to a different controller

node. The modular HP 3PAR StoreServ Architecture can be scaled from 1.2 TB to 3.2 PB of raw

capacity, making the system deployable as a small, remote, or very large centralized system.

Controller node pairs are connected to dual-ported drive enclosures owned by that pair. Unlike

other approaches, the system offers both hardware and software fault tolerance by running a

separate instance of the HP 3PAR Operating System on each controller node, thus facilitating the

availability of customer data.

This architecture begins with a multifunction node design and, like a modular array, requires just

two initial controller nodes for redundancy. However, unlike traditional modular arrays, enhanced

direct interconnects are provided between the controllers to facilitate Mesh-Active processing.

Unlike legacy Active/Active controller architectures—where each LUN (or volume) is active on only

a single controller—this Mesh-Active design allows each LUN to be active on every controller in the

system, thus forming a mesh. This design delivers robust, load-balanced performance and greater

headroom for cost-effective scalability, overcoming the trade-offs typically associated with modular

and monolithic storage arrays.


HP 3PAR Storage Concepts and Terminology

The HP 3PAR StoreServ storage system is composed of the following logical data layers: physical disks, chunklets, logical disks (LDs), common provisioning groups (CPGs), and virtual volumes (VVs). The relationship between these data layers is illustrated in the figure below. Each layer is created from elements of the layer above: chunklets are drawn from physical disks, logical disks are created from groups of chunklets, CPGs are groups of LDs, and virtual volumes use storage space provided by CPGs. The virtual volumes are exported to hosts and are the only data layer visible to hosts.

In 3PAR arrays, once a physical disk is inserted, the InForm OS breaks the disk into uniform 1 GB pockets called chunklets, forming the first level of granularity. The fine-grained nature of these chunklets eliminates the underutilization of precious storage assets.
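Each of these layers can be inspected from the InForm CLI. As a quick sketch (the show commands below exist on 3PAR systems, though the output columns vary by OS version):

# showpd -c (chunklet usage per physical disk)

# showld (logical disks built from chunklets)

# showcpg (CPGs and the LD pools behind them)

# showvv (virtual volumes drawing space from CPGs)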


The second layer of abstraction is the creation of logical disks from the underlying chunklets of the physical disks; each LD is striped across the system's physical disks and implemented at a specified RAID level. Multiple chunklet RAID sets from different PDs are striped together to form an LD. LDs consist of all NL, FC, or SSD chunklets. There are no mixed-type LDs, with the exception of Fast Class (Fibre Channel) LDs, where the LD may consist of mixed 10K and 15K drive chunklets. LDs are divided into "regions," which are 128 MB of contiguous logical space. Virtual volumes (VVs) are composed of these LD regions, with VV space allocated across the regions.

The third layer of abstraction is the grouping of LDs into a CPG, which is simply a virtual pool of LDs that allocates space to virtual volumes on demand. A CPG allows virtual volumes to share the CPG's resources. You can create fully provisioned virtual volumes (FPVVs) and thinly provisioned virtual volumes (TPVVs) that draw space from a CPG's LD pool.

Logical Disks

A logical disk (LD) is a collection of physical disk chunklets arranged as rows of RAID sets. Each RAID set is made up of chunklets from different physical disks. There are three types of logical disks:

• User (USR) LDs provide user storage space to fully provisioned VVs.

• Snapshot data (SD) LDs provide the storage space for snapshots (or virtual copies), thinly provisioned (TPVV), and thinly deduplicated (TDVV) virtual volumes.

• Snapshot administration (SA) LDs provide the storage space for metadata used for snapshot, TPVV, and TDVV administration.

The HP 3PAR Operating System automatically creates LDs with the desired availability and size characteristics. In addition, several parameters can be used to control the layout of an LD to achieve these characteristics:

Set size: The set size of the LD is the number of drives that contain redundant data. For example, a RAID 5 LD may have a set size of 4 (3 data + 1 parity), or a RAID MP LD may have a set size of 16 (14 data + 2 parity). For a RAID 1 LD, the set size is the number of mirrors (usually 2). The chunklets used within a set are typically chosen from drives on different enclosures.

Step size: The step size is the number of bytes that are stored contiguously on a single physical drive.

Row size: The row size determines the level of additional striping across more drives. For example, a RAID 5 LD with a row size of 2 and set size of 4 is effectively striped across 8 drives.


Number of rows: The number of rows determines the overall size of the LD given a level of striping. For example, an LD with 3 rows, each row having 6 chunklets' worth of usable data (+2 parity), will have a usable size of 18 GB (1 GB/chunklet x 6 chunklets/row x 3 rows).
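To make the interplay of these parameters concrete, here is a worked example using the 1 GB chunklet size described earlier: a RAID 5 LD with a set size of 4 (3 data + 1 parity), a row size of 2, and 3 rows contains 4 x 2 x 3 = 24 chunklets in total, of which 3 x 2 x 3 = 18 carry data. The LD therefore offers 18 GB of usable space from 24 GB of raw chunklet capacity, matching the (set size - 1)/set size = 3/4 usable ratio expected of RAID 5.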

RAID 5

On a RAID 5 LD, data is striped across rows of RAID 5 sets. A RAID 5 set must contain at least three chunklets. A RAID 5 set with three chunklets has a total of two chunklets of space for data and one chunklet of space for parity. RAID 5 set sizes of between 3 and 9 chunklets are supported. The data and parity steps are striped across each chunklet in the set. The chunklets in each RAID 5 set are distributed across different physical disks, which may be located in different drive magazines or even different drive cages.

RAID 6

On a RAID 6 LD, data is striped across rows of RAID MP sets. A RAID MP set, or double-parity set, must contain at least eight chunklets. A RAID MP set with eight chunklets has a total of six chunklets of space for data and two chunklets of space for parity. RAID MP set sizes of 8 and 16 chunklets are supported. The data and parity steps are striped across each chunklet in the set. The chunklets in each RAID MP set are distributed across different physical disks, which may be located in different drive magazines or even different drive cages. The figure below shows a RAID MP LD with a set size of 8 and two sets in one row, the second set shown below the first; in the first RAID MP set, p0 is the parity step for data steps F, L, M, Q, T, V, and X.


Common Provisioning Groups

A common provisioning group (CPG) creates a virtual pool of LDs that allows VVs to share the

CPG’s resources and allocate space on demand. You can create fully provisioned VVs and TPVVs

that draw space from the CPG’s logical disk pool. CPGs enable fine-grained, shared access to

pooled logical capacity. Instead of pre-dedicating logical disks to volumes, a CPG allows multiple

volumes to share the buffer pool of LDs. For example, when a TPVV is running low on user space,

the system automatically assigns more capacity to the TPVV by mapping new regions from LDs in

the CPG to the TPVV. As a result, any large pockets of unused but allocated space are eliminated.

Fully provisioned VVs cannot create user space automatically, and the system allocates a fixed

amount of user space for the volume.
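As an illustrative sketch (the CPG name and parameter values here are assumptions, and the available options vary by InForm OS version), a RAID 5 CPG on Fibre Channel drives with cage-level availability, a 32 GB growth increment, and a 2 TB growth limit might be created from the CLI like this:

# createcpg -t r5 -ssz 4 -ha cage -sdgs 32g -sdgl 2t FC_r5_cpg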

Virtual volumes

Virtual volumes are the contiguous vessels of user data, and they derive their space from the underlying CPG. Virtual volumes can be of three types:

Fully provisioned volume: An FPVV is a volume that uses LDs that belong to a CPG. A set amount of space is allocated to the FPVV from the start, and regardless of usage the FPVV consumes the entire allocated amount. The maximum allocation size is 16 TB.

Thin-provisioned volume: A TPVV uses LDs that belong to a CPG. TPVVs associated with the same CPG draw user space from that pool, allocating space on demand in one-chunklet increments. As the volumes that draw space from the CPG require additional storage, the system automatically creates additional LDs and adds them to the pool until


the CPG reaches the user-defined growth limit that restricts the CPG's maximum size. The TPVV volume size limit is 16 TB.

Thin-deduplicated virtual volume: A TDVV is the same as a thin-provisioned volume, with inline deduplication enabled for the volume. TDVVs can be created on SSD drives only.
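To illustrate the three types (the volume and CPG names are hypothetical), a plain createvv makes a fully provisioned volume, while the -tpvv and -tdvv options create the thin variants:

# createvv FC_r5_cpg full_vol 500g

# createvv -tpvv FC_r5_cpg thin_vol 2t

# createvv -tdvv SSD_cpg dedup_vol 1t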

Exporting Virtual Volumes (VLUNs and LUN masking)

Creation of the virtual volume depends on the formation of the underlying CPG. Virtual volumes are the only data layer visible to the hosts. You export a virtual volume to make it available to one or more hosts by creating an association between the volume and a LUN. A VLUN is a pairing between a virtual volume and a LUN, expressed as either a VLUN template or an active VLUN, as shown in the sketch below.
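As a minimal sketch of the export path (the host name, WWN, LUN ID, and persona value below are assumptions for illustration), a host definition is created first and the volume is then exported to it:

# createhost -persona 11 esx_host01 10000000C9876543

# createvlun thin_vol 10 esx_host01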

Some enhanced features:

Thin Deduplication with Express Indexing: The system's Thin Deduplication software feature uses a hashing engine built into the HP 3PAR ASICs, in combination with a unique Express Indexing feature, to deduplicate data inline and with a high degree of granularity. Hardware-accelerated Thin Deduplication delivers a level of capacity efficiency that is superior to other approaches without monopolizing CPU resources or degrading performance, thereby delivering the only primary storage deduplication solution in the industry that is truly enterprise-class. ASIC-assisted, block-level deduplication takes place inline, which provides multiple benefits, including increased capacity efficiency, protected system performance, and extended flash media lifespan.

Adaptive Flash Cache: This read acceleration feature can as much as double IOPS and reduce

latency by using SSD capacity to extend the system’s cache. As a built-in system feature, it can be enabled on both all-flash arrays and systems with a dedicated SSD tier.

File Persona: With the HP 3PAR File Persona software you can create a converged storage solution with block and file storage services. This unique solution delivers tightly integrated, converged storage for provisioning both block volumes for server workloads and file and object shares for client workloads such as home directory consolidation.

Persistent Cache: allows systems to maintain a high level of performance and availability

during node failure conditions, and during hardware and software upgrades. This feature allows the host to continue to write data and receive acknowledgments from the system if the backup node is unavailable. Persistent cache automatically creates multiple backup nodes for LDs that have the same owner.


Persistent Ports: eliminates the dependency on multipathing software during online software upgrades. Persistent Ports enables host paths to remain online during the online upgrade process, maintaining host I/O with no disruption. Persistent ports are also called virtual ports.

Data Compaction Technology

3PAR Thin Provisioning: allows you to allocate virtual volumes to application

servers yet provision only a fraction of the physical storage behind these volumes. By enabling a true capacity-on-demand model, a storage administrator can use HP 3PAR Thin Provisioning to create TPVVs that maximize asset use.

3PAR Thin Conversion: converts an FPVV to a TPVV. Virtual volumes with large amounts of allocated but unused space are converted to TPVVs that are much smaller than the original volume. To use the thin conversion feature, you must have an HP 3PAR StoreServ 10000 or HP 3PAR StoreServ 7000 storage system, an HP 3PAR Thin Provisioning license, and an HP 3PAR Thin Conversion license.
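In practice the conversion is driven through the Dynamic Optimization engine; a hedged sketch (the CPG and volume names are assumptions, and the exact options should be confirmed against the CLI reference for your OS version) might look like:

# tunevv usr_cpg FC_r5_cpg -tpvv full_vol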


3PAR Thin Persistence: minimizes the size of TPVVs and read/write snapshots of TPVVs by detecting pages of zeros during data transfers and not allocating space for the zeros. This feature works in real time and analyzes the data before it is written to the destination TPVV or read/write snapshot of the TPVV. To use the thin persistence feature, you must have an HP 3PAR StoreServ 10000 or HP 3PAR StoreServ 7000 storage system, an HP 3PAR Thin Provisioning license, and an HP 3PAR Thin Conversion license.
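Zero detection is controlled per volume through a policy; as a brief sketch (the volume name is an assumption):

# setvv -pol zero_detect thin_vol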

3PAR Thin Copy Reclamation: reclaims space when snapshots are deleted from an HP 3PAR StoreServ storage system. As snapshots are deleted, the snapshot space is reclaimed from a TPVV or FPVV and returned to the CPG for reuse by other volumes.

Replication techniques

3PAR Remote Copy: a host-independent, array-based, data-mirroring solution

that enables affordable data distribution and disaster recovery for applications. With this optional software, you can copy virtual volumes from one system to a second system.
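As a hedged sketch of the workflow (the group, target, and volume names here are assumptions), a synchronous Remote Copy relationship is built by creating a group against a configured target, admitting volumes to it, and starting replication:

# creatercopygroup DR_group1 3par_dr:sync

# admitrcopyvv vol_vmware_01 DR_group1 3par_dr:vol_vmware_01_dr

# startrcopygroup DR_group1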

3PAR Physical Copy: A physical copy is a full copy of a volume (a cloning technique). The data in a physical copy is static; it is not updated with subsequent changes to the parent volume. The parent volume is the original volume that is copied to the destination volume. The parent volume can be a base volume, volume set, virtual copy, or physical copy. Creating physical copies does not require a separate license.

3PAR Virtual Copy: A snapshot is a point-in-time virtual copy of a base volume. The base

volume is the original volume that is copied. Unlike a physical copy, which is a duplicate of an entire volume, a virtual copy only records changes to the base volume. This allows an earlier


state of the original virtual volume to be recreated by starting with the current state of the virtual copy and rolling back all the changes that have been made since the virtual copy was created.
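Both copy types have direct CLI equivalents; a minimal sketch follows (the volume names are assumptions, and in this offline form createvvcopy expects the destination volume to already exist):

# createvvcopy -p vol_vmware_01 vol_clone01 (physical copy/clone)

# createsv -ro snap_vol01 vol_vmware_01 (read-only virtual copy)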

3PAR Peer Motion software is the first non-disruptive, do-it-yourself data migration tool for

enterprise block storage. Unlike traditional block migration approaches, HP 3PAR Peer Motion

enables online storage volume migration between any HP 3PAR StoreServ Storage systems non-

disruptively and without complex planning or dependency on extra tools.

HP 3PAR Online Import software leverages federated data mobility on the HP 3PAR StoreServ Storage array to simplify and expedite data migration from HP EVA Storage, EMC VMAX, EMC VNX, and EMC CLARiiON CX4 arrays. With HP 3PAR Online Import software, migration from these platforms can be performed in only five steps:

1. Set up the online import environment

2. Zone the host to the new system

3. Configure host multipathing

4. Shut down the host, unzone from the source, and start the migration

5. Start the host and validate the application

Data optimization

3PAR Dynamic Optimization: allows you to improve the performance of virtual

volumes without interrupting access. Use this feature to avoid over-provisioning for peak system usage by optimizing the layout of your virtual volumes. You can change virtual volume parameters, RAID levels, set sizes, and disk filters by associating the virtual volume with a new CPG. You can also use this feature to analyze your entire system and automatically correct space usage imbalances in the system for optimal performance. This feature requires an HP 3PAR Dynamic Optimization license.
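For example (the CPG and volume names are hypothetical), moving a volume's user space to a different CPG, and with it to a new RAID level or drive tier, is done online with tunevv:

# tunevv usr_cpg SSD_cpg thin_vol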

3PAR Adaptive Optimization: provides a much higher degree of control over

disk usage by reserving your faster and more expensive storage resources for the data that is frequently accessed and relegating your slower and less expensive drives to storing data that is only occasionally accessed.
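As a loosely sketched example (all names are assumptions, and the exact flags should be verified against the CLI help for your OS version), an Adaptive Optimization configuration ties the tier CPGs together under a policy:

# createaocfg -t0cpg SSD_cpg -t1cpg FC_r5_cpg -t2cpg NL_cpg -mode Balanced ao_policy1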


Steps to provision a LUN from 3PAR to VMware

The first step in the presentation of a LUN to VMware is the creation of a virtual volume on the HP 3PAR storage array. The command below creates a virtual volume from the CLI:

# createvv [options] <usr_CPG> <VV_name>[.<index>] <size>[g|G|t|T]

Based on the options chosen, we can create a fully provisioned, thin-provisioned, or thin-deduplicated volume from the underlying CPG. The created VV should then be exported to the ESXi host as a VLUN with an attached VLUN template, based on preference. Creating a VLUN template enables the export of a VV as a VLUN to one or more ESXi hosts, and the export can be based on the type of template we choose: port based, or host/host-set based.

# createvlun [options] <VV_name | VV_set> <LUN> <node:slot:port>

# createvlun [options] <VV_name | VV_set> <LUN> <host_name | host_set>

New VLUNs exported while the host is running are not registered until a bus rescan is initiated; this is done automatically for ESX 4.x, ESXi 5.x, and ESXi 6.0 hosts managed by the vSphere Client or vCenter Server from the VI/vSphere Client management interface.
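Putting the pieces together, a minimal end-to-end sketch might look like the following (the CPG, volume, and host names and the LUN ID are assumptions for illustration):

# createvv -tpvv FC_r5_cpg vmware_ds01 2t

# createvlun vmware_ds01 10 esx_host01

If the rescan does not happen automatically, it can be forced from the ESXi host itself:

# esxcli storage core adapter rescan --all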


Online import from HP EVA

EVA to 3PAR Online Import manages the migration of data from a source EVA storage system to a destination 3PAR storage system. Using EVA to 3PAR Online Import, you can migrate EVA virtual disks and host configuration information to a 3PAR destination storage system without changing host configurations or interrupting data access. The steps are as follows:

1. The host is connected (zoned) to the source EVA storage system where its volumes reside.

2. A destination 3PAR storage system is connected (zoned) to the source EVA. The 3PAR storage system is configured as a new host on the source EVA. All of the volumes exported to the original host are exported to the 3PAR.

3. The 3PAR storage system "admits" these volumes, which creates a set of peer volumes that are fully backed by the data on the EVA.

4. The host is connected (zoned) to the destination 3PAR storage system. The admitted volumes on the 3PAR are now exported (presented) to the host.

5. The host performs the appropriate SCSI rescan/multipathing reconfiguration to detect the exported volumes from the 3PAR storage system. The exported volumes are detected as additional paths to existing EVA LUNs.

6. Once the host detects the paths to the 3PAR storage system, the host connections to the EVA can be unzoned. The host retains access to all its data through the 3PAR connections, which are now the only paths to the host data.


7. The 3PAR storage system begins copying data from the EVA to local storage on the 3PAR storage system. During this process, the 3PAR continues to mirror writes to the EVA, so the EVA retains a consistent copy of the data on the volumes being migrated. Once the 3PAR storage system has copied all the data on a volume to local storage, it stops mirroring writes to the EVA for that volume. The volume is now a normal local volume on the 3PAR storage system.

8. When all of the host volumes have been copied to the 3PAR storage system, the migration is complete. The exports from the EVA to the 3PAR storage system and the host can now be removed.

9. If no additional migrations are to be performed, the zoning between the source EVA and the destination 3PAR storage system can now be removed.

There are three types of migration possible for online import:

1. Online: The steps above describe an online import, which is used for a non-Windows host or a virtual disk presented to a non-Windows host. During online migration, all presentation relationships between hosts and the virtual disks being migrated are maintained, and host I/O to the data is not disrupted. When a VMware host is migrated, the online procedure is always used.

2. Offline: Selected when migrating one or more unpresented virtual disks. During offline migration, only the selected virtual disks are migrated; no hosts are migrated in this situation.

3. Minimally disruptive: Selected when migrating a Windows host or a virtual disk presented to a Windows host. The host DSM used to access the storage system must be reconfigured from the EVA DSM to a DSM that will communicate with the destination 3PAR storage system. Host I/O is interrupted only during the time it takes to reconfigure the DSM.

Useful Links

http://h20195.www2.hp.com/v2/getpdf.aspx/4aa3-3516enw.pdf

http://www.vmware.com/files/pdf/HP-3PAR-StoreServ-Why-the-right-architecture-matters-with-vSphere.pdf

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA4-6016ENW.pdf

http://www8.hp.com/h20195/V2/getpdf.aspx/4AA4-5778ENW.pdf?ver=1.0

http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c03290624-19.pdf

http://www8.hp.com/h20195/V2/getpdf.aspx/4AA4-5558ENW.pdf?ver=1.0

http://h20195.www2.hp.com/v2/GetPDF.aspx%2F4AA4-0867ENW.pdf

http://h20566.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c04204225-6&docLocale=en_US