
Technical white paper

HP 3PAR StoreServ reference configuration for Microsoft SQL Server OLTP databases

Table of contents

Executive summary
Introduction
  Objective
  Solution
  Microsoft SQL Server 2012 and Windows Server 2012
  Workload structure
  Testing
HP 3PAR StoreServ Storage features
  Hardware features
  Software features
HP 3PAR StoreServ 10400 array
  Capabilities and sizing
HP 3PAR StoreServ SQL OLTP reference configuration
Hardware configuration
  HP 3PAR StoreServ hardware sizing and configuration
  System physical view
HP 3PAR StoreServ software configuration
SQL Server 2012 deployment on reference configuration
  Host environment
  Array environment
  Storage provisioning
  Instance and database deployment
Reference configuration testing
  Adaptive Optimization
  System performance
  Overall I/O performance
  Controller node resiliency
  Service levels
  Thin provisioning
  Zero detect
  Thin reclamation
HP 3PAR StoreServ deployment settings and best practices
  Front-end port cabling
  FC zoning
  Common Provisioning Groups (CPGs)
  Virtual Volumes (VVs)
  Virtual LUNs
  Adaptive Optimization
SQL deployment recommendations
  Adaptive Optimization
  Zero detect
Conclusion
Implementing a proof-of-concept
Appendix bill of materials
For more information


Executive summary

High availability and high performance are key requirements for Microsoft® SQL Server OLTP deployments, since data access is business-critical for many organizations. Storage response time, for example, is critical to the performance of database transactions. Efficient storage is another key requirement, as it reduces complexity in fast-growing Microsoft SQL Server environments and eliminates the cost of unnecessary capacity. Furthermore, with the massive growth of data management platforms, many Microsoft SQL Server instances are deployed on dedicated and frequently underutilized server hardware. IT organizations have long turned to server virtualization as a consolidation strategy to better utilize resources and reduce the amount of physical hardware running Microsoft SQL Server instances, but there has been no simple solution for data stranded in older storage devices with rigid architectures.

Today, HP 3PAR StoreServ Storage arrays extend resource virtualization beyond servers as another key enabling technology that continues to reduce costs, improve agility, and enhance business continuity.

The objective of this reference configuration is to address these IT challenges by designing, evaluating and testing an HP 3PAR StoreServ Storage configuration that serves as a tier 1 enterprise SQL Server 2012 storage platform capable of:

• Entry-level to mid-range tier 1 enterprise OLTP performance

• Concurrent hosting of mission-critical and business-critical workloads (QoS)

• High availability consistent with SQL Server 2012

• Balanced performance and capacity efficiency

• Flexible configuration and growth options for multi-host tenancy

• Ease of management and capacity control

Based on the 80,000-120,000 host IOPS typically cited for mid-range tier 1 enterprise SQL OLTP workloads, the HP 3PAR StoreServ 10400 was chosen as the storage array for this reference configuration. Configured with two node pairs, this StoreServ system is rated at approximately 77,000 to 135,000 host IOPS depending on the RAID type used, leaving additional I/O, disk, and physical space in place for growth and effectively extending hardware refresh cycles.

Testing performed with this reference configuration demonstrates how HP 3PAR StoreServ features uniquely deliver a flexible, efficient and always-on storage platform ideal for both physical and virtual SQL Server 2012 deployment configurations.

For example, SQL Server 2012 workloads are tested to analyze array performance-related functionality such as Adaptive Optimization (AO) and capacity management-related features such as thin provisioning to illustrate the array’s ability to balance performance and capacity efficiency. This paper additionally provides best practice guidance for proof-of-concept implementations.

Finally, the ease of use of the HP 3PAR StoreServ management console makes it quick and easy to adjust the system to new performance demands or additional capacity requirements. Combined with Peer Motion data migration, the configuration supports a shorter, lower-risk hardware refresh migration to the platform with minimal SQL Server database disruption.

Target audience: This reference configuration is intended to familiarize IT decision makers, database and solution architects, and system administrators with the capabilities and features of the HP 3PAR StoreServ reference configuration and provide a tested configuration and best practice guidance for deploying SQL Server 2012 in an HP 3PAR StoreServ environment.

This configuration is provided as a reference only, since customer configurations will vary depending on their requirements or specific needs. The referenced configuration (number of nodes, disk enclosures, disks, etc.) represents the minimum configuration recommended for optimum high availability, although smaller configurations can be deployed.

Introduction

Objective

Storage systems are key, high-growth elements of today's data-driven IT landscape. The addition of flash media options increases the complexity of selecting and managing different storage devices, making a purchase decision or implementation a non-trivial task. These challenges are compounded when multiple hosts and data workloads exist across many storage devices, and there is a large ROI in consolidating multiple systems into fewer, easier-to-manage ones.


To that end, this reference configuration has been tested with challenging database workloads to provide customers evaluating storage options with a proven, sized reference configuration for their Microsoft SQL Server data storage needs.

This reference configuration is specifically designed to service a range of database workloads characteristic of entry to middle level enterprise storage systems, with host I/O workloads typically in the 80k-120k host IOPS range.

Solution

The HP storage system chosen to meet these performance, management, and reliability requirements is the HP 3PAR StoreServ 10400. In this particular configuration the HP 3PAR StoreServ 10400 is sized to approximately half of its full capability of 180k backend IOPS, leaving room for workload and capacity growth.

Given the varied nature of performance requirements within an organization, it is not easy or economical to service all applications with a single high performance level of service. To meet multi-instance and multi-workload I/O demands, HP designed the StoreServ family to provide three I/O service levels (high, middle, and low) based on the use of Solid State (SSD), Fibre Channel (FC), or Nearline (NL) media. In this particular configuration only the high and middle performance levels are provided, although a Nearline level of service, typically used for archived SQL data, can be achieved by adding NL media.

A static multi-tier, media-driven approach to storage helps consolidate databases with varied performance requirements, but it increases the complexity of deploying databases by forcing administrators to manually place files according to their I/O needs. That approach is clearly not easy to manage and does not scale.

The HP 3PAR StoreServ family architecture improves on this approach with a highly virtualized provisioning model that provides two adaptive features (LUN and sub-LUN) that automatically move data according to its I/O needs without complex administrative oversight. This virtualization and adaptive data movement approach is used in this configuration to provide variable levels of service and to reduce the effort SQL database and SAN administrators must expend to deploy SQL Server databases optimally.

In addition to meeting the specific performance and quality of service criteria for this reference configuration, the hardware and software engineering design of the HP 3PAR StoreServ family provides a feature-rich data management environment and console with storage management and capacity efficiency functions that, as shown in this white paper, are highly complementary to Microsoft Windows® Server 2012 environments running SQL Server 2012 workloads.

The reference configuration also addresses enterprise high availability requirements, which are met at every level of the HP 3PAR StoreServ family architecture. Redundant components and a resilient cache architecture provide the failure tolerance and data protection needed to keep SQL Server data safe from loss or corruption.

Finally, the HP 3PAR StoreServ management console is an easy-to-use interface that facilitates all aspects of storage system management, enabling administrators to maximize the capacity efficiency of the array and provision storage with minimal keystrokes. The testing documented in this paper did not explicitly exercise the console interface, but the console was used extensively to set up and manage the environment.

Microsoft SQL Server 2012 and Windows Server 2012

The latest releases of SQL Server and Windows Server include new features that further facilitate virtualization and improve capacity management from an operating system perspective. Features such as the SCSI TRIM/UNMAP command, thin provisioning support for Hyper-V, and Offloaded Data Transfer (ODX) support work seamlessly with HP 3PAR StoreServ 10400 arrays to deliver a thin, virtualization-friendly environment ready to efficiently host SQL Server 2012 databases.

Workload structure

Several SQL Server 2012 databases used to characterize the configuration are divided into two main service levels:

• Premium high performance (mission critical) – set of large OLTP databases deployed on physical servers

• Regular performance (business critical) – set of smaller OLTP and BI databases deployed on virtual machines

The premium database data is serviced by an adaptive virtualized volume that uses a combination of SSD and FC drives while the regular database data is serviced by a regular virtualized volume based only on FC media.


Testing

The reference configuration is set up to host data for multiple Microsoft SQL Server 2012 OLTP database instances. Transactional testing is performed using an OLTP workload that generates slightly more I/O than is typically found in enterprise OLTP workloads, to ensure the system is tested to its capacity.

Note
For an in-depth description of all performance-related features, see the HP 3PAR architecture overview white paper (linked in the For more information section), which provides a detailed description of the 3PAR systems. SQL Server value is also highlighted throughout this document for readers who do not need in-depth technical storage details.

To further familiarize the reader with HP 3PAR StoreServ architecture concepts, features are broken down into hardware-based and software-based features. The specific storage array configuration tested represents a recommended minimum configuration that provides the maximum supported high availability by enabling features such as cage-level availability. Smaller and larger valid configurations can be used while retaining many of the same SQL Server benefits outlined in this configuration.

Note
This document distinguishes between host IOPS and backend IOPS. Host IOPS represents the aggregate I/O per second issued by all the SQL Server hosts connected to the test environment; backend IOPS represents the I/O per second measured at the StoreServ disks, including RAID overhead. StoreServ I/O performance chart labels do not specify host or backend, so interpretation depends on the point of measurement: host port and virtual volume measurements typically represent host I/O, while disk port measurements represent backend I/O.

HP 3PAR StoreServ Storage features

To meet and exceed increasing demand on storage, HP 3PAR StoreServ Storage has been architected from the ground up to provide scalable performance, high reliability and efficient capacity management, key requirements of enterprise grade storage platforms.

The following sections provide a high level description of the core hardware and software technology features used or tested as part of this reference configuration project.

Hardware features

Gen4 ASICs
Gen4 ASICs are at the core of several innovative hardware features related to data processing. They provide very fast controller node interconnect speeds along with data management features such as mixed workload support, a thin provisioning conversion algorithm, and built-in RAID and CRC data integrity calculations with zero detection at very low latencies.

Cache coherency

Cache contents are kept coherent across controller nodes. This cache coherency is an enabler for additional reliability and resiliency functions such as persistent cache, on-demand node cache re-mirroring, and persistent port presentation.

Mixed workload architecture
Mixed workloads are supported by processing data independently from control information through separate hardware paths. Control information is processed through a dedicated control processor that incorporates its own control memory cache, while data is processed and cached separately in the HP 3PAR StoreServ's Gen4 ASICs.

This mixed workload architecture is very efficient for SQL Server OLTP environments. OLTP data access in SQL Server databases typically consists of 8K-16K operations mixed with occasional 64K read-ahead operations, while SQL Server backup operations can request large sequential I/Os of up to 4MB. Without this architecture, many small 8K I/O operations would have to wait behind larger I/O requests, adding architecture-induced latency to the service level.
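To illustrate the large-transfer end of this I/O mix, the following T-SQL sketch shows a backup tuned to issue 4MB transfers, the largest transfer size SQL Server backups support. The database name and backup path are hypothetical examples, not part of the tested configuration.

    -- Illustrative only: database name and path are hypothetical.
    BACKUP DATABASE SalesDB
    TO DISK = N'G:\Backup\SalesDB_Full.bak'
    WITH MAXTRANSFERSIZE = 4194304,  -- 4 MB per backup I/O request
         BUFFERCOUNT = 50,           -- outstanding backup buffers in flight
         CHECKSUM;                   -- verify page checksums while reading

Running such a backup concurrently with the OLTP workload is exactly the scenario the mixed workload architecture is designed to absorb: the small 8K transactional I/Os are not queued behind the 4MB backup transfers.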

Adaptive cache
The cache adaptation algorithm can dynamically re-allocate up to 100% of the cache for reads during heavy read periods without neglecting write needs. During periods of higher write activity, the cache can re-allocate up to 50% of cache memory for writes, helping keep write latencies down.


In a typical OLTP SQL Server environment, the I/O demands placed on the array are not static in terms of read/write ratio, so a single cache setting may work well at some times of the day but negatively impact performance at others. This makes the adaptive cache algorithm a very useful feature for database administrators.

Zero detect

The zero-detect algorithm built into the Gen4 ASICs improves disk space efficiency by detecting, and not writing, large runs of repeated zeroes to disk. This is particularly valuable with SSD media, as it keeps costs down.

SQL Server value
SQL Server 2012 directly benefits from the features implemented in the Gen4 ASICs. Test cases in this reference configuration document identify SQL Server benefits and best practices related to these features, such as:

• Cache Coherency provides service level consistency to SQL Server instances during node downtime.

• Mixed workload architecture keeps SQL transactions from slowing down during large data transfers.

• Zero detect improves SQL disk space utilization when a database is created.

Software features

Virtualized provisioning and wide striping

HP 3PAR StoreServ virtualizes data access by mapping physical disk space in three tiers. First, each physical disk is mapped into fine-grained allocations called chunklets. Second, chunklets are mapped to create logical disks, striping data across many disks to ensure uniform performance and eliminate the hot spots associated with older designs. The third mapping layer creates Virtual Volumes (VVs) from fractions of a logical disk or an entire logical disk. This virtualized approach to provisioning is the foundation for the array's Dynamic and Adaptive Optimization features.

User data in an HP 3PAR StoreServ array resides in Virtual Volumes that are exported to hosts as LUNs. These VVs are defined within a virtual construct called a Common Provisioning Group (CPG). The enclosing CPG has a fixed-redundancy RAID type and can have other parameters defined, such as capacity warnings, that apply to every VV built inside the CPG.

A CPG can be defined with only one type of media, and HP 3PAR StoreServ Storage currently offers three media types, partitioning the overall provisioning space into three tiers: Solid State (SSD), Fibre Channel (FC), and Nearline (NL). The logical view of this virtualization shows how media types, CPGs, LUNs, and disks are related.

Figure 1. Logical view of HP 3PAR StoreServ provisioning virtualization

Service levels

Different data service levels are provided by the StoreServ arrays through two automated service level adjustments available to administrators: Dynamic Optimization (DO), which migrates all the data residing in a LUN, and Adaptive Optimization (AO), which migrates data at a block level (sub-LUN).


Figure 2. Tiered HP 3PAR StoreServ service level architecture

Dynamic Optimization
Dynamic Optimization (DO) of LUN contents is an important performance enabler for SQL administrators. Once a database is in place, DO allows data to be migrated to a faster or slower service level tier on demand and online, without disrupting SQL Server uptime. For example, if a SQL log file is deployed on a RAID1 FC LUN and increased update and write demands mean it would be better served by a RAID1 SSD LUN, the administrator can tune the LUN to a faster SSD-based CPG without disrupting database operations and without having to copy files.

Adaptive Optimization

Adaptive Optimization (AO) works on a sub-LUN basis, which suits SQL OLTP data files very well: it migrates highly active portions of data files to SSD and can migrate less active portions to Nearline drives if desired.

It is important to know that AO is not always necessary and CPGs can exist without an AO policy. In this case, the data would be statically pinned to the underlying media and redundancy type tier of the CPG.

For example, log and tempDB files benefit less from AO, since log writes cycle through the entire log file space and tempDB data access is transient and non-recurrent. Those SQL files are better served by an appropriate fixed CPG without an AO policy, such as a RAID5 SSD CPG for tempDB and a RAID1 FC CPG for logs, as illustrated in the sketch below.
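The host-side counterpart of that recommendation is straightforward. As a minimal sketch (the drive letter is a hypothetical mount of a LUN from the fixed SSD CPG described later in this paper), tempDB can be relocated with standard T-SQL:

    -- Hypothetical path: S: is a LUN exported from a fixed RAID5 SSD CPG.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'S:\tempdb\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'S:\tempdb\templog.ldf');
    -- The tempDB file moves take effect after the SQL Server instance restarts.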

In this paper, we evaluate Adaptive Optimization of a CPG containing SQL database LUNs; the optimization works on the basis of migrating heavily accessed areas to faster media and least accessed areas to slower media.

The three-tier architecture in HP 3PAR StoreServ Storage supports multiple AO policies that optimize data within all three tiers or just within two media type tiers, even if the system has disk drives of all three media types installed.

For example, an AO policy can be defined between SSD and FC CPGs to create a VV and export a LUN hosting a key SQL data filegroup, while another AO policy can be defined between an FC CPG and an NL CPG to host a filegroup containing large amounts of archive data that occasionally needs to be accessed, such as end-of-month reporting.

Setting up and using AO is simple and a system can keep more than one Adaptive Optimization policy active at the same time, thus allowing service level adjustments of several different SQL databases according to their own I/O service needs.

An AO Policy consists of the two or three CPG tiers that define the optimization along with two operational schedules. First, the system needs to know how often to collect performance metrics on the source CPG (Measurement hours). Second, the system needs to know how often to perform the actual migration of block data from the source CPG to the target CPG (Schedule).

There is flexibility in performance metric collection, as metrics can be gathered during scheduled measurement windows. This approach can restrict adaptation data to peak traffic hours while excluding nightly backup windows, focusing the adaptation on the hottest data access areas in the array.


Figure 3. Adaptive Optimization policy settings in System Reporter

Thin storage

HP 3PAR StoreServ Storage implements several thin data capacity related features to enable users to create, convert, maintain, and reclaim space in efficient and cost effective ways. Our evaluation tests these features and shows how efficient SQL Server 2012 can be when deployed using these HP 3PAR StoreServ thin technologies.

Thin provisioning

Thin provisioning creates Virtual Volumes that initially consume minimal physical space on the array yet are presented to Windows hosts as fully allocated. A thin virtual volume grows and shrinks as data is written to or deleted from it, leaving free space for other thin VVs to use and maximizing utilization of the media present in the array.

When SQL Server is deployed on thinly provisioned LUNs with zero detect, databases themselves essentially become thin: large database files can be created yet occupy minimal disk space until they grow, keeping the storage Virtual Volume thin, further maximizing media utilization, and deferring scaling costs.

This is due to the zero detect feature of the Gen4 ASICs, which essentially eliminates stranded capacity in newly created database files as they are zero-filled upon creation.
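As a minimal sketch of this behavior (drive letters, file sizes, and the database name are hypothetical), creating a large pre-sized database on thin LUNs demonstrates the effect:

    -- Hypothetical layout: K: and L: are thinly provisioned 3PAR LUNs with zero detect.
    -- The files are zero-filled at creation time (data files when instant file
    -- initialization is not enabled; log files always), yet the VVs stay thin because
    -- the ASIC detects the zeroes and allocates almost no physical space for them.
    CREATE DATABASE SalesDB
    ON PRIMARY
       (NAME = SalesDB_data, FILENAME = N'K:\Data\SalesDB.mdf',
        SIZE = 200GB, FILEGROWTH = 10GB)
    LOG ON
       (NAME = SalesDB_log, FILENAME = N'L:\Logs\SalesDB_log.ldf',
        SIZE = 32GB, FILEGROWTH = 4GB);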

Thin conversion

Existing SQL database deployments on legacy system volumes can be converted to thin provisioning during migration to 3PAR volumes by using thin conversion. When combined with zero detect, unused, zero-filled space internal to SQL data files does not strand capacity in the converted 3PAR volume.

Thin persistence

Using zero detect hardware technology, thin volumes stay thin over time in a SQL Server environment when internal database space is released. For example, after a SQL data file shrink operation, the containing thin volume shrinks and releases space for other thin volumes to use, as in the sketch below.
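A minimal sketch of that scenario, with hypothetical names and target size:

    -- Shrink the data file to a 150GB target, releasing unused space inside the file.
    -- With zero detect and thin persistence, the array returns the freed, zeroed
    -- regions to the common pool for other thin VVs to use.
    USE SalesDB;
    DBCC SHRINKFILE (SalesDB_data, 153600);  -- target size in MB (150GB)

Note that file shrinks are I/O-intensive and can fragment indexes, so they are best reserved for genuine space reclamation rather than routine maintenance.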

Thin reclamation
New features such as the UNMAP command implemented in Windows Server 2012 provide additional operating system tools to automate thin space reclamation. For example, when data files and databases are deleted, the system automatically reclaims the space without administrator intervention.

SQL Server benefits

The combination of the above thin technologies with hardware-based zero detect results in a coherent approach to capacity management that, when used as shown in the following test sections, increases the capacity efficiency of SQL Server database deployments. These efficiency gains reduce data growth scaling costs and improve ROI. The ease of management and the OS-integrated approach to capacity management also simplify the administrator's role in capacity control.


HP 3PAR StoreServ 10400 array

Capabilities and sizing

This SQL OLTP reference configuration is based on a four-node HP 3PAR StoreServ 10400 Storage array.

This storage array shares the same hardware and software architecture as other arrays in the HP 3PAR StoreServ family, differing from other models in node processing power, scaling, cache, and capacity. HP 3PAR StoreServ 10800 Storage is the only other array in the family that scales further up in performance and capacity.

The StoreServ 10400 is an enterprise-class array capable of processing 180,000 backend IOPS in its largest four-node configuration, positioning it to serve the high-transaction OLTP workloads commonly found in enterprise environments.

Figure 4. HP 3PAR StoreServ 10400 specifications among HP 3PAR StoreServ family

Figure 5. HP 3PAR StoreServ 10400 storage array components


HP 3PAR StoreServ SQL OLTP reference configuration

The HP 3PAR StoreServ 10400 Storage reference configuration for SQL Server 2012 focuses on mixed SQL Server 2012 database workloads because the array's design is inherently suited to mixed I/O loads.

The database server tier was implemented with HP BladeSystem servers in a combination of physical and virtual servers, in order to characterize the configuration's performance and efficiency under the varied deployment models that typically exist in mixed database integration/consolidation platforms.

The instances and databases used to characterize the configuration are also mixed in size, and the workload is defined to present a different user load at each service level. Their deployment is described in the SQL deployment recommendations section.

When estimating and comparing the capability of this configuration against your specific needs, a database user count is not an effective metric, since user load varies for each application; it is therefore not provided.

The SQL performance counter Batch Requests/sec is a rough estimator that can easily be measured on each Windows host to determine an aggregate transactional rate, which relates to queries serviced per second. Since the I/O performed by each query varies, a more accurate sizing metric for OLTP is the storage frontend (host) IOPS, which can be obtained from your current storage system or from the Windows logical disk performance counter Transfers/sec. This metric can be broken down into Reads/sec and Writes/sec to better understand the system I/O load under your databases.
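As a sketch of how this rate can be sampled from within SQL Server (the 10-second sampling window is arbitrary), note that the counter is cumulative and must be read twice:

    -- Batch Requests/sec is a cumulative counter; sample it twice and divide the
    -- delta by the elapsed seconds to obtain the current rate.
    DECLARE @v1 BIGINT, @v2 BIGINT;
    SELECT @v1 = cntr_value FROM sys.dm_os_performance_counters
     WHERE counter_name = 'Batch Requests/sec';
    WAITFOR DELAY '00:00:10';
    SELECT @v2 = cntr_value FROM sys.dm_os_performance_counters
     WHERE counter_name = 'Batch Requests/sec';
    SELECT (@v2 - @v1) / 10 AS BatchRequestsPerSec;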

The configuration uses an 80/20 read/write host I/O ratio as a representative sizing ratio. Solutions with higher read ratios, such as 90/10, will also work, since more reads with fewer writes can be served within the same total I/O budget.
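The read/write mix can also be measured from inside SQL Server. A minimal sketch using the virtual file stats DMF follows; the counters are cumulative since instance startup, so two snapshots taken a known interval apart yield reads/sec, writes/sec, and the read/write ratio used for sizing:

    -- Cumulative read and write counts per database. Snapshot twice and difference
    -- the totals to derive the current read/write ratio.
    SELECT DB_NAME(database_id) AS database_name,
           SUM(num_of_reads)    AS cumulative_reads,
           SUM(num_of_writes)   AS cumulative_writes
    FROM sys.dm_io_virtual_file_stats(NULL, NULL)
    GROUP BY database_id
    ORDER BY database_name;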

Given those metrics considerations, the configuration is sized to safely provide over 80,000 host IOPS.

In addition to performance, and given the cost of SSD media, the on-disk size (footprint) of the databases in the premium mission-critical tier is stated as an additional reference. Each database has its own I/O density, so a larger database could perform well with a smaller SSD allocation if data access is localized to a small percentage of the database file space. Conversely, a small database may require a larger SSD allocation if data access is uniform across the entire file space, although this is unlikely given the nature of clustered indexes and other database objects that share the space.

Table 1. Estimated database deployment size and I/O characteristics per tier

Tier Total DB size (aggregate) 80/20 R/W Host IOPS

Premium service level 4TB 71,000

Normal service level 6TB 15,000

Total 10TB 86,000

Hardware configuration

The following section outlines the hardware configuration, in terms of component sizing and physical layout, needed to satisfy the project's target OLTP I/O rate of 80K SQL host IOPS in an 80/20 read/write pattern.

HP 3PAR StoreServ hardware sizing and configuration

Capacity planning

Each component in the system is sized to ensure the configuration meets the target I/O requirements for OLTP.

Disk drive quantities are sized first, as drives are the key I/O-driving component of the system. Once the optimal disk configuration is established, the controller node and drive shelf configurations needed to support it are identified. RAID5 is used for this estimate due to the improved write performance of the HP 3PAR StoreServ RAID5 implementation.

Table 2. SQL OLTP 80/20 host 8k I/O target

SQL host IOPS

8k random SQL host reads 64,000

8k random SQL host writes 16,000

SQL OLTP 80/20 aggregate host 8k I/O target 80,000


Table 3. Total backend array IOPS based on RAID 5

StoreServ System IOPS

RAID 5 Reads 64,000

RAID 5 Writes (16,000 X 4) 64,000

Total backend IOPS required 128,000

Disk sizing
This I/O load can be serviced by a combination of 160 FC drives and 28 SSD drives; however, the drive quantities chosen are increased to multiples of 8 in order to meet disk cage configuration requirements in an 8-cage system. Sizing the system one step down, with 24 SSD drives instead of 32, would not meet the design requirement.

Table 4. Total configuration backend IOPS

Configuration backend IOPS

FC tier backend (160 X 260) 41,600

SSD tier backend (32 X 3000) 96,000

Total configuration backend exceeds target 137,600

Under RAID5, the 80/20 r/w host I/O workload translates to a 50/50 r/w array workload due to extra RAID5 writes: each small host write generates four backend operations (read data, read parity, write data, write parity), so 16,000 host writes become 64,000 backend writes. This estimate assumes 260 IOPS per FC drive and 3,000 IOPS per SSD drive.

Note This represents a minimum RAID5 I/O rate expectation for this configuration. It does not factor-in additional I/O performance derived from controller node cache hits or from the use of RAID1 for SQL Logs, making the actual SQL host I/O performance higher than estimated here (see the System performance section). Your HP representative can assist in determining I/O sizing estimates for your needs using HP sizing tools.

Controller node sizing
The array configuration needed to support 128k backend IOPS slightly exceeds the 90k backend IOPS safe limit for a single node-pair (two-node) configuration, so two node-pairs (four nodes) are used. In addition, two node-pairs provide better high availability and performance during a node failure, as the system cache re-mirroring function requires a minimum of four nodes.

Additional drives can be added to this configuration increasing the performance until a maximum of 180k backend IOPS (or 112.5k host IOPS, based on an 80/20 RAID5 workload) is reached. This represents close to 30% backend IOPS headroom within this particular 10400 configuration.

Disk shelf (cage) sizing
Each disk shelf can house a total of 40 drives installed in 10 magazines of 4 drives each. Two requirements direct the configuration to a minimum of 8 shelves. First, the 10400 supports a maximum of 1 magazine (4 drives) of SSD media per cage, so 8 cages are needed to house 32 SSD drives. Second, cage-level high availability, which allows an entire cage to fail without data loss, also requires a minimum of 8 cages. Finally, this configuration leaves room for future disk expansion.

Physical disk placement

The array is sized with 8 disk cages (shelves), each with 6 populated drive magazines holding 4 drives, for a total of 24 drives per shelf. Table 5 shows the breakdown of disks used in this configuration.


Table 5. Reference configuration drive configuration

Drive Type Speed Capacity Quantity per cage Total

FC LFF 15K rpm 600GB 20 160

SSD SFF N/A 200GB 4 (Maximum) 32

System physical view

The HP 3PAR StoreServ 10400 storage used in our SQL 2012 reference configuration consists of two controller node-pairs (a total of four-controller nodes), a main cabinet and a second expansion cabinet. Each node-pair is evenly connected to 4 drive shelves.

Figure 6. HP 3PAR StoreServ reference configuration


HP 3PAR StoreServ software configuration

The HP 3PAR StoreServ 10400 Storage used in our SQL 2012 reference configuration was running HP 3PAR Operating System version 3.1.1 MU2.

The following software features were tested/used as part of this project:

• Adaptive Optimization

• Domains

• Dynamic Optimization

• System Reporter

• Thin Conversion

• Thin Persistence

• Thin Copy Reclamation

• Thin Provisioning

• VSS provider for Microsoft Windows

SQL Server 2012 deployment on reference configuration

The SQL Server 2012 test environment for the reference configuration is set up according to the following host configuration, storage provisioning, and database layout.

Host environment

The HP BladeSystem c7000 enclosure has two HP ProLiant BL680c blades and four BL460c blades. The two BL680c blades are configured as physical SQL servers to host the premium service level databases (higher performance/mission-critical).

The four BL460c blades are configured as Hyper-V virtualized database servers to host the normal service level databases (business-critical).

HP blade servers deployed with a mix of physical and virtual operating systems are used to test the system in a mixed-host, mixed-service-level environment similar to what is typically found in a consolidation or pre-consolidation production system.

The use of a blade-based configuration in this analysis does not mean the HP 3PAR StoreServ Storage is limited to blade host configurations. For example, similar benefits can be obtained when connecting HP 3PAR StoreServ 10400 Storage to high performance standalone physical servers such as HP ProLiant DL980 enterprise servers.


Figure 7. SQL Server database servers

Host instance allocation

The c7000 blade enclosure has two BL680c blades and four BL460c blades.

The two BL680c blades are configured as physical SQL Server hosts to drive the premium service level tier of the array simulating dedicated high performance/mission-critical use.

The four BL460c blades are configured as Hyper-V SQL Server hosts to drive the normal service-level tier simulating the business-critical consolidated database use typically deployed in a virtualized environment.


Array environment

For this reference configuration the storage array is set up to service two distinct service levels: a high performance (premium) level based on adaptively optimized VVs, and a regular performance (normal) level based on a fixed Fibre Channel service level. This approach demonstrates the advantages of Adaptive Optimization alongside the ability to use the system without adaptation at the same time.

Common Provisioning Groups (CPGs) and Adaptive Optimization policies are the constructs used to set up the VVs needed to implement each of the two service levels. Additional service levels can be configured by defining separate sets of CPGs and AO policies, and lower levels of service can be set up by adding Nearline drives. For the purposes of this reference configuration, two levels provide a test environment representative of the project objectives.

Storage provisioning

System provisioning is accomplished through the user-initiated steps illustrated in Figure 8. The following section describes these steps and identifies the resulting CPGs and VVs.

Premium service-level tier

The premium service-level tier is implemented using a RAID5 FC CPG called Tierable_FC_Data created with an AO policy enabled. The SSD space needed for the data is defined in the AO Policy.

From this CPG, two (or more) virtual volumes are created and exported to each of the two physical BL680c servers as high performance OLTP data LUNs.

A second RAID1 CPG called FC_Logs is defined for SQL log files. From this CPG, two VVs are created and exported to each BL680c as SQL log LUNs.

A separate fully provisioned RAID5 SSD CPG called SSDtempDB is defined for fixed SSD performance needs. From this CPG two VVs are created and exported to each BL680c for tempDB LUNs.

Normal service-level tier
The normal service-level tier is simpler in terms of CPG definitions, although more VVs are created and exported to each Hyper-V host. The disks are then defined as pass-through devices and made available online to each guest OS.

A single RAID5 CPG called FC_Data is defined for data. From this CPG, multiple data VVs are created and exported to each BL460c Hyper-V host server. Note that this CPG does NOT have an AO policy, resulting in a fixed level of service at RAID5 Fibre Channel performance.

From the FC_Logs CPG defined above, multiple VVs are created and exported to each BL460c Hyper-V host for virtual SQL instance log LUNs.

From the SSDtempDB CPG defined above, multiple tempDB VVs are created and exported to each BL460c Hyper-V host for virtual SQL instance tempDB LUNs.

Figure 8. Sample view of Common Provisioning Groups (CPG) using the HP 3PAR Management Console

Note The definition of a CPG does not require a capacity, as this is allocated during virtual volume creation. The CPG listing in Figure 8 shows total allocation capacity for each CPG as the total aggregate allocation of all Virtual Volumes created from each CPG.


The following diagram displays an overlay of physical drives to the Common Provisioning Groups and VVs along with their intended host presentations.

Figure 9. Physical and Logical overlay of drives, hosts and provisioning


Instance and database deployment

After provisioning the array for SQL Server databases, several databases are created and deployed according to the deployment shown in Figure 10.

The allocation of four large OLTP databases for high performance and more than 10 smaller OLTP and BI databases for normal performance was chosen to stress the system in the way a typical hybrid environment would.

The high performance databases are hosted on the high performance BL680c servers and have their datafiles deployed on the premium service-level tier defined in the array.

The normal performance databases are hosted on the virtualized BL460c servers and have their datafiles deployed on the normal service-level tier in the array.

Figure 10. Database host deployment

Reference configuration testing

The following engineering tests verify that the reference configuration meets or exceeds the 80,000 host IOPS performance target it was designed to service. In addition, testing validates that the configuration provides a dynamic balance between performance and capacity. Finally, the reference configuration is tested to verify that mixed workloads can be serviced concurrently.

During this evaluation, the premium service level is first tuned by enabling Adaptive Optimization. Once the test workload data had been optimally migrated to the SSD tier, the array was subjected to normal service level tier workload stressors, such as database backups and maintenance jobs, in order to evaluate service level interference, if any, at the premium service level tier.

Adaptive Optimization

The Adaptive Optimization test is used to set up and verify that the optimization algorithm can adapt data under OLTP workloads.

Four large OLTP databases were deployed on two physical database servers (HP ProLiant BL680c) and stored on fully provisioned VVs inside an FC tier CPG. This CPG had an AO policy defined to sample performance data every hour and perform data movement every three hours.

Note that the SSD tier 0 limit drives how much data is migrated to that tier; data is migrated right up to the limit. In this case, the four OLTP databases residing in the FC source CPG have an aggregate data file footprint of approximately 4TB, and the SSD target CPG was set to 1TB. This effectively improved response times for half of the data accessed under a uniform data access pattern in the data files.

The range of adaptation goes from 0 (all data in the FC tier) to 100% (all data migrated to the SSD tier), depending on the service level requirement and the available SSD space in the array. In this particular test case we used a 25% data file to SSD tier target size ratio.

The AO data migration is captured in the charts that follow. The IOPS in the FC tier drop as data pages migrate to SSD and a proportional amount of IOPS begin to be serviced by the SSD tier.

Although the test OLTP workloads are paced to a fixed I/O rate, the aggregate system-wide IOPS observed after data movement increased, due to queued I/O requests being serviced at the lower service times achieved by the adaptation.

To test AO functionality, an AO policy was defined and enabled. Once the initial 3-hour sampling period lapsed, the system reevaluated data I/O every hour to determine whether further data migration was needed. This initial adaptation was set up to place less than 30% of the actual data onto the SSD tier, excluding log files, which are not part of the AO policy.

Figure 11. Adaptive Optimization policy settings in System Reporter

This allocation will be used to evaluate the quality of service isolation between tiers as the FC tier is loaded with surge workloads from normal business-critical workload databases.

Figure 12. Data File backend IOPS before, during and after partial data file AO data migration to SSD

After the AO data migration, the System Reporter “VV Space” report shows exactly how much data migrated to the SSD tier0. This came out to approximately 10% of the fully allocated space in the FC tier1.


Figure 13. Initial AO data migration

Each bar represents a virtual volume (VV) exported as a LUN to a premium tier host: host BL05 had a single VV, while BL06, the second premium tier host, had 8 VVs. The bar for VV BL06PHData did not contain any data, so no migration took place for it (unused VV).

When we look at the actual SQL data file space used and the space used in SSD tier0, the ratio is 29%.

Table 6. Ratio of space used in SSD tier0 to actual data file space

Space used GB

Total SQL data file space used 3,336

Total space used in SSD tier0 936

Sufficient data was migrated to tier0 to nearly fill the AO target capacity of 1TB. Figure 14 illustrates the overall performance gain: with only 29% of the actual data on SSD media, backend IOPS capability more than tripled compared to the FC tier1 alone. This is due to I/O density, as the regions of data with the highest I/O load are migrated to SSD.

Figure 14. Host I/O performance after 1TB data movement

From a SQL perspective, the migrated data may include highly active clustered index or table data, without the need for a DBA to make manual or static data placement decisions. This is a key reduction in management overhead compared to storage arrays that mix FC and SSD disks without automatic data migration capabilities.


System performance

The initial AO optimization test was set up with only a 28% SSD-to-FC disk usage ratio; it was set low to aid the quality of service tests. In this test we double the SSD AO allocation from 1TB to 2TB to observe the corresponding AO adjustment and the increased workload capability from more I/O being serviced by the SSD tier.

Figure 15. System Reporter Adaptive Optimization policy configuration page with 2TB SSD limit

While this is not a benchmark, the workload placed a fairly high load on the array without saturating the controllers or the SSD/FC disk tiers. Data migrated as expected to fill the higher AO SSD tier allocation to over 20% on an overprovisioned capacity basis.

Figure 16. Increased allocation of SSD space to AO

When we look at the actual SQL data file space used and the space used in SSD tier0, the ratio increases to 56%.

Table 7. Ratio of space used in SSD tier0 to SQL data space used

Space used GB

Total SQL data file space used 3,336

Total space used in SSD tier0 1,872


The additional data migrated to the AO SSD tier results in more I/O being serviced by the SSD drives, along with a corresponding reduction in overall service time. The shift in I/O is observed in the Region IO Density report of System Reporter. In these charts red represents SSD I/O activity and green represents FC I/O activity.

Figure 17. Initial and Increased AO data migration I/O rate density charts

As we increase the workload rate over the same data, the IO Rate Density remains stable on each tier after the data migration is complete, resulting in a proportional scaling of I/O in each tier. This helps sustain SQL transactional throughput stability under workload surges, as the surge data is serviced at the same level of service (it resides in the same regions) and the system controller nodes are sized to accommodate I/O surges.

Figure 18. Increased AO data capacity under heavier user workload

Controller node performance

The controller node utilization scaled well after AO adaptation and a doubling of the workload rate. System CPU time effectively doubled from 20% to about 40% utilization after the workload was doubled, showing linear scaling under a SQL Server OLTP user surge load such as those experienced during peak hours.


Figure 19. CPU Utilization under heavy workload

Overall I/O performance

From a component perspective, the I/O is limited by the drive configuration. Assuming 260 drive IOPS per FC drive and 3000 drive IOPS per SSD drive, the higher end of a configuration using 160 FC and 32 SSD drives is shown in Table 8.

Table 8. Higher end of configuration with 160 FC and 32 SSD

Tier Backend IOPS

FC tier (160 X 260) 41,600

SSD tier (32 X 3000) 96,000

Total tier backend IOPS 137,600

The yield at a 50/50 backend read/write ratio is shown in Table 9.

Table 9. Read/write ratio

Operation Backend IOPS

Read 68,800

Write 68,800

Based on this backend ratio and RAID 5 provisioning, the frontend IOPS disk limit is estimated as: RAID 5 frontend IOPS = 68,800 + (68,800 / 4) = 86,000 frontend IOPS.
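This estimate reflects the classic RAID 5 small-write penalty: each frontend write generates four backend I/Os (read old data, read old parity, write new data, write new parity), while reads pass through one for one:

\[ \mathrm{Frontend\ IOPS} = R_{\mathrm{backend}} + \frac{W_{\mathrm{backend}}}{4} = 68{,}800 + \frac{68{,}800}{4} = 86{,}000 \]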

Figure 20 shows backend IOPS as measured, with a backend read/write ratio closer to 60/40 and an average of 150,000 backend IOPS. Figure 21 shows the corresponding frontend host IOPS. The actual host IOPS of the entire solution is higher due to the use of cache and the conservative numbers used for SSD drive IOPS capability.

In the 3PAR StoreServ cache design, more cache is allocated per GB of SSD than per GB of FC drives. When combined with the Adaptive Cache algorithm, the higher cache allocation for SSD in the system results in larger cache adjustment swings, useful for varying read/write ratios.

Cache adaptability is important for SQL Server workloads that experience daily variations of read/write ratios, such as surges when a report is run (read), or a large sequential data load (write). During database sequential reads the read cache allocation increases and the array internal sequential pre-fetch activates to quickly get ahead of SQL Server physical reads, dynamically lowering response times. During heavy writes, the array adapts and allocates more cache for writes, once again reducing the service times. From a SQL Server perspective this means faster reports and shorter table loads or backups without needing an administrator to change cache settings before nightly backup jobs.


Figure 20. 8K Backend IOPS measurement

Figure 21. 8K Frontend IOPS measurement

Controller node resiliency

This test evaluates the performance and data resiliency when a node goes offline for a firmware upgrade or when it fails. During a controlled node shutdown, data cached in that node is re-mirrored to the other three nodes in order to prevent a write-through condition. The re-mirroring activity increases internal disk port I/O.

Once the data is re-mirrored, the node completes its shutdown and the other nodes pick up the workload until the failed node comes back online.


In this case we perform a planned node shutdown while the system is under an active SQL Server workload. We observe in Figure 22 that the host port service times remain constant and the SQL transactional throughput remains stable. This effectively eliminates the need for complex application maintenance windows during firmware upgrades and provides predictable SQL transactional performance in the event of a node failure.

Figure 22. Host port I/O measurement

During the re-mirroring we also anticipate a slight increase in controller node CPU utilization, as the remaining nodes perform additional mirroring I/O. Figure 23 shows an approximate increase from 23% to 30% utilization.

Figure 23. CPU utilization of active nodes (0,2,3) when node 1 is shut down

Service levels

This test demonstrates the ability of the system to sustain service levels in one tier while other tiers experience large block I/O surge loads. The baseline load of the system consists of multiple OLTP databases. Four of those are large OLTP databases deployed in the premium service level tier; these databases drive partial I/O against SSD media and partial I/O against FC media. Additional smaller OLTP databases run on virtual machines and are deployed in the normal service level tier (FC disks only).

The surge workload is performed against one of the smaller OLTP databases deployed in the normal service level tier.

The following maintenance and stressor workloads are run to observe the impact, if any, on the higher service level workloads (a T-SQL sketch of these operations follows the list):

• SQL database backup (uncompressed, 4MB max transfer size)

• SQL database restore (uncompressed, 4MB max transfer size)

• Statistics update
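As an illustration, the surge operations map to ordinary T-SQL. The following is a hedged sketch: the database name and backup path are hypothetical, and 4194304 bytes corresponds to the 4MB maximum transfer size used in the test; sp_updatestats is one way to drive the statistics update.

-- Uncompressed backup with a 4MB maximum transfer size (names are hypothetical)
BACKUP DATABASE SurgeDB
  TO DISK = N'R:\Backup\SurgeDB.bak'
  WITH NO_COMPRESSION, MAXTRANSFERSIZE = 4194304;

-- Uncompressed restore with the same transfer size
RESTORE DATABASE SurgeDB
  FROM DISK = N'R:\Backup\SurgeDB.bak'
  WITH REPLACE, MAXTRANSFERSIZE = 4194304;

-- Statistics update across the database
USE SurgeDB;
EXEC sp_updatestats;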


The results pictured in Figure 24 show no measurable reduction in SQL transactional throughput on hosts using the premium service level storage. In this chart the blue line represents SQL transactions per second, while green represents host reads/s and red represents host writes/s (host IOPS). The surge workload execution timeline is identified at the bottom of the chart.

Figure 24. Quality of service – SQL transactions per second and host IOPS under load

Figures 25 and 26 show the read and write I/O sizes as measured in the StoreServ system (backend I/O) during this test on VVs under surge load in the normal service level. As expected, we see a nominal 8K I/O size during the steady state concurrent OLTP workloads, along with large sequential I/O (400K) during backup and restore, demonstrating how well the array handles mixed I/O sizes without degrading smaller I/O operations.

Figure 25. Read I/O size of business-critical (normal tier1) LUN

Figure 26. Write I/O size of business-critical (normal tier1) LUN

Figure 27 shows the sustained service times on the premium tier host LUN. The peaks shown are normal I/O surges characteristic of SQL Server instance checkpoint writes. On average, the service times did not surge beyond the baseline checkpoint swings during the large read/write backup/restore operations executed against the secondary fixed tier.

Figure 27. Unaffected I/O service time of mission-critical (premium tier0) LUN


SQL benefits

The system is designed to minimize the concurrency impact of large I/O over small I/O. This results in fair isolation between service level tiers, helping sustain consistent performance despite surges. For SQL Server deployments, this translates into uniform SQL transaction and application response times. It also reduces or eliminates the need for database administrators to develop complex SQL Server maintenance job concurrency schedules to avoid degrading the end user experience.

Thin provisioning

Thin provisioning allows VVs to be created and exported to a host at a virtual size that is not fully allocated on the array. The host sees no difference from a fully provisioned volume; on the array, the virtual volume consumes physical space only as data is written to the LUN.

A thinly provisioned virtual volume was created and exported to a SQL Server host as a LUN. In this test we evaluate how the volume grows when a database is created under both thin and fully provisioned LUNs.

Zero detect

Thin provisioning relates to the initial storage allocation when a volume is created. The zero detect feature of the Gen4 ASICs allows a volume to remain thin as data is stored in it, by detecting zero-filled patterns and not storing them.

Database creation

This test evaluates what happens when an empty 10GB SQL database is created. Database creation consists of the initialization of 10GB of data and log files on the array. Zero detect is very effective in extending thin provisioning to the database realm, keeping allocated database storage thin due to the zero-fill nature of SQL Server data file initialization.

In this case, Figure 28 demonstrates that the 10GB database uses no measurable space on the array.

Figure 28. Initial database creation on a thinly provisioned virtual volume
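For reference, the creation step amounts to a single T-SQL statement. This is a hedged sketch: the database name, file paths, and the 8GB/2GB split of the 10GB are illustrative assumptions. SQL Server zero-initializes the new files (always for the log, and for data files unless instant file initialization is enabled), which is exactly the pattern zero detect discards.

-- Hedged sketch: names, paths, and file split are hypothetical
CREATE DATABASE TestDB
  ON PRIMARY (NAME = TestDB_data, FILENAME = N'S:\Data\TestDB.mdf', SIZE = 8GB)
  LOG ON (NAME = TestDB_log, FILENAME = N'S:\Log\TestDB.ldf', SIZE = 2GB);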

Database migration

This test evaluates what happens when a populated database is copied from a non-zero detect volume to a zero detect volume. Zero detect is very effective at keeping a SQL Server data volume thin, as zero-padded fill space is not allocated on disk.

In this case, Figure 29 illustrates that a 150GB database copied to a zero detect volume yielded a 61% reduction in space utilization.

Figure 29. Database migration onto a thinly provisioned virtual volume


SQL benefits

Thin provisioning and zero detect are valuable features that can be used together to extend thin behavior to the SQL Server database provisioning layer, keeping SQL datafiles trim and making more efficient use of storage media.

Database compression

This test measures the reduction in size obtained with ROW level compression when applied to thinly provisioned VVs (TPVVs) with and without zero detect enabled. Two identical databases are restored, one to the LUN associated with each TPVV.
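ROW compression is applied per table or index rather than per database; a hedged T-SQL sketch follows, with the table name as a hypothetical placeholder.

-- Rebuild the heap or clustered structure with ROW compression
ALTER TABLE dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = ROW);
-- Nonclustered indexes are compressed the same way
ALTER INDEX ALL ON dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = ROW);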

The initial VVs have minimal space utilization, as shown in Figure 30.

Figure 30. Empty and thinly provisioned virtual volume

Test databases were restored from the same backup file. Figure 31 illustrates the reduced space utilization obtained by enabling zero detect on a thinly provisioned virtual volume.

Figure 31. Space utilization after databases are restored

Figure 32 shows there is no space reduction on the LUN with zero detect disabled after the database is ROW compressed, while the LUN with zero detect enabled increases in size. This is due to the additional space needed to perform the ROW compression.

Figure 32. Space utilization after database row compression is implemented


Table 10 shows that the database restored and compressed on the virtual volume with zero detect disabled shrank in internal used size from 121GB to 112GB, but none of it was reclaimed as available space in the CPG.

Table 10. Datafile internal space usage

Total space usage 112,773.00 MB

Data files space usage 84,933.00 MB

Transaction log space usage 27,840.00 MB

It is interesting to note that the space initially gained from using zero detect far exceeded the space gained from using ROW compression alone. The resulting zero-detect database size was 52.6GB (49GiB), while the space used by the ROW compressed database on a non-zero detect LUN measured 112GB.

SQL Server benefits

SQL ROW compression and zero detect are complementary features from SQL Server’s perspective that can be implemented together to further improve disk utilization. ROW compression reduces used datafile internal space by storing more rows per SQL data page. Zero detect, on the other hand, eliminates stranded capacity (caused by database overprovisioning) in the unused portions of each SQL datafile.

Thin reclamation

Table truncation

In this evaluation, identical databases deployed on a normal TPVV and a zero detect TPVV had large tables truncated, to evaluate whether SQL Server internal disk space was reclaimed by the array.

Note

The test databases are overprovisioned in that the SQL datafiles contain unused, zero-filled space.
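The truncation step itself is a single statement (table name hypothetical); truncation deallocates pages inside the datafile but neither zeroes them nor shrinks the file, so the array sees nothing to reclaim.

TRUNCATE TABLE dbo.LargeHistory;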

Figure 33 shows no space reclamation was observed in either database after table truncation. This is expected, since SQL Server does not overwrite truncated data with zeros.

Figure 33. Space utilization after tables are deleted from test databases

Database shrink operation

This test verifies the new SCSI UNMAP implementation in Windows Server 2012 working with HP 3PAR StoreServ Storage during SQL Server shrink/drop data file operations.

Prior to the UNMAP implementation, space reclamation was possible in Windows only by writing zeros to the file before operating system deletion.

In this test, the same databases used for table truncation were shrunk using the DBCC SHRINKFILE command.
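A hedged sketch of the shrink step follows; the logical file name and 1,024MB target are hypothetical. The space released to NTFS is what Windows Server 2012 hands back to the array via SCSI UNMAP.

USE TestDB;
-- Shrink the data file to a 1,024MB target
DBCC SHRINKFILE (N'TestDB_data', 1024);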


Figure 34 shows space reclaimed automatically by the array and made available for provisioning after a database shrink operation. The reclamation occurred regardless of the zero detect capability of each LUN.

Figure 34. Space utilization after database shrink operation

Both databases were identical in data size and had very little data in them but large log files. Once again, zero detect is more efficient at storing the uncompressed log portion of the database.

For space to be reclaimed after a large database table truncation, the database data files must be shrunk.

Database drop operation

The last test regarding thin space reclamation in Windows Server 2012 evaluates whether space is reclaimed when a database is completely deleted from the host LUN.

Figure 35 shows that both zero detect-enabled VVs and zero detect-disabled VVs reclaim all space after the test databases are deleted from each instance.

Figure 35. Space utilization after database deletion

After the delete, volumes with and without zero detect reclaimed space as expected, due to the implementation of the SCSI UNMAP command.
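The drop itself is ordinary T-SQL (database name hypothetical); deleting the database removes its files from NTFS, and Windows Server 2012 issues SCSI UNMAP for the freed extents.

USE master;
DROP DATABASE TestDB;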

SQL Server benefits

Zero detect provides space compression from a SQL Server database perspective during initial database creation, and the implementation of standard SCSI UNMAP in Windows Server 2012 keeps SQL databases thin after datafile deletion, without administrator intervention. Over time, typical database defragmentation jobs should still be used to compact data rows and release unused internal space that is no longer zero-filled (by using shrink, or preferably by rebuilding indexes in alternate datafiles, as sketched below).
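As a hedged illustration of the rebuild-into-alternate-storage approach, the following sketch recreates a clustered index on a different filegroup; the index, table, column, and filegroup names are all hypothetical placeholders.

-- Rebuild the clustered index onto an alternate filegroup, compacting rows
CREATE CLUSTERED INDEX CIX_OrderDetail
  ON dbo.OrderDetail (OrderID)
  WITH (DROP_EXISTING = ON)
  ON FG_Alternate;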

HP 3PAR StoreServ deployment settings and best practices

The following hardware and software configuration settings outline key deployment recommendations for optimum performance and availability of HP 3PAR StoreServ storage systems. Please refer to the HP 3PAR StoreServ Storage best practices guide (see link in the For more information section) for in-depth implementation details.

These recommended settings were used to configure the reference configuration tested. Consult with HP prior to implementing a proof-of-concept or production deployment as these best practices may change in new hardware/software releases.

Front-end port cabling

• Each system node should be connected to both fabrics (assumes a redundant fabric topology).

• Odd-numbered ports should be connected to fabric 1 and even-numbered ports connected to fabric 2.

FC zoning

• Single initiator to single target zoning is preferred (1-1 zoning).

• Zoning should be done using 3PAR World Wide Port Names (WWPNs).

• A host needs a minimum of two connections, one to each node of a node pair, e.g., 3PAR node ports 0:2:1 and 1:2:2.


Common Provisioning Groups (CPGs)

• Keep the number of CPGs defined in the system to a minimum.

• For data that requires multiple AO policies, use multiple CPGs. (A given CPG can only be associated with one AO policy when Adaptive Optimization is used.)

• SSD-based CPGs need to be defined using RAID5 redundancy with a RAID set size of 3+1.

• FC-based CPGs need to be defined using RAID5 redundancy unless heavy write utilization exceeds 50% and write performance requirements justify using RAID1.

• NL CPGs need to be defined using RAID6 redundancy, which is the default.

Virtual Volumes (VVs)

• VVs must be created from CPG structures, not directly from physical disks.

• Virtual Volumes need to have both the User CPG and Copy CPG options selected.

• Zero detect needs to be enabled on TPVVs that are periodically “zeroed out.” (Zero detect is enabled by default in HP 3PAR OS version 3.1.2 and later).

• Thinly provisioned VVs can have an allocation warning, but must not have an allocation limit, not even 100%.

Virtual LUNs

• Use volume sets when exporting multiple Virtual Volumes to a host or host set.

• Virtual Volumes need to be exported to host objects, not to ports, for all hosts.

Adaptive Optimization

When using thin provisioning volumes along with Adaptive Optimization, select a CPG using FC disks for the User CPG of the thin provisioning volumes. This means that when new data is written, it will be on a good performance tier by default.

Note

Ensure that the default tier (FC) has enough capacity and performance to accommodate the requirement of new applications until data is migrated to other tiers.

When new data is created (new VVs or new user space for a thin volume), it will be created in the FC tier, and Adaptive Optimization will not migrate regions of data to other tiers until the next time the Adaptive Optimization configuration is executed.

It is therefore important that the FC disks have enough performance and capacity to accommodate the performance or capacity requirements of new applications (or applications that are in the process of being migrated to HP 3PAR StoreServ) until the moment when the regions of data will be migrated to the other tiers.

If SSDs are used in Adaptive Optimization configurations, no thin provisioning volumes should be directly associated with SSD CPGs. The thin provisioning volumes should only be associated with FC CPGs. This will help ensure that SSD capacity is consumed by Adaptive Optimization and will allow this capacity to be safely used to 95 percent or even 100 percent. A different AO configuration and its associated CPGs should be created for every 100 TB of data or so. (Each configuration has a 125TB aggregate limit.)

Note

Schedule the different Adaptive Optimization configurations to run at the same time, preferably at night. This method is recommended because Adaptive Optimization executes each policy in a serial manner but calculates what needs to be moved at the same time.

It is preferable to not set any capacity limit on the Adaptive Optimization configuration level, or on the CPG (no allocation warning or limit). Enter a value of “999999” (999 TB) for each tier.

SQL deployment recommendations

The following SQL Server deployment considerations are derived from the testing performed and architecture information available at the time of this writing.


Adaptive Optimization

Due to the transient nature of data residing in SQL Server tempDB files and the sequential write nature of SQL database log files, use of Adaptive Optimization for tempDB and logs might not be of value. When performance requirements dictate faster service levels for tempDB, it can be provisioned on a fixed virtual volume defined with an SSD tier0 CPG, essentially pinning tempDB to SSD.
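Relocating the tempDB files to such a LUN is standard T-SQL. This hedged sketch assumes the default logical file names (tempdev, templog) and a hypothetical T: drive; the move takes effect at the next instance restart.

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\templog.ldf');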

For logs and other sequential write-intensive applications, consider implementation on a fixed virtual volume defined with an FC CPG using RAID1 redundancy, or on a fixed virtual volume defined within an SSD CPG.

Zero detect

In order to maximize zero detect compression benefits over time, do not size the initial datafiles excessively above the size of the actual data. Over time, as data is inserted and deleted from a datafile, zero-filled pre-allocation content is no longer zero and the compression factor will decline. Typical SQL index maintenance rebuilds on alternate datafiles will help restore the original compression ratio.

Conclusion

The HP 3PAR StoreServ 10400 Storage is an excellent storage platform for SQL Server OLTP and mixed workloads. This white paper has expanded on many of the features of the HP 3PAR StoreServ Storage family and the reference configuration tests have shown how key features increase performance of SQL Server deployments while reducing costs.

HP 3PAR StoreServ Adaptive Optimization has proven easy to use and powerful in managing quality of service for demanding workloads. The ability to use different media and RAID types for different data file types in SQL Server is a very compelling approach that enables administrators to initially deploy and later adjust deployments without costly migration downtime.

SQL Server 2012 incorporates changes to its scalability model by adding Availability Groups, allowing additional servers to act as read-only secondary availability group nodes. An HP 3PAR StoreServ Storage system can host data for both primary and secondary servers, maximizing performance while minimizing costs. Adaptive Optimization and the adaptive cache work well with a read-only host and a read/write host, allowing administrators to reduce cost by letting Adaptive Optimization migrate infrequently read data to less expensive media, maximizing the utilization of more expensive SSD media for other data portions.

Additionally, we have seen many of the thin technologies enabled by the Gen4 ASICs work extremely well with SQL Server 2012 databases and data files. The zero fill and zero padding internal to the data files thin out seamlessly with zero detect and remain thin even during use. When SQL database compression and zero detect are used together, large cost savings related to capacity and media type are achieved.

From a high availability perspective, the array snapshot system is well integrated with SQL Server via the SQL Server remote snapshot management suite. The hardware foundation for snapshots is very effective when compared to other solutions, in that the copy-on-write algorithms fire only once even if multiple snapshots are defined for the same data, reducing the performance penalty for hardware snapshots.

Finally, from a resiliency perspective, the HP 3PAR StoreServ Storage offers excellent redundancy at every level of the platform, including cache mirroring and cage-level redundancy, among other features that minimize risk while providing fast RAID recovery on the Gen4 ASICs.

All these benefits aggregate to provide a flexible enterprise platform that can be easily leveraged to host multiple databases of mixed workload characteristics. Federation capabilities such as Peer Motion further extend these benefits by allowing businesses to migrate from multiple storage arrays as part of hardware refresh cycles or consolidation initiatives.

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept test environment that matches the planned production environment as closely as possible. In this way, appropriate configuration and solution deployment can be obtained. For help with a proof-of-concept, contact an HP Services representative or your HP partner (hp.com/large/contact/enterprise/index.html).


Appendix: bill of materials

The following bill of materials represents the parts used in this HP 3PAR StoreServ system:

Table A-1. Bill of materials

Quantity Part number Description

1 QR584A HP P10000 3PAR V400 NEMA base

2 QR586A HP P10000 3PAR V400 controller node

2 QR586A #0D1 Factory-integrated

8 QR591A HP P10000 3PAR 4-port FC adapter

8 QR591A #0D1 Factory-integrated

1 TE087B HP 3PAR System Reporter Media kit

1 TE087B #0D1 Factory-integrated

1 TE250B HP 3PAR Host Explorer SW MEDIA kit

1 TE250B #0D1 Factory-integrated

8 TE836A HP 3PAR InForm V400/4X200GB SSD MAG LTU

8 TE836A #0D1 Factory-integrated

40 TE839A HP 3PAR INFORM V400/4X600GB 15K MAG LTU

40 TE839A #0D1 Factory-integrated

8 TE846A HP 3PAR OPT STE V400/4X200GB SSD MAG LTU

8 TE846A #0D1 Factory-integrated

40 TE849A HP 3PAR OPT STE V400/4X600GB 15K MAG LTU

40 TE849A #0D1 Factory-integrated

1 TE921A HP 3PAR System Reporter V400 LTU

1 TE921A #0D1 Factory-integrated

8 QR592A HP P10000 3PAR 40-drive chassis

8 QR592A #0D1 Factory-integrated

8 QR620A HP P10000 3PAR 4X200GB SSD Magazine

8 QR620A #0D1 Factory-integrated

40 QR622A HP P10000 3PAR 4X600GB 15K FC Magazine

40 QR622A #0D1 Factory-integrated

12 QR631A HP 3PAR 6M 50/125 (LC-LC) fibre cable

1 QR596A HP P10000 3PAR 2M Expansion NEMA rack

4 QL266B HP 3PAR 10M 50/125 (LC-LC) fibre cable


For more information

HP 3PAR Architecture Overview white paper http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-3516ENW

HP 3PAR StoreServ Storage best practices guide http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-4524ENW

HP 3PAR Windows Server 2012 and Windows Server 2008 Implementation Guide http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03290621/c03290621.pdf

HP SAN design guide hp.com/go/sandesign

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

Sign up for updates

hp.com/go/getupdated

© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

4AA4-6016ENW, March 2013