
Technical Report

Best Practice Guide —Dell vRanger with Dell MD38X0f Storage Arrays, Brocade and QLogic 16Gb Fibre Channel, and LSI 12Gb SAS

Table of Contents

1. Executive summary ..................................................................1

1.1 Intended audience ................................................................................2

2. Infrastructure Components ....................................................3

2.1 Dell MD38X0f hardware overview .....................................................3

2.2 Dell R720 server hardware overview ................................................7

2.3 QLogic QLE2600 hardware overview ..............................................7

2.4 Brocade 6505 switch .......................................................................... 8

2.5 Dell 5524 Ethernet switch ................................................................... 9

2.6 Network Configuration ...................................................................... 9

3. Solution overview .................................................................. 10

3.1 MDSM Storage Manager ................................................................... 10

3.2 MD3 Series Disk Group ..................................................................... 10

3.3 MD3 Series Dynamic Disk Pools ...................................................... 11

3.4 MD3 Series Vdisk .................................................................................15

3.5 Provisioning MD3 Series storage using MDSM 11.10 GUI ......... 16

3.6 Provisioning MD3 Series Storage Using MDSM 11.10 CLI .........20

3.7 Windows Vdisk mount points .......................................................... 22

3.8 Provisioning Exchange 2013 ........................................................... 23

3.9 System requirements ........................................................................ 23

3.10 Installation ......................................................................................... 23

3.11 Database Availability Groups ......................................................... 23

3.12 High availability ................................................................................ 24

3.13 Exchange high availability ............................................................... 25

3.14 Sizing ................................................................................................... 25

3.15 Microsoft Exchange Sizer ................................................................ 25

3.16 Monitoring .......................................................................................... 27

3.17 QLogic QConvergeConsole ........................................................... 28

3.18 Brocade EZSwitchSetup and Switch Manager ...........................30

3.19 Best Practices .....................................................................................31

4. Summary.................................................................................. 32

5. References ............................................................................... 32

5.1 Appendix A ........................................................................................... 32

5.2 Appendix B .......................................................................................... 36

5.3 Version History ................................................................................... 39

1 Executive summary

Protecting virtual environments with traditional backup and replication software is no small feat. These agent-dependent solutions are slow, expensive, and difficult to manage, sapping virtual host CPU and I/O resources and often wasting large amounts of backup storage.

Dell vRanger combined with Dell PowerVault MD38X0f storage offers a better alternative. This simple, fast, and scalable data-protection solution deploys seamlessly into virtual environments.

vRanger and MD3 Series storage scale in a virtual environment by maximizing resources while simplifying management with central command and control.

The Dell MD38X0f FC storage system, Dell R720 server, Brocade 6505 FC switch, QLogic QLE2600 FC HBA and/or LSI 9300-8e SAS HBA bring together the following advantages:

Dell MD38X0f storage array

• (8) 16Gb Fibre Channel interface ports for high performance, high availability, and data security and

• (4) 12Gb SAS ports

• Excellent storage density

• High reliability

• Intuitive management

Dell R720 server

• Up to 24 DIMMs

• PCIe 3.0-enabled expansion slots

• Choice of NIC technologies

• Optional hot-plug, front-access PCIe SSDs

• Optional internal GPU accelerators



QLogic QLE2600 Fibre Channel HBA

• 16Gb per port maximum throughput

• Up to 1.2 million IOPS

• Decreased power and cooling costs

Brocade 6505 switch

• Provides exceptional price/performance value in a 24-port, 1U entry-level switch

• Enables fast, easy, and cost-effective scaling from 12 to 24 ports using Ports on Demand (PoD) capabilities

• Management and diagnostic tools simplify administration, increase uptime, and reduce costs

LSI 9300-8e SAS HBA

• Connect up to 1024 SAS and SATA devices with 8 external 12Gb/s SAS and SATA ports

• Fit into 1U/2U rack-mounted servers and workstations with low-profile form factor (full-height bracket included)

• Provide up to 12Gb/s SAS and up to 6Gb/s SATA performance across 8 lanes of PCIe® 3.0 connectivity

Together, these features provide an enterprise storage system well suited for backup and replication applications such as Dell vRanger, high-bandwidth streaming applications, and high-performance file system I/O, without sacrificing simplicity or efficiency. In addition, fully redundant I/O paths, advanced protection features, and extensive diagnostic capabilities deliver high levels of availability, integrity, and security.

The Dell MD38X0f, available with up to 1,152 TB of raw capacity, along with the Brocade 6505 FC switch and QLogic QLE2600 FC HBA, or LSI 9300-8e provide capacity and bullet-proof reliability to meet the requirements of the most demanding organizations. This technical report provides an overview of best practices for Dell vRanger with Dell MD38X0f storage, Brocade 6505 FC switch and the QLogic QLE2600 FC HBA, or LSI 9300-8e SAS HBA.

1.1 Intended audience

This technical report is intended for Dell employees, partners, field personnel, and customers who are responsible for deploying such a solution. It is assumed that the reader is familiar with the various components of the solution.

List of Tables

Table 1 MD38X0f technical specifications ....... 5

Table 2 Feature comparison: Disk group versus DDP ........................................................................... 6

Table 3 DDP features ............................................ 8

List of Figures

Figure 1 MD38X0f model options ...................... 4

Figure 2 Common controller connections. ..... 5

Figure 3 Simple cascading expansion cabling 6

Figure 4 Fault tolerant expansion cabling ........ 6

Figure 5 Dell PowerEdge R720 server ................7

Figure 6 QLogic QLE2600 Fibre Channel Adapter ..................................................................... 8

Figure 7 Brocade 6505 Fibre Channel switch. 8

Figure 8 Dell 5524 Ethernet switch ................... 9

Figure 9 Exchange 2013 architectural diagram 9

Figure 10 D-piece and D-stripes ........................12

Figure 11 DDP reconstruction .............................13

Figure 12 Traditional disk rebuild times: RAID 6 versus DDP ............................................................. 14

Figure 13 MDSM disk structure ...........................15

Figure 14 Example of Snapshot copies ............ 24

Figure 15 Performance data graphical view ....26

Figure 16 Performance data textual view ........26

Figure 17 Background Performance Monitoring 26

Figure 18 QConvergeConsole GUI .....................30

Figure 19 EZSwitchSetup configuration ................ 32


2 Infrastructure components

2.1 Dell MD38X0f hardware overview

MDSM® 11.10 (with controller firmware 8.10) includes the following new standard features:

• 16Gb FC storage controllers (8Gb and 4Gb speeds also supported)

• 4GB or 8GB of cache memory per controller

• Dual-stack cabling for expansion to provide consistent bandwidth performance

• Improved handling of misbehaving drives, including power cycling/power off nonresponsive drives to improve system availability and performance

• Support for up to 120 SSDs

Dell MD3 Series storage arrays

The next generation of MD3 Series arrays now provides the latest technology in connectivity. The cost-effective series of arrays supports architectures based on 12Gb Serial Attached SCSI (SAS), 10Gb Internet SCSI (iSCSI) and 16Gb Fibre Channel protocols. Enhanced software features can provide virtualization, data protection and an easy-to-use management interface that helps make the MD3 Series arrays an efficient solution for your data storage and performance needs.

Features

• Get up to 2x performance with the new 12Gb SAS and 16Gb Fibre Channel arrays.

• Double connectivity options with 12Gb SAS, 10Gb iSCSI or 16Gb Fibre Channel.

• Get multi-protocol connectivity from the host with 10Gb iSCSI and 12Gb SAS or 16Gb Fibre Channel and 12Gb SAS.

• Premium features now bundled to meet High Performance or Data Protection requirements.

• Choose from 2U enclosures with 12 or 24 drives or a 4U enclosure with up to 60 drives.

• Scale your solution with a mix of both 3.5-inch and 2.5-inch hard drives, or solid state disks (SSDs).

• Standard enterprise-level software features include Dynamic Disk Pools, thin provisioning, a Storage Replication Adapter (SRA) for VMware Site Recovery Manager, and vStorage APIs for Array Integration (VAAI) support.

• Recover data from failed drives with Dynamic Disk Pools technology that now supports up to 20 disk pools and 120 SSD drives.

MD3 Series 16Gb Fibre Channel storage array

Add high-throughput, highly efficient Fibre Channel storage to your data center with MD3 Series Fibre Channel array. The high-speed Fibre Channel architecture is ideal for high-performance and high-capacity storage requirements. Each 16Gb Fibre Channel system also comes with two 12Gb SAS host connections per controller that can be used at the same time as the FC ports. Options include 2U enclosures with 12 or 24 drives and the space-saving 4U dense enclosure with up to 60 drives.

MD3 Series 10Gb iSCSI SAN storage array

Simplify data consolidation and management over your 10Gb iSCSI network with the high-performance MD3 Series 10Gb iSCSI array. The high-speed 10GBASE-T architecture is designed to improve your high-performance data infrastructure. Each 10Gb iSCSI system also comes with two 12Gb SAS host connections per controller that can be used at the same time as the iSCSI ports. Options include 2U enclosures with 12 or 24 drives and the space-saving 4U dense enclosure with up to 60 drives.


MD3 Series SAS array

Consolidate your storage in a shared-capacity MD3 Series array designed for superb performance and affordable expansion. The Serial Attached SCSI (SAS) architecture is designed to directly connect up to four high-availability (HA)-configured servers or eight non-HA-configured servers to a single storage system. Options that include new 12Gb SAS arrays are available in 2U enclosures with 12 or 24 drives and the space-saving 4U MD3 dense enclosure with up to 60 drives.

Dell MD3 Series Fibre Channel storage arrays

The Dell MD3 Series Fibre Channel family includes three models — MD3800f, MD3820f, and MD3860f. All three models support dual controllers, power supplies, and fan units for redundancy. The modules are sized to support up to 12 disks, 24 disks, or 60 disks, respectively, as shown in Figure 1.

Figure 1 MD3 Series Fibre Channel model options:

• MD3800f: 2U / 12 drives; 3.5” drives; 16Gb FC + 12Gb SAS; MD1200 expansion module

• MD3820f: 2U / 24 drives; 2.5” drives; 16Gb FC + 12Gb SAS; MD1220 expansion module

• MD3860f: 4U / 60 drives; 2.5” or 3.5” drives; 16Gb FC + 12Gb SAS; MD3060e expansion module


MD38X0f technical specifications

The MD3800f and MD3820f are 2U chassis with support for up to 12 3.5” or 24 2.5” drives, respectively. The MD3860f is a 4U dense chassis with support for up to 60 3.5” or 2.5” drives in 5 horizontal drawers of 12 drives each. 10K RPM and 15K RPM 3.5” drives are not supported in the MD3860f, due to power and heat considerations. The MD3800f and MD3820f support 1 or 2 controllers per array (2 controllers are recommended for performance and availability), while the MD3860f supports only dual controllers. All three models support up to 120 drives with no additional licensing; with an additional license, the MD3800f and MD3820f support up to 192 drives, and the MD3860f supports up to 180 drives. All models have dual Ethernet ports for management-related activities, 4 FC host ports per controller for SAN connectivity, and 2 SAS expansion ports per controller for attaching expansion trays. The 2U models use integrated power supply/fan modules, while the 4U model uses separate power supply and fan modules. Dual-controller systems include two of each; because a fully configured system requires only one of each to operate, the second provides n+1 redundancy.

Table 1 lists the technical specifications of Dell MD38X0f platform models. For additional information about the technical specifications, including simplex configurations, refer to http://www.Dell.com/in/products/storage-systems/MD38X0f/MD38X0f-tech-specs.aspx.

Table 1 MD38X0f technical specifications

Spec                   MD3860f                          MD3820f                          MD3800f
Maximum raw capacity   360TB, expandable to 1,080TB     38.4TB, expandable to 1,118.4TB  72TB, expandable to 1,152TB
Maximum disk drives    180, using 60-drive              192, using any combination of    192, using any combination of
                       expansion modules                24- or 12-drive expansion        24- or 12-drive expansion
                                                        modules                          modules
Form factor            4U / 60 drives                   2U / 24 drives                   2U / 12 drives
Expansion tray         MD3060e                          MD1200                           MD1200

3.5 in. drive options: 300GB, 450GB and 600GB 15K RPM SAS; 500GB, 1TB, 2TB, 3TB, 4TB, and 6TB 7.2K RPM NL-SAS

2.5 in. drive options: 146GB, 300GB and 600GB 15K RPM SAS; 300GB, 600GB, 900GB and 1.2TB 10K RPM SAS; 500GB and 1TB 7.2K RPM NL-SAS; 200GB, 400GB, 800GB and 1.6TB SSDs

Cache memory: 8 or 16 GB per array

Onboard I/O: 4 ports for 12Gb SAS direct attach, plus 8 ports for 16Gb FC, per dual-controller array

Figure 2 Common connections across Dell MD3 Series FC controllers. Each 16Gb FC controller carries a quad-FC plus dual 12Gb SAS host interface card: 4x 16Gb FC host ports, 2x 12Gb SAS host ports, 2x 6Gb SAS expansion ports, a mini-USB RS-232 port, a USB port (non-functional), and 2x 1GbE management ports. Management displays and connections include a 7-segment display; controller power, controller fault, system ID, cache active, and battery fault LEDs; and the power supply/fan module with its power connector and power switch.
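The raw-capacity figures above follow directly from drive count multiplied by per-drive capacity. As a quick sanity check, assuming the largest listed 3.5 in. drive (6TB 7.2K RPM NL-SAS) throughout — `raw_capacity_tb` is an illustrative helper, not a Dell tool:

```python
# Worked check of the headline raw-capacity figures, assuming 6TB NL-SAS
# drives (the largest 3.5 in. option listed in Table 1). Drive counts come
# from the base-enclosure and licensed-expansion limits described above.

TB_PER_DRIVE = 6  # assumed: largest 3.5 in. NL-SAS option

def raw_capacity_tb(drive_count, tb_per_drive=TB_PER_DRIVE):
    """Raw (unformatted) capacity: number of drives times per-drive capacity."""
    return drive_count * tb_per_drive

# MD3800f: 12 drives in the base 2U enclosure, 192 with the expansion license
print(raw_capacity_tb(12))    # 72 TB base enclosure
print(raw_capacity_tb(192))   # 1152 TB maximum expansion

# MD3860f: 60 drives in the base 4U enclosure, 180 with the expansion license
print(raw_capacity_tb(60))    # 360 TB base enclosure
print(raw_capacity_tb(180))   # 1080 TB maximum expansion
```

The MD3820f figures mix drive sizes (its base 38.4TB corresponds to 24 of the 1.6TB SSDs), so they do not reduce to a single per-drive constant.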


Expansion and cabling architecture

Simple cascading pathing

A simple cascading (or daisy-chained) cabling scheme may be appropriate when enclosure loss protection is not required. The advantages of this simplified cabling are ease of initial setup and reduced cable complexity. However, if expansion enclosures are cabled using the simple cascading method, loss of power or communication to one expansion enclosure results in loss of access to all expansion enclosures downstream of the failure.

Simple cascading expansion cabling example

Connect the RAID controller modules to the EMMs in the expansion enclosures in a daisy-chain fashion*.

Figure 3 Simple cascading expansion cabling: each RAID controller module is cabled top-down through Expansion 1, Expansion 2, and Expansion 3 in sequence.

*If incorrect cabling is detected, the RAID controller modules raise an event, and the system cannot be configured until the enclosures are cabled correctly. Recovery Guru detects the problem and displays the correct cabling diagram.

Fault-tolerant pathing

Although more complex to set up initially, a fault-tolerant asymmetric cabling configuration is the optimal method for connecting expansion enclosures to your storage array. This cabling scheme guards against enclosure loss and guarantees accessibility to data on a virtual disk in a disk group in the event of loss of power to the expansion enclosure or failure of both EMM modules.

Fault-tolerant asymmetric expansion cabling example

The fault-tolerant asymmetric expansion cabling method uses the simple cascading method from one RAID controller module to one set of EMMs, and then uses a “bottom up” cabling method from the second RAID controller module to the bottom EMM. Cabling the RAID controller expansion paths from opposite ends of the expansion enclosure stack provides full path redundancy. So even if an entire expansion enclosure fails, all other expansion enclosures remain available.
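The availability difference between the two schemes can be sketched with a toy reachability model (illustrative only, not Dell tooling; it assumes a failure cuts each cabling path at the failed enclosure):

```python
# Toy model of the two expansion cabling schemes described above.
# Enclosures are numbered 1..n top to bottom; a controller's path reaches an
# enclosure only if every enclosure before it on that path is healthy.

def reachable(n, failed, scheme):
    """Return the set of enclosures still reachable when `failed` is down."""
    top_down = list(range(1, n + 1))   # path entering at the top enclosure
    bottom_up = top_down[::-1]         # path entering at the bottom enclosure
    # Simple cascading: both controllers cable top-down.
    # Fault-tolerant asymmetric: the second controller cables bottom-up.
    paths = [top_down, top_down] if scheme == "simple" else [top_down, bottom_up]
    ok = set()
    for path in paths:
        for enclosure in path:
            if enclosure == failed:
                break                  # the chain is cut at the failure
            ok.add(enclosure)
    return ok

# With 4 expansion enclosures and enclosure 2 failed:
print(reachable(4, 2, "simple"))          # only enclosure 1 remains reachable
print(reachable(4, 2, "fault_tolerant"))  # enclosures 1, 3, and 4 remain reachable
```

This mirrors the text above: with simple cascading, one failed enclosure strands everything downstream, while opposite-end cabling loses only the failed enclosure itself.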

Figure 4 Fault-tolerant expansion cabling examples for the MD3800f/MD3820f and the MD3860f.


Cabling architecture

The MD3 Series expansion enclosures use 6Gbps SAS connections. Special cables are required that have mini-SAS HD 3.0 connectors on one end and mini-SAS 2.0 connectors on the other. This cable acts as a converter between the SAS-3 architecture that is used by the MD3460/3860 and the SAS-2 architecture that is used by the expansion enclosures. The cables used to cable from expansion enclosure to expansion enclosure have the mini-SAS 2.0 connectors on both ends. All other MD3 Series controllers use the mini-SAS 2.0 connectors.

The MD3060e expansion enclosure has two “in” ports on each EMM. Enclosure-to-enclosure cabling should be consistent, using the same “in” port (either the left or the right). The MD1200 and MD1220 expansion enclosures have only one “in” port, so it is easier to identify which port to cable to on the EMM.

2.2 Dell R720 server

The Dell PowerEdge R720 servers offer the following features:

• Engineered with the right combination of features and performance scalability to handle tough workloads for both large and small data center environments

• Intended for demanding workloads, including private cloud, VDI, data warehouses, e-commerce, and HPC

• Comes with out-of-band management controller for immediate integration into existing management schemes

Dramatically boost application performance with the latest Intel® Xeon® processor E5-2600 v2 product family (recommended in this reference architecture) and up to 24 dual in-line memory modules (DIMMs). Built with 22-nanometer process technology and up to 12 cores per processor, it enables super-fast processing for compute-intensive tasks. Enhance your data center performance with the balanced, scalable I/O capabilities of the PowerEdge R720, including integrated PCI Express (PCIe) 3.0-capable expansion slots. Tailor your network throughput to match your application needs with features that allow you to take full advantage of your additional I/O performance.

2.3 QLogic QLE2600 Fibre Channel adapter hardware overview

The 2600 Series 16Gb Gen 5 Fibre Channel Adapters boast industry-leading native Fibre Channel performance—achieving dual-port, line-rate, 16Gb Fibre Channel throughput—at extremely low CPU usage with full hardware offloads. Gen 5 Fibre Channel resolves data center complexities by enabling a storage network infrastructure that supports powerful virtualization features, application-aware services, and simplified management. This achievement provides a next-generation storage networking infrastructure capable of supporting the most demanding virtualized and cloud-enabled environments while fully leveraging the capabilities of high-performance 16Gb Fibre Channel and solid-state disk (SSD) storage. These features help reduce cost and complexity while the unmatched 16Gb performance eliminates potential I/O bottlenecks in today’s powerful multiprocessor, multicore servers.

Figure 5 Dell PowerEdge

R720 server


Fibre Channel specifications

• Throughput: 16Gb full-duplex line rate per port (maximum)

• Logins: support for 2,048 concurrent logins and 2,048 active exchanges, expandable to 16K concurrent logins and 32K active exchanges

• Port virtualization: NPIV

• Compliance: SCSI-3 Fibre Channel Protocol (SCSI-FCP), Fibre Channel Tape (FC-TAPE) Profile, SCSI Fibre Channel Protocol-2 (FCP-2), Second Generation FC Generic Services (FC-GS-2), and Third Generation FC Generic Services (FC-GS-3)

• Other: Brocade ClearLink (D_Port), T10 PI high-performance offload

Physical specifications

• Ports: QLE2670, single 16Gb Gen 5 Fibre Channel; QLE2672, dual 16Gb Gen 5 Fibre Channel

• Form factor: low-profile PCIe card (6.6 inches x 2.54 inches); custom form factors also available

2.4 Brocade 6505 switch

The Brocade 6505 SAN switch has the following design features:

• Provides exceptional price/performance value, combining flexibility, simplicity, and enterprise-class functionality in a 24-port, 1U entry-level switch

• Enables fast, easy, and cost-effective scaling from 12 to 24 ports using Ports on Demand (PoD) capabilities

• Simplifies management through Brocade Fabric Vision technology, reducing operational costs and optimizing application performance

• Simplifies deployment and supports high-performance fabrics by using Brocade ClearLink Diagnostic Ports (D_Ports) to identify optic and cable issues

• Maximizes resiliency with non-disruptive software upgrades and an optional redundant power supply

• Simplifies deployment with the Brocade EZSwitchSetup wizard

• Simplifies server connectivity by deploying as a full-fabric switch or a Brocade Access Gateway

Figure 6 QLogic QLE2600

Fibre Channel Adapter

Figure 7 Brocade 6505 Fibre Channel

switch


2.5 LSI 9300-8e SAS adapter hardware overview

The LSI SAS 9300 family of 8- and 4-port 12Gb/s SAS host bus adapters provides increased connectivity and maximum performance for high-end servers and appliances, whether using internal storage or connecting to large-scale storage enclosures.

• Four/eight ports of 12Gb/s SAS + SATA ports

• Eight lanes of PCI Express 3.0

• Low-profile form factor

• Mini-SAS HD connectors

• SAS 3008 12Gb/s SAS+SATA controller

• Supports SSDs, HDDs, and tape drives

2.6 Dell 5524 Ethernet switch

The Dell 5500 series switches offer the following features:

• Secure fixed-port Gigabit Ethernet switching solutions to deliver full wire-speed switching performance.

• Total switching capacity of up to 176Gbps to support demanding network environments.

• The switches are designed for Energy Efficient Ethernet (802.3az), which reduces per-port power consumption by up to 50%.

• The 5500 series switches feature enhanced VLAN support such as Voice VLANs and Guest VLANs.

• Link Aggregation with support for up to 32 aggregated links per switch and up to 8 member ports per aggregated link.

• 24 10/100/1000BASE-T auto-sensing Gigabit Ethernet switching ports; 2 SFP+ ports for fiber media support; 2 HDMI stacking ports

• 5524P: up to 15.4 watts per port on all 24 ports

• Auto-negotiation for speed, duplex mode and flow control, Auto MDI/MDIX, Port mirroring and Broadcast storm control.

• User-definable settings for enabling or disabling Web, SSH, Telnet, and SSL management access

• Port-based MAC address alert and lock-down

• IEEE 802.1Q tagging and port-based, up to 4,000 user-configurable VLANs, Protocol-based VLANs and Dynamic VLANs with GVRP support

Figure 8 LSI 9300-8e SAS adapter

Figure 9 Dell 5524 Ethernet switch


2.7 Network architecture

The following diagram shows a typical architectural configuration of a vRanger deployment.

(Architecture diagram: the vRanger application protects VMs running on VMware or Hyper-V, hosted on Dell PowerEdge servers with Dell PowerVault MD34x0/MD38x0 storage.)

3 Solution overview

3.1 MDSM Storage Manager

MDSM Storage Manager is the management interface for all MD Series arrays. MDSM is based on Java®. The MDSM management software can be installed on Microsoft Windows® or Linux® operating systems. The management application should be installed on a management node that does not participate in production data delivery. Both server and client versions of Windows are supported for MDSM. Only the server versions of Windows are supported for I/O attach to the MD38X0f.

MD3 Series controllers support RAID levels 0, 1, 10, 5, and 6 or Dynamic Disk Pools (DDP). To provision storage, either a RAID group or a DDP must be created as the first step. Then a Virtual Disk (Vdisk) is created as the entity that will actually mount on the server and hold data. The subsequent sections describe disk groups and DDP. For Microsoft Exchange Server, the convenience of a DDP is recommended.

3.2 MD3 Series disk group

The disk group and DDPs are the top-level units of storage of an MD38X0f storage array. When a storage system is deployed, the first step in presenting the available disk capacity to various hosts is to create:

• Disk groups or DDP with sufficient capacity

• The number of disks required to meet performance requirements

• The desired level of RAID protection to meet specific business requirements

DDP will be discussed in detail in the next section.

Capacity planning is dependent on detailed customer input and discovery; however, protection and performance planning is a standardized implementation practice.

For example, the MD38X0f supports multiple RAID levels, and each RAID level provides standardized functionality with associated best practices. One of these best practices includes disk selection criteria to achieve disk-level, drawer-level, and shelf-level protection from common disk failure scenarios.

When selecting disks to create a disk group or disk pool, administrators follow a standard pattern that uses both controller channels and spreads the Vdisks across shelves and drawers in the configuration. This method does not provide data protection for RAID 0 disk groups; however, it establishes the disk selection pattern used for RAID levels that do offer protection against single- and double-disk fault scenarios.

Size Disk groups or disk pools to meet business requirements; however, note that large Disk groups using RAID levels 1, 5, 6, or 10 require a significant number of hours to complete the reconstruction process after a failed disk. The larger the Disk group, the longer the reconstruction time.

When using DDPs, the reconstruction time after a failed disk is significantly shorter. For more information, refer to section 3.3, “MD3 Series Dynamic Disk Pools.”

Figure 10 vRanger architectural diagram


To create a Disk group or disk pool, storage administrators should consider the following:

• The reconstruction time, especially for business-critical, high-availability applications

• The availability of hot-spare disks that meet the following requirements:

- The disk type must match the disks being protected.

- Full disk encryption (FDE)-enabled disks can serve as spares for non-FDE enabled disks; however, the reverse is not true.

- Disk capacity must exceed the used capacity of the protected disks.

- Hot spares are global; protection is extended to all assigned disks in the array regardless of Disk group assignments.

- There must be a sufficient quantity of spare disks to protect multiple Disk groups based on the business-critical nature of the groups.

For more information, refer to the Dell Support Documentation for MD38X0f Series. From within MDSM Storage Manager, refer to the online help documentation.

3.3 MD3 Series Dynamic Disk Pools

Dell MD3 Series Dynamic Disk Pools (DDP) is a data protection technology designed to deliver consistent storage system performance, data protection, and efficiency throughout the lifecycle of the system. DDP simplifies the setup process and reduces the ongoing maintenance requirements of data protection. With DDP, customers do not have to define RAID array sizes, hot spares, and drive maintenance schedules. DDP distributes data, parity information, and spare capacity across a pool of drives. Its intelligent algorithm defines which drives are used for segment placement, making sure data is fully protected.

DDP is able to utilize every drive in the pool for the intensive process of rebuilding a failed drive. This dynamic rebuild technology is the key to its exceptional performance under failure and returns the system to optimal conditions up to eight times more quickly than traditional RAID technology. With shorter rebuild times and patented prioritization reconstruction technology, DDP significantly reduces exposure to numerous cascading disk failures. Flexible disk pool sizing provides optimal utilization of any configuration for maximum performance, protection, and efficiency. DDP can easily be grown by adding up to 12 additional disk drives at one time.

In addition to superior data protection, Dynamic Disk Pools enable customers to structure their storage infrastructure in a way that can greatly reduce drive maintenance schedules. Designing a disk pool with additional drive capacity for growth at system installation leverages the technology’s automatic self-healing capability and can extend drive maintenance schedules by years, driving operational costs down.

Configuration flexibility enables DDP to address wide-ranging requirements. Drives can be configured into one large disk pool to maximize simplicity and protection or into numerous smaller pools to maximize sequential performance. Different drive types can be used to create storage tiers, such as performance pools and capacity pools, and disk pools can reside in the same system with traditional RAID groups.

The following are the four key benefits of DDP technology:

• Reduce performance degradation following a drive (or multiple-drive) failure

• Eliminate complex RAID management without sacrificing data protection

• Eliminate deployment and management of idle hot spare drives

• Expand or contract the disk pool without reconfiguring RAID


Feature description

DDPs are composed of several lower-level elements, the first of which is known as a D-piece. A D-piece consists of a contiguous 512MB section of a physical disk containing 4,096 128KB segments. Within a pool, 10 D-pieces are selected from specific drives by an intelligent optimization algorithm. Together, the 10 associated D-pieces form a D-stripe, which is 5GB (4GB of data and 1GB of parity) in size. The contents of a D-stripe are similar to a RAID 6 (8+2) layout in which eight of the underlying segments potentially contain user data, one segment contains parity (P) information calculated from the user data segments, and one segment contains the Q value as defined by RAID 6.

Vdisks are essentially created from an aggregation of multiple 4GB data D-stripes as required to satisfy the defined Vdisk size up to the maximum allowable Vdisk size within a DDP.

Figure 11 shows a simplified diagram of a single DDP containing 12 disk drives. Ten D-pieces are organized into a D-stripe located across 10 of the disks within the pool; this pattern continues for each 5GB D-stripe.

Note: Although the distribution of D-pieces and D-stripes is approximately equal across all disks in Figure 11, this is not always the case.

After a storage administrator has defined a DDP, which largely consists of defining the number of desired drives in the pool, the configurable space on each drive is divided up into 512MB extents, each of which is a placeholder for a potential D-piece. D-pieces and D-stripes are not created until a Vdisk has been configured.

After the DDP has been defined, a Vdisk can be created within the pool. The Vdisk consists of D-stripes located across all the drives within the pool up to the defined value for the Vdisk capacity such that the number of D-stripes equals the capacity divided by 4GB. For example, a 500GB Vdisk consists of 125 D-stripes. Allocation of D-stripes for a given Vdisk starts at the lowest available range of logical block addresses (LBAs) for a given D-piece on a given disk drive.
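The arithmetic above can be checked with a short sketch (plain Python for illustration, not an MDSM tool; the helper name is hypothetical):

```python
DPIECE_MB = 512       # contiguous section taken from one physical disk
SEGMENT_KB = 128      # fixed DDP segment size
DSTRIPE_DATA_GB = 4   # usable data per D-stripe (8 of 10 D-pieces)

def dstripes_for_vdisk(capacity_gb):
    """Number of 4GB data D-stripes needed to back a Vdisk."""
    return -(-capacity_gb // DSTRIPE_DATA_GB)  # ceiling division

# 4,096 segments per D-piece, as stated above
assert DPIECE_MB * 1024 // SEGMENT_KB == 4096
# the 500GB example above: 125 D-stripes
assert dstripes_for_vdisk(500) == 125
```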

Multiple Vdisks can be defined within a DDP, and multiple pools can be created within a supported storage system. Alternatively, the storage administrator can create traditional Disk groups alongside DDPs in any combination. For example, on an MD3820f with 24 drives of equal capacity, the following combinations are supported:

• 1x RAID 10 (4+4) and 1x 16-drive DDP

• 1x 24-drive DDP

• 2x 12-drive DDPs

• 1x RAID 5 (4+1), 1x RAID 10 (2+2), and 1x 15-drive DDP

Figure 11 D-piece and D-stripes


DDPs and the Vdisks within them allow several operations that are similar in nature to traditional Disk groups as well as some features that are unique to DDPs, as shown in Table 2.

Table 2 Feature support for traditional Disk groups versus DDPs

Feature                              Traditional Vdisks and Disk groups   DDP and DDP Vdisks
Dell Snapshot (legacy)               Yes                                  No
Dell Snapshot                        Yes                                  Yes
Virtual Disk Copy                    Yes                                  Yes
Synchronous mirroring (legacy)       Yes                                  Yes
Asynchronous mirroring               Yes                                  Yes
Thin provisioning                    No                                   Yes
Dynamic Vdisk expansion (DVE)        Yes                                  Yes
Dynamic capacity expansion (DCE)     Yes, maximum of 2 drives             Yes, maximum of 12 drives
Dynamic capacity reduction           No                                   Yes, maximum of 12 drives

Note: Thin-provisioned Vdisks can be created only within a DDP and are not available for selection with a traditional Disk group.

Similar to traditional Disk groups, DDPs can be expanded by the addition of disk drives to the pool through the DCE process in which up to 12 disks can be added concurrently to a defined disk pool. When a DCE operation is initiated, a small percentage of the existing D-pieces are effectively migrated to the new disks.

Data availability

Another major benefit of a DDP is that the pool contains integrated preservation capacity that provides rebuild locations for potential drive failures, so dedicated, stranded hot spares are not required. This simplifies management because individual hot spares no longer need to be planned or managed. It also greatly reduces rebuild times and improves the performance of the Vdisks during a rebuild.

When a drive in a DDP fails, the D-pieces from the failed drive are reconstructed to potentially every other drive in the pool by using the mechanism normally used by RAID 6. During this process, an algorithm internal to the controller framework verifies that no single drive contains two D-pieces from the same D-stripe. The individual D-pieces are reconstructed at the lowest available LBA range on the selected disk drive.

In Figure 12, disk drive 6 (D6) has failed, and the D-pieces that previously resided on that disk are recreated across several other drives in the pool simultaneously. Because multiple disks participate in the effort, the overall performance impact of the failure is lessened, and the length of time required to complete the operation is dramatically reduced.

Figure 12 DDP reconstruction
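The placement constraint described above (no single drive may hold two D-pieces from the same D-stripe, with rebuilt D-pieces landing at the lowest available LBA range) can be sketched as follows. This is an illustrative model only, not the controller's actual algorithm, and all names are hypothetical:

```python
import random

def rebuild_targets(stripe_drives, failed_drive, pool_drives):
    """For each D-stripe that lost a D-piece on failed_drive, pick a
    rebuild target: any surviving pool drive that does not already
    hold a D-piece of that stripe."""
    targets = {}
    for stripe_id, drives in stripe_drives.items():
        if failed_drive not in drives:
            continue  # this stripe is unaffected by the failure
        survivors = set(drives) - {failed_drive}
        candidates = [d for d in pool_drives
                      if d != failed_drive and d not in survivors]
        targets[stripe_id] = random.choice(candidates)
    return targets
```

With a 12-drive pool and a D-stripe spanning drives 1 through 10, a failure of drive 6 can only be rebuilt onto drive 11 or 12, since every other drive already holds a D-piece of that stripe.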


In the event of multiple disk failures within a DDP, priority reconstruction is given to any D-stripes that are missing two D-pieces, minimizing data availability risk. After those critically affected D-stripes are reconstructed, the remainder of the necessary data continues to be reconstructed.

From a controller resource allocation perspective, there are two user-modifiable reconstruction priorities within a DDP:

• Degraded reconstruction priority is assigned for instances in which only a single D-piece must be rebuilt for the affected D-stripes; the default for this is high.

• Critical reconstruction priority is assigned for instances in which a D-stripe has two missing D-pieces that must be rebuilt; the default for this is highest.

For very large disk pools with two simultaneous disk failures, only a relatively small number of D-stripes are likely to encounter the critical situation in which two D-pieces must be reconstructed. These critical D-pieces are identified and reconstructed initially at the highest priority, which returns the DDP to a degraded state very quickly so that further drive failures can be tolerated.

For example, assume that a DDP that consists of 192 disk drives has been created and has a dual disk failure. In this scenario, it is likely that the critical D-pieces would be reconstructed in less than one minute; after that minute, an additional disk failure could be tolerated. From a mathematical perspective, with the same 192-drive pool, only 5.2% of D-stripes would have a D-piece on one drive in the pool, and only 0.25% of the D-stripes would have two D-pieces on those two particular drives. Therefore, only 48GB of data would have to be reconstructed to exit the critical stage. A very large disk pool can continue to maintain multiple sequential failures without data loss until there is no additional preservation capacity to continue the rebuilds.
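The percentages above follow directly from the D-stripe width (10 D-pieces per stripe); a quick check in Python:

```python
n = 192                              # drives in the pool
p_one = 10 / n                       # a stripe has a D-piece on one given drive
p_both = (10 / n) * (9 / (n - 1))    # a stripe has D-pieces on both failed drives

print(f"{p_one:.1%}")   # 5.2%
print(f"{p_both:.2%}")  # 0.25%
```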

After the reconstruction, the failed drive or drives can be subsequently replaced, although this is not specifically required. Fundamentally, this replacement of failed disk drives is treated in much the same way as a DCE of the DDP. Failed drives can also be replaced prior to the DDP exiting from a critical or degraded state.

Aside from the reduced time it takes to move from a critical state to a degraded state, the general rebuild process for a DDP can be significantly faster than that for a traditional Disk group.

In Figure 13, some typical rebalancing improvements are shown for a DDP versus a RAID 6 configuration based on a 24-disk mixed workload for several disk sizes. The tests were conducted with 300GB, 900GB, 2TB, and 3TB 7200 RPM NL-SAS drives. The rebuild time for the 3TB drive using a RAID 6 configuration was more than four days, while the rebuild time for a similarly sized DDP was estimated at 96 minutes. As the number of disk drives in a pool is increased up to the system limit, there is a corresponding reduction in rebuild time; the time to rebuild a disk in a traditional Disk group, however, remains constant.

Figure 13 Traditional disk rebuild times: RAID 6 versus DDP. The chart plots rebuild hours against drive capacity (300GB, 900GB, 2TB, and 3TB), with RAID 6 rebuilds ranging from roughly half a day to more than four days while DDP rebalances data in minutes, a roughly 99% improvement in exposure. A companion chart illustrates that DDP maintains acceptable performance and business SLAs during a drive failure, whereas RAID does not.


3.4 MD3 Series Vdisk

As shown in Figure 14, a Vdisk is the logical storage entity created for a host to access disks on the storage array. A Vdisk is created from the capacity available in a disk group or DDP. Although a Vdisk might span more than one drive, it appears as one logical entity and is presented to the host as a physical disk drive.

Figure 14 MDSM disk structure: host LUNs map to volumes, which are provisioned from Dynamic Disk Pools or volume groups built on HDDs or SSDs.

• Ease of configuration and management

• Sustained performance during drive failure

• Maximum performance for small random workloads

• Shortest drive reconstruction time

• Ability to use thin-provisioning Vdisks

• SSD support

• Easy and quick expansion by adding from 1 to 12 drives at a time

• Greater protection of data from multiple drive failures that occur over time

• Simplified administration of spare capacity

• Flexible and efficient capacity utilization

DDP summary

As disk capacities continue to increase, the rebuild times for disk failures within Disk groups also increase, leaving storage systems potentially at risk for additional drive failures and prolonging the performance impact on applications during the rebuild process. DDPs offer an exciting new approach to traditional RAID sets by offering the following features:

• Improved rebuild times

• Limited critical exposure during dual drive failures

• Reduced performance penalty suffered during a rebuild

• Significantly simplified storage administration

DDPs can potentially span large numbers of disk drives versus traditional RAID 5 or RAID 6 Disk groups; therefore, in environments with mixed or nonconcurrent workflows, there can be a tremendous performance advantage because all pool resources are available to all hosts. Table 3 can be used as a reference to compare the relative strengths of the two technologies when deciding whether to use DDPs or traditional Disk groups.


DDP Vdisks are similar to physical RAID group Vdisks. However, DDP Vdisks offer the following unique benefits not available with physical RAID group Vdisks:

• Ability to thin provision DDP Vdisks

• Ability to withstand two disk failures followed by additional disk failures within a short period of time (up to 90 minutes) without the loss of data

• Ability to manage all data access from a single pool of disks

• Reduced time to replace or rebuild failed disks

• Performance advantage for small, random workloads

Virtual and physical Vdisks

For standard Vdisks that are not thin provisioned, the full capacity of the Vdisk is immediately reserved from the pool. When a Vdisk is thin provisioned, two Vdisk entities are created: a virtual Vdisk and a repository Vdisk.

Virtual Vdisks are presented to the host, and like standard Vdisks, virtual Vdisks represent the full capacity of the Vdisk to the host but do not reserve that capacity from the disk pool.

Repository Vdisks have previously been used for Snapshot and mirroring features, but they have also been adopted as the underlying physical capacity for thin Vdisks. When thin Vdisks are created using the recommended capacity settings, the repository Vdisk is automatically created with a default physical capacity of 4GB and mapped to the virtual or thin Vdisk. When the custom capacity option is selected, the physical size of the repository Vdisk can be increased in 4GB increments, and the virtual Vdisk must be manually mapped to the repository Vdisk.

Limits/limitations

DDP Vdisks have the following general limits/limitations:

• The segment size of a DDP Vdisk cannot be changed from 128KB.

• The maximum single Vdisk capacity is 64TB.

DDP thin Vdisks have the following additional limits/limitations:

• The minimum physical capacity associated with a thin Vdisk is 4GB.

• The maximum physical capacity is 64TB.

• The preread redundancy check for a thin Vdisk cannot be enabled.

• Thin Vdisks cannot be used as the target Vdisk in a virtual disk copy operation.

• Thin Vdisks cannot be used in a synchronous mirroring operation.

• Capacity must be added in increments of 4GB to avoid stranding odd increments of capacity.

Note: Thin-provisioned Vdisks are not recommended for use with Microsoft Exchange due to performance limitations.
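The 4GB minimum, 4GB increment, and 64TB ceiling listed above can be captured in a small helper (a hypothetical sketch, not an MDSM API):

```python
MIN_GB = 4
INCREMENT_GB = 4
MAX_GB = 64 * 1024   # 64TB maximum physical capacity

def repository_size_gb(requested_gb):
    """Round a requested thin-Vdisk physical capacity up to the next
    4GB increment, enforcing the limits listed above."""
    size = max(requested_gb, MIN_GB)
    size = -(-size // INCREMENT_GB) * INCREMENT_GB  # ceiling to 4GB
    if size > MAX_GB:
        raise ValueError("exceeds the 64TB maximum physical capacity")
    return size
```

For example, a 10GB request is rounded up to 12GB, avoiding the stranded odd increments mentioned above.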

3.5 Provisioning MD38X0f using MDSM 11.10 GUI

Turn Off Dynamic Disk Pools automatic configuration wizard

The Dynamic Disk Pools automatic configuration wizard can be used to provision a new storage system so that a single DDP uses all of the available capacity that meets the criteria for a disk pool. However, Dell recommends creating disk pools manually, both because some default Vdisks might need to be deleted and because the wizard does not expose all of the features available with DDP.


To turn off the DDP automatic configuration wizard, complete the following steps:

1. Log in to the Array Management Window (AMW) and click the Storage & Copy Services tab.

2. In the dialog box, select Do Not Display Again and click No to dismiss the automatic configuration wizard.

Prepare new storage system for provisioning

The following procedures use MDSM 11.10. The procedures assume that the MD3 storage system is new and has no logical objects, such as RAID groups, Dynamic Disk Pools, and associated Vdisks. If logical objects are already configured, the instructions below can still be used to create a new DDP and Vdisk alongside them. To remove existing logical objects, refer to the MDSM online help or the MDSM administration manual.

Create disk pool manually

To create a disk pool manually, complete the following steps:

1. From the AMW Storage & Copy Services tab, select Total Unconfigured Capacity.

2. From the Storage menu, select Disk Pool > Create.


3. Create the disk pool and configure its attributes:

a. Enter a descriptive name for the disk pool.

b. Select the appropriate filters for the use of drive security and data assurance (DA).

c. Select the size of the pool from the disk pool candidates.

d. Select View Notification Settings.

e. Select the desired critical warning notification threshold based on the specific environment requirements for maintaining unprovisioned capacity in the pool.

f. Select the desired early warning notification threshold based on the specific environment requirements for maintaining unprovisioned capacity in the pool.

Note: Setting both thresholds to 100% disables both capacity threshold warnings completely.

g. Click Create.

4. From the Storage & Copy Services tab, monitor the Disk Pools pane to confirm that the new disk pool was successfully created.


Change disk pool settings

To change disk pool settings, such as the disk rebuild priority, the number of reserve disks, or the disk pool capacity warning thresholds, complete the following steps:

1. From the AMW Storage & Copy Services tab, select a disk pool and, from the Storage menu, select Disk Pool > Change > Settings.

2. Review and change the disk pool settings as required:

a. Confirm that the warning thresholds are set based on how the capacity will be used and how the customer prefers to manage the storage system.

b. Review and change the disk reconstruction priority settings by dragging the sliders to the left to lower the reconstruction priority or to the right to increase it.

Note: Make sure that the number of reserved drives meets your data protection requirements for reserve disk capacity. The default settings are sufficient for most use cases; however, additional drives can be placed in reserve.

c. Confirm or set the number of drives that are reserved and dedicated for preservation capacity.


Create DDP standard Vdisk

Note: The use of thin-provisioned Vdisks is not recommended with Microsoft Exchange.

To create a standard Vdisk in a DDP, complete the following steps:

1. In the MDSM Array Management Window (AMW), click the Storage & Copy Services tab.

2. Expand the storage system tree and then expand the disk pool in which the Vdisk will be created.

3. Right-click Free Capacity and select Create Vdisk.

Note: The storage system is referred to as a storage array in the MDSM GUI.

4. Create the Vdisk:

a. Enter the Vdisk capacity and select the appropriate unit (MB, GB, or TB).

b. Enter a descriptive name for the Vdisk.

Note: Vdisk names must not exceed 30 characters and cannot contain spaces. Names can contain letters, numbers, underscores (_), hyphens (-), and pound signs (#).

c. From the Map to Host list, either select Map Later or select a predefined host group or host.

d. Select the desired quality of service attributes.

e. Click Finish.


5. From the Storage & Copy Services tab, confirm that the newly created Vdisk is displayed in the storage system tree and that it is associated with the intended disk pool. Select the newly created Vdisk and review its properties in the right pane to verify that the Vdisk has the attributes that you selected and confirm its status.

Note: Newly created Vdisks can be written to immediately, as the initialization will be done in the background over a period of hours depending on workload and Vdisk characteristics.
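The naming rules from the note in step 4 can be checked programmatically before a name is submitted; a sketch (the helper name is hypothetical):

```python
import re

# at most 30 characters; letters, digits, underscores, hyphens,
# and pound signs only; no spaces
VDISK_NAME = re.compile(r"[A-Za-z0-9_#-]{1,30}")

def valid_vdisk_name(name):
    """Return True if the name satisfies the Vdisk naming rules above."""
    return VDISK_NAME.fullmatch(name) is not None
```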

3.6 Provisioning MD3 Series storage using MDSM 11.10 CLI

The creation of the DDP and Vdisk shown in the previous section can also be scripted and run from MDSM Storage Manager.

To create a script that will recreate the storage objects using the MDSM CLI, complete the following steps:

1. In the AMW, from the Storage Array menu, select Configuration > Save. Then select the Vdisk configuration to save and click Yes.


2. Follow the Windows prompt to save the configuration file to a location of your choice.

3. After the storage configuration is saved, edit it to recreate the same configuration on multiple storage arrays as desired.

Note: Verify that the disk pool drives are the same as in the original array or modify as needed.

4. The following script was generated using the functionality in MDSM Storage Manager to save a storage configuration in a text file.

5. In the Enterprise Management window (EMW), click Tools > Execute Script.

6. Paste the script into the Script Editor and verify the syntax before executing.


7. As in the previous section, from the Storage & Copy Services tab, confirm that the new DDP is displayed in the storage system tree and that the new Vdisk branches from the new DDP.

3.7 Windows Vdisk mount points

Dell storage solutions and Microsoft SQL Server® 2005 onward support mount points. Mount points are directories in a file system that can be used to mount a Vdisk. Mounted Vdisks can be accessed by referencing the path of the mount point. Mount points eliminate the Windows 26-drive-letter limit and offer greater application transparency when moving data between Vdisks, moving Vdisks between hosts, and unmounting and mounting Vdisks on the same host. This is because the underlying Vdisks can be moved around without changing the mount point path name.

Dell recommends:

• Using NTFS mount points instead of drive letters to surpass the 26-drive-letter limitation in Windows.

• Giving the Vdisk label and its mount point the same name when using Vdisk mount points.

3.8 Provisioning vRanger

vRanger installation overview

A complete vRanger installation includes four components: the vRanger server; the vRanger database; the vRanger virtual appliances; and at least one repository. The sections below provide information on the options available for each component.

• Installing the vRanger server

• Installing the vRanger database

• Adding a repository

• Deploying and configuring a Virtual Appliance


Installing the vRanger server

vRanger can be installed either on a physical server or in a virtual machine. As long as the vRanger machine meets the specifications detailed in System requirements and compatibility, application performance should be similar regardless of which option is chosen.

• Virtual Machine – When installing vRanger in a virtual machine, you eliminate the need for dedicated hardware while maintaining high performance. Due to the lower cost and increased flexibility, this is the recommended approach.

• Physical Server – The primary benefit of installing vRanger on a physical server is that the resource consumption of backup activity is off-loaded from the virtual environment to the physical proxy.

Regardless of which approach you choose, vRanger can leverage the vRanger virtual appliances to perform backup, restore, and replication tasks. This provides greater scalability while distributing the resource consumption of data protection activities across multiple hosts.

Available backup transports

vRanger supports multiple data transport options for backup and restore tasks. The vRanger backup and restore wizards will automatically select the best transport option available based on your configuration. The available transports are:

• VA-based HotAdd – will mount the source VM’s disk to the vRanger virtual appliance deployed on the source host (or cluster). This allows vRanger to have direct access to the VM data through VMware’s I/O stack rather than the network.

This is the preferred transport method, and is available regardless of where vRanger is installed. The vRanger virtual appliance must be deployed to the source host (or cluster) for this transport to be available.

NOTE: If the host is not properly licensed, or the VA cannot access the storage for the source VM, HotAdd will not be available. If a virtual appliance is configured and HotAdd is not available, a network backup will be performed from the virtual appliance.

• Machine-based HotAdd – If vRanger is installed in a virtual machine, this method mounts the source VM’s disk to the vRanger virtual machine, giving vRanger direct access to the VM data through VMware’s I/O stack rather than the network. With this method, the backup processing activity occurs on the vRanger server.

• VA-based LAN – will transfer the source VM’s data from the source disk to the vRanger virtual appliance over the network. With this method, the backup processing activity occurs on the vRanger virtual appliance.

• Machine-based LAN – If there is no vRanger VA deployed, vRanger transfers the source VM’s data from the source disk to the vRanger machine over the network. With this method, the backup processing activity occurs on the vRanger server. For ESXi servers (which do not have a service console), data is sent via VMware’s VDDK transport.

• Machine-based SAN – If there is no virtual appliance configured, vRanger checks whether the vRanger server is configured for SAN backups. This is a high-performance configuration that requires vRanger to be connected to your Fibre Channel or iSCSI network. In addition, the VMFS volumes containing the VMs to be protected must be properly zoned and mapped to the vRanger server.

NOTE: For machine-based transports, the “machine” referenced is the vRanger machine (physical or virtual).

The transport method describes only how data is read from the source server, not how the data is sent to the repository.
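The selection order described above can be summarized as a decision sketch. This is an illustrative reading of the rules in this section, not vRanger's actual wizard logic, and all parameter names are hypothetical:

```python
def pick_transport(va_deployed, va_hotadd_ok, vranger_is_vm, san_configured):
    """va_hotadd_ok: the host is properly licensed and the VA can access
    the source VM's storage (the conditions in the NOTE above)."""
    if va_deployed:
        if va_hotadd_ok:
            return "VA-based HotAdd"
        return "VA-based LAN"          # network fallback from the VA
    if vranger_is_vm:
        return "Machine-based HotAdd"
    if san_configured:
        return "Machine-based SAN"
    return "Machine-based LAN"
```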


Installing vRanger in a virtual machine

When vRanger is installed in a virtual machine, you can perform backups and restores either over the network or in a LAN-Free mode that uses the SCSI HotAdd functionality of VMware ESX(i). The sections below provide a summary of each method. Note that replication and physical backup tasks are always performed over the network.

NOTE: The backup transport method describes only how data is read from the source server, not how the data is sent to the repository.

Available transports

The transports available when vRanger is installed in a virtual machine are listed below:

• With vRanger VA:

  • VA-based HotAdd

  • VA-based LAN

  • Machine-based HotAdd

  • Machine-based LAN

• Without the vRanger VA:

  • Machine-based HotAdd

  • Machine-based LAN

HotAdd backups [virtual machines only]

When vRanger is installed in a virtual machine, LAN-Free backups are made possible by VMware’s HotAdd disk transport.

During backups with HotAdd, the source VM’s disks are mounted to the vRanger virtual machine, allowing vRanger direct access to the VM’s data through VMware’s I/O stack. Backup processing occurs on the vRanger VM, with the data then being sent to the configured repository.

Requirements for a HotAdd configuration

In order to use vRanger with HotAdd, vRanger must be installed in a VM, and that VM must be able to access the target VM’s datastore(s). In addition, all hosts that the vRanger VM could be vMotioned to must be able to see the storage for all VMs that vRanger will be configured to back up.

NOTE: The use of HotAdd requires that the target hosts are licensed with VMware Enterprise or higher licensing.

Configuring vRanger for HotAdd

When using HotAdd, make sure to disable automount on the vRanger machine. This will prevent Windows on the vRanger VM from assigning a drive letter to the target VMDK.

To configure vRanger for HotAdd

1 From the start menu, click Run, and then enter diskpart.

2 Run the automount disable command to disable automatic drive letter assignment.

3 If using a SAN, verify that the SAN policy is set to Online All by typing san and hitting Enter.

If it is not, set it to online all by typing san policy=onlineAll.

4 Run the automount scrub command to clean any registry entries pertaining to previously mounted volumes.
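The diskpart steps above can also be collected into a script file and applied in one pass with `diskpart /s <file>`. A minimal sketch (the san line applies only when the SAN policy check in step 3 shows a change is needed):

```
automount disable
san policy=OnlineAll
automount scrub
```

Save these lines to a text file of your choosing and run `diskpart /s <file>` from an elevated command prompt.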

LAN backups

vRanger can perform LAN backups in one of two ways: either through the vRanger machine, or by using the vRanger VA.

VA-based LAN

This option will transfer the source VM’s data from the source disk to the vRanger virtual appliance over the network using VMware’s VDDK LAN transport. The backup processing activity occurs on the vRanger virtual appliance, and the data is then sent directly to the repository.

Machine-based LAN

If there is no vRanger VA deployed, vRanger will transfer the source VM’s data from the source disk to the vRanger VM over the network. With this method, the backup processing activity occurs on the vRanger server. For network-based backups when using ESX, or for physical server backups, the backup data flows “direct to target” from the source server to the target repository. This means that the vRanger server does not process any of the backup traffic. For ESXi servers (which do not have the service console), data will be sent via VMware’s VDDK transport.

NOTE: Generally, this configuration will yield the slowest performance, and should be avoided if possible. A better option would be to deploy a virtual appliance to any ESXi servers, and use that virtual appliance for backup and restore tasks.

Considerations for installing vRanger in a virtual machine

Read the notes below regarding limitations and considerations about installing vRanger in a VM:

• When installing vRanger in a VM, it is not supported to perform a machine-based backup of the vRanger VM. In other words, the vRanger VM cannot back itself up. You may, however, perform a VA-based backup of the vRanger VM.

• When creating the virtual machine for vRanger, it is recommended to create a fresh VM rather than cloning an existing VM or template.

In recent versions of Windows, volumes are recognized by a serial number assigned by Windows. When VMs are cloned, the serial number for each VM volume is cloned as well. During normal operations, this is not an issue, but when vRanger is cloned from the same source or template as a VM being backed up, the vRanger volume will have the same serial number as the source volume.

For backup operations using HotAdd, source disk volumes are mounted to the vRanger VM. If the source VM volumes have the same disk serial number as the vRanger volume (which will be the case with cloned VMs), the source VM’s serial number will be changed by Windows when mounted to the vRanger VM. When restoring from these backups, the boot manager will not have the expected serial number, causing the restored VM not to boot until the boot information is corrected.

Installing vRanger on a physical server

Installing vRanger on a physical server provides a method to off-load backup resource consumption from the ESX/ESXi host and network. While you can perform Machine-based LAN backups in this configuration, LAN-Free backups [virtual machine backups only] are the primary driver for installing vRanger on a physical server.

NOTE: With vRanger installed on a physical server, you can still take advantage of the vRanger virtual appliances for backup, restore, and replication activity.


Available transports

The transports available when vRanger is installed in a physical machine are listed below:

• With vRanger VA:

  • VA-based HotAdd

  • VA-based LAN

  • Machine-based SAN

  • Machine-based LAN

• Without the vRanger VA:

  • Machine-based SAN

  • Machine-based LAN

LAN-free backups [virtual machine backups only]

With vRanger installed on a physical machine, you may perform LAN-Free backups with either the VA-Based HotAdd or Machine-based SAN transports.

VA-based HotAdd

This transport will mount the source VM’s disk to the vRanger virtual appliance deployed on the source host (or cluster). This allows vRanger (through the VA) to have direct access to the VM data through VMware’s I/O stack rather than the network. In this configuration, data is sent directly from the VA to the repository.

This is the recommended transport option due to the simplicity and flexibility of the configuration. In order to use this option, you must have a vRanger virtual appliance deployed on every host or cluster for which you wish to configure backups.

Machine-based SAN

This transport option uses your fibre-channel infrastructure to transport backup data to the vRanger machine.

In order to perform machine-based SAN backups, vRanger must be installed on a physical system attached to your SAN environment. This is a high performance configuration that requires vRanger to be connected to your fibre or iSCSI network. In addition, the VMFS volumes containing the VMs to be protected must also be properly zoned/mapped to the vRanger server.

Configuring vRanger for machine-based SAN backups

When vRanger is installed on a physical server, the following configurations must be made:

• Disable automount on the vRanger machine: From the start menu, select “Run” and enter diskpart. Run the automount disable command to disable automatic drive letter assignment.

• Run the automount scrub command to clean any registry entries pertaining to previously mounted volumes.

• On your storage device, zone your LUNs so that the vRanger HBA (or iSCSI initiator) can see and read them.

• Only one vRanger server should see a set of VMFS LUNs at one time. For backups only, the vRanger server should have read-only access to the LUNs. In order to perform LAN-Free restores, ensure that the vRanger server has Read + Write access to any zoned VMFS LUNs to which you wish to restore.

LAN backups

vRanger can perform LAN backups in one of two ways: either through the vRanger machine, or by using the vRanger VA.

VA-based LAN

This option will transfer the source VM’s data from the source disk to the vRanger virtual appliance over the network using VMware’s VDDK LAN transport. The backup processing activity occurs on the vRanger virtual appliance, and the data is then sent directly to the repository.

Machine-based LAN

If there is no vRanger VA deployed, vRanger will transfer the source VM’s data from the source disk to the vRanger machine over the network. With this method, the backup processing activity occurs on the vRanger server. For network-based backups when using ESX, or for physical server backups, the backup data flows “direct to target” from the source server to the target repository. This means that the vRanger server does not process any of the backup traffic. For ESXi servers (which do not have the service console), data will be sent via VMware’s VDDK transport.

NOTE: Generally, this configuration will yield the slowest performance, and should be avoided if possible. A better option would be to deploy a virtual appliance to any ESXi servers, and use that virtual appliance for backup and restore tasks.

3.9 System requirements

Requirements for the vRanger machine

In order to maximize application performance, and to ensure error-free operation, you must ensure that the machine on which vRanger is installed meets the requirements as documented in this section.

Requirements for the vRanger machine are divided among the following sections:

• Hardware requirements

• Supported operating systems for installation

Review each of these sections thoroughly before installing vRanger.

Hardware requirements

The hardware requirements to run vRanger can vary widely based on a number of factors. Therefore, you should not undertake a large-scale implementation without first completing a scoping and sizing exercise.

vRanger - physical machine

The hardware recommendations for the vRanger physical machine are described below.

CPU: Any combination equaling four CPU cores is recommended; for example, one quad-core CPU or two dual-core CPUs.

RAM: 4 GB RAM is required.

Storage: At least 4 GB of free hard disk space on the vRanger machine.

HBA: For LAN-Free backups, it is recommended to use two HBAs: one for read operations and one for writing.

vRanger - virtual machine

CPU: Four (4) vCPUs.

RAM: 4 GB RAM is required.

Storage: At least 4 GB of free hard disk space on the vRanger machine.

Requirements for Physical Backup and Restore

When backing up from and restoring to a physical server, vRanger uses a client running on that server to perform backup and restore operations. To process the backup workload effectively, the physical server must meet the requirements below:

CPU: Any combination equaling four CPU cores is recommended; for example, one quad-core CPU or two dual-core CPUs.

RAM: 2 GB RAM is required.


Supported operating systems for installation

The following operating systems are supported for installation of vRanger.

Operating system          Service pack level    Bit level
Windows 7                 All service packs     x64
Windows 8                 All service packs     x64
Windows 8.1               All service packs     x64
Windows Server 2008       All service packs     x64
Windows Server 2008 R2    All service packs     x64
Windows Server 2012       All service packs     x64
Windows Server 2012 R2    All service packs     x64

Additional required software

In addition to a supported version of Windows and a supported VMware infrastructure, you may need some additional software components, depending on your configuration.

• Microsoft .NET Framework – vRanger requires the .NET Framework 4.5. The vRanger installer will install it if it is not detected.

• SQL Server [optional] – vRanger utilizes two SQL databases for application functionality. vRanger can install a local version of SQL Express 2008 R2, or you can choose to install the vRanger databases on your own SQL instance.

• vRanger Virtual Appliance – The vRanger virtual appliance is a small, pre-packaged Linux distribution that serves as a platform for vRanger operations away from the vRanger server. vRanger uses the virtual appliance for the functions below:

• replication to and from ESXi hosts

• file-level recovery from Linux machines

• optionally for backups and restores.

Supported SQL Server versions

The default installation option is to install vRanger with the SQL Server Express 2008 R2 database, but you may use your own SQL Server instance if you prefer.

If you choose to use your own SQL Server instance and wish to use the vRanger Cataloging function, you will need to install the SQL Server instance on the vRanger server, as the Catalog database must be local to vRanger. The following versions of Microsoft SQL Server are supported by vRanger.

Version                                          Service pack level
SQL Server 2008 R2 Express [embedded option]     SP2
SQL Server 2005 (all editions)                   All service packs
SQL Server 2008 (all editions)                   All service packs
SQL Server 2008 R2 (all editions)                All service packs
SQL Server 2012 (all editions)                   All service packs
SQL Server 2014 (all editions)                   All service packs


Supported platforms

The sections below list the platforms and operating systems supported for backup, restore, and replication operations.

Supported vSphere versions

vRanger supports backup, restore, and replication operations against the following versions of VMware Infrastructure:

Component          Supported versions
ESX(i) Servers     5.0, 5.1, 5.5, 6.0 (NOTE: ESXi replication requires the use of the vRanger virtual appliance.)
vCenter            5.0, 5.1, 5.5, 6.0
vSphere License    vRanger supports all vSphere editions, with the exception of the free versions of ESX(i); the free versions do not provide the necessary APIs for vRanger to function.

Equivalent version support policy

In addition to what is listed in this guide, vRanger provides support for VMware versions where the following criteria have been met:

NOTE: The naming convention used in this policy section follows the standard product release versioning scheme of (Major.Minor.Update.Patch).

• VMware updates or patches to a supported major or minor release are also supported, unless otherwise stated.

• Major or minor versions that are newer than what is listed in this guide are not supported and require a separate qualification effort, unless otherwise stated.
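The equivalent-version policy can be expressed as a small version check: updates or patches to a qualified Major.Minor release are accepted, while any new Major.Minor requires separate qualification. A sketch under those assumptions (the qualified set here is the vSphere list from this guide; the function name is illustrative):

```python
# vSphere Major.Minor pairs listed as supported in this guide.
QUALIFIED = {(5, 0), (5, 1), (5, 5), (6, 0)}

def is_supported(version: str) -> bool:
    """True if the version's Major.Minor is qualified.

    Update and Patch digits are ignored, per the policy: patches to a
    supported major/minor release are also supported.
    """
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) in QUALIFIED
```

For example, a 5.5 update release passes the check, while a hypothetical 6.5 release would not until separately qualified.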

vRanger and VM snapshots

vRanger’s backup and replication functionality requires the ability to create a snapshot. In certain circumstances, the creation of VM snapshots is not supported by VMware. In these cases, backup and replication of these VMs or disks is not possible. Some common examples are:

• RDM Disks in physical compatibility mode

• Disks in independent mode

• Fault tolerant VMs.

NOTE: This list is not exhaustive. Any configuration in which snapshots are not supported by VMware, or not possible, is not supported by vRanger.

Supported Hyper-V Versions

vRanger supports backup and restore operations against the following versions of Microsoft Hyper-V Server:

Component            Supported versions
Hyper-V Servers      Windows Server 2012, Windows Server 2012 R2
System Center VMM    Windows Server 2012, Windows Server 2012 R2


Supported platforms for physical machine backup

vRanger supports backup and restore operations against the following operating systems:

Operating system                 Bit level
Microsoft Windows 2003 Server    x86 or x64
Microsoft Windows 2003 R2        x86 or x64
Windows Server 2008              x86 or x64
Windows Server 2008 R2           x64
Windows Server 2012              x64
Windows Server 2012 R2           x64

Supported virtual appliance versions

vRanger 7.2 supports the virtual appliance versions below:

• 7.0.x or later.

IMPORTANT: In vRanger 7.0, the virtual appliances have been updated to a 64-bit architecture. If you have previously deployed vRanger virtual appliances, you should upgrade them to the 64-bit version before running jobs in vRanger 7.0 in order to get the expected results.

3.10 Installation

Installing the vRanger database

vRanger utilizes a SQL Server database to store application and task configuration data. The database can be either the embedded SQL Server Express instance (the default option) or a SQL Server database running on your own SQL Server or SQL Server Express instance.

Database options

The database deployment occurs during the initial installation of vRanger. The default option installs a SQL Server Express database on the vRanger server. You may, if desired, install vRanger using a separate SQL Server instance. If you are going to use your own SQL Server instance and wish to use the vRanger cataloging feature, the SQL Server instance must be installed on the vRanger server.

Default

The Installation Wizard defaults to installing vRanger with the embedded SQL Server Express 2008 R2 database. The SQL Server Express database can only be installed on the vRanger server.

NOTE: While the embedded SQL Server Express database is free and simple to install, there is a size limit of 10 GB per database.

External SQL Server Instance

The Installation Wizard will guide you through configuring vRanger with an external SQL Server database. There is also an option in the Install Wizard to configure the database connection manually, but the guided approach is recommended.

IMPORTANT: See System Requirements and Compatibility for a list of supported SQL Server database versions.


Installing the databases

When installing vRanger, consider the database selection carefully as migrating from a SQL Server Express installation to an external SQL Server database carries a risk of corrupting application data.

The cataloging function of vRanger requires that the application and catalog database be installed on the vRanger server. There are two options to accomplish this:

• Use the default SQL Server Express 2008 R2 installation, which will install vRanger, the vRanger database, and the Catalog database on the same machine. While this is the most straightforward option, SQL Server Express 2008 R2 databases are limited in size to 10 GB.

• If you don’t want to use the default SQL Server Express database, you can also install a supported Microsoft SQL Server version on the vRanger machine, and install the vRanger databases on that instance. While there is no hard-coded limit to database size, this is a more complicated installation.

If you will not be using cataloging, it is recommended to install vRanger using an external SQL Server database server in order to provide the most flexibility. This allows you to relocate the vRanger installation simply by installing the application in another location and pointing the Install Wizard to the existing database.

Sizing the catalog database

The vRanger catalog process collects and records metadata and path information for files updated since the last backup and catalog entry. Depending on the number of VMs protected, and the number of files in each VM, the catalog database may grow quite rapidly.

Actual database growth will vary depending on the Guest OS and the number of files changed between backups, but the information below can be used as an approximate guide.

• With default filtering, the full catalog of a generic Windows 2008 VM is approximately 500 files, or approximately 0.2 MB.

NOTE: Many Windows files are not cataloged due to filtering (see “About catalog filtering” in the Dell vRanger Pro User’s Guide). An amount of data equal to a standard Windows 2008 installation will result in a larger catalog footprint.

• Incremental and differential backups will only catalog changed files, making the catalog record for these backups considerably smaller. Using incremental and/or differential backups will allow you to store catalog data for many more savepoints than if you used only full backups.
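The figures above (roughly 0.2 MB per 500 cataloged files, or about 0.4 KB per file) can be turned into a back-of-the-envelope capacity estimate. The sketch below is only a rough guide, consistent with the caveat that actual growth varies with the guest OS and change rate; every parameter is an assumption to be replaced with your own numbers:

```python
# Per the guide's example: ~500 files ≈ ~0.2 MB of catalog data.
MB_PER_FILE = 0.2 / 500  # ≈ 0.4 KB per cataloged file

def catalog_estimate_mb(vms: int, files_per_vm: int, fulls: int,
                        incrementals: int, changed_per_incremental: int) -> float:
    """Rough catalog size in MB: full backups catalog every file,
    incrementals and differentials catalog only changed files."""
    per_vm_files = fulls * files_per_vm + incrementals * changed_per_incremental
    return vms * per_vm_files * MB_PER_FILE
```

With illustrative inputs of 100 VMs, 500 files each, 4 retained fulls, and 26 incrementals touching about 50 files apiece, the estimate comes to roughly 130 MB of catalog data.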

Installing vRanger

This procedure assumes that you have already downloaded the vRanger software and saved it to an accessible location.

vRanger setup

1 Double-click the vRanger installation executable. The vRanger Backup and Replication Setup Wizard opens.

2 In the Language drop-down, select the language for the interface or accept the default setting. Click Next.

NOTE: This setting applies to both the vRanger installation process and the product interface.

3 The License Agreement screen displays. Read the license terms and accept the agreement. Click Next.

vRanger service credentials

1 The vRanger Services Information dialog displays. This configures the credentials that will be used to run the services installed by vRanger.

WARNING: The user account needed for this step must have administrator privileges on the vRanger machine.

• In the Domain field, enter the domain in which the user account is located. To use an account on the local machine, leave this field blank.

• In the Username field, enter the username for the account.

IMPORTANT: If you choose to install the vRanger service with an account other than the account with which you are currently logged in, please select Mixed-Mode authentication when installing the vRanger database.


• In the Password field, enter the password for the account. The Choose Components screen displays.

• Click Next.

vRanger database installation

vRanger utilizes a SQL Server database to store application and task configuration data. The database can be either the embedded SQL Server Express instance (the default option) or a SQL Server database running on your own SQL Server or SQL Server Express instance.

NOTE: This step is omitted if an existing SQL Server instance is detected.

1 The vRanger Database Installation dialog appears.

2 The vRanger installer, by default, will install vRanger with the embedded SQL Server Express database.

To proceed with this option, leave Install a new local instance of SQL Server Express selected and proceed to Step 3.

OR

To install vRanger on an existing SQL Server instance, clear Install a new local instance of SQL Server Express and click Next.

3 Select a server authentication mode:

• Windows: When a user connects through a Windows user account, SQL Server validates the account name and password using information in the Windows OS. Windows Authentication uses Kerberos security protocol, provides password policy enforcement (complexity validation for strong passwords), provides support for account lockout, and supports password expiration.

• Mixed Mode: Mixed mode enables both Windows Authentication and SQL Server Authentication. Enter and confirm the system administrator (sa) password when you select Mixed Mode authentication. Setting strong passwords is essential to the security of your system. Never set a blank or weak sa password.

IMPORTANT: If you choose to install the vRanger service with an account other than the account with which you are currently logged in, please select Mixed-Mode authentication when installing the vRanger database.

If you selected SQL Server, you will be prompted to enter a password for the SA account. If you selected Windows, the installation will continue using the account specified in vRanger service credentials.

4 Click Next.

vRanger database runtime credentials

The vRanger Database Runtime Credentials dialog appears. This dialog allows you to configure different credentials for database installation and for normal runtime operations. In addition, this dialog is where you configure a connection to an existing SQL Server.

1 To select an external database, select the server and database name in the drop-down boxes. If your desired server is not visible, click the refresh icon to perform another discovery. When using an external SQL Server, ensure the configurations below are made.

• The external SQL Server must have “Named Pipes” and “TCP/IP” enabled in SQL Server Configuration Manager. “Named Pipes” is found under SQL Server Network Configuration, while “TCP/IP” is under Protocols for Database_Instance_Name.

• The SQL Server must be restarted after making these changes.

• SQL Server Browser services must also be running for vRanger to discover the external database.

2 Configure the credentials for your database installation and connection as follows:


• Database Installation Credentials – If you select Windows, the database will install using the credentials chosen in vRanger service credentials. If you are using SQL Server authentication, the credentials used must have administrative privileges on the SQL Server instance.

• Runtime DB Connection Credentials – You may choose different credentials for use during normal vRanger operations.

• If you select Windows, the database will install using the credentials chosen in vRanger service credentials.

• If you select SQL Server, enter and confirm the system administrator (sa) password when you select Mixed Mode authentication. Setting strong passwords is essential to the security of your system. Never set a blank or weak sa password.

3 Click Next.
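Whichever authentication mode is chosen, the connection to the external instance is ultimately a standard SQL Server connection. The helper below is a hypothetical sketch (the function name, parameter names, and defaults are illustrative, not part of vRanger) showing how the two modes differ in connection-string terms:

```python
from typing import Optional

def build_conn_str(server: str, database: str,
                   user: Optional[str] = None,
                   password: Optional[str] = None) -> str:
    """Build a SQL Server ODBC-style connection string.

    Uses Windows (trusted) authentication when no user is given,
    and SQL Server (mixed-mode) authentication otherwise.
    """
    base = f"Server={server};Database={database};"
    if user is None:
        return base + "Trusted_Connection=yes;"  # Windows authentication
    return base + f"Uid={user};Pwd={password};"  # SQL Server authentication
```

For a named instance, the server value takes the `HOST\INSTANCE` form; the TCP/IP, Named Pipes, and SQL Server Browser settings described in step 1 determine whether such an instance is reachable and discoverable at all.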

vRanger Catalog Service

The vRanger Catalog Service provides a searchable catalog of files in cataloged backups. This enables faster searches during file-level recovery.

1 To install the Catalog Service, select Install vRanger Catalog Service. To proceed without installing the Catalog Service, clear the checkbox, and click Next.

NOTE: If you have previously installed the Catalog Service, you will be unable to clear the checkbox.

2 Choose the credentials to use for the database installation. Click Next when finished.

Select Use the same credentials as vRanger Database, or configure the credentials for the Catalog database per the options below:

• Database Installation Credentials – If you select Windows, the database will install using the credentials chosen in vRanger service credentials. If you are using SQL Server authentication, the credentials used must have administrative privileges on the SQL Server instance.

• Runtime DB Connection Credentials – You may choose different credentials for use during normal vRanger operations.

• If you select Windows, the database will install using the credentials chosen in vRanger service credentials.

• If you select SQL Server, enter the credentials for vRanger to use when connecting to the vRanger database. If the account entered does not exist, it will be created.

Complete the installation

The Ready to Install dialog appears.

1 Review and confirm your selected configurations.

2 To change any configuration, click Back. To continue, click Install.

3 After the installation is complete, click Finish.

NOTE: Refer to “Configuring vRanger” in the Dell vRanger Pro User’s Guide for procedures on completing the vRanger Startup Wizard or performing other configurations.

Installing the vRanger Catalog Service


If you wish to install the Catalog Manager after vRanger Backup & Replication is already installed, you may modify your installation with the standard vRanger installer.

vRanger setup

1 Double-click the installer file. The Setup Wizard displays. Select Modify the installation, then click Next.

2 Click Next to proceed through the vRanger Services Information and vRanger Database Runtime Credentials dialogs.

3 The vRanger Catalog Service dialog displays. Select Install vRanger Catalog Service, and click Next.

4 Choose the credentials to use for the database installation. Click Next when finished.

Select Use the same credentials as vRanger Database, or configure the credentials for the Catalog database per the options below:

• Database Installation Credentials – If you select Windows, the database will install using the credentials chosen in vRanger service credentials. If you are using SQL Server authentication, the credentials used must have administrative privileges on the SQL Server instance.

• Runtime DB Connection Credentials – You may choose different credentials for use during normal vRanger operations.

• If you select Windows, the database will install using the credentials chosen in vRanger service credentials.

• If you select SQL Server, enter the credentials for vRanger to use when connecting to the vRanger database. If the account entered does not exist, it will be created.

5 The Ready to Install dialog appears. Click Install.

3.11 Adding a repository


vRanger uses repositories to store backup archives. Repositories can be one of the following types:

• CIFS

• NFS (version 3)

• FTP

• SFTP

• NetVault SmartDisk - Dell Software’s disk-based data-deduplication option which reduces storage costs with byte-level, variable-block-based software deduplication. For more information on NetVault SmartDisk, see http://software.dell.com/products/netvault-smartdisk/ or the Dell vRanger Integration Guide for NetVault SmartDisk.

• Dell Rapid Data Access (RDA) - Provided by the Dell DR Series appliances - purpose-built, disk backup appliances that use Dell deduplication technology to significantly improve backup and recovery processes. For more information on Dell DR Series appliances, see http://software.dell.com/products/dr-series-disk-backup-appliances/ or the Dell vRanger Integration Guide for Dell DR Series Disk Backup Appliance.

The procedure below shows mounting a CIFS share to the My Repositories pane. Additional procedures can be found in the vRanger Installation Guide.


To add a repository

1 Under Repositories, select Windows Share (CIFS).

2 Populate the Repository Name text box. This value displays in the My Repositories pane.

3 Populate the Description text box.

4 Enter a username and password in the appropriate text boxes.

5 Select a Security Protocol from the drop-down: NTLM (Default) or NTLM v2.

6 In the Server text box, type the UNC path to the preferred repository directory. Alternatively, you may enter a partial path and click Browse to find the target directory.

NOTE: You must enter a valid username and password before using the browse functionality.

WARNING: If you want to use the Encrypt all backups feature, make certain to retain the password you enter in the following steps. There is no back-door or admin-level password. If the password is unknown, the backups are not usable.

7 Select Encrypt all backups to this repository if you want these backups to be password-protected.

NOTE: Encryption is not supported for NetVault SmartDisk and Data Domain Boost repositories.

8 Enter a Password for the encrypted repository, and confirm the password by re-entering it.

9 Click Save. The connection to the repository is tested, and the repository is added to the My Repositories pane and the Repository Information dialog.

10 vRanger checks the configured repository location for existing manifest data to identify existing savepoints. If any are found, you are prompted to take one of three actions:

• Import as Read-Only – With this option, all savepoint data is imported into the vRanger database, but only for restores. You cannot back up to this repository.

• Import – All savepoint data is imported into the vRanger database. vRanger can use the repository for backups and restores. vRanger requires read and write access to the directory.

• Overwrite – The savepoint data is retained on the disk, but cannot be imported into vRanger. vRanger ignores the existing savepoint data and treats the repository as new.

11 Click Next.
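Step 6 requires a UNC path to the repository share. When provisioning many repositories from a script, it can help to catch malformed paths before attempting a connection. The sketch below is purely illustrative and not part of vRanger; the server and share names are hypothetical, and it checks only UNC syntax, not reachability or credentials (vRanger verifies those when you click Save).

```python
import re

def validate_unc_path(path: str) -> bool:
    """Check that a repository path looks like a valid UNC share,
    e.g. \\\\backupsrv\\vranger (hypothetical names). Syntax only;
    reachability and credentials are verified by vRanger itself."""
    # UNC form: \\server\share[\subdirectories...]
    pattern = r'^\\\\[^\\/:*?"<>|]+\\[^\\/:*?"<>|]+(\\[^\\/:*?"<>|]+)*\\?$'
    return re.match(pattern, path) is not None

print(validate_unc_path(r"\\backupsrv\vranger\repo1"))  # True
print(validate_unc_path("C:/not/a/unc/path"))           # False
```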


3.12 Adding Hosts to vRanger

vRanger uses hosts as the source machines for backup jobs and supports the following machine types:

• vCenter

• ESX(i) host

• Hyper-V System Center VMM

• Hyper-V Failover Cluster

• Hyper-V Host

• Physical Machine

• vCloud Director

To add a vCenter

1 In the VirtualCenters section, click Add.

The Add VirtualCenter Credentials dialog box appears.

2 In the DNS Name or IP text box, enter the FQDN or IP address of the vCenter server.

3 In the User Name text box, enter the user name of an account with privileges on the vCenter server. For the required permissions for a vRanger vCenter account, see “Configuring vCenter permissions” in the “Before You Install” chapter of the vRanger Installation and Upgrade Guide.

NOTE: The user name for the vCenter credential should be entered in the “username@domain” format, rather than “domain\username”. In some cases, the domain may not be required. Avoid special characters in the user name. If these credentials are changed in the future, you must restart the vRanger Service to recognize the changes.

4 In the User Password text box, enter the password for the account used above.

NOTE: Avoid special characters in the password.

5 In the Port Number text box, enter the port to be used for communication. The default port is 443.

6 Click Connect.

The dialog box closes, and the vCenter displays in the VirtualCenters section and on the VirtualCenter and Host Information page. The hosts managed by that vCenter display in the Hosts section. Each host is displayed with an icon that conveys four key indicators:

• The large gold key indicates that the host has been issued a vRanger license.

• The green dot indicates that the host has been assigned a backup license.

• The blue dot indicates that the host has been assigned a replication license.

• The authentication method for the host is indicated by the icon superimposed on the host icon:

• If the host is authenticated with vCenter credentials only, the vCenter icon appears superimposed over the host icon.

• If the host is authenticated with host credentials, a gold key is superimposed over the host icon.

WARNING: vCenter credentials are sufficient for operations that use only the vStorage API. For backup and replication operations that use the Service Console, you must apply credentials to each host.
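The credential note above (prefer “username@domain” over “domain\username”, avoid special characters) can be enforced programmatically when preparing credentials in bulk. The helper below is purely illustrative and not part of vRanger; the function name and the permitted-character set are assumptions.

```python
def normalize_vcenter_username(user: str) -> str:
    """Convert 'domain\\username' to the preferred 'username@domain'
    form and reject special characters (illustrative helper only)."""
    if "\\" in user:
        domain, _, name = user.partition("\\")
        user = f"{name}@{domain}"
    # Guide's advice: avoid special characters in the user name.
    allowed = set("abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_@")
    if not set(user) <= allowed:
        raise ValueError(f"user name contains special characters: {user!r}")
    return user

print(normalize_vcenter_username("CORP\\backupadmin"))  # backupadmin@CORP
```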


Adding a Hyper-V System Center Virtual Machine Manager

Complete the steps in the following procedure to add a Hyper-V System Center VMM.


To add a Hyper-V System Center VMM

1 In the Hyper-V System Center Virtual Machine Managers section, click Add.

2 In the DNS Name or IP text box, enter the FQDN or IP address of the Hyper-V System Center VMM.

3 In the User Name text box, enter the user name of an account with domain administrator privileges on the System Center VMM.

4 In the User Password text box, enter the password for the account used above.

5 If this is a new System Center VMM, or if you have removed the vRanger Hyper-V agent, select Install agent on host.

6 Configure the ports as follows:

• In the Agent Port Number text box, enter the port you want vRanger to use to communicate with the vRanger agent installed on each Hyper-V host. This port must be open between vRanger and each Hyper-V server. The default port is 8081.

• In the SCVMM Port Number text box, enter the port you want vRanger to use to communicate with the System Center VMM server. The default port number is 8100.

Click Connect.

IMPORTANT: This port is configured during System Center VMM installation. If you chose a port number different from the default value, enter that value here.

7 Click Next.

The Hyper-V System Center VMM displays in the Hyper-V System Center Virtual Machine Managers section. The hosts managed by that System Center Virtual Machine Manager display in the Hosts section.
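The steps above depend on the agent port (default 8081) and the SCVMM port (default 8100) being open between vRanger and the hosts. Before adding a host, a plain TCP probe can confirm reachability. This is a generic sketch, not a vRanger utility; the target address below is a reserved documentation address standing in for a real SCVMM host.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the default vRanger agent (8081) and SCVMM (8100) ports.
# 192.0.2.10 is a reserved documentation address; replace it with your host.
for p in (8081, 8100):
    print(p, "open" if port_open("192.0.2.10", p, timeout=1.0) else "closed")
```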


Adding a Hyper-V Cluster

Complete the steps in the following procedure to add a Hyper-V cluster.

To add a Hyper-V cluster

1 In the Hyper-V Cluster section, click Add.

2 In the DNS Name or IP text box, enter the FQDN or IP address of the Hyper-V cluster.

3 In the User Name text box, enter the user name of an account with domain administrator privileges on the cluster.

4 In the User Password text box, enter the password for the account used above.

5 If this is a new cluster, or if you have removed the vRanger Hyper-V agent, select Install agent on host.

6 In the Port Number text box, enter the preferred port you want vRanger to use to communicate with the Hyper-V cluster on the source server. This port must be open between vRanger and each Hyper-V server. The default port number is 8081.

7 Click Connect.

8 Click Next.

The Hyper-V cluster displays in the Hyper-V Clusters section. The hosts managed by that cluster display in the Hosts section.


Adding hosts

If you have hosts that are not part of a cluster, you can add them individually.

To add hosts

1 In the Host section, click Add.

2 In the DNS Name or IP text box, enter the FQDN or IP address of the Host.

3 In the User Name text box, enter an account for the host.

4 In the User Password text box, enter the password for the account used above.

5 If this is a new host, or if you have removed the vRanger Hyper-V agent, select Install agent on host.

6 In the Port Number text box, enter the preferred port you want vRanger to use to communicate with the Hyper-V host on the source server. This port must be open between vRanger and each Hyper-V server. The default port number is 8081.

7 Click Connect.

The Host displays in the Hosts section.


Adding a physical machine

Before vRanger can add physical source objects to the My Inventory pane, you must configure a connection to a physical server on the Physical Machine Information page.

To add a physical machine

1 In the Physical Machines section, click Add.

2 In the DNS Name or IP text box, enter the FQDN or IP address of the server.

3 In the User Name text box, enter an account for the server.

4 In the User Password text box, enter the password for the account used above.

5 If this is a new server, or if you have removed the vRanger agent, select Install agent on machine. In the Agent Location text box, enter the preferred directory (on the physical machine) in which the physical client should be installed. The default installation location is C:\Program Files\Dell\vRangerPhysicalClient.

6 In the Port Number text box, enter the preferred port for vRanger to use to communicate with the physical client on the source server. This port must be open between vRanger and each physical server. The default port number is 51000.

7 Click Connect.

The server displays in the My Inventory pane. You may also create a Backup Group to combine multiple physical servers into one backup job. See the section Adding a custom backup group for more information.

Deploying and configuring a virtual appliance from the Startup Wizard

vRanger uses a virtual appliance (VA) for both Linux file-level recovery and for replication to and from VMware ESXi servers.

There are two ways to deploy and configure a VA: the Startup Wizard and the Tools menu. If you do not want to complete the Virtual Appliance Information page of the Startup Wizard now, you may skip this step and continue with the Startup Wizard. You can access the Virtual Appliance Configuration dialog at any time by way of the Options available from the Tools drop-down menu.


To complete the Virtual Appliance Information page of the Startup Wizard and deploy and configure a VA now, complete the following procedure.

To deploy and configure the virtual appliance from the Startup Wizard

1 On the Virtual Appliance Information page of the Startup Wizard, launch the Virtual Appliance Deployment Wizard by clicking Deploy Virtual Appliance.

2 Complete the deployment wizard by following the steps in Deploying the virtual appliance by using the Virtual Appliance Deployment Wizard.

3 To add a new virtual appliance configuration, click Add, and then complete the following steps:

a In the Add Virtual Appliance Configuration dialog, select a virtual appliance from the inventory tree.

b Under Virtual Appliance Properties, do any of the following:

• Select Override IP Address, and then enter a new IP address in the IP Address text box.

• Enter a root password for the VA in the Root Password text box.

• Select Use as default virtual appliance for cluster, to use this VA for all machines that are a part of the associated cluster.

c Click OK.

4 To configure an existing virtual appliance, select a VA from the list, and then click Edit.

In the Modify Virtual Appliance Configuration dialog, you can edit any of the following settings:

• Virtual Appliance Properties

• Virtual Appliance Options

• Replication

• Scratch Disk

• Password

• Linux File Level Restore

5 To delete a virtual appliance, do the following:

1 Select a VA from the list.

2 Click Remove.

The Removing VA dialog appears.

3 Select the job you want to remove.

4 Click OK.

5 In the Confirm Delete dialog, click OK.

If you want to remove the entire VA rather than a single job, first select Delete the virtual appliance from the host, and then click OK.

6 Under Linux FLR Virtual Appliance, if you want to plan for Linux File Level Recovery, select the virtual appliance you want to use from the drop-down list.

7 Click Next.


3.12 High availability

MD3 Series and MDSM 11.10

The Dell MD3 Series storage system is architected for the highest reliability and availability, with features such as:

• Dual-active controller with automated I/O path failover

• RAID levels 0, 1, 5, 6, 10, or Dynamic Disk Pools (DDP)

• Redundant, hot-swappable controllers, disk drives, power supplies, and fans

• Automatic drive failure detection and rebuild

• Mirrored data cache with battery backup and destage to persistent flash device

• Nondisruptive controller firmware upgrades

• Proactive drive health monitoring

• Background media scan with autoparity check and correction

All components are fully redundant and can be swapped without powering off the system or even halting operation. This includes controllers, disk drives, power supplies, and fans. The MD3 Series power supplies offer an 80-plus efficiency rating. The MD3 Series features several functions designed to protect data in every circumstance. Multiple RAID levels are available for use with varying levels of redundancy. Failover from one path to another in the case of a lost connection is also automatically included with the system. Within the shelf, each drive has a connection to each controller so that even internal connection issues can be quickly overcome. Vdisks on the system are available for host I/O from the moment they are created and can even have significant properties altered without stopping I/O.

Other features of the MD3 Series that protect data include mirroring and backing up controller cache. If power is lost to the system during operation, onboard batteries are used to destage the data from cache memory to internal controller flash so that it will be available when power is restored. The RAID algorithms allow the system to recreate any lost data in the rare case of drive failure. Users also have the option of confirming data with RAID parity at all times and even continuing a rebuild when hitting an unreadable sector.

Behind the scenes, the system performs other tasks that protect data at all times. The optional media scan feature looks for inconsistencies even on sectors not currently being accessed by any host. All types of diagnostic data are constantly collected for later use by support if necessary.

Beyond the reliability and availability features already described, the MDSM software features also make it possible to maximize availability:

• High-speed, high-efficiency Snapshot copies

• Robust disaster recovery protection

− Synchronous mirroring for no-data-loss protection of content

− Asynchronous mirroring for long-distance protection and compliance

• Flexible protection to maximize ROI

− Recovery target can be flash, NL-SAS, or mixed based on cost/performance needs

− Delivers speed without breaking budgets

Figure 14 illustrates a graphical representation of the HA possibilities using Snapshot copies and mirroring. For more information, refer to the Dell Support site Documentation library and the MDSM online help.

Figure 14: Example of Snapshot copies (MD3820f to MD3820f)


3.14 Sizing

Sizing the storage is a balance between the application’s I/O and capacity requirements and the physical size and speed of the storage media. Dynamic Disk Pools are recommended and provide excellent performance, faster disk rebuilding, and efficient usable capacity.

Dell does not recommend using the MD3 Series SSD read cache with Microsoft Exchange Server 2013. Caching data from the base Vdisks improves I/O performance and response times for large read workloads, but Exchange 2013 is primarily a random write-based workload and does not benefit.

3.16 Monitoring

MD3 Series performance monitoring using MDSM 11.10

While a storage system is in operation, it is often useful to be able to see how the storage system is performing. Using MDSM, it is possible to view MD3 Series performance data in both textual and graphical dashboard formats. Additional details of the monitoring feature are located in Concepts for MDSM version 11.10, available from the Dell Support site Documentation library, and in the MDSM online help.

The performance monitor provides visibility into performance activity across your monitored storage devices. You can use the performance monitor dashboard to perform these tasks:

• View different performance metrics on six graphs in real time for up to five monitored devices per graph

• Performance metrics include:

− I/O latency for drives and Vdisks

− Current or maximum I/O per second

− Throughput for the entire storage array and Vdisks

− Cache hit percentage

− Total I/Os

• Links that provide a convenient way to start textual performance monitoring or background performance monitoring

Figure 8 illustrates the performance data that can be collected and viewed by using MDSM Performance Monitor Dashboard.

Real-time performance data can be monitored in tabular format as well (actual values of the collected metrics) and saved to a file for later analysis. The textual view is illustrated in Figure 8.

MDSM also provides for background performance monitoring. Various reporting attributes, such as time increments and filtering criteria, can be specified to examine performance trends and to pinpoint the cause of availability and performance issues.
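Because the tabular real-time data can be saved to a file for later analysis, a short script can post-process such an export to pull out peaks and averages. The CSV layout below is invented for illustration; MDSM's actual export format may differ, so treat the column names as assumptions.

```python
import csv
import io

# Hypothetical export format; MDSM's actual file layout may differ.
sample = """timestamp,object,iops,latency_ms,throughput_mbs
10:00:00,Vdisk01,1200,4.1,95.0
10:00:05,Vdisk01,1450,4.8,110.2
10:00:10,Vdisk01,1325,4.4,101.7
"""

def summarize(text: str) -> dict:
    """Compute peak IOPS and average latency from a saved metrics export."""
    rows = list(csv.DictReader(io.StringIO(text)))
    iops = [float(r["iops"]) for r in rows]
    lat = [float(r["latency_ms"]) for r in rows]
    return {"max_iops": max(iops),
            "avg_latency_ms": round(sum(lat) / len(lat), 2)}

print(summarize(sample))  # {'max_iops': 1450.0, 'avg_latency_ms': 4.43}
```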

3.17 QLogic QConvergeConsole

Management tools make the difference between a difficult-to-use and an easy-to-configure product in the enterprise data center, and QLogic provides a full suite of tools for storage and networking I/O manageability. Deploy Fibre Channel, Converged Networking, Intelligent Ethernet, FabricCache™, or virtualized I/O solutions using QConvergeConsole® (QCC). The comprehensive graphical and command line user interfaces centralize I/O management of multiple generations of QLogic storage and network adapters across operating systems and protocols.

QConvergeConsole management suite includes a browser-based graphical user interface (GUI), a lightweight command line interface (CLI), and an integrated VMware® vCenter™ Plug-in (VCPI) to provide single-pane-of-glass management. Leverage QCC’s rich feature set to query and modify driver parameters, monitor statistics, run diagnostics, troubleshoot, administer updates, provision bandwidth, change I/O personality, and assign access control. QCC is designed to streamline the management of storage and networking I/O operations.

Major features are available across all three tools (GUI, CLI and VCPI). Other features may be available on one or two offerings based on OS functionality and/or the nature of the tool.


QCC Features (availability shown as bullets for GUI, CLI, VCPI)

Centralized Management: Discover, configure, monitor, update firmware, diagnose, provision, and control access across multiple generations of adapters, operating systems, and protocols. • • •

Personality Management: Configure flexible I/O personality at the adapter level and at the port function level. • • •

I/O Virtualization: Manage I/O virtualization with NIC partitioning (NPaR) and virtual Fibre Channel I/O port with N_Port ID Virtualization (NPIV). • • •

I/O Provisioning: Dynamically manage efficient I/O bandwidth usage and scaling with NPaR and streamline quality of service (QoS). • • •

Assets, Statistics, and Reports: Discover connected hosts and attached QLogic adapters, generate reports, and collect detailed adapter and port statistics and logs. • • •

Notifications and Alerts: Automatically send notifications with host configuration to an e-mail distribution list and set up alerts for error and information management. •

Access Control: Assign passwords, protect critical operations, and prevent unauthorized access for SAN and LAN administration. •

Topology Maps: Visually manage storage and network maps with an end-to-end view of the adapter’s connections to the hardware storage and network I/O components. • •

Wizards: Easy-to-use wizards walk through Flash and driver parameter file updates on adapters and deploy firmware and driver updates across the data center. • •

Web Client: Establish secure HTTPS connections using a GUI from widely supported web browsers to the servers deployed with QLogic adapters and management agents. •

Command-Line Operations: Execute an extensive command set from an interactive menu-driven or scriptable interface and automate adapter management tasks. •

vSphere® Integration: Manage comprehensive I/O operations across generations of adapters from the vSphere environment with the vCenter Plug-in and vSphere Web Client Plug-in. •

CIM Providers: QLogic’s vCenter Plug-in CIM provider and Fibre Channel-FCoE CIM provider allow integration of management applications for distributed adapters. •

Parameter and Firmware Management: Update Flash memory and activate firmware across multiple generations of adapters. Display and manage configurable parameters for Fibre Channel, iSCSI, and Ethernet functions. Capture detailed firmware debug logs for faster support, analysis, and resolution. • • •

Target Device Connectivity: Display an information tree for discovered target devices and LUNs connected to the Fibre Channel and Converged Network adapter ports. • • •

QConvergeConsole is a free product and can be used to manage not only the local server but also all QLogic adapters in the datacenter. Driver updates, firmware updates and parameter changes can be performed on the local system or to remote systems using QConvergeConsole. QConvergeConsole supports all QLogic adapters, most mainstream operating systems, browsers and architectures.

QConvergeConsole Specifications

GUI and CLI operating systems

• Windows Server® • Windows Hyper-V® • Red Hat® Enterprise Linux • SUSE® Linux Enterprise Server • Solaris® SPARC®, x86 • Citrix® XenServer®

VCPI operating system • VMware vSphere

Browsers Supported by GUI • Microsoft® Internet Explorer® • Mozilla® Firefox®

• Google® Chrome® • Opera • Safari®

QLogic Adapters • 2400, 2500 and 2600 Series Fibre Channel adapters

• 3200 Series Intelligent Ethernet adapters

• 8200 and 8300 Series Converged Network adapters

• 10000 Series FabricCache adapters

Architectures • x86 • x64 • SPARC • PowerPC

Figure 11 is a screenshot of the QConvergeConsole GUI showing four LUNs presented to the server. Because Microsoft MPIO is installed and configured, each LUN is displayed in the QCC GUI under both Port 1 and Port 2. All aspects of the QLogic adapter can be configured and managed through the QCC GUI; Figure 11 is merely a representation of how LUNs are displayed.


3.18 Brocade EZSwitchSetup and Switch Manager

EZSwitchSetup is an easy-to-use graphical user interface application for setting up and managing the Brocade 6505 switch. It has the following components:

• EZSwitchSetup wizard (on the installation CD)

• EZSwitchSetup switch configuration wizard

• EZSwitchSetup Switch Manager

EZSwitchSetup can be run on a SAN host computer or on a different computer that is not part of the SAN, such as a laptop.

EZSwitchSetup requires a browser that conforms to HTML version 4.0 and JavaScript version 1.0. The EZSwitchSetup installation CD automatically installs the correct Java Runtime Environment (JRE) for the disk-based installation wizard. This does not affect any pre-installed JREs, but other program components launched from the switch require Oracle JRE 1.7.0_80 or later, or JRE 1.8.0_45 or later, installed on the host.

Brocade has certified and tested EZSwitchSetup on the platforms shown in the following table.

Operating System: Browser(s); JRE version

• Red Hat Enterprise Linux 6.6 Adv (32-bit): Firefox 34; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 7 Professional (x86) SP1: Chrome 40, Firefox 34, Internet Explorer 10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 8.1: Chrome 40, Firefox 34, Internet Explorer 11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows Server 2008 R2 Standard SP1: Chrome 40, Firefox 34, Internet Explorer 10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 2012 R2: Chrome 40, Firefox 34, Internet Explorer 10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

EZSwitchSetup is supported on the platforms shown in the following table.

Operating System: Browser(s); JRE version

• SUSE Linux Enterprise Server 11 SP2 (32-bit): Firefox 34; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• SUSE Linux Enterprise Server 12: Firefox 34; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 2008 Standard: Firefox 34, Internet Explorer 11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 7 Professional (32-bit): Chrome 40, Firefox 34, Internet Explorer 8.0/9.0/10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45


• Windows 7 SP1: Chrome 40, Firefox 34, Internet Explorer 8.0/9.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 2008 SP2 Enterprise (64-bit): Chrome 40, Firefox 34, Internet Explorer 9.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows 8 Enterprise (64-bit): Chrome 40, Firefox 34, Internet Explorer 10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows Server 2008 R2 SP1 Enterprise (64-bit): Chrome 40, Firefox 34, Internet Explorer 9.0/10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

• Windows Server 2012 Standard (64-bit): Chrome 40, Firefox 34, Internet Explorer 10.0/11.0; Oracle JRE 1.7.0_80 or JRE 1.8.0_45

The minimum hardware requirements for a Windows system are as follows:

• 90MB of hard drive space for the EZSwitchSetup installation directory

• 2GB or more RAM for fabrics containing up to 15 switches

• A minimum of 8MB of video RAM is also recommended

• An Ethernet port

• A serial (COM) port if you plan to connect to the serial port on the switch

The EZSwitchSetup Switch Manager is a simplified version of Web Tools. It streamlines switch management by providing an easy-to-use subset of basic switch management tasks. The EZSwitchSetup Switch Manager performs the following functions:

• Monitor the switch, including port and field-replaceable unit (FRU) status

• Manage Custom Zoning

• Perform basic switch configurations

• Add Ports on Demand (POD)

The Switch Manager works for a single-switch fabric only. It displays only the switch and associated tasks, without fabric information.


3.19 Best practices

Server Best Practices

• Ensure Multipath I/O is installed and configured on the server. This feature provides alternate paths between storage devices and hosts in case the primary path fails, and also provides load balancing between paths.

• Configure the page file size to be 10MB larger than the physical RAM installed in the server. For more information on how to configure the page file size, see Microsoft's page file documentation. Note: Microsoft's guidance targets Windows 7/Vista, but the configuration was tested in Windows Server 2012 R2 and the process is the same.

HBA Best Practices

• Set the queue depth value in QLogic QConvergeConsole to 254 for each Fibre Channel port. The queue depth field specifies how many instructions can be stored in the HBA. If a queue fills up, any extra instructions remain with the server's CPU, and this overflow can result in wasted CPU clock cycles.

Storage Best Practices

• Set the start demand cache flushing value to 80% in the Dell Modular Disk Storage Manager.

Virtual Disk Best Practices

• When creating volumes in the Modular Disk Storage Manager, make sure read and write cache are both enabled, and confirm that dynamic cache read prefetch is enabled. These three settings increase the performance of the storage system.

• When creating volumes in Windows Server, use Vdisk mount points instead of drive letters. For more information on Windows Vdisk mount points, see section 3.7 of this guide. For configuration steps, see Microsoft's mount point documentation. Note: Microsoft's documentation covers Windows 2008, but the same process was tested and confirmed to work in Windows Server 2012 R2.

• Finally, assign an allocation unit size of 64KB when creating volumes in Windows Server 2012. This option increases the block size of the volume being created, which can improve performance because it uses the most efficient block size for data transfer on the system bus.

vRanger Best Practices

• Use uncompressed backups to achieve high performance.

• Virtual appliances can help ease the load on the vRanger proxy server. This practice only works when vRanger is used with ESX and vCenter.

• If there is a database connection error when starting vRanger, restart the Dell vRanger service using the Windows administrative tools services snap-in.

• vRanger should not be installed on a vCenter server; install it on a dedicated server instead. If installed on a vCenter server, vRanger could take up valuable resources on the vCenter infrastructure. To increase backup performance, a virtual appliance (VA) can be deployed on the vCenter infrastructure instead of a full vRanger install.

• SAS drives are recommended for repositories used with vRanger. They are 30% faster than SATA drives.

• If vRanger performance seems slow, there are several configuration settings that can be edited. The maximum number of tasks on a LUN, maximum number of tasks running on a host, and maximum number of tasks per repository should all be tuned; set these three options to 3, 1, and 2, respectively. If jobs complete with these settings without errors, the maximum number of tasks per repository can be increased by 1 at a time until performance begins to degrade.

• It is important to consider the number of backup jobs an infrastructure can support. A poorly designed vRanger infrastructure can saturate a network and slow down performance. To avoid this, limit the number of simultaneous repository backups in vRanger settings.

• vRanger repositories should be configured as GPT. GPT partitioning allows volumes to grow larger than 2TB, which is the limit for MBR-partitioned drives.

• There are many different replication options available within vRanger. The following points explain which replication type should be used based on replication size:


a. Less than 20GB: Use Changed Block Tracking (CBT) if your hardware supports it; if not, use differential replication. Small VMs are replicated very easily with differential replication.

b. 20GB to 150GB: If the machines replicate very frequently, use CBT if your hardware supports it; if not, use hybrid replication, which reduces scan time and allows a shorter replication interval. If the machines replicate infrequently, CBT should still be used if available; if not, use differential replication, which prevents the snapshots from growing throughout the day as they would with hybrid replication.

c. Greater than 500GB: Use CBT if your hardware supports it. If not, use hybrid replication with a frequent interval; hybrid replication decreases scan time, and a frequent interval ensures snapshots do not grow too large. Ensure the replication interval provides enough time for the entire VM to be replicated.
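The size-based guidance above can be captured as a small decision function for planning scripts. This is only a sketch of the rules as written (the function and return values are invented); note the guide leaves the range between 150GB and 500GB unspecified, so the sketch treats everything above 150GB like the large-VM case.

```python
def replication_type(vm_size_gb: float, cbt_supported: bool,
                     frequent_interval: bool = False) -> str:
    """Map the guide's size-based replication rules to a replication type."""
    if cbt_supported:
        # CBT is preferred at every size when the hardware supports it.
        return "CBT"
    if vm_size_gb < 20:
        return "differential"  # small VMs replicate easily this way
    if vm_size_gb <= 150:
        # Frequent interval: hybrid (shorter scans); infrequent: differential.
        return "hybrid" if frequent_interval else "differential"
    # Large VMs (the guide cites > 500GB): hybrid with a frequent interval.
    return "hybrid (frequent interval)"

print(replication_type(10, cbt_supported=False))                           # differential
print(replication_type(100, cbt_supported=False, frequent_interval=True))  # hybrid
print(replication_type(800, cbt_supported=True))                           # CBT
```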

• If a machine must be backed up over a WAN link, it is better to copy the initial backup onto removable media and transport it to the remote backup target than to transfer it over the WAN link. This technique, called pre-seeding, avoids the load an initial backup places on the WAN link.
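The replication sizing guidance above can be condensed into a small decision helper. This is an illustrative sketch only: the function and parameter names are ours, not a vRanger API, and the 150–500GB band (which the guidance does not address explicitly) is treated like the middle band:

```python
def choose_replication(vm_size_gb: float, cbt_supported: bool,
                       frequent_interval: bool) -> str:
    """Pick a replication type per the sizing guidance above (illustrative only)."""
    if cbt_supported:
        # CBT is preferred at every size when the hardware supports it.
        return "CBT"
    if vm_size_gb < 20:
        # Small VMs replicate easily with differential replication.
        return "differential"
    if vm_size_gb > 500:
        # Hybrid with a frequent interval keeps scans short and snapshots small.
        return "hybrid (frequent interval)"
    # 20-150GB (and, by assumption, up to 500GB): frequent schedules favor
    # hybrid (shorter scans); infrequent schedules favor differential
    # (snapshots do not grow throughout the day).
    return "hybrid" if frequent_interval else "differential"

print(choose_replication(100, cbt_supported=False, frequent_interval=False))
# → differential
```

The underlying trade-off is scan time (where hybrid wins) versus snapshot growth between passes (where differential wins), with CBT avoiding both when available.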

4 Summary

The Dell MD38X0f, QLogic QLE2600, and Brocade 6505 provide exceptional performance under varying workloads and in a variety of configurations. These systems are designed to be flexible, robust, and easy to manage. In addition, they provide superior value to an organization that requires the utmost performance for its mission-critical applications.

Dell’s Dynamic Disk Pools technology is an excellent fit with vRanger, not only meeting performance requirements but also reducing maintenance effort by sustaining performance through a disk failure.

5 References

Dell Support Site Documentation Library

Dell Support Documentation for MD38X0f Series

Dell MD38X0f Series Controller Technical Specifications

QLogic QLE2600 Gen 5 16Gb Fibre Channel Adapter

QLogic QConvergeConsole

Brocade 6505 Switch

Dell 5524 Ethernet Switch

5.2 Appendix B

MD38X0f technical specifications

Power

Wattage
MD3800f/MD3820f/MD3600f/MD3620f: AC 600W peak output; DC 700W (these models support a DC power supply)
MD3860f/MD3660f: AC 1755W

Maximum heat dissipation

MD3800f/MD3820f/MD3600f/MD3620f: 2047 BTU/hr
MD3860f/MD3660f: 5988 BTU/hr
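The heat figures follow directly from the power ratings, since 1 watt is approximately 3.412 BTU/hr; a quick sanity check using the wattages from the Power entry above:

```python
BTU_PER_WATT = 3.412  # 1 watt of dissipated power ~= 3.412 BTU/hr

# Peak AC output ratings from the Power entry above.
print(round(600 * BTU_PER_WATT))   # 2U arrays -> 2047 BTU/hr
print(round(1755 * BTU_PER_WATT))  # dense arrays -> 5988 BTU/hr
```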

Voltage
MD3600f/MD3620f: 100 to 240 VAC; 48V DC

MD3860f/MD3660f: 220V AC, auto ranging, 50 Hz/60Hz

Frequency range 50/60Hz

Temperature
Operating: 10°C to 35°C (50°F to 95°F), with a maximum temperature gradation of 10°C per hour. MD3800f/MD3820f/MD3600f/MD3620f support Fresh Air cooling up to 35°C.

Relative humidity Operating: 20% to 80% (non-condensing) with a maximum humidity gradation of 10% per hour

Altitude

MD3800f/MD3820f/MD3600f/MD3620f

Operating: -16 to 3048 m (-50 to 10,000 ft) Note: For altitudes above 2950 feet, the maximum operating temperature is de-rated 1°F/550 ft.

MD3860f/MD3660f Operating: –30.5 m to 3000 m (–100 ft to 9,840 ft) NOTE: For altitudes above 2950 ft, the maximum operating temperature is derated 1.8°F/1000 ft.
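The altitude notes above amount to a linear derating of the temperature ceiling; a sketch, assuming the 35°C (95°F) operating maximum from the Temperature entry:

```python
def derated_max_temp_f(altitude_ft: float, base_f: float = 95.0,
                       rate_f: float = 1.0, per_ft: float = 550.0) -> float:
    """Maximum operating temperature (F) after altitude derating.

    Defaults model the 2U arrays (1F per 550 ft above 2950 ft); pass
    rate_f=1.8, per_ft=1000.0 for the dense arrays. base_f assumes the
    35C (95F) ceiling from the Temperature entry above.
    """
    excess_ft = max(0.0, altitude_ft - 2950.0)
    return base_f - rate_f * (excess_ft / per_ft)

# At 10,000 ft a 2U array's ceiling drops by (10000 - 2950) / 550 ~= 12.8F:
print(round(derated_max_temp_f(10_000), 1))  # → 82.2
```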


Connectivity

RAID and Controllers

Hot-swappable controllers

MD34x0 and MD38x0x controllers are not compatible with the older generation arrays (MD32x0x and MD36x0x).

Single controller models

MD3600f and MD3620f: Support up to 64 servers when configured with a Fibre Channel switch; available with 2GB cache controllers.
MD3800f and MD3820f: Support up to 64 servers when configured with a Fibre Channel switch; available with 4GB cache controllers.

Dual controller models
MD3800f/MD3820f/MD3860f/MD3600f/MD3620f/MD3660f: Support up to 64 servers when configured with Fibre Channel switches; available with an option of 2GB, 4GB or 8GB cache controllers. The MD3060e contains two hot-swappable enclosure management modules (EMMs).

Controller features
Each controller contains 2GB, 4GB or 8GB of battery-backed cache. Dual controllers operate in an active-active environment, mirroring each other’s cache. Cache protection is provided via flash memory for permanent data protection. DDP eliminates the need for complex RAID management while optimizing data protection, and can be used alongside existing RAID configurations. Dual controllers are required for the MD3860f and MD3660f dense arrays. The 4GB and 8GB cache controller options are only available in dual controller configurations.

RAID levels
Support for RAID levels 0, 1, 10, 5, 6 and DDP. Up to 180/192 physical disks per group in RAID 0 and 10; up to 30 physical disks per group in RAID 5 and 6; up to 512 virtual disks.
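When planning disk groups against these limits, usable capacity follows standard RAID overhead arithmetic; a rough sketch (generic RAID math, not an MD3-specific calculation, and formatting overhead is ignored):

```python
def usable_tb(raid_level: int, disks: int, disk_tb: float) -> float:
    """Approximate usable capacity of a disk group (ignores formatting overhead)."""
    if raid_level == 0:
        data_disks = disks        # pure striping, no redundancy
    elif raid_level in (1, 10):
        data_disks = disks // 2   # mirroring halves capacity
    elif raid_level == 5:
        data_disks = disks - 1    # one disk's worth of parity
    elif raid_level == 6:
        data_disks = disks - 2    # two disks' worth of parity
    else:
        raise ValueError("unsupported RAID level")
    return data_disks * disk_tb

# A maximum-size 30-disk RAID 6 group of 4TB Near-line SAS drives:
print(usable_tb(6, 30, 4.0))  # → 112.0
```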

Compatible routers QLogic ISR 6240 (provides Fibre Channel and iSCSI connectivity)

Storage

Hard disk drives Hot-pluggable hard drives

MD3800f/MD3600f Up to twelve (12) 3.5 inch SAS, Near-line SAS and SSD drives

MD3820f/MD3620f Up to twenty-four (24) 2.5 inch SAS, Near-line SAS and SSD drives

MD3860f/MD3660f Up to sixty (60) 2.5 inch and 3.5 inch SAS, Near-line SAS and SSD drives

MD3800f/MD3600f
3.5” 15,000 RPM SAS drives available in 300GB and 600GB
3.5” 7,200 RPM Near-line SAS drives available in 500GB, 1TB, 2TB, 3TB, 4TB and 6TB
SSDs available in 200GB and 400GB, and Read Intensive SSDs in 800GB and 1.6TB (available with 3.5” HDD carriers)

MD3820f/MD3620f
2.5” 15,000 RPM SAS drives available in 146GB and 300GB
2.5” 10,000 RPM SAS drives available in 600GB, 900GB and 1.2TB
2.5” 7,200 RPM Near-line SAS drives available in 500GB and 1TB
2.5” SSDs available in 200GB and 400GB, and Read Intensive SSDs in 800GB and 1.6TB

MD3860f/MD3660f
3.5” 7,200 RPM Near-line SAS drives available in 500GB, 1TB, 2TB, 3TB, 4TB and 6TB
2.5” 15,000 RPM SAS drives available in 146GB and 300GB
2.5” 10,000 RPM SAS drives available in 300GB, 600GB, 900GB and 1.2TB
2.5” 7,200 RPM Near-line SAS drives available in 500GB and 1TB
2.5” SSDs available in 200GB and 400GB, and Read Intensive SSDs in 800GB and 1.6TB (available with 3.5” HDD carriers)

MD3060e
3.5” 7,200 RPM Near-line SAS drives available in 500GB, 1TB, 2TB, 3TB, 4TB and 6TB
2.5” 15,000 RPM SAS drives available in 146GB and 300GB
2.5” 10,000 RPM SAS drives available in 300GB, 600GB, 900GB and 1.2TB
2.5” 7,200 RPM Near-line SAS drives available in 500GB and 1TB
2.5” SSDs available in 200GB and 400GB, and Read Intensive SSDs in 800GB and 1.6TB (available with 3.5” HDD carriers)

See PowerVault MD3800f/MD3820f/MD3600f/MD3620f Support Matrix for the most current list of supported hard drives and PowerVault MD3860f/MD3660f Support Matrix for the most current list of supported hard drives on the dense array.

Expansion capabilities MD3800f/MD3820f/MD3600f/MD3620f expand up to 192 drives with the MD1200 and MD1220 MD3860f/MD3660f expand up to 180 drives with two MD3060e dense enclosures
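These maxima are simple enclosure arithmetic; for example (assuming 24-drive MD1220 and 60-drive MD3060e enclosures):

```python
# MD3820f: 24 internal slots plus seven 24-drive MD1220 enclosures.
print(24 + 7 * 24)   # → 192
# MD3860f: 60 internal slots plus two 60-drive MD3060e enclosures.
print(60 + 2 * 60)   # → 180
```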


MD3060e dense enclosure

EBOD expansion enclosure designed for use with the MD3 dense array series only (MD3260, MD3260i, MD3660i, MD3660f, MD3860i, MD3860f)

Dimensions are identical to the dense array series (MD3260, MD3260i, MD3660i, MD3660f, MD3860i, MD3860f)

Controllers on the MD3 dense series arrays have either one 6Gb SAS Out port (MD3260, MD3260i, MD3660i, and MD3660f) or one 12Gb SAS Out port (MD3460, MD3860i, and MD3860f) for connecting to the MD3060e enclosure expansion for additional capacity (maximum of 2 per dense array). For systems without premium feature activation, the physical disk limit is 120.

MD3060e contains two hot-swappable enclosure management modules (EMMs).

RAID

Hot-swappable controllers

Single controller models (MD3600f and MD3620f)

Supports up to 64 servers when configured with Fibre Channel switches.

Dual controller models (MD3600f, MD3620f, and MD3660f)

Supports up to 64 servers when configured with Fibre Channel switches. The MD3060e contains two hot-swappable enclosure management modules (EMMs).

Controller features
Each controller contains 2GB of battery-backed cache. Dual controllers operate in an active-active environment, mirroring each other’s cache. Cache protection is provided via flash memory for permanent data protection. Dynamic Disk Pools (DDP) eliminates the need for complex RAID management while optimizing data protection, and can be used alongside existing RAID configurations.

RAID levels
Support for RAID levels 0, 1, 10, 5, 6. Up to 180/192 physical disks per group in RAID 0 and 10; up to 30 physical disks per group in RAID 5 and 6; up to 512 virtual disks.

Chassis

Height x Width x Depth

MD3800f/MD3600f: 8.68cm (3.42”) x 44.63cm (17.57”) x 56.1cm (22.09”)
MD3820f/MD3620f: 8.68cm (3.42”) x 44.63cm (17.57”) x 50.8cm (20”)
MD3860f/MD3660f: 177.80mm (7.0”) x 482.60mm (19.0”) x 825.50mm (32.5”)
MD3060e: 177.80mm (7.0”) x 482.60mm (19.0”) x 825.50mm (32.5”)

Weight
MD3800f/MD3600f: 29.3kg (64.59 lbs.) (maximum configuration)
MD3820f/MD3620f: 24.2kg (53.35 lbs.) (maximum configuration)
MD3860f/MD3660f: 105.23kg (232 lbs.) (maximum configuration)
MD3060e: 105.23kg (232 lbs.) (maximum configuration)

Rack Support for MD3800f, MD3820f, MD3600f and MD3620f

Dell ReadyRails™ II static rails for tool-less mounting in 4-post racks with square or unthreaded round holes or tooled mounting in 4-post threaded-hole racks

Regulatory
PowerVault 3600f: Regulatory Model E03J; Regulatory Type E03J001
PowerVault 3620f: Regulatory Model E04J; Regulatory Type E04J001
PowerVault 3660f: Regulatory Model E08J; Regulatory Type E08J001
PowerVault 3800f: Regulatory Model E03J; Regulatory Type E03J001
PowerVault 3820f: Regulatory Model E04J; Regulatory Type E04J001
PowerVault 3860f: Regulatory Model E08J; Regulatory Type E08J001

Product Safety, EMC and Environmental Datasheets
Dell Regulatory Compliance Home Page
Dell and the Environment

ENERGY STAR®
The Dell Storage MD3200i, MD3620i, and MD3620f arrays with the MD1200 expansion enclosure have earned the ENERGY STAR Data Center Storage designation. The Dell Storage MD3220i, MD3620i, and MD3620f arrays with the MD1220 expansion enclosure have earned the ENERGY STAR Data Center Storage designation. The Dell Storage MD3260i and MD3660i arrays have earned the ENERGY STAR Data Center Storage designation.

See the ENERGY STAR website to learn more about the ENERGY STAR Storage 1.0 designation.