
EMC® PowerPath® Family
Version 5.2, 5.3, and 5.5

Product Guide
P/N 300-006-627
REV A12

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com


Copyright © 1997–2010 EMC Corporation. All rights reserved.

Published November 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Preface

Chapter 1  Introduction
    About PowerPath Multipathing licenses
        CLARiiON AX-series support
    PowerPath and related documentation
        PowerPath documentation
        Storage system documentation
        Other documentation

Chapter 2  PowerPath Overview
    Introduction
        Path management
        PowerPath features
    Using multiple ports
        Paths
        Active-active, active-passive, and ALUA storage systems
        Path sets
        Native devices
        Pseudo devices
    Dynamic multipath load balancing
        Load balancing with and without PowerPath
        Load-balancing and failover policies
    Automatic path failover
    Proactive path testing and automatic path restoration
        Path states
        When are path tests done?
        Periodic testing of live paths
        Periodic testing and autorestore of dead paths
        How often are paths tested?
    Application tuning in a PowerPath environment
        Channel groups
    PowerPath management tools
        PowerPath CLI

Chapter 3  PowerPath Configuration Requirements
    PowerPath connectivity
        HBA and transport considerations
        High availability
    Fibre Channel configuration requirements
    FCoE configuration requirements
        High availability configurations
    iSCSI configuration requirements
        Sample iSCSI configurations
    SCSI configuration requirements
    Storage configuration requirements and recommendations
        All storage systems
        Symmetrix storage systems
        Supported Hitachi Lightning, Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems
        Supported HP StorageWorks EVA storage systems
        CLARiiON storage systems
        Invista storage devices
        VPLEX storage devices
    Dynamic reconfiguration
        Hot swapping an HBA

Appendix A  PowerPath SE
    PowerPath SE functionality
    Installing PowerPath SE
    Using PowerPath SE

Appendix B  PowerPath Family Functionality Summary

Appendix C  PowerPath Family End-of-Life Summary

Glossary

Index


Figures

1  Without PowerPath: One path to each logical device
2  With PowerPath: Multiple paths to each logical device
3  Path sets
4  Native devices
5  Pseudo devices
6  I/O queuing without PowerPath
7  I/O queuing with PowerPath
8  Physical I/O path failure points
9  Channel groups
10 Highly available Fibre Channel configuration with PowerPath
11 High-availability (multiple-fabric) Fibre Channel configuration
12 Single-switch Fibre Channel configuration
13 High-availability (multiple-fabric) Fibre Channel over Ethernet configuration
14 High-availability (multiple-fabric) Fibre Channel over Ethernet to active-passive arrays
15 Single NIC/HBA configuration
16 Multiple NICs/HBAs to multiple subnets
17 Multiple NICs/HBAs to one subnet
18 Single-initiator connections
19 PowerPath SE supported configuration


Tables

1  PowerPath Multipathing licenses
2  PowerPath documentation set
3  Reconfiguring pseudo devices
4  PowerPath Family functionality summary by version and platform
5  PowerPath end-of-life summary


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Audience

This document is part of the PowerPath documentation set, and is intended for use by storage administrators and other information system professionals responsible for using, installing, and maintaining PowerPath.

Readers of this manual are expected to be familiar with the host operating system on which PowerPath runs, storage system management, and the applications used with PowerPath.

Note that this Product Guide covers updates for PowerPath versions 5.2, 5.3, and 5.5.

Note: This manual applies to PowerPath on all supported platforms and storage systems, unless indicated otherwise in the text.


Related documentation

The PowerPath documentation set includes:

◆ EMC PowerPath Family Product Guide (this document)
◆ EMC PowerPath Family CLI and System Messages Reference Guide
◆ EMC PowerPath for AIX Installation and Administration Guide
◆ EMC PowerPath for HP-UX Installation and Administration Guide
◆ EMC PowerPath for Linux Installation and Administration Guide
◆ EMC PowerPath for Solaris Installation and Administration Guide
◆ EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide
◆ EMC PowerPath Family for AIX Release Notes
◆ EMC PowerPath Family for HP-UX Release Notes
◆ EMC PowerPath Family for Solaris Release Notes
◆ EMC PowerPath Family for Linux Release Notes
◆ EMC PowerPath and PowerPath/VE Family for Windows Release Notes
◆ EMC PowerPath Encryption with RSA User Guide
◆ EMC PowerPath Migration Enabler User Guide
◆ EMC PowerPath Management Pack for Microsoft Operations Manager User Guide

These PowerPath manuals are updated periodically. Electronic versions of the updated manuals are available on the Powerlink website: http://Powerlink.EMC.com.

If your environment includes Symmetrix storage systems, refer to the EMC host connectivity guides, available on the Powerlink website, for more information.

If your environment includes CLARiiON storage systems, refer also to the following manuals:

◆ EMC host connectivity guides

◆ CLARiiON Storage-System Support website (www.emc.com/clariionsupport)

If your environment includes other vendors’ storage systems, refer to the appropriate documentation from your vendor.

PowerPath documentation set changes

Note the following changes in the PowerPath documentation set:

◆ For version 5.5 and moving forward, we are including a revision history in all documents of the documentation set. In PowerPath documents that include information for versions earlier than 5.5, the revision history reflects changes to the document from version 5.5 onward.


◆ For version 5.3 and moving forward, the information for the PowerPath family of products is combined into one release note per platform. PowerPath multipathing, Migration Enabler, and Encryption with RSA are covered in one document.

◆ For version 5.3 and moving forward, the document title has been changed to PowerPath Family Product Guide, to reflect the inclusion of PowerPath/VE for Windows Hyper-V. PowerPath/VE for Windows Hyper-V provides the same multipathing functionality as PowerPath for Windows in nonvirtual environments, and it supports Migration Enabler and PowerPath Encryption with RSA. Throughout this document, all references to PowerPath (for Windows) refer to PowerPath and PowerPath/VE for Windows Hyper-V, unless otherwise noted.

PowerPath/VE for VMware vSphere is covered in the PowerPath/VE for VMware vSphere documents, available on Powerlink.

Revision history

The following table presents the revision history of this document:

Revision A10, August 27, 2010: Modification of the following sections in relation to the release of PowerPath and PowerPath/VE 5.5 for Windows:
• “About PowerPath Multipathing licenses” on page 20
• “PowerPath and related documentation” on page 21
• “PowerPath Encryption with RSA” on page 32
• “Application tuning in a PowerPath environment” on page 52
• “HBA and transport considerations” on page 56
• “FCoE configuration requirements” on page 63
• “iSCSI configuration requirements” on page 66
• “Supported HP StorageWorks EVA storage systems” on page 72
• “VPLEX storage devices” on page 74
• “PowerPath Family Functionality Summary” on page 79
• “PowerPath Family End-of-Life Summary” on page 83

Revision A11, October 8, 2010: Modification of the following sections in relation to the release of PowerPath 5.5 for Linux:
• “PowerPath Migration Enabler” on page 34
• Appendix B, “PowerPath Family Functionality Summary,” on page 79
• Appendix C, “PowerPath Family End-of-Life Summary,” on page 83

Revision A12, November 30, 2010: Modification of “High availability configurations” on page 63 and addition of “Zoning to active-passive arrays” on page 64.


Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION! A caution contains information essential to avoid data loss or damage to the system or equipment. The caution may apply to hardware or software.

IMPORTANT! An important notice contains information essential to operation of the software. The important notice applies only to software.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal
Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, filenames, functions, utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold
Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic
Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier
Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold
Used for:
• Specific user input (such as commands)

Courier italic
Used in procedures for:
• Variables on the command line
• User input variables

< >   Angle brackets enclose parameter or variable values supplied by the user
[ ]   Square brackets enclose optional values
|     Vertical bar indicates alternate selections; the bar means “or”
{ }   Braces indicate content that you must specify (that is, x or y or z)
...   Ellipses indicate nonessential information omitted from the example


Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments

Your suggestions help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to [email protected].


Chapter 1  Introduction

This chapter introduces PowerPath and provides a road map of PowerPath and related documentation. Topics include:

◆ About PowerPath Multipathing licenses
◆ PowerPath and related documentation


About PowerPath Multipathing licenses

The EMC® PowerPath® Multipathing license type determines the PowerPath functionality available. Table 1 on page 20 summarizes the PowerPath licenses.

Table 1 PowerPath Multipathing licenses

                              PowerPath (a)          PowerPath SE (b)       PowerPath/VE (c)

Supported Storage Systems (Fibre Channel, Fibre Channel over Ethernet, and iSCSI (d))

Symmetrix®                    Yes                    Yes                    Yes
CLARiiON®                     Yes                    Yes                    Yes
Invista®                      Yes                    Yes                    Yes
VPLEX™                        Yes                    Yes                    Yes
Third-party                   Yes (e)                No                     Yes

Supported Operating Systems

Windows, Linux                Yes                    Yes                    N/A
UNIX                          Yes                    Yes                    No
VMware vSphere                No                     No                     Yes (f)
Windows 2008 Hyper-V          Requires separate      Yes                    Yes
                              license
Windows 2008 Hyper-V Server   Yes                    Yes                    Yes

Features

Failover                      End-to-end failover    Backend failover       End-to-end failover
                                                     only (g)
Load balancing                Yes                    No                     Yes
HBA support                   Two or more HBAs       Single HBA             Two or more HBAs
Path support                  32 paths per           Two paths only         32 paths per
                              logical device                                logical device

a. Formerly called PowerPath Enterprise, PowerPath Enterprise Plus.

b. PowerPath SE, PowerPath Fabric Failover, and Utility Kit PowerPath refer to the same product.

c. PowerPath/VE license available only through Powerlink® Licensing.

d. The EMC Support Matrix PowerPath Family Protocol Support, available on Powerlink, provides protocol support information for the EMC PowerPath family of products.

e. PowerPath 5.0, 5.1, and 5.2 for Windows does not support third-party arrays.

f. Linux and Windows are supported as Guest OSs on VMware vSphere. See the PowerPath/VE for VMware vSphere Release Notes for supported OS versions. Additional versions are supported via RPQ.

g. PowerPath Fabric Failover only for Symmetrix. PowerPath Storage Processor Failover only for CLARiiON.


CLARiiON AX-series support

Certain versions of PowerPath (as listed in the E-Lab™ Interoperability Navigator on Powerlink) provide full functionality with or without a license when the host is connected exclusively to CLARiiON AX-series storage systems. Note that AX models earlier than CLARiiON AX4-5 are not supported with PowerPath for AIX or HP-UX.

PowerPath and related documentation

This section includes information about the PowerPath documentation set and about other related documentation.

PowerPath documentation

Table 2 on page 21 shows the PowerPath documentation set.


Table 2 PowerPath documentation set

PowerPath and PowerPath/VE for Windows Installation and Administration Guide
Describes how to install and remove the PowerPath and PowerPath/VE for Microsoft Hyper-V software and install and configure PowerPath and PowerPath/VE in Microsoft cluster environments. Discusses other issues and administrative tasks specific to PowerPath and PowerPath/VE on a Windows host. (a)

PowerPath for AIX Installation and Administration Guide
Describes how to install and remove the PowerPath software, install and configure PowerPath in AIX cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on an AIX host.

PowerPath for HP-UX Installation and Administration Guide
Describes how to install and remove the PowerPath software, install and configure PowerPath in HP-UX cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on an HP-UX host.

PowerPath for Linux Installation and Administration Guide
Describes how to install and remove PowerPath on a Linux host, and install and configure a PowerPath device as the boot device. (a) Discusses other issues and administrative tasks specific to PowerPath on a Linux host.

PowerPath for Solaris Installation and Administration Guide
Describes how to install and remove the PowerPath software, install and configure PowerPath in Solaris cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on a Solaris host.

PowerPath Family Product Guide
Describes the load-balancing and failover features and configuration requirements of PowerPath and PowerPath/VE.

PowerPath Family CLI and System Messages Reference Guide
Describes the command line utility used to monitor and manage a PowerPath environment. Discusses messages returned by the PowerPath driver, PowerPath installation process, powermt utility, and other PowerPath utilities, and suggests how to respond to them.

PowerPath Family Release Notes (b)
Describes hardware and software requirements for the host and storage systems for your PowerPath physical and, where applicable, virtual multipathing environment; describes hardware and software requirements for Migration Enabler, as well as known issues and limitations; and describes new features, configuration information, and supplemental information about PowerPath Encryption with RSA.

Note: PowerPath Migration Enabler and PowerPath Encryption with RSA both require a separate license key.

PowerPath Migration Enabler User Guide
Describes how to migrate data from one storage system to another with Migration Enabler and either Open Replicator or Invista. PowerPath Migration Enabler requires a separate product license.

PowerPath Encryption with RSA User Guide
Describes the PowerPath Encryption with RSA product, how to configure a PowerPath Encryption with RSA environment, how to enable and disable encryption, and how to encrypt data. PowerPath Encryption with RSA requires a separate product license.

PowerPath Management Pack for Microsoft Operations Manager User Guide
Contains information about EMC PowerPath Management Pack 2.0 for MOM (Microsoft Operations Manager) 2005 and SCOM (Systems Center Operations Manager) 2007. Provides installation and configuration procedures for the Windows SNMP Service, describes events supported by the management pack, and gives use cases and troubleshooting tips. This document is not available in the PowerPath documentation library on Powerlink; rather, it is packaged with the software and is available with the software on Powerlink.

a. The PowerPath installer has been localized for this platform. Localized versions of the installation and administration guide are available in Brazilian Portuguese, French, German, Italian, Korean, Japanese, Latin American Spanish, and simplified Chinese.

b. Beginning with PowerPath version 5.3, we have combined information for the PowerPath family of products into one release note per platform. PowerPath multipathing, Migration Enabler, and Encryption with RSA are covered in one document.

These manuals are updated periodically and posted on the Powerlink website (http://Powerlink.EMC.com).


Storage system documentation

The following documentation provides information on setting up your storage systems for PowerPath:

◆ EMC host connectivity guides (available on the Powerlink website)

◆ CLARiiON Storage-System Support website (www.emc.com/clariionsupport)

◆ The EMC product guide or configuration planning guide for your storage-system model

◆ The EMC installation guide and vendor documentation for your HBA (host bus adapter)

Other documentation

Some PowerPath features can be administered through other EMC applications:

◆ EMC ControlCenter Overview provides information on using PowerPath with EMC ControlCenter®.

◆ Limited PowerPath functions are supported by the Navisphere® application for CLARiiON systems. Refer to CLARiiON Storage-System Support website (www.emc.com/clariionsupport).

◆ EMC Invista documentation.

◆ EMC VPLEX documentation.


Chapter 2  PowerPath Overview

This chapter is an overview of PowerPath. Topics include:

◆ Introduction
◆ Using multiple ports
◆ Dynamic multipath load balancing
◆ Automatic path failover
◆ Proactive path testing and automatic path restoration
◆ Application tuning in a PowerPath environment
◆ PowerPath management tools


Introduction

PowerPath is host-based software that provides path management. PowerPath operates with several storage systems, on several operating systems, with Fibre Channel and iSCSI data channels (and, with Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 non-clustered hosts only, parallel SCSI channels).

Path management

PowerPath works with the storage system to intelligently manage I/O paths.

PowerPath supports multiple paths to a logical device, enabling PowerPath to provide:

◆ Automatic failover in the event of a hardware failure. PowerPath automatically detects path failure and redirects I/O to another path.

◆ Dynamic multipath load balancing. PowerPath distributes I/O requests to a logical device across all available paths, thus improving I/O performance and reducing management time and downtime by eliminating the need to configure paths statically across logical devices.

PowerPath path management features and functionality are described in this guide.

PowerPath features

PowerPath features include:

◆ Multiple paths, for higher availability and performance — PowerPath supports multiple paths between a logical device and a host. Having multiple paths enables the host to access a logical device even if a specific path is unavailable. Also, multiple paths can share the I/O workload to a given logical device.

◆ Dynamic multipath load balancing — PowerPath improves a host’s ability to manage heavy I/O loads by continually balancing the load on all paths, eliminating the need for repeated static reconfigurations as workloads change.

◆ Proactive I/O path testing and automatic path recovery — PowerPath periodically tests failed paths to determine if they are fixed. If a path passes the test, it is restored automatically, and PowerPath resumes sending I/O to it. During path restoration, the storage system, host, and application remain available.


PowerPath also periodically tests live paths that are idle. This allows PowerPath to report path problems quickly, avoiding delays that would otherwise result from trying to use a defective path when I/O is started on the logical device.

◆ Automatic path failover — PowerPath automatically redirects I/O from a failed path to an alternate path. This eliminates loss of data and application downtime. Failovers are transparent and nondisruptive to applications.

◆ High-availability cluster support — PowerPath is particularly beneficial in cluster environments because it can prevent operational interruptions and costly downtime. PowerPath’s path failover capability avoids node failover, maintaining uninterrupted application support on the active node in the event of a path disconnect (as long as another path is available).

◆ Thin device support — PowerPath supports thin (virtually provisioned) devices on any PowerPath platform that supports Symmetrix Enginuity™ 5773 and CLARiiON FLARE® 04.28.000.5.501 and later arrays. The EMC Support Matrix, EMC Virtual Provisioning Support and the EMC host connectivity guide for your platform, available on Powerlink, provide more information on thin device support.

◆ Installation and administration features:

• Unattended installation

PowerPath 5.0 and later for Windows supports unattended PowerPath installation that uses command-line parameters which do not require any user input.

PowerPath 5.2 and later supports unattended installation in other PowerPath-supported platforms. The PowerPath installation and administration guide for your platform provides more information.

• NRU (no reboot upgrade)

You need not reboot the host after the upgrade, provided you close all applications that use PowerPath devices before you install PowerPath. The PowerPath installation and administration guide for your platform provides platform-specific information on NRU.


• R1/R2 boot

If a storage system device corresponding to a bootable emcpower device is mirrored through SRDF®, then in the event of a server failure at the local storage system, PowerPath can fail over the boot disk to the remote mirror disk and boot the server on an identical remote host. The PowerPath installation and administration guide for your platform provides platform-specific information on R1/R2 boot.

Virtual technologies

PowerPath supports the following virtual technologies:

◆ Hyper-V — PowerPath supports Hyper-V with Windows 2008 64 bit. Specifically:

• On the parent partition, all the features supported in physical environments are supported in the virtual environment on PowerPath 5.3 and later for Windows:

– Multipathing load balancing and failover functionalities
– Migrations
– Encryption
– MSCS Clusters
– iSCSI LUNs
– Fibre Channel LUNs

• On the child partition, the following features are supported on PowerPath 5.2 and later for Windows:

– PowerPath installation for Windows operating systems supported by PowerPath 5.2 for iSCSI LUNs exposed through Microsoft software iSCSI initiator

– Multipathing functionalities only for iSCSI LUNs exposed through Microsoft software iSCSI initiator

The E-Lab Interoperability Navigator provides updated Hyper-V support information. The PowerPath and PowerPath/VE Family for Windows Release Notes provides information on supported child partition operating systems.


Remote management

PowerPath supports remote management of the PowerPath environment through compatibility with the following software products:

◆ Systems Management Server (SMS) — SMS is a systems management software product by Microsoft for managing large groups of Windows-based computer systems. Configuration Manager, a feature of SMS, provides remote control, patch management, software distribution, operating system deployment, and hardware and software inventory. PowerPath 5.3 for Windows and later supports SMS. The PowerPath and PowerPath/VE Family for Windows Release Notes provides more information on SMS.

◆ SNMP management daemon — PowerPath for Windows supports a management daemon that monitors PowerPath devices and alerts the administrator when access to devices is disrupted. This functionality is delivered through System Center Operations Manager (SCOM) 2007 and Microsoft Operations Manager (MOM) 2005 management packs. The PowerPath Event Monitoring feature issues MOM alerts and SNMP traps for specific PowerPath events. The MOM alerts are viewable using the respective version of the MOM consoles. The SNMP traps are viewable through an SNMP manager. These events are generally multipathing events, such as Path is Dead. Apart from event management, it also implements a couple of MOM tasks that can be run to retrieve the version and license capabilities of PowerPath installations.

The PowerPath Management Pack for Microsoft Operations Manager User Guide provides more information.

A similar SNMP management daemon is supported in PowerPath 5.3 and later on AIX, Solaris, and Linux. The PowerPath installation and administration guides for AIX, Solaris, and Linux provide information on support and other information on the SNMP management daemon.

PowerPath Migration Enabler

PowerPath Migration Enabler is a host-based migration tool that allows you to migrate data between storage systems. PowerPath Migration Enabler is independent of PowerPath Multipathing and does not require that you use PowerPath for multipathing. PowerPath Migration Enabler works in conjunction with another underlying technology, such as Open Replicator (OR) or Invista.


Note: PowerPath Migration Enabler requires a separate product license. A Migration Enabler license is technology specific; it allows you to migrate data with either Open Replicator or Invista, but not with both technologies.

Migration Enabler features include:

◆ Virtual encapsulation — When using Migration Enabler with Invista, Invista encapsulates the source-device name. The original Symmetrix or CLARiiON element is the source logical unit in the migration, and the Invista Virtual Volume is the target. The PowerPath Migration Enabler User Guide provides information on migrating data to an Invista Virtual Volume.

◆ PowerPath Migration Enabler with Open Replicator — When using Migration Enabler with Open Replicator for Symmetrix, data is copied through the fabric from the source logical unit to the target logical unit. Software on the Symmetrix system where the target resides controls the data movement. Migration Enabler mirrors I/O to keep the source and target logical units synchronized throughout the migration process. The PowerPath Migration Enabler User Guide provides information on Open Replicator.

Some features available on PowerPath 5.2 and later on PowerPath Migration Enabler are:

◆ PowerPath Migration Enabler Host Copy — When using Migration Enabler with host-based copy (also called Host Copy), Migration Enabler works in conjunction with the host operating system to migrate data from the specified source logical unit to the target logical unit. A Host Copy migration does not use or require a direct connection between the arrays containing the source and target logical units. Host Copy can be used to migrate plaintext data, or it can be used to migrate data to or from an encrypted logical unit.

Because Host Copy migrations consume host resources, Migration Enabler provides parameters that allow you to control the degree of host-resource usage. The PowerPath Migration Enabler User Guide provides information on Migration Enabler Host Copy.

You can also pause and resume a Host Copy migration that is in the Syncing state. Pausing a migration allows host resources to be released for other operations. The synchronization can then be resumed at a later, more convenient time. The PowerPath Migration Enabler User Guide provides information on Migration Enabler pause and resume.
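As a rough illustration of that pause-and-resume flow, the sketch below strings together powermig calls on a hypothetical Host Copy migration. Only the powermig setup, throttle, and option commands are named in this guide; the -techType, -src, -tgt, and -handle options, the pause and resume subcommand spellings, and the device names shown here are assumptions drawn from a typical session, so check the PowerPath Migration Enabler User Guide and powermig help for the exact syntax on your platform and version.

   # Set up a Host Copy migration from a source to a target logical unit
   # (device names are placeholders; setup reports a migration handle).
   powermig setup -techType hostcopy -src emcpowera -tgt emcpowerb

   # Start copying, pause to free host resources, and resume the copy later.
   powermig sync -handle 1
   powermig pause -handle 1
   powermig resume -handle 1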

Some features available on PowerPath 5.3 and later on PowerPath Migration Enabler are:

◆ PowerPath Migration Enabler TimeFinder/Clone — TimeFinder/Clone is a Solutions Enabler technology that can be used with Migration Enabler to create a full volume copy of a source device when the source and target devices are in the same Symmetrix array. The PowerPath Migration Enabler User Guide provides information on PowerPath Migration Enabler with TimeFinder/Clone.

◆ Thin device support — PowerPath Migration Enabler supports migrations with thin (virtually provisioned) devices on Symmetrix Enginuity 5773 and CLARiiON FLARE 04.28.000.5.501 and later arrays. Devices on these Symmetrix and CLARiiON arrays are auto-detected by Migration Enabler Host Copy during powermig setup. Thin devices with Migration Enabler are supported on PowerPath 5.3 and later for Linux, Windows, Solaris, and AIX. The Migration Enabler section of the PowerPath family release notes for your platform provides more information on Migration Enabler support of thin devices. The EMC Host Connectivity Guide for your platform and the E-Lab Interoperability Navigator, available on Powerlink, provide more background on thin devices.

Some features available on PowerPath 5.5 and later on PowerPath Migration Enabler are:

◆ Host Copy Ceiling — Host Copy Ceiling lets you specify an upper limit on the aggregate rate of copying for all Host Copy migrations. A new powermig option command is used to set the ceiling value. Host Copy Ceiling is supported on PowerPath 5.5 for Linux. The Migration Enabler section of the PowerPath family release notes provides more information.

◆ SuspendTime on Host Copy — The Host Copy suspendTime argument has been removed from the powermig throttle command because Host Copy has been enhanced to no longer suspend application I/O while copying from source to target. The argument has been removed in PowerPath 5.5 for Linux. The Migration Enabler section of the PowerPath family release notes provides more information.


Remote management on PowerPath Migration Enabler

PowerPath Migration Enabler supports the following remote management tools:

◆ Solutions Enabler (SE) Thin Client — This allows you to run PowerPath's migration features using a remotely installed Solutions Enabler package instead of having to run PowerPath Migration Enabler and Solutions Enabler on the same host. This feature is supported on PowerPath 5.3 and later for Windows and Solaris. The Migration Enabler section of the PowerPath Family release notes for your platform provides more information on remote SE.

The following documents, available on Powerlink, provide more information about PowerPath Migration Enabler:

◆ EMC PowerPath Migration Enabler User Guide

◆ EMC PowerPath Family Release Notes

PowerPath Encryption with RSA

PowerPath 5.2 introduces support for PowerPath Encryption with RSA software. PowerPath Encryption with RSA is host-based software distributed as part of the PowerPath 5.2 package. A separate product license is needed to use PowerPath Encryption.

PowerPath Encryption provides the following security benefits:

◆ Ensures the confidentiality of data on a disk drive that is physically removed from a data center.

◆ Prevents anyone who gains unauthorized access to the disk from reading or using the data on that device.

PowerPath Encryption uses strong encryption protocols to safeguard sensitive data on disk devices. It transparently encrypts data written to a disk device and decrypts data read from it.

Some features available on PowerPath 5.2 and later with PowerPath Encryption with RSA are:

◆ Interoperability with PowerPath Migration Enabler software — Interoperability with Migration Enabler provides data migration capabilities. In a PowerPath Encryption environment, Migration Enabler migrates:

• Plaintext data on an unencrypted logical unit to an encrypted virtual logical unit.


• Encrypted data on a virtual logical unit to plaintext data on an unencrypted logical unit.

• Encrypted data on a virtual logical unit to a different virtual logical unit (rekeying).

The PowerPath Migration Enabler User Guide provides general information on performing migrations.

◆ Encryption and volume managers — PowerPath Encryption supports the Logical Volume Manager (LVM) on AIX, the Sun Volume Manager (SVM) on Solaris hosts, and the VERITAS Volume Manager (VxVM) on AIX, Solaris, and Windows hosts. You can use PowerPath Encryption encrypted virtual logical units to create LVM and SVM disk sets and VxVM disk groups, and to allocate volumes within the disk set or group. All LVM, SVM, and VxVM operations work with PowerPath Encryption encrypted virtual logical units, subject to the best practices described in the PowerPath Encryption with RSA User Guide. The Encryption with RSA section of the PowerPath Family release notes for your platform provides information on supported LVM, SVM, and VxVM versions.

Some features available on PowerPath 5.3 and later with PowerPath Encryption with RSA are:

◆ Thin device support — PowerPath Encryption supports thin devices as described in “PowerPath Migration Enabler” on page 27. The Encryption with RSA section of the PowerPath Family release notes for your platform provides more information on thin devices.

◆ Host Client Configuration — For encryption configuration and enablement, a utility is required for the host configuration files. This will assist in preventing errors during installation. PowerPath Encryption supports this utility.

• Encryption with RKM appliance support — Encryption with RKM appliance is supported on PowerPath 5.2 and later. The PowerPath Encryption User Guide provides information on the RKM appliance. The Encryption with RSA section of the PowerPath Family release notes for your platform provides information on supported RKM appliances for your environment.


Some features available on PowerPath 5.5 and later with PowerPath Encryption with RSA are:

◆ HBA-assisted Encryption — PowerPath Encryption supports an optional encrypting HBA, to which PowerPath Encryption offloads encryption and decryption processing, thereby decreasing CPU consumption on the PowerPath Encryption host and improving host performance. HBA-assisted encryption combines all of the PowerPath Encryption functionality with support for an encrypting HBA. If you are using PowerPath Encryption with HBA-assist, use the same PowerPath Encryption license key as for PowerPath software-based encryption. The PowerPath Encryption with RSA User Guide provides information on HBA-assisted encryption.

Note that, due to performance issues, PowerPath Encryption is not supported on PowerPath for AIX. Support may be available in a future release.

The following documents, available on the Powerlink website, contain more information about PowerPath Encryption:

◆ EMC PowerPath Encryption with RSA User Guide

◆ EMC PowerPath Family Release Notes


Using multiple ports

PowerPath can use multiple ports to each logical device. You can configure a logical device as a shared device using two or more interfaces. (An interface is, for example, a Fibre Adapter [FA] on a Symmetrix system or a Storage Processor [SP] on a CLARiiON system.) In this way, all logical devices can be visible on all ports, to enhance availability.

Paths

In the configuration shown in Figure 1 on page 35, without PowerPath, there can be at most one path to each logical device. A physical path comprises a route between a host and a logical device: the host bus adapter (HBA, a port on the host computer, through which the host can issue I/O), cables, a switch, a storage system interface and port, and the logical device. Since there are two logical devices in the figure, there can be at most two interface ports and two HBAs—one port and one HBA per logical device. (A port is an access point for data entry or exit, or a receptacle on a device, where a cable for another device is attached.)

Figure 1 Without PowerPath: One path to each logical device

Without PowerPath, the host’s SCSI driver cannot take advantage of multiple paths to a logical device. This is because most operating systems view each path as a unique logical device, even when multiple paths lead to the same logical device; this can result in data corruption or a system crash. PowerPath eliminates this restriction.

With PowerPath, you can take advantage of the multiple paths to a logical device that shared host and storage ports provide. The number of shared paths possible with fabric configurations is even greater. For example, PowerPath manages 1600 paths on a host with 4 HBAs connected via a fabric to 4 ports on a storage system with 100 logical devices (4 HBAs x 4 FAs x 100 logical devices = 1600).

In contrast to Figure 1 on page 35, Figure 2 on page 36 shows multiple paths to each logical device, in a configuration with PowerPath.

Figure 2 With PowerPath: Multiple paths to each logical device

With PowerPath, both logical devices are accessible through both interface ports. This allows I/O to a logical device to flow through multiple paths. As shown, two paths lead to logical device 0 and two lead to logical device 1.

PowerPath exploits the multipathing capability of storage systems. Depending on the capabilities of the storage system, PowerPath provides load-balanced or failure-resistant paths between a host and a logical device. This allows PowerPath to:

◆ Increase I/O throughput, by sending I/O requests targeted to the same logical device over multiple paths.

◆ Prevent loss of data access, by redirecting I/O requests from a failed path to a working path.


Active-active, active-passive, and ALUA storage systems

PowerPath works with three types of storage systems:

◆ Active-active — For example, Symmetrix, Celerra, IBM TotalStorage Enterprise Storage Server (ESS), Hitachi Lightning, Hitachi TagmaStore, HP StorageWorks XP systems, HP StorageWorks EVA systems

◆ Active-passive — For example, CLARiiON systems

◆ ALUA (asymmetric logical unit access) — For example, CLARiiON CX3 systems with FLARE version 03.26

Your platform release notes provide information on the supported storage systems for your PowerPath environment.

Active-active

Active-active means all interfaces to a device are active simultaneously. In an active-active storage system, if there are multiple interfaces to a logical device, they all provide equal access to the logical device.

Active-passive

Active-passive means only one interface to a device is active at a time, and any others are passive with respect to that device and waiting to take over if needed.

In an active-passive storage system, if there are multiple interfaces to a logical device, one of them is designated as the primary route to the device; the device is assigned to that interface card. Typically, assigned devices are distributed equally among interface cards. I/O is not directed to paths connected to a nonassigned interface.

Normal access to a device through any interface card other than its assigned one is either impossible (for example, on CLARiiON systems) or possible but much slower than access through the assigned interface card.

In the event of a failure—of an interface card or all paths to an interface card—logical devices must be moved to another interface. If an interface card fails, logical devices are reassigned from the broken interface to another interface. This reassignment is initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices accessed on those paths are reassigned to another interface, with which the host can still communicate. This reassignment is initiated by PowerPath, which instructs the storage system to make the reassignment.

The CLARiiON term for these reassignments is trespassing.


Reassignment can take several seconds to complete; however, I/Os do not fail during this time. After devices are reassigned, PowerPath detects the changes and seamlessly routes data through the new route.

After a reassignment, logical devices can be reassigned (trespassed back, in CLARiiON terminology) to their originally assigned interface. This occurs automatically if PowerPath’s periodic autorestore feature is enabled. (See “Periodic testing and autorestore of dead paths” on page 51.) It occurs manually if powermt restore is run; this is the faster approach. (See “PowerPath CLI” on page 54 for information on powermt commands.) Periodic autorestore reassigns logical devices only when restoring paths from a failed state. If paths to the default interface are not marked dead, you must use powermt restore. The PowerPath Family CLI and System Messages Reference Guide provides more information on powermt commands.
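For example, a manual restore can be run from the host after the paths to the original interface are repaired; a minimal sketch, assuming the standard powermt CLI is installed (the dev=all scope shown is just one option):

   # Retest and restore paths, trespassing logical devices back to their
   # originally assigned interface where applicable.
   powermt restore dev=all

   # Confirm the path states and current device assignments.
   powermt display dev=all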

ALUA

Asymmetric logical unit access (ALUA) is an array failover mode available on CLARiiON systems with FLARE version 03.26 or later in which one array controller is designated as the active/optimized controller and the other array controller is designated as the active/non-optimized controller. As long as the active/optimized controller is viable, I/O is directed to this controller. Should the active/optimized array controller become unavailable or fail, I/O is directed to the active/non-optimized array controller until a trespass occurs.

The E-Lab Interoperability Navigator provides the most recent qualification information.

Path sets

PowerPath groups all paths to the same logical device into a path set. PowerPath creates a path set for each logical device, and then populates the path set with all usable paths to that logical device. For the configuration in Figure 2 on page 36, with two logical devices, PowerPath creates two path sets, as shown in Figure 3 on page 39.


Each contains the same two physical paths, for a total of four logical paths.

Figure 3 Path sets

In an active-active system, once PowerPath creates a path set, it can use any path in the set to service an I/O request. If a path fails, PowerPath can redirect an I/O request from that path to any other viable path in the set. This redirection is transparent to the application, which does not receive an error.

In an active-passive system, path sets are divided into two load-balancing groups. The active group contains all paths to the interface to which the target logical device is assigned; the other group contains all paths to the other, nonassigned interface. Only one load-balancing group processes I/O requests at a time, and PowerPath load balances I/O across all paths in the active group. If a path in the active load-balancing group fails, PowerPath redirects the I/O request to another path in the active group. If all paths in the active load-balancing group fail, PowerPath reassigns the logical device to the other interface, and then redirects the I/O request to a path in the newly activated group.

From an application's perspective, a path set appears as a single, highly available path to storage. PowerPath hides the complexity of paths in the set, between the host and the storage system. With the logical concept of a path set, PowerPath hides multiple HBAs, cables, ports, hubs, and switches. Applications such as DBMSs get the benefits of multiple I/O paths—faster I/O throughput and highly available data access—without the complexity of multiple paths or the vulnerability of single paths.


Native devices

This section discusses native devices.

Note: This section does not apply to all PowerPath-supported platforms and versions. On Windows, native devices are not exposed to users. On AIX, PowerPath does not support native devices. On Linux, native device support varies by version; the PowerPath Family for Linux Release Notes provides information on native device support.

The operating system creates native devices to represent and provide access to logical devices. The device is native in that it is provided by the operating system for use with applications. A native device is path specific (as opposed to path independent) and represents a single path to a logical device.

Figure 4 shows PowerPath’s view of native devices.

Figure 4 Native devices

In the figure, there is a native device for each path. The storage system in the figure is configured with two shared logical devices, each of which can be accessed by four paths. There are eight native devices, four (in white, numbered 0, 2, 4, and 6) representing a unique path set to logical device 0, and four (in black, numbered 1, 3, 5, and 7) representing a unique path set to logical device 1. These are not shared logical devices: A shared device is accessed by multiple hosts simultaneously.


How applications access native devices

You need not reconfigure applications to use native devices; you simply use the existing disk devices created by the operating system. When using native devices, PowerPath is transparent to applications. PowerPath maintains the correspondence between an individual native device and the path set to which it belongs.

On Solaris and Linux, you have the option of either using native devices (with no conversion of applications) or converting to pseudo devices (see “Pseudo devices” on page 41).

Note: Native device support for Linux varies by version; the PowerPath Family for Linux Release Notes provides information on native device support.

Example

Suppose you have three native devices in a path set. PowerPath maintains the association among these paths. When an application writes to any one of them, PowerPath redirects the I/O to whichever native device in the path set will provide the best throughput. Also, a problem with one native device does not disrupt data access. Instead, PowerPath shifts I/O processing to another native device in the path set, allowing applications to continue reading from and writing to native devices in the same path set.

Pseudo devices

A PowerPath pseudo device represents a single logical device and the path set leading to it, which can contain any number of physical paths. There is one (and only one) pseudo device per path set.

For example, in Figure 5 on page 42, logical devices 0 and 1 are referred to by pseudo device names emcpower1c and emcpower2c, respectively. Each pseudo device represents the set of paths connected to its respective logical device: emcpower1c represents the set of paths connected to logical device 0, and emcpower2c represents the set of paths connected to logical device 1.

Figure 5 Pseudo devices
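To make the idea concrete, the sketch below puts a filesystem on a pseudo device rather than on any single path-specific native device. It assumes a Linux host, where pseudo devices are typically exposed under /dev with names such as emcpowera; the device name, filesystem type, and mount point are illustrative, and the naming differs on other platforms (for example, the emcpower1c-style names shown in Figure 5).

   # Create and mount a filesystem on the path-independent pseudo device.
   # Because emcpowera stands for the whole path set, the filesystem remains
   # reachable as long as any path in the set is alive.
   mkfs -t ext3 /dev/emcpowera
   mkdir -p /mnt/ppdata
   mount /dev/emcpowera /mnt/ppdata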


Table 3 describes whether applications need to be reconfigured to use pseudo devices.

Table 3 Reconfiguring pseudo devices

Pseudo devicemapping

This section applies to Solaris systems only.

The mapping of pseudo names to storage system devices on new installations is carried out such that devices are assigned in the order in which devices are returned to the readdir() function on the /dev/rdsk directory.

Pseudo device attributes

This section applies to AIX systems only.

The attribute values of the PowerPath pseudo (hdiskpower) devices are inherited from the last alternative native hdisk device. We recommend that you change a value on the pseudo device, which changes it on all the underlying native devices as well.
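On AIX, for example, you might inspect and change an attribute on the pseudo device rather than on each native hdisk. A minimal sketch follows; the device name and the queue_depth attribute are illustrative assumptions, so confirm the attributes your hdiskpower devices actually expose with lsattr before changing anything.

    # List the attributes of a pseudo device (device name is hypothetical)
    lsattr -El hdiskpower4

    # Change an attribute on the pseudo device; the underlying native
    # hdisk devices inherit the new value as well
    chdev -l hdiskpower4 -a queue_depth=32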

Windows note As shown in Figure 5 on page 42, Windows users see only pseudo devices, not native devices. On Windows, each logical device has one name. These devices follow standard Windows naming conventions and appear like any other devices on a Windows system. They are pseudo devices because they are path independent; they are also native in the sense that, although created by PowerPath, they cannot be differentiated from devices created by the operating system.



Dynamic multipath load balancing

Without PowerPath, you must statically load balance paths to logical devices to improve performance. For example, based on current usage, you might configure three heavily used logical devices on one path, seven moderately used logical devices on a second path, and 20 lightly used logical devices on a third path. As usage changes, these statically configured paths may become unbalanced, causing performance to suffer. You must then reconfigure the paths, and continue to reconfigure them as I/O traffic between the host and the storage system shifts in response to usage changes.

PowerPath tries to maintain maximum performance and reduce management through dynamic load balancing. PowerPath is designed to use all paths at all times. PowerPath distributes I/O requests to a logical device across all available paths, rather than requiring a single path to bear the entire I/O burden. (On active-passive storage systems, available paths are those paths leading to the active SP for each logical device.) PowerPath can distribute the I/O for all logical devices over all paths shared by those logical devices, so all paths are equally burdened.

PowerPath load balances I/O on a host-by-host basis. It maintains statistics on all I/O for all paths. For each I/O request, PowerPath intelligently chooses the least-burdened available path, depending on the load-balancing and failover policy in effect. If an appropriate policy is specified, all paths in a PowerPath system have approximately the same load.

PowerPath uses all the I/O processing and bus capacity of all paths. A path need never be overloaded and slow while other paths are idle.

In addition to improving I/O performance, dynamic load balancing reduces management time and downtime, because administrators no longer need to configure paths statically across logical devices. With PowerPath, no setup time is required, and paths are always configured for optimum performance.
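To observe how I/O is being spread across paths on a running host, you can inspect the per-path information that PowerPath maintains. A minimal sketch (the device and path names in the output are host specific):

    # Summarize every PowerPath device, its paths, and their current state
    powermt display dev=all

    # Per-HBA summary of configured and dead paths
    powermt display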


Load balancing with and without PowerPath

Figure 6 on page 45 shows I/O queuing on a host without PowerPath installed. The paths are out of balance.

Figure 6 I/O queuing without PowerPath

Figure 7 on page 45 shows I/O queuing on a host with PowerPath installed. I/O is balanced across all available paths.

Figure 7 I/O queuing with PowerPath

PowerPath confers the greatest benefit to environments that are pathbound. In a pathbound I/O environment, the time it takes to execute the I/O load from a particular job is limited by bus capacity for a given path. In pathbound environments, enough I/O regularly queues up on a single path to overload it. By spreading the load evenly across the paths, PowerPath significantly improves I/O performance.


Load-balancing and failover policies

PowerPath selects a path for each I/O request according to the load-balancing and failover policy set by the administrator for that logical device.

Note: Unlicensed versions of PowerPath support EMC arrays only, and only in configurations where the host has a single HBA. This configuration is also referred to as PowerPath/SE (“PowerPath SE” on page 75 provides more information). With third-party arrays in an unlicensed PowerPath environment, either unmanage the third-party array class (powermt unmanage class=class) or upgrade to a licensed version of PowerPath.

Table 1 on page 20 provides a summary of the platform, array, and feature support available with each type of PowerPath license. The PowerPath Family CLI and System Messages Reference Guide, available on Powerlink, provides more information on all load-balancing and failover policies.
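As a sketch of how a policy is applied in practice, the following powermt commands set a load-balancing and failover policy. The device name is hypothetical, and the policy values shown (so, co) are examples only; the policies actually available depend on your license, platform, and array type, as described in the reference guide above.

    # Show the policy currently in effect for a device
    powermt display dev=emcpowera

    # Example: Symmetrix optimization policy for one Symmetrix device
    powermt set policy=so dev=emcpowera

    # Example: CLARiiON optimization policy for all CLARiiON devices
    powermt set policy=co dev=all class=clariion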


Automatic path failover

PowerPath enhances application availability by eliminating the I/O path as a point of failure. Figure 8 on page 47 identifies points of failure in the I/O path:

◆ HBA/NIC
◆ Interconnect (cable and patch panel)
◆ Switch
◆ Interface
◆ Interface port

Figure 8 Physical I/O path failure points

With the proper hardware configuration, PowerPath can compensate for the failure of any of these components.

If a path fails, PowerPath redistributes I/O traffic from that path to functioning paths. PowerPath stops sending I/O to the failed path and checks for an active alternate path. If an active path is available, PowerPath redirects I/O along that path. If no active paths are available, alternate, standby paths (if available) are brought into service, and I/O is routed along the alternate paths. On active-passive storage systems, all paths to the active SP are used before any paths to the passive SP.

PowerPath continues testing the failed path. If the path passes the test, PowerPath resumes using it.

This path failover and failure recovery process is transparent to applications. (Occasionally, however, there is a short delay.)


Proactive path testing and automatic path restoration

The PowerPath multipath module is responsible for selecting the best path to a logical device to optimize performance and for protecting applications from path failures. It detects failed paths and retries failed application I/O requests on other paths.

To determine whether a path is operational, PowerPath uses a path test. A path test is a sequence of I/Os PowerPath issues specifically to ascertain the viability of a path. If a path test fails, PowerPath disables the path and stops sending I/O to it.

After a path fails, PowerPath continues testing it periodically, to determine if it is fixed. If the path passes a test, PowerPath restores it to service and resumes sending I/O to it. The storage system, host, and application remain available while the path is restored.

The time it takes to do a path test varies. Testing a working path takes milliseconds. Testing a failed path can take several seconds, depending on the type of failure.

Path states PowerPath manages the state of each path to each logical device independently. From PowerPath’s perspective, a path is alive or dead:

◆ A path is alive if it is usable; PowerPath can direct I/O to this path.

◆ A path is dead if it is not usable; PowerPath does not direct user I/O to this path. PowerPath marks a path dead when it fails a path test; it marks the path alive again when it passes a path test.

A path’s state changes only as a result of a path test: a live path that fails a test is marked dead, and a dead path that passes a test is marked alive. Path states are listed with the powermt display command. “PowerPath CLI” on page 54 provides more information on powermt commands.
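For example, a quick way to see current path states is the path summary form of the command; a minimal sketch (output columns vary by platform and version):

    # Summarize configured paths by array port, with total and dead counts
    powermt display paths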

When are path tests done?

PowerPath tests a path under the following conditions:

◆ A new path is added. Before any new path is brought into service, it must be tested. This is true for newly configured paths to both existing logical devices and newly configured logical devices.

EMC PowerPath Family Version 5.2, 5.3, and 5.5 Product Guide

PowerPath Overview

◆ PowerPath is servicing an I/O request and there are no more live paths to try. PowerPath always tries to issue application I/Os, even if all paths to the target logical device are dead when the I/O request is presented to PowerPath. Before PowerPath returns the I/O with an error condition, it tests every path to the target logical device. Only if all these path tests fail does PowerPath return an I/O error.

◆ You run powermt load, powermt restore, or powermt config. These commands issue many path tests, so the state of many paths may change as a result of running the commands. “PowerPath CLI” on page 54 provides more information on powermt commands.

Note: In PowerPath 4.1, the powermt config command was removed on the Windows platform. PowerPath now automatically configures all detected logical devices as PowerPath devices and adds these devices to the PowerPath configuration.
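A brief sketch of these commands as they might be run on a UNIX or Linux host (powermt config does not apply to current Windows versions, as noted above); the saved-configuration file name is an illustrative assumption:

    # Test failed paths now and restore any that pass, rather than waiting
    # for periodic testing
    powermt restore

    # Configure newly detected logical devices as PowerPath devices (UNIX/Linux)
    powermt config

    # Save the current configuration, and load a previously saved one
    powermt save file=/etc/powermt.custom.saved
    powermt load file=/etc/powermt.custom.saved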

PowerPath marks a path to be tested by the periodic test process when an I/O error is returned by the HBA driver. In this case, PowerPath marks for testing both the path with the error and related paths (for example, those paths that share an HBA and storage port with the failed path). Meanwhile, PowerPath reissues the failed I/O on another path.

PowerPath avoids issuing I/Os on any path marked for a path test.

Paths marked for testing are tested when the path test process next runs. Refer to “How often are paths tested?” on page 52.

In addition, all paths—alive and dead—are tested periodically, as described in “Path-testing optimization: Testing related paths” on page 49.

Path-testing optimization: Testing related paths

When a path fails due to an I/O error, PowerPath marks all related paths (for example, paths on the same bus) for testing. Until these related paths are tested, PowerPath avoids selecting them for I/O. This optimization avoids sending I/Os to a failed path, which in turn avoids timeout and retry delays throughout the entire I/O subsystem (application, operating system, fabric, and storage system). It is also important, however, to quickly identify paths that are still alive, so that overall I/O bandwidth is not reduced longer than necessary.


PowerPath orders the testing of related paths, to minimize the time live paths are unavailable. The ordering is done to minimize the number of path tests needed to identify which path components failed. In simple topologies, where an HBA and storage port are directly attached to each other, a failed HBA makes the storage port inaccessible, so all related paths are dead. In this case, test ordering is relatively unimportant. In complex fabric topologies, however, where multiple paths share components (ports, switches, and cables), a failed HBA does not necessarily make any storage port inaccessible. In this case, well ordered path testing can substantially reduce the amount of time live paths are unavailable.

Periodic testing of live paths

PowerPath tests live paths periodically to identify failed paths, especially among those not used recently. This helps prevent application I/O from being issued on dead paths that PowerPath otherwise would not detect as dead. This in turn reduces timeout and retry delays.

Periodic testing of live paths is a low-priority task. It is not designed to test all paths within a specific time, but rather to test all paths within a reasonable time, without interfering with application I/O.

Live paths are tested when the path test process runs, provided the paths:

◆ Have not been tested for at least 1 hour.

◆ Are idle. An idle path is one that was not used for I/O within the last minute.

Typically, all live, idle paths are tested at least hourly, although this is not guaranteed. In an active system, with few idle paths, live paths are rarely tested. Such testing is not necessary in an active system—with application I/O on most paths—since path testing is triggered promptly by I/O failures.


Periodic testing and autorestore of dead paths

PowerPath also tests dead paths periodically and, if they pass the test, automatically restores them to service.

Like periodic testing of live paths, periodic autorestore is low priority. It is not designed to restore a path immediately after it is repaired, but rather to restore the path within a reasonable time after it is repaired.

Dead paths are tested when the path test process is run, provided the paths have not been tested for at least 1 hour. This frequency limits the number of I/Os that fail (the PowerPath test path I/Os fail on dead paths), so the impact on normal operations is negligible.

The time it takes for all paths to be restored varies greatly. In lightly loaded or small configurations, paths typically are restored within an hour after they are repaired (on average, much sooner). In heavily loaded or large configurations, it may take several hours for all paths to be restored after they are repaired because periodic autorestore is pre-empted by higher priority tasks.

The fastest way to restore paths is to use powermt restore. “PowerPath CLI” on page 54 provides more information on powermt commands.

HP-UX note With HP-UX, if you manually disable or remove a single logical device from a storage group, and then return the device to the storage group, PowerPath takes an unusually long time (up to 24 hours) to recognize and restore the device.

Use powermt restore to restore the logical device immediately.

Note that autorestore of entire paths occurs in the normal time frame.
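A minimal sketch of restoring a single logical device rather than all paths (the device name is hypothetical):

    # Test and restore only the paths to one logical device
    powermt restore dev=emcpowera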

Windows note Because PowerPath is tightly integrated with Windows Plug and Play, it quickly detects when Plug and Play has brought a device online or taken a device offline. If you use the powermt set periodic_autorestore=off command to disable PowerPath periodic autorestore functionality (which is enabled by default), you may notice that paths continue to be restored automatically as a result of this tighter integration with Plug and Play. EMC recommends that you leave periodic autorestore enabled for cases where Plug and Play is not invoked when a path comes online.
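A sketch of toggling this setting with powermt; whether a class filter is accepted varies by platform and version, so treat the class argument as an assumption:

    # Disable periodic autorestore (it is enabled by default)
    powermt set periodic_autorestore=off

    # Re-enable it, here limited to CLARiiON devices (class filter is an assumption)
    powermt set periodic_autorestore=on class=clariion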


Note: When a cable is pulled on a host with iSCSI connections, there is no immediate Plug and Play event. If there is no I/O on the affected paths, it may take up to 60 seconds for the paths to display as dead.

How often are paths tested?

PowerPath periodically runs the path test process. This process sequentially visits each path and tests it if required:

◆ Live paths are tested periodically if they have not been tested for at least one hour and are idle.

◆ Dead paths are tested periodically if they were marked for testing at least one hour ago and are idle.

◆ Any paths marked for testing as a result of the conditions listed in “When are path tests done?” on page 48 are tested the next time the path test process runs.

Tests are spaced out such that at least one path on every HBA and every port is tested much more often than hourly. A path state change detected in this way is propagated quickly to all related paths.

After all paths are visited and those marked for testing have completed their tests, the process sleeps for 10 seconds, and then restarts. This 10-second period is a compromise between using nonapplication system resources (CPU and I/O cycles) and keeping the state of paths current so the maximum number of paths is always available for use.

The more paths that need testing, the longer it takes to complete the path test process. As a result, it is hard to predict exactly when a path will be tested.

Application tuning in a PowerPath environment

You can use PowerPath to tune application performance by:

◆ Manually load balancing with channel groups.

Channel groups PowerPath allows you to form a channel group of dedicated paths to a logical device, to increase application performance. (Note, however, that reserving paths for one application makes those paths unavailable for other applications, potentially decreasing their performance.)


Channel groups keep a second set of paths in reserve in case the first set fails. As a form of manual load balancing, channel groups reserve bandwidth more precisely than automatic means. Channel groups require at least two paths; they work best in environments with more than two paths and at least two separately managed applications on the same host that use different logical devices.

You create a channel group by using the powermt set mode command to label a group of paths to a logical device as active or standby. (“PowerPath CLI” on page 54 provides more information on powermt commands.) An application accessing one or more logical devices designates one group of paths as active and another group as standby. A second application accessing different logical devices designates the first group of paths as standby and the second group as active. Each application has its own dedicated group of active paths, while the overall configuration provides channel failover protection.
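A minimal sketch of labeling paths for two such channel groups with powermt set mode; the HBA numbers and device names are hypothetical and would come from powermt display output on your host:

    # Application 1 device: paths through HBA 0 are active, HBA 1 paths are standby
    powermt set mode=active hba=0 dev=emcpowera
    powermt set mode=standby hba=1 dev=emcpowera

    # Application 2 device: the reverse, so each application has its own active group
    powermt set mode=standby hba=0 dev=emcpowerb
    powermt set mode=active hba=1 dev=emcpowerb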

If a path in an application’s active group fails, the application’s I/O is redirected automatically to other active paths in the group. If all paths in the active group fail, the application’s I/O is redirected automatically to the standby paths.

Figure 9 shows an environment with two channel groups. The first channel group (everything in black) contains paths from HBAs a0 and a1, used by application 1 to access logical device 0. The second channel group (everything in dark gray) contains paths from HBAs a2 and a3, used by application 2 to access logical device 1. For application 1, the first channel group (black) is active and the second channel group (gray) is standby; for application 2, the first channel group is standby and the second channel group is active.

Figure 9 Channel groups


PowerPath management tools

This section describes the tools used to manage the PowerPath environment.

PowerPath CLI The PowerPath environment is managed by a CLI consisting of several commands:

◆ powermt — Used to manage the PowerPath environment

◆ emcpadm — Used to list or rename PowerPath pseudo devices; the command is enhanced periodically with additional functionality

◆ emchostid — Used to set the Host ID

◆ emcpreg — Used to manage the PowerPath license registration

◆ powermig — Used to manage migration operations (Migration Enabler)

◆ powervt — Used to manage encryption of virtual logical units via Encryption with RSA

The PowerPath Family CLI and System Messages Reference Guide provides more information on the PowerPath Family command line interface, including command syntax and arguments, as applicable.
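The following sketch shows a few of these commands in typical use; the license key is a placeholder, and exact options vary by platform and version (the reference guide above is authoritative):

    # Multipathing status for all devices
    powermt display dev=all

    # Check configured paths and optionally remove entries for dead paths
    powermt check

    # List installed PowerPath license keys, or add one
    emcpreg -list
    emcpreg -add XXXX-XXXX-XXXX-XXXX-XXXX-XXXX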

Windows note On Windows, PowerPath Administrator is the easiest way to access powermt functions. PowerPath Administrator is a GUI that allows you to interactively manage PowerPath on Windows platforms. All powermt functions are accessible through PowerPath Administrator except powermt set write_throttle and powermt set write_throttle_queue. PowerPath Administrator is described in the EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide and the PowerPath Administrator online help.

3

PowerPath Configuration Requirements

This chapter provides a high-level overview of configuring PowerPath in Fibre Channel, iSCSI, and SCSI environments. Topics include:

◆ PowerPath connectivity .................................................................... 56
◆ Fibre Channel configuration requirements .................................... 59
◆ FCoE configuration requirements ................................................... 63
◆ iSCSI configuration requirements.................................................... 66
◆ SCSI configuration requirements..................................................... 70
◆ Storage configuration requirements and recommendations ....... 71
◆ Dynamic reconfiguration .................................................................. 74

Note: The material in this chapter is not intended to be comprehensive. Consult the E-Lab Navigator available at the Powerlink website (http://Powerlink.EMC.com) for configuration information for a specific environment.

PowerPath connectivity

PowerPath works with:

◆ Fibre Channel physical connections in UNIX, Linux, and Windows environments. Each Fibre Channel HBA connects to a port on a Fibre Channel interface on the storage system or to a Fibre Channel hub or switch. Hubs are not supported on all storage systems.

◆ Fibre Channel over Ethernet (FCoE) physical connections in Linux and Windows environments. Each FCoE CNA connects to an FCoE switch, which in turn connects to an Ethernet LAN and an FC SAN.

◆ iSCSI physical connections. Each iSCSI NIC or HBA connects to an iSCSI switch or router.

The E-Lab Interoperability Navigator on Powerlink provides detailed information on supported configurations. The EMC Support Matrix PowerPath Family Protocol Support, available on Powerlink, provides protocol support information for the EMC PowerPath family of products.

HBA and transport considerations

When mapping paths to logical devices, observe the following considerations with respect to HBAs and transport protocols:

Solaris, Linux, Windows

Observe the following considerations with respect to HBAs and transport protocols for Solaris, Linux, and Windows:

◆ PowerPath does not support a logical device that has paths mapped from two different HBA vendors. This includes cluster nodes that share logical devices.

◆ For PowerPath version 5.5 and later on platforms that support HBA-assisted encryption, PowerPath does support a logical device that has paths mapped from both standard HBA cards and encrypting HBA cards for HBA-assisted encryption, as long as the HBAs are from the same vendor. “HBA-assisted Encryption” on page 34, the PowerPath release notes for your platform, and the PowerPath Encryption with RSA User Guide provide more information on HBA-assisted encryption and the PowerPath versions that support it.

◆ PowerPath does not support a logical device that has paths mapped using different transport protocols (iSCSI, FC, FCoE).

AIX, HP-UX Observe the following considerations with respect to HBAs and transport protocols for AIX and HP-UX:

◆ PowerPath does support a logical device that has paths mapped from two different HBA vendors.

◆ PowerPath does not support a logical device that has paths mapped using different transport protocols (iSCSI, FC, FCoE).

High availability An Enterprise Storage Network (ESN) provides high availability by configuring multiple paths between connections, configuring alternate paths to storage area network (SAN) components, and deploying redundant SAN components. Some SAN switches (such as the EMC Connectrix®) have redundant subsystems to ensure high availability and a reliable fabric.

PowerPath supports multiple paths between an HBA and a logical device. This can offer higher availability or better performance, and it usually simplifies zoning.

Multiple connections between hosts and multiple fabrics can insulate I/Os from fabric-wide failure. PowerPath delivers maximum system-wide availability when dual HBAs in each server connect to separate fabrics, as in Figure 11 on page 61. An application that cannot use a path on one fabric can fail over to a different fabric, protecting the application from a fabric-wide outage.

PowerPath supports load balancing and redundancy across the storage ports in a fabric. Be aware, however, that the number and complexity of connection points in a fabric multiplies rapidly in a multipath configuration. The number of connections you create depends on the bandwidth you require.

Device configuration note

PowerPath does not alter the allowable access to storage system logical devices. Devices with the following access without PowerPath have the same access with PowerPath installed:

◆ Read/write
◆ Read-only
◆ Not-ready


On Symmetrix storage systems, PowerPath associates paths to a Business Continuance Volume (BCV) that is split from its standard logical device and has read/write access.

PowerPath controls:

◆ All Symmetrix logical devices except SAN Manager™ Volume Configuration Management Databases (VCMDBs).

◆ All PowerPath-enabled CLARiiON LUNs.

◆ Supported third-party storage system devices.

Note: Third-party storage devices are not supported in iSCSI environments.

You can exclude devices from PowerPath control through the powermt unmanage command. The PowerPath Family CLI and System Messages Reference Guide provides more information on powermt commands.
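A minimal sketch of excluding a third-party array class and returning it to PowerPath control; the class name is illustrative, and the valid class names for your version are listed in the reference guide:

    # Exclude all devices of a third-party array class from PowerPath control
    powermt unmanage class=hitachi

    # Return that class to PowerPath control
    powermt manage class=hitachi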


Fibre Channel configuration requirements

This section provides high-level guidelines for configuring PowerPath Fibre Channel environments. For more information on Fibre Channel configuration requirements, refer to the E-Lab Navigator on the Powerlink website (http://Powerlink.EMC.com) and the configuration planning guide for the Fibre Channel storage system.

Observe the following guidelines when configuring Fibre Channel connections for a PowerPath environment:

◆ EMC requires that no more than one HBA be configured in any zone. Figure 10 on page 59 shows a system in which both hosts (each with PowerPath installed) are connected to a storage system through two fabrics. There are four zones, each with one HBA:

• HBA 1, fabric 1, port X
• HBA 2, fabric 2, port Y
• HBA 3, fabric 1, port X
• HBA 4, fabric 2, port Y

Figure 10 Highly available Fibre Channel configuration with PowerPath


In Figure 10 on page 59 (and the other figures in this section), an interface is, for example, an FA on a Symmetrix or other active-active system or an SP on a CLARiiON system.

This configuration has several advantages:

• If either fabric fails, both hosts can still access both logical devices, through the other fabric.

• If either interface fails, both hosts can still access both logical devices, through the other interface.

• If either HBA in a host fails, that host can still access both logical devices, through the other HBA on that host.

If multiple hosts access the same storage-system ports, PowerPath does not add any access control. Normal sharing considerations apply, using products such as SAN Manager, Volume Logix, Access Logix™, Oracle Parallel Server, and clustering software.

◆ For redundancy, if multiple ports are used, they must be on multiple physical interfaces. In Figure 10 on page 59, the two ports are divided between two interfaces.

◆ For maximum availability, configure at least two fully redundant paths from each host to each logical device. Two paths are fully redundant if the paths do not share HBAs, fabrics, switches or hubs, or storage interface cards. Figure 10 on page 59 shows two redundant paths from each host to each logical device.

A single EMC Connectrix ED-1032 or ED-64M switch can be used in such a configuration because it has redundant components and is a highly available product.

The redundant design ensures minimum application impact on component failures and microcode loads. For maximum availability when designing a fabric topology, use dual fabrics. PowerPath insulates I/Os from a fabric-wide outage.

◆ For optimum performance, especially in a degraded state, present each logical device to different HBAs from different interface boards. Although some interface boards contain two (or more) ports, the second connection is used for performance purposes and does not provide high availability. The second connection may result in bandwidth loss if a back-end path (the part of a path between a port and a logical device) fails.


In Figure 11 on page 61, each HBA has two paths to each logical device through different interfaces. This configuration greatly improves performance if one fabric fails, as there are still two ports to handle the load.

Figure 11 High-availability (multiple-fabric) Fibre Channel configuration

PowerPath Base supports only configurations with one path to each interface. This includes direct-attached configurations as well as appropriately zoned switch (SAN) configurations.

With a PowerPath license, some active-passive storage systems must be zoned so each host sees only one path to each interface. Refer to your storage-system documentation or contact EMC Customer Support.

◆ Any port can be configured into multiple Fibre Channel zones. Frequently, multiple HBAs share a port, making multiple zones per port typical. For example, in Figure 11 on page 61, there are four zones altogether. Each zone consists of one HBA and two ports. Each port is in two zones.

◆ In a single-fabric configuration, a host with one HBA can have multiple paths to a logical device. For example, in Figure 12 on page 62, each single-HBA host has four paths to each logical device. While this does not provide maximum availability (as described previously), it offers back-end path redundancy and load balancing, reduces zoning complexity, and enables multiple hosts to share a storage-system port.

Figure 12 Single-switch Fibre Channel configuration

◆ Only paths from identical HBAs can be mapped to the same logical device. That is, the HBAs must be identical in every way; they cannot even be different revisions of the same HBA model.

For more information on Fibre Channel configuration requirements, refer to the configuration planning guide for your storage system.


FCoE configuration requirements

This section provides high-level guidelines for configuring PowerPath FCoE environments. For definitive information on FCoE configuration requirements, refer to the EMC Networked Storage Topology Guide, available on the E-Lab Interoperability Navigator on Powerlink.

Observe the following guidelines when configuring FCoE connections for a PowerPath environment.

High availability configurations

This section discusses high-availability FCoE configurations and zoning.

In Figure 13 on page 63, an interface is an FA on a Symmetrix or other active-active system or an SP on a CLARiiON system.

Figure 13 High-availability (multiple-fabric) Fibre Channel over Ethernet configuration

This configuration has several advantages:

◆ If either fabric fails, both hosts can still access both logical devices, through the other fabric.

◆ If either interface fails, both hosts can still access both logical devices, through the other interface.


◆ If either CNA in a host fails, that host can still access both logical devices, through the other CNA on that host.

◆ If either FCoE switch fails, that host can still access both logical devices, through the other FCoE switch.

◆ For redundancy, if multiple ports are used, they must be on multiple physical interfaces. In Figure 13 on page 63, the two ports are divided between two interfaces.

◆ For maximum availability, configure at least two fully redundant paths from each host to each logical device. Two paths are fully redundant if the paths do not share CNAs, fabrics, switches, or hubs. Figure 13 on page 63 shows two redundant paths from each host to each logical device.

The redundant design ensures minimum application impact on component failures. For maximum availability when designing a fabric topology, use dual fabrics. PowerPath insulates I/Os from a fabric-wide outage.

Zoning to active-passive arrays

For PowerPath in a cluster environment connected to active-passive arrays, such as CLARiiON arrays, all nodes in the cluster should be connected to the array through both fabrics, as depicted in Figure 14 on page 64.

Figure 14 High-availability (multiple-fabric) Fibre Channel over Ethernet to active-passive arrays


Additionally, each node should have a minimum of one path to each storage processor (SP). All ports from the same SP must be connected to the same fabric. This avoids multiple LUN trespasses in the event of an FC or FCoE switch or fabric failure.

For higher availability and performance, EMC recommends mapping at least four paths from each cluster node, with two paths connected to each SP. In such a configuration, it is acceptable to have ports from the same SP connected to different fabrics, because each node continues to have access to both SPs in the case of a fabric failure. For example:

◆ Node 1:

• Node 1–CNA 1, fabric 1, SPA0
• Node 1–CNA 1, fabric 1, SPB0
• Node 1–CNA 2, fabric 2, SPA1
• Node 1–CNA 2, fabric 2, SPB1

◆ Node 2:

• Node 2–CNA 1, fabric 1, SPA0
• Node 2–CNA 1, fabric 1, SPB0
• Node 2–CNA 2, fabric 2, SPA1
• Node 2–CNA 2, fabric 2, SPB1

In Figure 13 on page 63 and Figure 14 on page 64, each CNA has two paths to each logical device through different interfaces. This configuration greatly improves performance if one fabric fails, as there are still two ports to handle the load.


iSCSI configuration requirements

This section provides high-level guidelines for configuring PowerPath iSCSI environments. For definitive information on iSCSI configuration requirements, refer to the E-Lab Interoperability Navigator on Powerlink. Observe the following guidelines when configuring iSCSI physical connections for a PowerPath environment:

◆ A single host can attach to multiple storage systems: Fibre Channel storage systems (through Fibre Channel HBAs) and iSCSI storage systems (through iSCSI HBAs only or NICs only).

Note: With CLARiiON FLARE operating environment version 03.26, PowerPath supports concurrent host access to Fibre Channel and iSCSI devices, as specified in the E-Lab Interoperability Navigator.

◆ You can connect hosts with all HBAs and hosts with all NICs to the same storage system.

Sample iSCSI configurations

Three example PowerPath iSCSI configurations follow:

◆ Single NIC/HBA to one subnet

◆ Multiple NICs/HBAs to multiple subnets

◆ Multiple NICs/HBAs to one subnet (Windows only)

Note that only the data paths are represented in the following figures. The management ports are assumed to be connected to a separate sub-network. All of the following configurations support the management ports on the same subnet as the data ports. (Note that you can only manage an array through a NIC.) An interface is an FA on a Symmetrix or an SP on a CLARiiON system.


Single NIC/HBA to one subnet

Figure 15 on page 67 shows a single NIC/HBA connecting to a single subnet.

Figure 15 Single NIC/HBA configuration

When using one NIC or HBA, you can have one connection to each port on the storage system.


Multiple NICs/HBAs to multiple subnets

Figure 16 on page 68 shows multiple NICs/HBAs connecting to multiple subnets.

Figure 16 Multiple NICs/HBAs to multiple subnets

Note the following:

◆ When using multiple NICs, you can have one connection per host to each port on the storage system.

◆ When using multiple HBAs, you can have one connection per HBA to each port on the storage system.


Multiple NICs/HBAs to one subnet

Figure 17 on page 69 shows multiple NICs/HBAs connecting to a single subnet.

Figure 17 Multiple NICs/HBAs to one subnet

Note the following:

◆ This configuration is supported in Windows only.

◆ When using multiple NICs, you can make one connection per host to each port on the storage system. Multiple NICs on the same subnet are ignored by the Microsoft iSCSI Initiator default configuration. Using the Advanced button in the Log On to Target dialog box of the Microsoft iSCSI Initiator GUI allows a specific NIC to be associated with a specific port.


SCSI configuration requirements

When configuring the SCSI physical connections for a PowerPath environment, follow these rules:

◆ For PowerPath to provide failover and load balancing, the host must have at least two HBA ports.

◆ Logical devices must be properly configured on the SCSI bus.

◆ A SCSI HBA port must connect to one and only one interface port. (An interface port is the front-end interface on the array that connects the array to the SAN.) No two paths to a single logical device can share either an HBA or an interface port.

◆ EMC requires single-initiator connections from the host to the storage system. Figure 18 on page 70 shows two SCSI single-initiator connections from the host to the storage system.

Figure 18 Single-initiator connections


Storage configuration requirements and recommendations

This section includes high-level requirements and recommendations for configuring EMC and third-party storage systems.

All storage systems EMC Customer Support should configure the storage systems.

The E-Lab Interoperability Navigator at http://Powerlink.EMC.com provides information on the storage systems that can act as boot devices on your platform.

To enable PowerPath to load balance I/Os and provide path redundancy, configure each logical device to be used with PowerPath for access on two or more interface boards (array boards).

To gain the PowerPath benefits of high availability, reliability, and performance for your storage network, configure multiple paths from hosts to storage devices.

A host that is part of a cluster cannot have both active-active and active-passive storage devices in the same disk group.

When not in a cluster, a host can be connected to both active-active and active-passive storage systems; however, specific hardware (such as HBAs and cables) and software may be required for this configuration. See the E-Lab Navigator at http://Powerlink.EMC.com.

Mixed storage environments

In a mixed storage environment, the fabric components (switches, directors, and HBAs) along with the operating system level, HBA models, drivers, and firmware must all be at the EMC-supported levels. A third-party storage system connecting to this fabric environment must also be supported through that system's OEM vendor in the stated environment. Deviations in any of the supported levels for any component can be handled through either EMC's RPQ process or that of the OEM vendor. This ensures that all storage arrays remain supported through their respective OEM vendors.

Regardless of the sharing model you choose (Shared HBA, Shared Server, or Shared SAN), EMC recommends that you limit the amount of possible interaction between the arrays. This assists in troubleshooting, maintenance, and management of the environment. To limit the interactions and dependencies, EMC recommends that you not include storage array ports from different vendors in the same zone. Multiple zones can be created that use the same HBA, as long as the storage arrays are in separate zones with that common HBA. Zoning in this fashion ensures that there are no direct interactions between the different storage arrays.

Symmetrix storage systems

To improve redundancy to logical devices, distribute HBA connections across as many Symmetrix interfaces as possible.

To improve performance, distribute paths on even and odd Symmetrix interface cards.

For more information on Symmetrix configuration requirements, refer to the EMC host connectivity guides, available on the Powerlink website.

Supported Hitachi Lightning, Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems

To improve redundancy to logical devices, distribute HBA connections across as many channel host adapters (CHAs) and disk controllers (DKCs) as possible.

For more information on storage system configuration requirements, refer to the appropriate documentation from your vendor.

Supported HP StorageWorks EVA storage systems

For best performance, supported HP StorageWorks EVA LUNs accessible from any given host should be distributed over both controllers (A and B) of the array. This ensures that I/O load from the host is shared by both array controllers.

For supported HP StorageWorks EVA arrays, every LUN should be set to Failover only or Failover/Failback on either controller A or controller B. The setting No preference does not allow for predictable load balancing over controllers.


For PowerPath 5.2 and 5.3, the command powermt display dev=all class=hphsx shows the default (or preferred) controller for each LUN as the current controller as well. If the current controller is different from the preferred controller, you can manually reassign every LUN to its preferred controller using the powermt restore command. You can safely issue this command while the host is actively performing I/O operations. Note that this is not applicable to PowerPath 5.5 and later. The PowerPath Family CLI and System Messages Reference Guide provides information on powermt commands.
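For PowerPath 5.2 and 5.3, checking and correcting the controller assignment might look like the following sketch (output columns vary by version):

    # Show the current and preferred controller for each HP EVA (hphsx class) LUN
    powermt display dev=all class=hphsx

    # Reassign LUNs to their preferred controllers
    powermt restore class=hphsx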

CAUTION!
In configurations where one or more StorageWorks LUNs are shared by several hosts, the powermt restore command produces lasting results only if every host sending I/O to the storage system maintains connectivity to both controllers. If even one of the hosts subsequently loses all of its paths to one controller, any subsequent I/O that host issues causes the affected LUN(s) to be reassigned to the other controller, regardless of the preferred settings. All other hosts then follow the LUN(s) over to the other controller as needed.

CLARiiON storage systems

With CLARiiON storage systems, the array failover mode must be the same for all paths that access a single LUN. If two paths access the same LUN, and one path is set to PNR (passive not ready) mode and one to ALUA (asymmetric logical unit access) mode, PowerPath behavior is undefined for that LUN. The array failover mode is set at the HBA level with CLARiiON Navisphere commands. The EMC host connectivity guides, available on the Powerlink website, and the CLARiiON Storage-System Support website (www.emc.com/clariionsupport) provide more information.

Invista storage devices

EMC recommends configuring two switches per Invista instance. Single-switch configurations are supported for testing purposes only because a single switch does not provide high availability. For more information on Invista configuration requirements, refer to the Invista documentation, available on the Powerlink website.


VPLEX storage devices

EMC recommends configuring two switches per VPLEX director. Single-switch configurations are supported for testing purposes only because a single switch does not provide high availability. For more information on VPLEX configuration requirements, refer to the VPLEX documentation, available on the Powerlink website.

Appendix C, “PowerPath Family End-of-Life Summary,” provides information on deprecated EMC and third-party storage arrays.

Dynamic reconfiguration

PowerPath 5.0 and later for Windows and Solaris supports dynamic addition and removal of LUNs and paths to the PowerPath configuration. In the case of Solaris, this support is provided by the Solaris Dynamic Reconfiguration (DR) feature. PowerPath 5.1 and later for AIX and Linux supports dynamic addition and removal of LUNs.

As you perform these procedures on your platform, keep documentation for your platform available. The PowerPath installation and administration guide for your platform provides more information on dynamically adding and removing LUNs.

Hot swapping an HBA

PowerPath 5.0 and later for Windows and Solaris supports hot-swapping an HBA. Support for hot-swapping an HBA on PowerPath 5.1 SP2 and later on Linux is provided through the Linux PCI hot plug feature, which allows you to hot swap an HBA card using Fujitsu hardware and drivers. The PowerPath Family for Linux Release Notes provide information on supported Linux kernels. PowerPath 5.3 for AIX supports hot-swapping an HBA. The PowerPath installation and administration guide for your platform provides more information on this procedure.

A

PowerPath SE

This appendix describes PowerPath Standard Edition, or PowerPath SE, a version of PowerPath without a license key that provides only basic failover functionality.

Note: Older CLARiiON documents used the term Utility Kit PowerPath or PowerPath Fabric Failover to refer to PowerPath SE. The CLARiiON document Important Information about PowerPath SE contains additional, important information for CLARiiON users only. Be sure to read that document before you install PowerPath SE in a CLARiiON environment.

The appendix contains the following information:

◆ PowerPath SE functionality .............................................................. 76
◆ Installing PowerPath SE .................................................................... 77
◆ Using PowerPath SE .......................................................................... 78


PowerPath SE functionality

PowerPath SE is a server-based utility that provides basic failover for a CLARiiON or Symmetrix storage system.

PowerPath SE is supported in single HBA configurations where the same HBA is connected through a switch or fabric to each port on two separate Symmetrix FAs or to each CLARiiON SP. Figure 19 illustrates the supported configuration on a Symmetrix system.

Figure 19 PowerPath SE supported configuration

PowerPath SE protects against CLARiiON SP failures, Symmetrix FA port failures, and back-end storage-system failures, and supports non-disruptive upgrade (NDU) of storage system software. While a server is running normally, PowerPath SE takes no action. If a failure occurs in an SP or an FA port, PowerPath SE attempts to fail over (transfer) the I/Os to a different SP or FA port.

PowerPath SE does not protect against HBA failures. To protect against such failures in configurations with multiple HBAs connected to a storage system, you must have PowerPath and an accompanying license.


Installing PowerPath SE

Before you install PowerPath SE, read the PowerPath Family release notes for your platform.

To install PowerPath SE, follow the installation procedure in the PowerPath installation and administration guide for your platform. Note, however, that you do not need to register a license key when you install PowerPath SE.

The most current versions of the release notes and installation and administration guides are available on the Powerlink website: http://Powerlink.EMC.com


Using PowerPath SE

To use PowerPath SE, refer to the PowerPath installation and administration guide for your platform as well as this product guide.

The most current versions of these manuals are available on the Powerlink website: http://Powerlink.EMC.com

B

PowerPath Family Functionality Summary

This appendix contains the PowerPath Family functionality summary by version and platform. It lists the functions and features supported by PowerPath Multipathing, Migration Enabler, and Encryption with RSA by PowerPath version, from version 5.2 and Service Pack releases to 5.5 releases, and by supported platform. The supported platforms include Solaris, Windows, Linux, AIX, and HP-UX.

Table 4 PowerPath Family functionality summary by version and platform

Table 4 covers PowerPath 5.2 (Solaris, Windows), PowerPath 5.3 (Solaris, Windows, Linux (a), AIX), and PowerPath 5.5 (Windows, Linux). The commands and features it summarizes are listed below by product component; the footnotes qualify support on specific platforms, versions, and Service Packs.

MULTIPATHING
◆ emcpadm enhancements
◆ R1/R2 boot
◆ PowerPath no reboot upgrade (NRU) (b, c)
◆ Unattended installation
◆ Dynamic LUN addition/removal/online reconfiguring
◆ Hot swapping HBA
◆ Adding new paths to PowerPath logical device without interruption
◆ Coexistence with third-party management software
◆ SNMP management daemon (d, e, f)
◆ ALUA
◆ Management of >256 LUNs per CLARiiON Storage Group (CLARiiON 04.29) (g)
◆ Audit logging for powermt commands
◆ Root check

MIGRATION ENABLER (h)
◆ Remote Solutions Enabler Access
◆ TimeFinder/Clone
◆ Pause/resume on TimeFinder/Clone (i, j, k, l, m)
◆ Host Copy ceiling
◆ Pause/resume on Host Copy
◆ Host Copy
◆ Host Copy with encryption (n)
◆ Open Replicator
◆ Virtual encapsulation (o, p)

ENCRYPTION WITH RSA (q)
◆ HBA-assisted encryption
◆ Third-party arrays with encryption (r)
◆ Multipathing encryption with RSA (s, t)

Notes:
• The Migration Enabler section of the PowerPath Family release notes for your platform provides information on thin device support.
• The multipathing section of the PowerPath Family release notes for your platform and the E-Lab Interoperability Navigator provide information on native and third-party clustering.
• The PowerPath 5.0 Product Guide provides information on features by platform for PowerPath version 5.0.

Footnotes:
a. PowerPath 5.3 SP2 for Linux supports multipathing on SLES 11 only. PowerPath 5.3 SP2 for Linux does not support Migration Enabler and Encryption with RSA. The PowerPath 5.3 Family for Linux Release Notes provides more information on supported features and limitations.
b. PowerPath for Solaris requires a reboot after installation or upgrade in some cases.
c. See footnote b.
d. Windows management daemon functionality is delivered through SCOM 2007 and MOM 2005 management packs.
e. See footnote d.
f. See footnote d.
g. Not applicable to Windows (OS limitation).
h. Requires a separate license key.
i. When the target is larger than the source, pause/resume is not supported. When the source and target are the same size, pause/resume is supported.
j. See footnote i.
k. See footnote i.
l. See footnote i.
m. See footnote i.
n. Windows 2003 only.
o. Virtual encapsulation is not supported on SLES 10 SP2.
p. Virtual encapsulation is not supported on SLES 10 SP3.
q. Requires a separate license key.
r. Supported on SLES 10 SP3 (x86_64) and RHEL 5.5 (x86_64) only.
s. Windows 2003 only.
t. See footnote r.

C

PowerPath Family End-of-Life Summary

This appendix contains the PowerPath Family end-of-life summary. It lists the PowerPath family functions and features for which support is being phased out, the document in which end of life is announced, and the release at which end of life is effective.

Table 5 PowerPath end-of-life summary

Each entry lists the feature or function, the affected platforms, the document in which end of life is announced (a), and, where applicable, the release at which it is effective.

Consistency Groups on PowerPath (Consistency Group support is included in Symmetrix Enginuity versions 5568 and later)
• AIX: Announced in the PowerPath Family 5.3 for AIX Release Notes; effective in PowerPath 5.3 SP1 for AIX
• Windows: Announced in the PowerPath 5.2 SP1 for Windows Release Notes and the PowerPath 4.5.x for Windows Release Notes; effective in PowerPath and PowerPath/VE Family 5.3 for Windows
• HP-UX: Announced in the PowerPath 5.1 SP2 for HP-UX Release Notes
• Solaris: Announced in the PowerPath 5.2 SP1 for Solaris Release Notes; effective in PowerPath 5.3 for Solaris
• Linux (b): Not applicable

BasicFailover (bf) and NoRedirect (nr) load-balancing policies in powermt CLI (c)
• AIX: Announced in the PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows
• HP-UX: Announced in the PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes
• Solaris: Announced in the PowerPath Family 5.3 for Solaris Release Notes
• Linux: Announced in the PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux

powermt set priority command in powermt CLI
• AIX: Announced in the PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows
• HP-UX: Announced in the PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes
• Solaris: Announced in the PowerPath Family 5.3 for Solaris Release Notes
• Linux: Announced in the PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux

hphsx class option in powermt CLI
• AIX: Announced in the PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows
• HP-UX: Announced in the PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes
• Solaris: Announced in the PowerPath Family 5.3 for Solaris Release Notes
• Linux: Announced in the PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux

HP arrays: HP EVA 3000 with VCS3.x, HP EVA 5000 with VCS3.x, HP XP 48, HP XP 128, HP XP 512, HP XP 1024, HP XP
• AIX: Announced in the PowerPath Family 5.3 SP1 for AIX Release Notes
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.3 for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows
• HP-UX: Announced in the PowerPath 5.1 SP2 for HP-UX Release Notes
• Solaris: Announced in the PowerPath Family 5.3 for Solaris Release Notes
• Linux: Announced in the PowerPath 5.3 SP1 for Linux Release Notes; effective in PowerPath 5.5 for Linux

IBM arrays: F10, F20, 800, 800T
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.5 for Windows Release Notes

Hitachi Lightning array
• AIX: Announced in the PowerPath Family 5.3 SP1 for AIX Release Notes
• Windows: Announced in the PowerPath and PowerPath/VE Family 5.3 for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows
• HP-UX: Announced in the PowerPath 5.1 SP2 for HP-UX Release Notes
• Solaris: Announced in the PowerPath Family 5.3 for Solaris Release Notes
• Linux: Announced in the PowerPath Family 5.3 SP1 for Linux Release Notes; effective in PowerPath 5.5 for Linux

HP arrays: HP StorageWorks EMA 12000, HP StorageWorks EMA 16000, HP StorageWorks 8000
• AIX: Announced in the PowerPath Family 5.3 for AIX Release Notes; effective in PowerPath 5.3 SP1 for AIX
• Windows: Announced in the PowerPath 4.5.x for Windows Release Notes; effective in PowerPath 4.5.x/5.3 for Windows (d)
• HP-UX: Announced in the PowerPath 5.1 SP1 for HP-UX Release Notes; effective in PowerPath 5.1 SP2 for HP-UX
• Solaris: Announced in the PowerPath 5.2 SP1 for Solaris Release Notes; effective in PowerPath 5.3 for Solaris
• Linux: Announced in the PowerPath Family 5.3 for Linux Release Notes; effective in PowerPath 5.3 SP1 for Linux

Native path support for SLES 11 (e)
• Linux: PowerPath 5.3 SP2 for Linux

IBM Power (IBM PPC)
• Linux: Announced in the PowerPath Family 5.5 for Linux Release Notes

IA64 architecture
• Linux: Announced in the PowerPath Family 5.5 for Linux Release Notes

VERITAS Volume Manager (VxVM) 4.1
• Linux: Announced in the PowerPath Family 5.5 for Linux Release Notes

Footnotes:
a. Electronic versions of the documents indicated as announcing end of life are available on the Powerlink website at http://Powerlink.EMC.com.
b. Linux does not support Consistency Groups.
c. Deprecation of bf and nr is a two-phase deprecation. The release notes for your platform provide deprecation phase details.
d. PowerPath 5.3 for Windows supports third-party arrays, but this support does not include the HP arrays listed above.
e. Native path support will not change for existing versions of Linux (previous to SLES 11).

87

88

PowerPath Family End-of-Life Summary

EMC PowerPath Family Version 5.2, 5.3, and 5.5 Product Guide

Glossary

This glossary contains terms related to PowerPath and the management of disk storage subsystems. Many of these terms are used in this manual.

A

Access Logix A software package that lets multiple hosts share storage on certain CLARiiON storage systems. Access Logix implements storage sharing using storage groups. See also "Storage group."

Active (paths) One of two modes for PowerPath I/O paths. The other mode is standby. An active path can accept I/O. The load-balancing and failover policy (set for the PowerPath device with the powermt set policy command) determines how loads are balanced over active paths. Load balancing is done for each device with more than one active path. See also ”Mode” and “Standby (paths).”

Active-active (storage systems) A type of storage system in which, if there are multiple interfaces to a logical device, they all provide equal access to the logical device. "Active-active" means all interfaces to a device are active simultaneously. For example, Symmetrix, Hitachi Lightning, Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems are active-active. See also "Active-passive (storage systems)."

Active-passive (storage systems) A type of storage system in which, if there are multiple interfaces to a logical device, one is designated as the primary route to the device. The device is "assigned" to that interface card. I/O is not directed to paths connected to a nonassigned interface. For example, CLARiiON storage systems are active-passive.


If there is a failure of a device’s assigned interface card or all paths to it, the device is reassigned automatically from the broken interface card to another interface card.

“Active-passive” means only one interface to a device is active at a time, and any others are passive with respect to that device and waiting to take over if needed. See also ”Active-active (storage systems)” and “Trespassing.”

Adapter A circuit board that enables a computer to use external devices such as a disk storage system or a high-speed network. See also ”Host bus adapter (HBA).”

Adaptive (ad) A load-balancing and failover policy for PowerPath devices in which I/O requests are assigned to paths based on an algorithm that takes into account path load and logical device priority.

Alive One of two states for PowerPath paths and logical devices. The other state is dead. A live path is usable: PowerPath can direct I/O to it. A live logical device either was never marked dead by PowerPath or was marked dead but restored with the powermt restore command. See also ”Dead.”

ALUA (Asymmetric Logical Unit Access) An array failover mode available with CLARiiON arrays in which one array controller is designated as the active/optimized controller and the other array controller is designated as the active/non-optimized controller. As long as the active/optimized controller is viable, I/O is directed to this controller. Should the active/optimized array controller become unavailable or fail, I/O is directed to the active/non-optimized array controller until a trespass occurs.

Arbitrated loop A Fibre Channel topology supported by PowerPath. An arbitrated loop topology requires a port to successfully negotiate to establish a circuit between itself and another port on the loop.

B

Basic failover (bf) A failover policy that protects against CLARiiON SP failures, Symmetrix FA port failures, and back-end failures, and that allows non-disruptive upgrades to work when running PowerPath without a license key. It does not protect against HBA failures. Load balancing is not in effect with basic failover. I/O routing on failure is limited to one HBA and one port on each storage system interface. This policy is valid for CLARiiON, Symmetrix, Invista, VPLEX, and supported Celerra devices, and is the default policy for them on platforms without a valid PowerPath license; this is the only case in which a device is set to basic failover. PowerPath 5.3 and its service packs are the last versions to support setting the Basic failover (bf) load-balancing and failover policy when a PowerPath license is present. As of PowerPath 5.5, bf has been removed from the powermt set policy command usage; in subsequent releases you will not be able to set this policy manually. See also "Load balancing."

Boot device The device that contains a computer’s startup code. Symmetrix logical devices managed by PowerPath can be configured as boot devices.

Bus In a computer, a collection of signal lines that work together to connect one or more modules; for example, a disk controller and the central processor. A bus can also connect two cooperating controllers, such as a SCSI host adapter and a SCSI device controller. See also ”SCSI.”

C

Channel A point-to-point data transport link.

Channel group PowerPath’s name for a communication channel directed to only one logical device. Several paths make up a channel group. Channel groups can increase system performance and redundancy by dedicating a set of paths to a critical application component (for example, database log files), while maintaining access to a redundant set of paths to the application component, in case the first set fails.

CLARiiON LUN name See ”User-assignable LUN name.”

CLARiiON optimization (co) A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to paths based on an algorithm that takes into account path load and the logical device priority you set with powermt set policy. This policy is valid for CLARiiON storage systems only and is the default policy for them, on platforms with a valid PowerPath license. It is listed in powermt display output as CLAROpt. See also "Load balancing."
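For reference, the policy in effect for a device can be checked from the powermt CLI. The following is an illustrative sketch only; the pseudo device name emcpowera is a placeholder, and output details vary by platform and release:

   powermt display dev=emcpowera

In the output, the policy field for the device reads CLAROpt when CLARiiON optimization is in effect.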

Cluster Two or more interconnected hosts sharing access to the same data storage resources. If one host fails, another host can continue to make data available to applications.


Consistency Group A group of Symmetrix devices specially configured to act in unison, to maintain the integrity of a database distributed across multiple Symmetrix Remote Data Facility (SRDF) units. PowerPath can report SRDF consistency group status. See also ”SRDF (Symmetrix Remote Data Facility).”

Controller A device that controls and manages access to a part of a computer or computerized system. Examples include disk controllers on computers or similar controllers on disk storage systems.

D

Data availability Access to any and all user data by an application.

Data channel See ”Channel.”

Dead One of two states for paths and logical devices:

• A dead path is not usable: PowerPath will not direct user I/O to this path. PowerPath marks a path dead when it fails a path test; it marks a path alive again when it passes a path test.

• A dead logical device returned certain types of I/O errors to PowerPath and was judged unusable. Once a logical device is marked dead (and until it is restored with powermt restore), PowerPath returns subsequent I/O requests with a failure status, without forwarding them to the logical device. This prevents further, unrecoverable corruption and allows the user to perform data recovery if needed. Dead is an unusual condition for logical devices. HP-UX is the only platform that ever marks logical devices as dead.

See also ”Alive,” “Path” and “Logical device.”
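For illustration, dead paths (and, on HP-UX, dead logical devices) can be tested and restored from the powermt CLI. This is a minimal sketch; dev=all is one common usage, and exact behavior is described in the powermt documentation for your platform:

   powermt restore dev=all
   powermt display dev=all

powermt restore issues path tests on the specified paths and marks those that pass as alive; powermt display then shows the updated path states.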

Default An attribute, value, or option that is assumed when no other is specified explicitly.

Degraded One of three statuses reported by PowerPath for an HBA. The other statuses are failed and optimal. Degraded means one or more (but not all) I/O paths connected to this HBA have failed. See also ”Failed” and “Optimal.”

Device An addressable part (physical or logical) of a host or storage device. For example, PowerPath represents a path set between a host and a logical device as a uniquely named pseudo device.


Device number The value that logically identifies a device.

Device driver Software that permits data transfer between a computer system and a device such as a disk. Typically, a device driver interacts directly with system hardware. Consequently, satisfactory operation requires device driver software that is compatible with a specific operating system and hardware model.

Disabled (HBA) A user-defined HBA attribute, indicating the system administrator has made the HBA unavailable for use by PowerPath. Disabling an HBA tells PowerPath not to use any paths originating from this HBA for I/O. Disabling an HBA is done using operating-system-specific commands, not in PowerPath. See also ”Enabled (HBA).”

E

Enabled (HBA) A user-defined HBA attribute, indicating the system administrator considers the HBA available for use by PowerPath. Enabling an HBA is done using operating-system-specific commands, not in PowerPath. See also "Disabled (HBA)."

Encryption See ”PowerPath Encryption.”

E_Port An expansion port on a Fibre Channel switch that links multiple switches into a fabric.

emcpower device The name used by PowerPath (on some operating systems) for a pseudo device. See also ”Pseudo device.”

ESN Enterprise Storage Network. An ESN can provide high availability by configuring multiple paths between connections, configuring alternate paths to Storage Area Network (SAN) components, and deploying redundant SAN components.

F

Fabric The facilities that link multiple Fibre Channel nodes.

Failed One of three statuses reported by PowerPath for an HBA. The other statuses are degraded and optimal. Failed means all paths to this HBA are dead and no data is passing through the HBA. See also ”Degraded” and “Optimal.”


Failover In PowerPath, the process of detecting a failure on an active path and automatically sending data to another available path.

FC-AL See ”Fibre Channel Arbitrated Loop (FC-AL).”

Fibre A general term for all physical media types supported by the Fibre Channel specification, such as optical fiber, twisted pair, and coaxial cable.

Fibre Channel The general name of an integrated set of ANSI standards that define protocols for flexible information transfer. Fibre Channel is a high-performance serial data channel.

Fibre Channel Arbitrated Loop (FC-AL) A standard for a shared access loop, in which several Fibre Channel devices are connected (as opposed to point-to-point transmissions). See also "Arbitrated loop."

Firmware Software, typically startup and I/O instructions, stored in an HBA’s read-only memory. PowerPath installation requirements often specify both an HBA and a specific revision of that HBA’s firmware.

G

GUI The acronym for graphical user interface, which represents an application with icons, menus, and dialog boxes selectable by a user. Command-line interfaces are another major means of interacting with an application. PowerPath Administrator is a GUI that allows you to interactively manage PowerPath on Windows platforms.

H

Host The generic name for a computer connected to a network or cluster system.

Host bus adapter (HBA) A device through which a host can issue I/O requests. PowerPath reports the status of paths originating from HBAs as optimal, degraded, or failed.

Hub A Fibre Channel device used to connect several devices (such as computer servers and storage systems) into a Fibre Channel Arbitrated Loop (FC-AL). See also "Fibre Channel Arbitrated Loop (FC-AL)."


I

Identifier (ID) A sequence of bits or characters that identifies a program, device, controller, or system.

Initiator A SCSI or Fibre Channel device (usually a host system) that requests an operation to be performed by another device, called the target. See also ”Target.”

Interface An array front-end component through which hosts access the storage system; for example, a Fibre Adapter (FA) on a Symmetrix storage system or a Storage Processor (SP) on a CLARiiON storage system. An array interface port is the front-end interface that connects to the SAN. An interface board (or array board) consists of the interface ports.

L

Least blocks (lb) A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of blocks in pending I/Os. I/O requests are assigned to the path with the fewest queued blocks, regardless of the number of requests involved. See also "Load balancing."

Least IOs (li) A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of pending I/Os. I/O requests are assigned to the path with the fewest queued requests, regardless of total block volume. See also ”Load balancing.”

Load balancing The activity of distributing the I/O workload across two or more paths, according to a defined policy. See also ”Path” and “Policy.”

Logical device The smallest addressable storage unit. A logical device is an entity managed and presented by a storage system, which comprises one or more physical disks or sections of physical disks. Logical devices aggregated and managed at a higher level by a volume manager are referenced as logical volumes rather than logical devices.

Logical Volume Manager (LVM) Software that manages logical storage devices. Logical volume managers typically reside under the computer server's filesystem.

Logical Unit Number (LUN) An identifier for a physical or virtual device addressable through a target.

LUN name See ”User-assignable LUN name.”


LUNZ A CLARiiON device used by a management program to communicate with the storage system. A LUNZ tells the storage system that the host exists and what the host's WWN is. (A WWN, or World Wide Name, uniquely identifies a device on a Fibre Channel network.)

A LUNZ device is present when no storage has been assigned to the host. When Access Logix is used on a CLARiiON system, an agent runs on the host and communicates with the storage system through either the LUNZ or a storage device. On a CLARiiON system, the LUNZ device is replaced by the first storage device assigned to the host; the agent then communicates through the storage device. See also ”VCMDB (Volume Configuration Management Database).”

M

Mirroring Maintaining two or more identical copies of a designated volume on two or more disks. Each copy is updated automatically during a write operation. Mirroring improves data availability: if one disk device fails, storage devices automatically use the other disk device to access the data. In this way, the mirrored copies of a disk can be presented as a single, fault-tolerant, virtual disk.

Mirrored pair A logical volume with all data recorded twice, once on each of two different physical devices.

Mode An attribute of a PowerPath path. Path mode can be active or standby. See also ”Active (paths)” and “Standby (paths).”

N

Native device A device created by the operating system to represent and provide access to a logical device. Typically, a native device is path aware (as opposed to path independent) and represents a single path to a logical device. The device is native in that it is provided by the operating system for use with applications. PowerPath supports native devices on all platforms except AIX. See also "PowerPath device" and "Pseudo device."

Nice name See ”User-assignable LUN name.”

No redirect (nr) A load-balancing and failover policy for PowerPath devices, in which neither load balancing nor failover is in effect. If nr is set on a failed path and a native device is used, I/O errors will occur when I/O is directed to that path. If one or more paths are failed and nr is set, data I/O errors can occur. EMC does not recommend using this policy in production environments; use it only for diagnostic purposes. This policy is the default for Invista on platforms without a valid PowerPath license. PowerPath 5.3 and its service packs are the last versions to support setting the NoRedirect (nr) load-balancing and failover policy when a PowerPath license is present. As of PowerPath 5.5, nr has been removed from the powermt set policy command usage; in subsequent releases you will not be able to set this policy manually. See also "Load balancing."

O

Operating system Software that manages the use and allocation of computer resources; for example, memory, central processing unit (CPU), disk, and printer access. PowerPath runs on several operating systems. In PowerPath documentation, an operating system and the hardware it runs on are referred to as a platform.

Optimal One of three statuses reported by PowerPath for an HBA; the others are degraded and failed. Optimal means all paths to this HBA are alive (usable). See also ”Degraded” and “Failed.”

P

Parameter A value given to a command variable. PowerPath powermt commands have parameters that users can specify to tailor the effects of the commands.

Path Any route between nodes in a network. In PowerPath, a path refers to the route travelled by PowerPath data between a host and a logical device. A path comprises an HBA, one or more cables, a switch or hub (Fibre Channel only), an interface and port, and a logical device.

Path set In PowerPath, the group of all paths that read data from and write data to the same logical device.

Physical volume In IBM AIX LVM terminology, each physical disk drive connected to the system. A physical volume is an addressable disk on the SCSI bus. By default, AIX refers to the physical volumes as hdisk0, hdisk1, hdisk2, and so on. See also ”SCSI.”


Physical volume identifier (PVID) On AIX, a unique number written on the first block of the device. The Logical Volume Manager uses this number to identify specific disks. See also "Logical Volume Manager (LVM)."

Platform In PowerPath documentation, an operating system and the hardware it runs on.

Policy A load-balancing and failover algorithm for PowerPath devices. The policy can be changed with powermt set policy. See also "Adaptive (ad)," "CLARiiON optimization (co)," "Least blocks (lb)," "Least IOs (li)," "Request (re)," "Round robin (rr)," "StreamIO (si)," and "Symmetrix optimization (so)." PowerPath 5.3 and its service packs are the last versions to support setting the Basic failover (bf) and NoRedirect (nr) load-balancing and failover policies when a PowerPath license is present. As of PowerPath 5.5, bf and nr have been removed from the powermt set policy command usage; in subsequent releases you will not be able to set these policies manually.
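For example, the following is an illustrative sketch of changing the policy with powermt; the policy values and the pseudo device name emcpowera are placeholders, and the policies accepted depend on the storage-system class and the license installed:

   powermt set policy=co dev=emcpowera
   powermt set policy=rr dev=all

powermt display dev=all can then be used to confirm the policy reported for each device.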

Port (1) An access point for data entry or exit. (2) A receptacle on a device, to which a cable for another device is attached.

PowerPath device A device created by PowerPath for each logical device PowerPath discovers. There is a one-to-one relationship between PowerPath devices and logical devices. PowerPath presents PowerPath devices differently, depending on the platform. Much of this difference is due to the design of the host operating system. Depending on the platform, PowerPath may present PowerPath devices as native devices or pseudo devices. See also ”Logical device,” “Native device,” and “Pseudo device.”

PowerPath Encryption PowerPath Encryption with RSA is host-based software that uses strong encryption to safeguard sensitive data on disk devices. PowerPath Encryption assures the confidentiality of data on a disk drive that is physically removed from a data center, and it prevents anyone who gains unauthorized access to the disk from reading or using the data on that device.

Pseudo device A special kind of device (operating system object used to access devices) created by PowerPath. It is path independent, as are native devices, once PowerPath is installed. When a pseudo device is created, there is one (and only one) per path set.


See also ”emcpower device,” “Native device,” “Path set,” and “PowerPath device.”

R

R1 (Source) and R2 (Target) devices A Symmetrix source (R1) device participating in SRDF operations with a target (R2) device. All writes to the R1 device are mirrored to an R2 target device in a remote Symmetrix unit. On some platforms, PowerPath provides failover boot support to R1 and R2 devices. See also "SRDF (Symmetrix Remote Data Facility)" and "Boot device."

Reassignment On an active-passive storage system, movement of logical devices from one storage system interface card to another. This occurs in the event of a failure of a storage system interface card or all paths to an interface card. If an interface card fails, logical devices are reassigned from the broken interface to another interface. This reassignment is initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices accessed on those paths are reassigned to another interface, with which the host can still communicate. This reassignment is initiated by PowerPath, which instructs the storage system to make the reassignment.

Reassignment can take several seconds to complete; however, I/Os do not fail during it. After devices are reassigned, PowerPath detects the changes and seamlessly routes data using the new route.

The CLARiiON term for reassignment is trespassing.

See also ”Active-passive (storage systems).”

Redundant path An independent communication channel between a host and a logical device that already share at least one channel. PowerPath allows you to create redundant paths to promote failover. See also ”Failover.”

Request (re) A load-balancing and failover policy for PowerPath devices. For native devices, it uses the path that would have been used if PowerPath were not installed. For pseudo devices, it uses one arbitrary path for all I/O. For all devices, path failover is in effect, but load balancing is not. This is the default policy for CLARiiON storage systems on platforms with a valid PowerPath Base license. See also ”Failover” and “Load balancing.”

Round robin (rr) A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to each available path in rotation. See also ”Load balancing.”


S

SAN Storage Area Network. See also "ESN."

SCSI The acronym for Small Computer System Interface, the ANSI-standard set of protocols that defines connections between personal and other small computers and peripheral devices such as printers and disks. PowerPath supports SCSI standards. Specific requirements apply to each supported operating system.

SCSI device An HBA, peripheral controller, or intelligent peripheral that can attach to a SCSI bus.

Single point of failure (SPOF) A hardware or software design or configuration that depends on one component for successful operation: if that component fails, the entire application fails. High-availability design tries to eliminate or minimize single points of failure through redundancy, recovery, and/or failover.

Standby (paths) One of two modes for I/O paths. A standby path is held in reserve. Being set to standby does not mean a path will not be used. Rather, it means that the weight of the path is heavily adjusted to preclude its use in normal operations. A standby path still can be selected if it is the best path for a request. Path mode is set with powermt set mode. See also ”Active (paths).”
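For illustration, path mode is set per HBA and device with powermt set mode. This is a sketch only; the HBA number (3) is a placeholder taken from powermt display output on a given host:

   powermt set mode=standby hba=3 dev=all
   powermt set mode=active hba=3 dev=all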

Storage group One or more LUNs within a storage system that are reserved for one or more hosts and are inaccessible to other hosts. Access Logix enforces the host-to-storage group permissions and runs in the storage-system SPs.

StreamIO (si) A load-balancing and failover policy for PowerPath devices. For each I/O to a particular volume, this policy selects the same path as was selected for the previous I/O to that volume, unless the volume's I/O count since the last path change exceeds the volume's threshold value. When the threshold is exceeded, the policy selects a new path based on the adaptive policy algorithm. The volume I/O count is reset to zero on each path change. See also "Adaptive (ad)."

Striping Segmenting logically sequential data and writing the segments to multiple physical disks. Placing data across multiple disks improves performance by aggregating the I/O performance of several disks. It also improves availability, as the combined striped data can be presented as a single, fault-tolerant, virtual disk.

Storage device A physical device that can attach to a SCSI device, which in turn connects to the SCSI bus.

Switch A Fibre Channel device used to connect other devices (for example, computer servers and storage systems) into a Fibre Channel fabric. In a switched topology, HBAs may be zoned to share storage-system ports. See also ”Fibre Channel.”

Symmetrix optimization (so) A load-balancing and failover policy for PowerPath devices, in which I/O requests are routed to paths based on an algorithm that takes into account path load and the logical device priority you set with powermt set policy. Load is a function of the number, size, priority, and type of I/O queued on each path. This policy is valid for Symmetrix storage systems only and is the default policy for them, on platforms with a valid PowerPath license. It is listed in powermt display output as SymmOpt. See also "Load balancing."

SRDF (Symmetrix Remote Data Facility) The microcode and hardware required to support Symmetrix remote mirroring. See also "Mirroring."

T

Target A SCSI or Fibre Channel device that performs an I/O process requested by another device, called the initiator. See also "Initiator."

Trespassing The CLARiiON term for reassignment. See also ”Reassignment.”

U

UNIX An interactive, multitasking, multiuser operating system supported by PowerPath.

User-assignable LUN name A character string that a user or system manager associates with a logical device on a CLARiiON array and assigns through Navisphere.


V

VCMDB (Volume Configuration Management Database) A Symmetrix device used by a management program to communicate with the storage system. A VCMDB tells the storage system that the host exists and what the host's WWN is. (A WWN, or World Wide Name, uniquely identifies a device on a Fibre Channel network.)

A VCMDB is present when using Volume Logix to perform LUN masking. When storage is assigned to the host, the storage appears in addition to the VCMDB. See also ”LUNZ.”

Volume An abstracted, logical disk device. Volumes read and write data like other disk devices, but typically they do not support other operations. A Symmetrix volume may comprise storage on one or more Symmetrix devices, but it is presented to hosts as a single disk device.

A volume can be a single disk partition or multiple disk partitions on one or more physical drives. A volume can coincide with a logical device, include multiple logical devices, or contain only a piece of a logical device. Applications that use volumes do not need to be aware of the underlying physical structure; software handles the mapping of virtual addresses to physical addresses.

See also ”Logical device.”

Volume group A group of physical volumes.

Volume manager Software that creates and manages logical volumes that span multiple physical disks, allowing greater flexibility and reliability for storing data.

W

Write throttling If enabled, limits the number of queued writes to the common I/O queue in the HBA driver; instead, the writes are queued in PowerPath. As a result, read requests do not get delayed behind a large number of write requests. Write throttling is disabled by default. See also "Host bus adapter (HBA)."
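As an illustrative sketch only: on platforms where write throttling is supported, it is managed through powermt. The syntax and the queue depth value below are assumptions based on typical powermt usage and should be verified against the PowerPath CLI reference for your release; the device name is a placeholder:

   powermt set write_throttle=on dev=emcpowera
   powermt set write_throttle_queue=32 dev=emcpowera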


Z

Zone A set of devices that can access one another. All devices connected to a Fibre Channel connectivity product (such as the ED-1032 Director) may be configured into one or more zones. Devices in the same zone can "see" each other, while those in different zones cannot. Zoning allows an administrator to group several devices by function or location. See also "Fibre Channel."


Index

A
Access Logix 89
Active-active storage systems 37
Adapter 90
Adaptive (ad) 90
Alive path state 48
Application performance tuning 52
Arbitrated loop 90
Autorestore. See Periodic autorestore

B
Boot device 91
Bus 91

C
Channel 91
Channel group 52
Cluster 91
Comments 17
Consistency group 92
Controller 92

D
Data availability 92
Data channel 92
Dead path state 48
Default 92
Device
   definition 92
   driver 93
   native 40
   number 93
   pseudo 41
Disabled HBA status 93
Disabling
   a port 35
Documentation, related 12

E
E_Port 93
emcpower device 93
Enabled
   HBA status 93
Encryption. See PowerPath Encryption
ESN 93

F
FA 35
Fabric 93
Failover 27
FC-AL 94
Fibre 94
Fibre Channel
   configuration requirements 59
   definition 94
Fibre Channel Arbitrated Loop 94
Firmware 94

G
GUI 94

H
HBA. See Host bus adapter
Host 94
Host bus adapter (HBA)
   definition 35
   disabled status 93
   enabled status 93
Hub 94

I
Identifier (ID) 95
Initiator 95
Interface 35, 95
Invista storage devices 73

L
Load balancing 26, 44
Load balancing group 39
Logical Unit Number (LUN) 95
Logical Volume Manager (LVM) 95
LUNZ 96

M
Mirrored pair 96
Mirroring 96

N
Native devices 40

O
Operating system 97

P
Parameter 97
Path set 38
Paths
   about 35
   failover 27
   load balancing 26, 44
   path set 38
   restoration 26
   state 48
   testing 26, 48
Performance, tuning applications 52
Periodic autorestore 51
Physical volume 97
Physical Volume Identifier (PVID) 98
Platform 98
Port
   definition 35
   disabling 35
Ports, using multiple 35
powermt set mode 53
powermt utility
   commands 54
PowerPath Encryption 32
PowerPath Fabric Failover 75
PowerPath SE 75
Pseudo devices 41

R
R1 (source) and R2 (target) devices 99
Reassignment 37, 99
Redundant path 99
Restoring paths 26

S
SAN 100
SCSI
   configuration requirements 70
   definition 100
   device 100
Single point of failure (SPOF) 100
SP (Storage Processor) 35
State, path 48
Storage 100
Storage device 101
Storage group 100
Stream I/O (si) 100
Striping 100
Switch 101
Symmetrix Remote Data Facility (SRDF) 101

T
Target 101
Testing paths 26, 48
Trespassing 37, 99
Tuning application performance 52

U
UNIX 101
Utility Kit PowerPath 75

V
VCMDB (Volume Configuration Management Database) 102
Volume 102
Volume group 102
Volume manager 102

Z
Zone 103
