
EMC® Host Connectivity Guide for Windows

P/N 300-000-603
REV A48

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com


Copyright © 2012 EMC Corporation. All rights reserved.

Published May, 2012

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

EMC Host Connectivity Guide for Windows


Contents

Preface............................................................................................................................ 17

Chapter 1  General Procedures and Information
General Windows information ....................................................... 22
    Terminology ............................................................................... 22
    Utilities and functions ............................................................... 22
Windows environment ..................................................................... 23
    Hardware connectivity ............................................................. 23
Booting Windows from external storage ....................................... 24
    Boot-from-SAN .......................................................................... 24
    Benefits of boot-from-SAN ....................................................... 24
    Boot-from-SAN configuration restrictions ............................. 25
    Risks of booting from the storage array ................................. 25
    How to determine I/O latency and load on the boot LUN ... 26
    Boot crash dump save to disk behavior ................................. 26
    Configuring EMC VNX series and CLARiiON systems for boot from SAN ... 27
Configuring for Windows 2000/2003/2008 .................................. 44
    Configuring the HBA driver .................................................... 44
    Initializing the disks .................................................................. 44
    Creating volumes ....................................................................... 47
    System status and error messages ........................................... 49
    Adding devices online .............................................................. 50
Editing the Windows I/O timeout value ...................................... 51
Recovering from drive errors .......................................................... 52
    Recovery from offline state ...................................................... 52
    Rebooting to recover from an offline state ............................. 54
    Recovery from at-risk file system ............................................ 55
    Recovery from fail state ............................................................ 56


diskpar and diskpart ........................................................................ 57
Microsoft Cluster Server .................................................................. 58
    Windows 2008 clustering changes .......................................... 58
    Quorum schemes ....................................................................... 61
    MSCS in a boot from SAN environment ................................ 66
    Running cluster with iSCSI-attached Symmetrix devices ... 68
Troubleshooting Microsoft cluster issues ...................................... 69
    Setting up your networks for the server cluster ................... 69
    Configuring TCP/IP settings ................................................... 70
    Using a different disk for the quorum resource .................... 73
    Recovering from a corrupted quorum log or quorum disk ... 75
    Checklist: Installing a physical disk resource ....................... 77
    Creating a new group in a cluster ........................................... 78
    Creating a new resource ........................................................... 79
    Best practices for configuring and operating server clusters ... 81
    How to troubleshoot cluster service startup issues .............. 87
    Cluster disk and drive connection problems ........................ 93
    Client-to-cluster connectivity problems ................................. 98
    General administrative problems ......................................... 102
    Windows 2008 Failover Clustering and Symmetrix ........... 109
Windows 2008 Server Core operating system option ............... 110
    Limitations ................................................................................ 111

Chapter 2  Fibre Channel Attach Environments
Windows Fibre Channel environment ......................................... 114
    Hardware connectivity ........................................................... 114
    Boot device support ................................................................ 114
Planning for fabric zoning and connections ............................... 115
Host configuration with Emulex HBAs ....................................... 116
    Configuring Emulex OneConnect 10 GbE iSCSI BIOS/boot LUN settings for OCe10102-IM iSCSI adapters ... 116
    Installing Emulex LPSe12002 8 Gb PCIe EmulexSecure Fibre Channel adapter ... 123
Host configuration with QLogic HBAs ....................................... 131
Host configuration with Brocade HBAs ...................................... 132
Fibre Channel over Ethernet (FCoE) environments ................... 133
Cisco Unified Computing System ................................................ 135


Chapter 3  iSCSI Attach Environments
Introduction ..................................................................................... 138
    Terminology .............................................................................. 138
    Software ..................................................................................... 138
    Boot device support ................................................................. 139
Installing the Microsoft iSCSI Initiator ......................................... 140
    Adding or removing components ......................................... 141
    Determining the Initiator version on Windows 2003 ......... 153
    Uninstalling the Initiator ......................................................... 153
    Windows 2008 R2 iSCSI Initiator manual procedure ......... 154
    Windows 2008 R2 iSCSI Initiator cleanup ............................ 160
Using MS iSNS server software with iSCSI configurations ...... 163
    Installing iSNS server software .............................................. 163
    Configuring iSNS server ......................................................... 167
    Using discovery domains for iSNS ........................................ 168
iSCSI Boot with the Intel PRO/1000 family of adapters ........... 173
    Preparing your storage array for boot .................................. 173
    Post installation information .................................................. 177
Notes on Microsoft iSCSI Initiator ................................................ 180
    iSCSI failover behavior with the Microsoft iSCSI initiator ... 180
    Microsoft Cluster Server ......................................................... 201
    Dynamic disks .......................................................................... 202
    Boot ............................................................................................ 202
    NIC teaming ............................................................................. 202
    Using the Initiator with EMC PowerPath v4.6.x or later ... 203
    Commonly seen issues ............................................................ 208

Chapter 4  EMC Symmetrix, VNX Series, CLARiiON, and Celerra Information
Symmetrix environment ................................................................ 216
    Initial Symmetrix configuration ............................................ 216
    Arbitrated loop addressing .................................................... 217
    Fabric addressing ..................................................................... 219
    SCSI-3 FCP addressing ............................................................ 220
    Federated Live Migration (FLM) ........................................... 221
VNX series and CLARiiON environment ................................... 222
    Storage components ................................................................ 222
    Required storage system setup .............................................. 222
    Host connectivity ..................................................................... 222
    LUNZ visibility to host ........................................................... 223
    Unisphere/Navisphere Windows host agent and server utility ... 223
    About Unisphere/Navisphere host agent ........................... 224


    About Unisphere/Navisphere storage system initialization utility ... 224
    Unisphere/Navisphere Management Suite ......................... 225
    LUN expansion recognition ................................................... 225
    Configuring Veritas Volume Manager for VNX series and CLARiiON ... 227
Celerra environment ...................................................................... 228

Chapter 5  Virtual Provisioning
Virtual Provisioning on Symmetrix ............................................. 230
    Terminology ............................................................................. 231
    Management tools ................................................................... 232
    Thin device ............................................................................... 232
Implementation considerations .................................................... 235
    Over-subscribed thin pools .................................................... 236
    Thin-hostile environments ..................................................... 236
    Pre-provisioning with thin devices in a thin hostile environment ... 237
    Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices ... 238
    Cluster configurations ............................................................ 239
Operating system characteristics .................................................. 240

Chapter 6  Invista
EMC Invista overview ................................................................... 242
    Invista architecture .................................................................. 242
    EMC Invista advantages ......................................................... 243
    EMC Invista documentation .................................................. 244
    EMC Invista offerings ............................................................. 245
Prerequisites .................................................................................... 246
Storage components ....................................................................... 247
    Invista component terminology ............................................ 247
    Invista components ................................................................. 247
    Invista instance components .................................................. 247
Configuration guidelines ............................................................... 249
Required storage system setup ..................................................... 250
Host connectivity ............................................................................ 252
Front-end paths ............................................................................... 253
    Guidelines for optimizing the configuration ....................... 253
    Viewing the World Wide Name for an HBA port ............... 254
    InvistaServerUtilCLI (PushApp) ........................................... 254
    Manually registering a front-end path ................................. 254
    Verifying the status of a front-end path ............................... 256


Making volumes in an Invista Virtual Frame visible to a Windows host ... 259
LUNZ visibility to host ................................................................... 260
Guidelines for optimizing the configuration .............................. 261
Installing AdmReplicate ................................................................. 262
ICRV requirements ......................................................................... 263
    Symmetrix-specific array ........................................................ 263
    VNX series- and CLARiiON-specific system ...................... 264
Using PowerPath Migration Enabler ........................................... 266
    Storage Elements ..................................................................... 266
    Preparing a Storage Element for PowerPath Migration Enabler ... 266
    Making unimported Storage Element unavailable ............. 267
Invista and Veritas Volume Manager interaction ...................... 268

Chapter 7  EMC VPLEX
EMC VPLEX overview ................................................................... 270
    Product description ................................................................. 270
    Product offerings ..................................................................... 271
    GeoSynchrony .......................................................................... 271
    VPLEX advantages .................................................................. 274
    VPLEX management ............................................................... 274
    SAN switches ........................................................................... 275
    VPLEX limitations ................................................................... 275
    VPLEX documentation ........................................................... 275
Prerequisites .................................................................................... 277
Provisioning and exporting storage ............................................. 278
    VPLEX with GeoSynchrony v4.x .......................................... 278
    VPLEX with GeoSynchrony v5.x .......................................... 279
Storage volumes .............................................................................. 280
    Claiming and naming storage volumes ............................... 280
    Extents ....................................................................................... 281
    Devices ...................................................................................... 281
    Distributed devices ................................................................. 281
    Rule sets .................................................................................... 281
    Virtual volumes ....................................................................... 282
System volumes .............................................................................. 283
    Metadata volumes ................................................................... 283
    Logging volumes ..................................................................... 283
Required storage system setup ..................................................... 284
    Required Symmetrix FA bit settings .................................... 284
    Supported storage arrays ....................................................... 285
    Initiator settings on back-end arrays .................................... 285
Host connectivity ............................................................................ 286


Exporting virtual volumes to hosts .............................................. 287
Front-end paths ............................................................................... 292
    Viewing the World Wide Name for an HBA port ............... 292
    VPLEX ports ............................................................................. 292
    Initiators .................................................................................... 292
Configuring Windows hosts to recognize VPLEX volumes ..... 294
Configuring quorum on Windows Failover Cluster for VPLEX Metro or Geo clusters ... 296
    VPLEX Metro or Geo cluster configuration ......................... 296
    Prerequisites ............................................................................. 297
    Setting up quorum on a Windows Failover Cluster for VPLEX Metro or Geo clusters ... 298

Chapter 8  EMC PowerPath for Windows
PowerPath and PowerPath iSCSI ................................................. 306
PowerPath for Windows ................................................................ 307
    PowerPath and MSCS ............................................................. 307
    Integrating PowerPath into an existing MSCS cluster ....... 307
PowerPath verification and problem determination ................. 310
    Problem determination ........................................................... 312
    Making changes to your environment ................................. 315
    PowerPath messages ............................................................... 316

Chapter 9  Using Microsoft Native MPIO with Windows Server 2008 and Windows Server 2008 R2
Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2 ... 318
Installing and configuring Native MPIO ..................................... 319
    Path management in Multipath I/O for VPLEX, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K/VMAXe, VNX series, and CLARiiON systems ... 321
    Enabling Native MPIO on Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core ... 322
Known issues ................................................................................... 324

Appendix A  Persistent Binding
Understanding persistent binding ............................................... 326
    Methods of persistent binding ............................................... 329


Appendix B  Solutions Enabler
EMC Solutions Enabler .................................................................. 332
Migration considerations ............................................................... 333
    Information collection phase ................................................. 333
    Reconnection phase ................................................................. 334
    Troubleshooting phase ............................................................ 334
    References ................................................................................. 335

Appendix C  General Host Behavior and Limitations
Issues ................................................................................................ 338
Capabilities and limitations .......................................................... 339
    Operating system/driver capabilities and limitations ....... 339
How a server responds to a failure in the boot LUN path ....... 343

Appendix D  Veritas Volume Management Software
Veritas Volume Management software ....................................... 346
    Veritas Storage Foundation 5.0 and 5.1 ................................ 346
    Veritas Storage Foundation 4.3 .............................................. 347
    Veritas Storage Foundation 4.2 .............................................. 348
    Veritas Foundation Suite 4.1 .................................................. 348
    Veritas Volume Manager 3.1 and Veritas DMP .................. 348
    Veritas Volume Manager 3.0 .................................................. 349
Veritas Storage Foundation feature functionality ...................... 350
    Thin Reclamation (VxVM) ...................................................... 350
    SmartMove (VxVM) ................................................................ 350

Index .............................................................................................................................. 351


Figures

1   Disk Management in Computer Management window .......................... 45
2   Upgrade to Dynamic Disk ............................................................................ 46
3   Healthy volumes and file systems example .............................................. 48
4   Recovery from offline state .......................................................................... 53
5   Offline state requires reboot ........................................................................ 54
6   Recovery from at-risk file system ............................................................... 55
7   Recovery from fail state ................................................................................ 56
8   Configure Cluster Quorum Wizard ............................................................ 60
9   MSCS using legacy quorum scheme .......................................................... 62
10  MSCS using MNS quorum scheme ............................................................ 63
11  Cluster Service Properties dialog box, Recovery tab ............................... 68
12  Server Core installation example .............................................................. 110
13  Emulex OneConnect 10 GbE iSCSI BIOS banner .................................... 117
14  Emulex OneConnect iSCSI Select Utility page ....................................... 117
15  Emulex OneConnect iSCSI BIOS Controller Configuration Selection Menu ... 118
16  Individual controller configuration details ............................................. 118
17  Enable Boot Support ................................................................................... 119
18  Controller Network Configuration screen .............................................. 120
19  Controller Static IP Address ...................................................................... 120
20  Controller Static IP Address ...................................................................... 121
21  Controller iSCSI Target Configuration ..................................................... 121
22  Adding iSCSI Target ................................................................................... 122
23  Emulex OneConnect 10 GbE iSCSI BIOS banner .................................... 123
24  Emulex One Command Manager software installation window ........ 124
25  eHBA Status before installing EHAPI ...................................................... 125
26  ElxSec Setup Wizard ................................................................................... 126
27  eHBA Status after installing EHAPI ......................................................... 127
28  Register PowerPath license key for encryption ...................................... 128
29  Config.bat ..................................................................................................... 129


30  Cisco Unified Computing System example ............................................. 136
31  Software Update Installation Wizard ....................................................... 140
32  iSCSI Initiator Properties window ............................................................ 142
33  iSCSI Initiator Properties, Discovery tab ................................................. 144
34  Add Target Portal dialog box .................................................................... 145
35  Add iSNS Server dialog box ...................................................................... 145
36  iSCSI Initiator Properties, Targets tab ...................................................... 146
37  Log On to Target dialog box ...................................................................... 147
38  Advanced Settings window ....................................................................... 148
39  iSCSI Initiator Properties, Targets tab ...................................................... 150
40  iSCSI Initiator Properties, Persistent Targets tab .................................... 151
41  iSCSI Initiator Properties, Bound Volumes/Devices tab ....................... 152
42  Microsoft iSNS Server Installation Wizard .............................................. 164
43  Installation option dialog ........................................................................... 165
44  iSNS DHCP configuration option dialog ................................................. 165
45  Installation confirmation message ............................................................ 166
46  iSNS General properties ............................................................................. 167
47  Target/Initiator Details dialog .................................................................. 168
48  iSNS Server Properties, Discovery Domains tab .................................... 169
49  Create Discovery Domain dialog .............................................................. 169
50  iSNS Server Properties, Discovery Domain with members added ...... 170
51  Add registered Initiator or Target to Discovery Domain ...................... 171
52  iSNS Server Properties, Discovery Domain Sets tab .............................. 172
53  Four paths ..................................................................................................... 178
54  EMC PowerPathAdmin .............................................................................. 179
55  Advanced Settings dialog box ................................................................... 181
56  Single iSCSI subnet configuration ............................................................. 182
57  Multiple iSCSI subnet configuration ........................................................ 189
58  iSCSI Initiator Properties dialog box ........................................................ 204
59  Log On to Target dialog box ...................................................................... 205
60  Advanced Settings dialog box ................................................................... 206
61  Four paths ..................................................................................................... 207
62  Virtual Provisioning on Symmetrix .......................................................... 230
63  Thin device and thin storage pool containing data devices ................. 234
64  Invista instance ............................................................................................ 243
65  Front-end path example ............................................................................. 253
66  Connectivity status window ...................................................................... 255
67  Connectivity status window ...................................................................... 257
68  VPLEX provisioning and exporting storage process ............................. 279
69  Create storage view .................................................................................... 288
70  Register initiators ........................................................................................ 289
71  Add ports to storage view ......................................................................... 290
72  Add virtual volumes to storage view ....................................................... 291


73  VPLEX Metro cluster configuration example .......................................... 296
74  PowerPath Administration icon ................................................................ 310
75  PowerPath Monitor Taskbar icons and status ........................................ 310
76  One path ....................................................................................................... 311
77  Multiple paths .............................................................................................. 312
78  Error with an Array port ............................................................................ 314
79  Failed HBA path .......................................................................................... 315
80  Original configuration before the reboot ................................................. 328
81  Host after the reboot ................................................................................... 328
82  LUN Mapping and Automatic LUN Mapping ....................................... 340



Title Page

Tables

1 FC-AL addressing parameters .................................................................... 218
2 Symmetrix SCSI-3 addressing modes ........................................................ 221
3 Required Symmetrix FA bit settings for connection to Invista .............. 250
4 Required Symmetrix FA bit settings for connection to VPLEX .............. 284
5 Possible failure states .................................................................................... 313
6 Array and device types ................................................................................ 320
7 Server response to failure in the boot LUN path (Single fault) .............. 343
8 Impact of failure in the boot LUN path ..................................................... 344


Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC Representative.

This guide describes the features and setup procedures for Windows 2000, Windows 2003, and Windows 2008 host interfaces to EMC storage arrays over Fibre Channel or iSCSI.

Audience
This guide is intended for use by storage administrators, system programmers, or operators who are involved in acquiring, managing, or operating EMC Symmetrix, EMC VNX series, and EMC CLARiiON systems and host devices.

Readers of this guide are expected to be familiar with:

◆ Symmetrix, VNX series, and CLARiiON system operation

◆ Windows 2000, 2003, or Windows 2008 operating environment

EMC Support Matrix

IMPORTANT!
Always consult the EMC Support Matrix, available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com, under the PDFs and Guides tab, for the most up-to-date information.


Conventions used in this guide

EMC uses the following conventions for notes and cautions.

Note: A note presents information that is important, but not hazard-related.

CAUTION!
A caution contains information essential to avoid damage to the system or equipment. The caution may apply to hardware or software.

Typographical conventions
EMC uses the following type style conventions in this guide:

normal font
In running text:
• Interface elements (for example, button names, dialog box names) outside of procedures
• Items that the user selects outside of procedures
• Java classes and interface names
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, filenames, functions, menu names, utilities
• Pathnames, URLs, filenames, directory names, computer names, links, groups, service keys, file systems, environment variables (for example, command line and text), notifications

bold
• User actions (what the user clicks, presses, or selects)
• Interface elements (button names, dialog box names)
• Names of keys, commands, programs, scripts, applications, utilities, processes, notifications, system calls, and services in text

italic
• Book titles
• New terms in text
• Emphasis in text

Courier
• Prompts
• System output
• Filenames
• Pathnames
• URLs
• Syntax when shown in command line or other examples

Courier, bold
• User entry
• Options in command-line syntax

Courier italic
• Arguments in examples of command-line syntax
• Variables in examples of screen or file output
• Variables in pathnames


Where to get help
EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this guide to:

[email protected]

<> Angle brackets for parameter values (variables) supplied by user.

[] Square brackets for optional values.

| Vertical bar symbol for alternate selections. The bar means or.

... Ellipsis for nonessential information omitted from the example.


1
General Procedures and Information

This chapter provides general procedures and information regarding Windows hosts.

◆ General Windows information ........................................................ 22
◆ Windows environment ..................................................................... 23
◆ Booting Windows from external storage ....................................... 24
◆ Configuring for Windows 2000/2003/2008 ................................... 44
◆ Editing the Windows I/O timeout value ....................................... 51
◆ Recovering from drive errors .......................................................... 52
◆ diskpar and diskpart ........................................................................ 57
◆ Microsoft Cluster Server .................................................................. 58
◆ Troubleshooting Microsoft cluster issues ...................................... 69
◆ Windows 2008 Server Core operating system option ................ 110


General Windows information
This section provides information that is common to all supported versions of Windows. Please read the entire section before proceeding to the rest of the chapter.

Terminology

You should understand these terms:

◆ Free space — An unused and unformatted portion of a hard disk that can be partitioned or subpartitioned.

◆ Partition — A portion of a physical hard disk that functions as though it were a physically separate unit.

◆ Volume — A partition or collection of partitions that have been formatted for use by a file system. A volume is assigned a drive letter.

◆ Primary partition — A portion of a physical disk that can be marked for use by an operating system. A physical disk can have up to four primary partitions. A primary partition cannot be subpartitioned.
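As a toy illustration of the terms above, the four-primary-partition limit of a physical disk can be modeled in a few lines. This is a sketch for illustration only, not a disk utility; the class and method names are invented:

```python
# Model of a physical disk with free space and primary partitions,
# enforcing the four-primary-partition limit described above.
class Disk:
    MAX_PRIMARY = 4

    def __init__(self, size_mb):
        self.size_mb = size_mb
        self.primaries = []          # sizes (MB) of primary partitions

    def free_space_mb(self):
        # Free space: the unused, unpartitioned portion of the disk
        return self.size_mb - sum(self.primaries)

    def add_primary(self, size_mb):
        if len(self.primaries) >= self.MAX_PRIMARY:
            raise ValueError("a physical disk can have at most 4 primary partitions")
        if size_mb > self.free_space_mb():
            raise ValueError("not enough free space")
        self.primaries.append(size_mb)

disk = Disk(1000)
for size in (250, 250, 250, 250):
    disk.add_primary(size)
print(len(disk.primaries), disk.free_space_mb())  # -> 4 0
```

Attempting a fifth `add_primary` call raises an error, mirroring the restriction that a primary partition cannot be subpartitioned and that only four can exist per disk.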

Utilities and functions

Here are some Windows functions and utilities you can use to define and manage EMC® Symmetrix®, EMC VNX™ series, and EMC CLARiiON® systems. The use of these functions and utilities is optional; they are listed for reference only:

◆ Disk Manager — Graphical tool for managing disks; for example, partitioning, creating, and deleting volumes.

◆ Registry Editor — Graphical tool for displaying detailed hardware and software configuration information. Not normally part of the Administrative Tools group, the registry editor, REGEDT32.EXE, can be found in the Windows \system32 subdirectory.

◆ Event Viewer — Graphical tool for viewing system or application errors.


Windows environment
This section lists Fibre Channel support information specific to the Windows environment.

For more information, refer to the appropriate chapter:

◆ Chapter 2, ”Fibre Channel Attach Environments”

◆ Chapter 3, ”iSCSI Attach Environments”

Hardware connectivity
Refer to the EMC Support Matrix or contact your EMC representative for the latest information on qualified hosts, host bus adapters, and connectivity equipment.

EMC does not recommend mixing HBAs from different vendors in the same host.


Booting Windows from external storage
Windows hosts have been qualified for booting from EMC array devices interfaced through Fibre Channel as described under "Boot Device Support" in the EMC Support Matrix. Refer to the appropriate Windows HBA guide, available on the Powerlink® website at http://Powerlink.EMC.com, for information on configuring your HBA and installing the Windows operating system to an external storage array:

◆ EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ EMC Host Connectivity with Brocade Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

Boot-from-SAN
Although Windows servers typically boot the operating system from a local, internal disk, many customers want to utilize the features of VNX series, CLARiiON, and Symmetrix to store and protect their boot disks and data. Boot-from-SAN allows VNX series, CLARiiON, or Symmetrix systems to be used as the boot disk for your server instead of a directly-attached (or internal) hard disk. Using a properly configured Fibre Channel HBA, FCoE CNA, or blade server mezzanine adapter connected and zoned to the same switch or fabric as the storage array, a server can be configured to use a LUN presented from the array as its boot disk.

Benefits of boot-from-SAN
Boot-from-SAN can simplify management in the data center. Separating the boot image from each server allows administrators to leverage their investments in EMC storage arrays to achieve high availability, better data integrity, and more efficient storage management. Other benefits can include:

◆ Improved disaster tolerance


◆ Reduced total cost through diskless servers

◆ High-availability storage

◆ Rapid server repurposing

◆ Consolidation of image management

Boot-from-SAN configuration restrictions

Refer to the EMC Support Matrix for any specific boot-from-SAN restrictions since this guide no longer contains restriction information. The information in the EMC Support Matrix supersedes any restriction references found in previous HBA installation guides.

Risks of booting from the storage array

When using the storage array as a boot disk, EMC recommends that you shut down the host server during any maintenance procedures that could make the boot disk unavailable to the host.

CAUTION!
Microsoft Windows operating systems use virtual memory paging files that reside on the boot disk. If the paging file becomes unavailable to the memory management system when it is needed, the operating system will crash with a blue screen.

Any of these events could crash a system booting from the storage array:

◆ Lost connection to array (pulled or damaged cable connection)

◆ Array service/upgrade procedures, such as on-line microcode upgrades and/or configuration changes

◆ Array failures, including failed lasers on Fibre Channel ports

◆ Array power failure

◆ Storage Area Network failures, such as Fibre Channel switches, switch components, or switch power failures

◆ Storage Area Network service/upgrade procedures, such as firmware upgrades or hardware replacements


Note: EMC recommends moving the Windows virtual memory paging file to a local disk when booting from the storage array. Consult your Windows manual for instructions on how to move the paging file.

How to determine I/O latency and load on the boot LUN
The restrictions for boot-from-array configurations listed in the EMC Support Matrix represent the maximum configuration allowed using typical configurations. There are cases where your applications, host, array, or SAN may already be utilized to a point where these maximum values may not be achieved. Under these conditions, you may wish to reduce the configuration from the maximums listed in the EMC Support Matrix for improved performance and functionality.

Here are some general measurements that can be used to determine whether your environment may not support the maximum allowed boot-from-array configurations:

◆ Using the Windows Performance Monitor, capture and analyze the Physical Disk and Paging File counters for your boot LUN. If response time (sec/operation) or disk queue depth appears to be increasing over time, you should review any additional loading that may be affecting boot LUN performance (HBA/SAN saturation, failovers, ISL usage, and so forth).

◆ Use available array performance management tools to verify that the array configuration, LUN configuration, and access are configured optimally for each host.

Possible ways to reduce the load on the boot LUN include:

◆ Move application data away from the boot LUN.

◆ Reduce the number of LUNs bound to the same physical disks.

◆ Select an improved performance RAID type.

◆ Contact your EMC support representative for additional information.

Boot crash dump save to disk behavior

If your system is configured to write crash dumps after system failures and the host is configured to boot from the array, you will be able to successfully save the crash dump only on the original available boot device path on which the system started. This is a Windows limitation, and installing EMC PowerPath® will not affect this behavior. At the time a system crash is to be written to disk, Windows has already saved the original boot path, and PowerPath cannot redirect the crash dump file (MEMORY.DMP) to an alternative available device. If you have a configuration for which you want to capture a crash dump, you should ensure that the original primary boot path is available at the time of the crash.

Configuring EMC VNX series and CLARiiON systems for boot from SAN

By default, EMC VNX series and CLARiiON storage systems are configured with all of the proper settings a Windows server requires for successful boot from SAN. EMC VNX series and CLARiiON storage systems have two storage processors (SPs) which allow for highly available data access even if a single hardware fault has occurred. In order for a host to be properly configured for high availability with boot-from-SAN, the HBA BIOS should have connections to both SPs on the VNX series and CLARiiON system.

At the start of the Windows boot procedure, there is no failover software running. HBA BIOS, with a primary path and secondary path(s) properly configured (with access to both SPs), will provide high availability while booting from SAN with a single hardware fault.

See EMC Knowledgebase solution emc99467 to determine which VNX series and CLARiiON failover modes are supported with your Windows failover software and VNX OE for block and CLARiiON FLARE® software.

IMPORTANT!
EMC strongly recommends using failover mode 4 (ALUA active/active) when supported, as ALUA will allow I/O access to the boot LUN from either SP, regardless of which SP currently owns the boot LUN.

Failover mode 1 is an active/passive failover mode. I/O can only successfully complete if it is directed to the SP that currently owns the boot LUN. If HBA BIOS attempts to boot from a passive path, BIOS will have to time out before attempting a secondary path to the active (owning) SP, which can cause delays at boot time. Using ALUA failover mode whenever possible will avoid these delays.
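The difference between the two failover modes can be sketched as a simple path-eligibility rule. This is an illustrative model only (not EMC software); the function and path representation are invented for this example:

```python
# Illustrative sketch: which paths can service I/O to the boot LUN
# under the failover modes discussed above. Mode 4 (ALUA) accepts I/O
# on either SP; mode 1 (active/passive) only on the owning SP, which
# is why HBA BIOS may time out on a passive path before failing over.

def io_capable_paths(paths, owning_sp, failover_mode):
    """paths: list of (hba, sp) tuples; returns paths that accept I/O."""
    if failover_mode == 4:      # ALUA active/active
        return paths
    if failover_mode == 1:      # active/passive
        return [p for p in paths if p[1] == owning_sp]
    raise ValueError("failover mode not covered by this sketch")

paths = [("HBA0", "SPA"), ("HBA1", "SPB")]
print(io_capable_paths(paths, "SPA", 4))  # both paths accept I/O
print(io_capable_paths(paths, "SPA", 1))  # only the SPA path
```

In mode 1, a boot attempt through the SPB path would have to time out before BIOS retries the SPA path, which is the boot-time delay the paragraph above describes.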


In order to configure a host to boot from SAN, the server needs to have a boot LUN presented to it from the array, which requires that the WWN of the HBA(s) or CNA(s), or the IQN of an iSCSI host, be registered.

In configurations where a server is already running Windows and is being attached to a VNX series or CLARiiON system, the EMC Unisphere™/Navisphere® Agent would be installed on the server. This agent automatically registers the server's HBA WWNs on the array. In boot-from-SAN configurations where the OS is going to be installed on the VNX series or CLARiiON LUN, there is no agent available to perform the registration. Manual registration of the HBA WWNs is required in order to present a LUN to the server for boot.

For instructions on how to register a host to boot from an iSCSI-based SAN, refer to Chapter 3, ”iSCSI Attach Environments.”

Registering FC host
To register your FC host with the array through Unisphere/Navisphere Manager, perform the following steps.

Note: This procedure assumes that your host HBA WWN/WWPN has not yet been zoned to the VNX series and CLARiiON system and has not logged into the array.

1. Create new initiator records (for each HBA that will be connected to the VNX series and CLARiiON) that identify the host to the array and assign the SP port to boot from.

2. Create a storage group with the new server and boot LUN. This LUN should be sized so that the OS, and any other applications, fit properly.


Creating the initiator record
To create the initiator record, complete the following steps:

1. Right-click the storage array and then select Connectivity Status from the drop-down menu.

The Connectivity Status window displays.

2. Click Create.


The Create Initiator Record window displays.

3. In the Create Initiator Record window:

a. Enter the HBA WWNN (node name) followed by the WWPN (port name). For our example, the WWNN is 20:00:00:00:00:44:c4:b1 and the WWPN is 20:00:00:00:12:34:56:78.

b. Select the SP - port to which the host will connect.

c. Select the CLARiiON Open initiator type and choose the proper failover mode for your host failover software and VNX OE for block or CLARiiON FLARE software.

IMPORTANT!
EMC strongly recommends failover mode 4 (ALUA active/active), if supported with your VNX OE for block or CLARiiON FLARE software and the host multipathing software that you intend to install.

d. Enter a host name and IP address.

e. Click OK.


The Connectivity Status window displays.

4. Repeat these steps for each HBA that will be connected to the VNX series or CLARiiON system. Once your server is registered with the array, you can create a Storage Group with a LUN to be presented to the server.

IMPORTANT!
Make note of the host name you chose during the manual registration process. If you install the Unisphere/Navisphere host agent on your Windows server after installation, be sure that your Windows server is given the same name that you used during registration. If the name is different and you install the Unisphere/Navisphere host agent, your registration on the VNX series or CLARiiON system could be lost, and your server could lose access to the boot LUN and crash.
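Because a mistyped WWN in the registration dialog silently creates a record the host will never log in to, it can help to sanity-check WWN formatting before entering it. The sketch below is illustrative only; the helper names are invented and are not part of any EMC tool:

```python
import re

# Hedged helper sketch: normalize a 64-bit WWN into the colon-separated
# form used in the registration dialog (e.g. 20:00:00:00:12:34:56:78)
# and build the combined WWNN:WWPN initiator identifier.

def normalize_wwn(raw):
    hexonly = re.sub(r"[^0-9a-fA-F]", "", raw).lower()
    if len(hexonly) != 16:
        raise ValueError("a WWN is 8 bytes (16 hex digits): %r" % raw)
    return ":".join(hexonly[i:i + 2] for i in range(0, 16, 2))

def initiator_uid(wwnn, wwpn):
    return normalize_wwn(wwnn) + ":" + normalize_wwn(wwpn)

# The example values used in this section:
print(initiator_uid("200000000044c4b1", "20:00:00:00:12:34:56:78"))
# -> 20:00:00:00:00:44:c4:b1:20:00:00:00:12:34:56:78
```

The combined WWNN:WWPN string produced here is the same form used as the initiator UID in the naviseccli example later in this section.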

Manually registering an HBA WWN/WWPN
Your server's HBA WWN/WWPN can also be registered manually if it has already been zoned and logged in to the array port. To manually register an HBA WWN/WWPN that is already logged in to the array, complete the following steps.


Using Unisphere/Navisphere Manager:

1. Right-click the Storage Array and then select Connectivity Status from the drop-down menu.

The Connectivity Status window displays.

2. In the list of Host Initiators, locate the HBA WWN/WWPN combination that represents your host server. For our example, the WWNN is 20:00:00:00:00:44:c4:b1 and the WWPN is 20:00:00:00:12:34:56:78. The above window shows that this WWPN is logged into the VNX series or CLARiiON system, but is not yet registered.

3. Click on the WWN/WWPN and then click Register.

The Register Initiator Record window displays.

4. In the Register Initiator Record window:

a. Select the CLARiiON Open initiator type.

b. Choose the proper failover mode for your host failover software and VNX OE for block or CLARiiON FLARE software.


IMPORTANT!
EMC strongly recommends failover mode 4 (ALUA active/active), if supported with your VNX OE for block or CLARiiON FLARE software and the host multipathing software that you intend to install.

c. Enter a host name and IP address.

d. Click OK.

The VNX series or CLARiiON system will display a message to notify you that the host is being manually registered and will not be managed via host agent.

5. Click OK to close this window.


Once the new host initiator is registered, it will show up as registered in the Connectivity Status window.

6. Repeat these steps for each HBA that will be connected to the VNX series or CLARiiON system. Once your server is registered with the array, you can create a Storage Group with a LUN to be presented to the server.

IMPORTANT!
Make note of the host name you chose during the manual registration process. If you install the Unisphere/Navisphere host agent on your Windows server after installation, be sure that your Windows server is given the same name that you used during registration. If the name is different and you install the Unisphere/Navisphere host agent, your registration on the VNX series or CLARiiON system could be lost, and your server could lose access to the boot LUN and crash.

Using Naviseccli to create an initiator record or manually register an HBA WWN/WWPN
The secure Navisphere command line utility naviseccli may also be used to create an initiator record or manually register an HBA WWN/WWPN. All the selections required in the "Manually registering an HBA WWN/WWPN" examples can be included in a single naviseccli storagegroup command. Refer to the naviseccli documentation located at http://Powerlink.EMC.com for full details of the switches of the storagegroup command.

Using the values selected in the previous example, the naviseccli command to manually register an HBA WWN/WWPN would be:


naviseccli -user <UserName> -password <Password> -Scope <scope> -h <SP IP address> storagegroup -setpath -ip 168.159.1.150 -hbauid 20:00:00:00:00:44:c4:b1:20:00:00:00:12:34:56:78 -sp a -spport 0 -host Boot_From_SAN0 -type 3 -failovermode 4 -arraycommpath 1
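When scripting registrations for many HBAs, it can be safer to build the command above as an argument list (for example, for `subprocess.run`) than as one shell string. This is a hedged sketch: the function name is invented, the credentials and IP addresses are placeholders, and the switch values simply mirror the example shown above:

```python
# Sketch: assemble the naviseccli storagegroup -setpath registration
# command shown above as an argument list. Values mirror the example
# in the text; user, password, scope, and SP address are placeholders.

def build_setpath_cmd(sp_ip, user, password, scope, host_ip, hbauid,
                      sp, spport, host, failovermode=4):
    return ["naviseccli", "-user", user, "-password", password,
            "-Scope", scope, "-h", sp_ip,
            "storagegroup", "-setpath",
            "-ip", host_ip, "-hbauid", hbauid,
            "-sp", sp, "-spport", str(spport),
            "-host", host, "-type", "3",
            "-failovermode", str(failovermode), "-arraycommpath", "1"]

cmd = build_setpath_cmd(
    "10.0.0.1", "admin", "secret", "0", "168.159.1.150",
    "20:00:00:00:00:44:c4:b1:20:00:00:00:12:34:56:78",
    "a", 0, "Boot_From_SAN0")
print(" ".join(cmd))
```

The resulting list can be repeated per HBA path (varying `-sp` and `-spport`) to register both the primary and secondary boot paths.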

Creating a Storage Group and boot LUN for your boot-from-SAN server
VNX series and CLARiiON LUNs are assigned to a registered server via Storage Groups. A Storage Group consists of one or more LUNs and the server name registered with the array (either manually or by the Unisphere/Navisphere host agent).

To create a VNX or CLARiiON Storage Group for your boot-from-SAN host:

1. Right-click on Storage Groups and select Create Storage Group.

The Create Storage Group window displays.

2. Type the name you wish to use for the Storage Group. For example, Boot_From_SAN_0.


When the new Storage Group is created, it should appear in the Storage Groups tree, as shown next.

3. To add your boot LUN to the Storage Group, right-click on the Storage Group name and choose Select LUNs.


The Storage Group Properties window displays.

4. Select the LUNs tab.

a. From the Available LUNs list, select the LUN that you wish to use as a boot LUN.

b. Click Add to add it to the storage group.


The LUN moves into the Selected LUNs list.

5. Click Apply to save the changes to the Storage Group.

A window displays asking you to confirm your action.


6. Add your Host server to the Storage Group to associate the LUN to the Host. From the Storage Group Properties window, select the Hosts tab.


a. Locate your registered Host name in the Available Hosts pane. Select your Host and click the highlighted arrow button to move it into the Host to be Connected area.

b. Click Apply to save the changes to the Storage Group.


c. A confirmation window displays asking you to confirm your action. Click Yes.

d. You may see a window confirming the success of your changes. Click OK.

7. Once your Storage Group is created and has a LUN and associated Host registered, you may configure your Host server's HBA or CNA to boot from the LUN.

Configuring EMC Symmetrix arrays for boot from SAN
Unlike EMC VNX series and CLARiiON systems, EMC Symmetrix arrays may not be configured with all of the proper settings a Windows server requires for successful boot from SAN. Specific Symmetrix director flags (sometimes referred to as director bits) are required. Refer to "Required storage system setup" on page 250 for information on which director flags are required. These flags must be enabled on every port that a Windows server is attached to. Symmetrix arrays are highly available, with multiple connections (FA ports) for failover if hardware faults occur. In order for a host to be properly configured for high availability with boot from SAN, the HBA BIOS should have connections to at least two FA ports on the Symmetrix.

At the start of the Windows boot procedure, there is no failover software running. An HBA BIOS with a primary path and secondary path(s) properly configured (to separate FA ports) provides high availability while booting from SAN, even in the presence of a single hardware fault.

In order to configure a host to boot from SAN, the server needs to have a boot LUN presented to it from the array. Unlike VNX series and CLARiiON systems, Symmetrix arrays do not require that an HBA's WWPN be registered. However, Symmetrix storage arrays do provide LUN masking features that require the HBA WWPN to be validated in the array’s device-masking database.

Different families of EMC Enginuity™ microcode use different techniques to enable and configure their LUN-masking features. To configure and apply LUN masking for your array model, use the EMC Solutions Enabler command line interface (CLI) to issue the appropriate commands to the Symmetrix array.

Refer to EMC Solutions Enabler Symmetrix Array Controls CLI, located on Powerlink, for instruction on using EMC Solutions Enabler CLI to perform LUN-masking for your Symmetrix model.

Note: It is assumed that your host HBA WWN/WWPN has not yet been zoned to the Symmetrix array.


Configuring for Windows 2000/2003/2008

This section provides information for configuring Windows 2000/2003/2008 with EMC storage systems.

Configuring the HBA driver

Driver software for Windows hosts is provided with each HBA. Standard Windows driver installation procedures enable use of the HBAs. The HBA driver must be installed and configured before the disks are partitioned. Refer to the appropriate section of Chapter 2, “Fibre Channel Attach Environments,” or Chapter 3, “iSCSI Attach Environments.”

Initializing the disks

To view the drives seen by your host, right-click the My Computer icon and select Manage. The Computer Management window (on Windows 2008, the Server Manager window), similar to Figure 1 on page 45, appears.

Select Disk Management from the Storage section in the list of options in the left pane of the window. In Figure 1 on page 45, the disks have not been formatted or partitioned.

To create volumes and format the disks, you must write signatures on all the disks to make them recognizable by Windows 2000/Windows 2003.

Note: Windows 2000 supports a maximum file system size of 2 TB.

Windows 2003/2008 supports a maximum file system size of 2 TB unless Service Pack 1 or 2 is installed. With SP1 or SP2, the maximum supported physical disk size is 256 TB. Volumes larger than 2 TB must use GPT partitions to support them. Refer to your Windows users guide for information on GPT partitions.

EMC requires minimum SP2 to be on the system.

These limitations should be taken into account when planning your storage environment.
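As an illustration of the GPT guidance above, a minimal diskpart session to prepare a disk larger than 2 TB might look like the following. This is a sketch only: the disk number is hypothetical, and you should confirm that your diskpart version supports these commands.

```
rem Hypothetical diskpart session; "disk 5" is an example only.
select disk 5
convert gpt
create partition primary
```

The commands can be entered interactively in diskpart, or saved to a file and run with diskpart /s <file>.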


Figure 1 Disk Management in Computer Management window

Disk types

Windows groups all disks into two types: basic and dynamic. This convention is derived from enhancements to the Windows file systems.

Basic disks can be formatted with FAT, FAT32, and NTFS file systems.

Basic disks cannot be used to create striped, spanned, or RAID volumes using Windows Disk Manager. Windows requires that basic disks be upgraded to dynamic status before being used for striped, spanned, or RAID volumes. Existing volumes are not changed when the basic disks that contain them are upgraded.

Refer to “Upgrading to dynamic disks” on page 46.


Upgrading to dynamic disks

To use spanned, striped, and RAID volumes, you must upgrade your disks to dynamic disks. If you have a basic disk with a partition, you must have at least 2 MB of unallocated free space on the disk to perform the upgrade.

IMPORTANT! Dynamic disks cannot be used for clusters.

Figure 2 Upgrade to Dynamic Disk

Select the disk to upgrade. In the example in Figure 2, Disk 13 has been selected. Right-click to open the pop-up menu shown in Figure 2. Windows file systems can be used after upgrading.


Creating volumes

Windows allows you to create up to four partitions in the free space of a physical hard disk, create multiple logical drives in the free space, and delete partitions.

You can use Create Volume Wizard to create the following:

◆ Spanned disk volume — Create spanned volumes from unpartitioned free space on one or more drives.

◆ Striped volume without parity — Create striped volumes from unpartitioned free space on two or more drives. Unlike spanned volumes, each member partition of the striped volume must be on a different disk. Disk Manager makes partitions on all disks approximately the same size. Up to 32 disks can participate in a striped volume.

◆ Striped volume with parity — Striped volumes with parity require a minimum of three partitions on different drives.

◆ Host-based mirrored volume — The procedures for each of the four volumes are explained by the Wizard. For more information about these volumes, click Help.

Creating simple, spanned, and striped volumes

The following steps partition and format one disk at a time:

1. Right-click the Symmetrix, VNX series, or CLARiiON disk you want to partition.

2. Select Create Volume from the menu.

3. In the Wizard program, select Simple Volume, Spanned Volume, or Striped Volume.

Note: Mirrored and RAID 5 are not good choices here, because you can mirror disks and create RAID 5 volumes more efficiently on the Symmetrix, VNX series, and CLARiiON system.

4. Enter the size of the partition, or do nothing if you are using the whole disk.

5. Click OK to continue the operation.

6. Assign a drive letter and click OK.

7. Select the file system type (NTFS or FAT) from the File System menu, type a label for the partition, and click OK.


8. Check Perform a Quick Format to prevent excessively long waits.

9. Read the summary, and click OK.

Disk Manager displays the successful creation of the new drives and volumes like those shown in Figure 3. The color identifies the type of volume.

Figure 3 Healthy volumes and file systems example

If file systems are not reported as healthy, refer to “Symmetrix environment” on page 216.

Deleting partitions

Back up the contents of any partition, volume, or logical drive before you delete it:

1. Right-click the partition, volume, or drive to select it.

2. Select Delete from the pop-up menu.

Note: Windows warns you that all data will be lost and asks you to confirm.


3. Click Yes to delete the selected item. That item becomes free space.

System status and error messages

In a Windows environment you can view, save, or clear system status and error messages. The Windows host logs all errors it detects into a system event log.

To open the event log:

1. From the Computer Management window (on Windows 2008, the Server Manager window), select the Event Viewer under System Tools. (On Windows 2008, the event viewer is located under the Diagnostics category under Windows Logs.)

2. From here:

• To view an event, double-click the event in the list, or select View, Details from the menu bar.

• To save the contents of the system event log:

a. Select Action on the menu bar.

b. Click Save As, and enter the full path of the location to which the file is to be saved.

• To clear the contents of the system event log:

a. Select Action on the menu bar.

b. Select Clear All Events.

c. Click No when prompted to save the events.

Note: The Event Viewer can also be accessed from the Computer Management utility under System Tools.


Adding devices online

Whenever devices are added online to the system or the device channel addresses are changed, you must perform the actions described below to introduce the new devices to the system.

To add new disk devices while the system remains on line:

1. Add the new disk devices into the storage array configuration using the appropriate storage management software.

IMPORTANT! Always consult the EMC PSE Configuration Group for assistance when working with a system with live data.

2. Run the Disk Management utility and perform a disk rescan, in either of these ways:

• Right-click the Disk Management icon and select Rescan Disks.

• Select Disk Management, and Rescan Disks from the Action menu.

If Disk Manager does not see the new drives, you must reboot the host.


Editing the Windows I/O timeout value

Note: While the EMC-approved drivers for Emulex, QLogic, and Brocade HBAs automatically set the Windows disk timeout value to 60 seconds, some software applications may change it to other values for their own purposes. The following section describes how to manually set the Windows disk timeout value to 60 seconds.

Some Windows configurations, especially those using the Symmetrix system as a boot device, periodically encounter I/O time-outs, which can cause unusual error messages and performance degradation. To avoid this potential problem, EMC recommends increasing the Windows default I/O timeout value, as follows:

1. Open the Windows registry editor:

a. On the Windows taskbar, click Start.

b. Select Run.

c. Type regedit.exe in the Open field, and click OK.

2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk.

3. Look for TimeOutValue in the right pane of the registry editor:

• If the TimeOutValue exists, double-click it, and go to step 4.

• If the TimeOutValue does not exist:

a. Select Add Value from the Edit menu.

b. In the Value Name box, type TimeOutValue (exactly as shown).

c. For the data type, select REG_DWORD from the pull-down menu.

d. Click OK, and go to step 4.

4. In the DWORD Editor window:

a. Click decimal in the radix box.

b. Change the value in the data box to 60.

5. Click OK.

6. Close the registry editor.

7. Reboot the host.
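The steps above can also be captured in a registry (.reg) file and imported with regedit. The fragment below is a sketch of the equivalent setting (decimal 60 = hex 0x3c); verify it against your environment before importing.

```
Windows Registry Editor Version 5.00

; Sets the Windows disk I/O timeout to 60 seconds (0x3c hex)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:0000003c
```

A reboot is still required for the change to take effect.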


Recovering from drive errors

This section provides information on recovering from drive errors on Windows 2000/Windows 2003 hosts. The following sections describe the various drive errors and their recovery procedures:

◆ “Recovery from offline state”, next

◆ “Rebooting to recover from an offline state” on page 54

◆ “Recovery from at-risk file system” on page 55

◆ “Recovery from fail state” on page 56

Recovery from offline state

When a disk failure occurs from a disconnected cable or a failed storage director, the drives on a Windows 2000 or Windows 2003 system may be put into an offline state. (You can view this by going to the Disk Management utility.) This is a normal behavior that guards against data errors.

To recover from this condition, you must reactivate one offline drive and then perform a disk rescan. (Refer to Figure 4 on page 53.)


This brings up all of the offline drives at once.

Figure 4 Recovery from offline state


Rebooting to recover from an offline state

If a drive is offline for an extended amount of time, Windows 2000/2003 will not let you reactivate the drive. In this case, you must reboot the host to reactivate the drive. (Refer to Figure 5; because you cannot select Reactivate Volume from the menu shown here, you must reboot.)

Figure 5 Offline state requires reboot


Recovery from at-risk file system

To recover from an at-risk file system, you must reactivate each individual disk. (Refer to Figure 6.)

Figure 6 Recovery from at-risk file system


Recovery from fail state

Another case is where the drive is in a fail state. To recover, activate the failed volume. (Refer to Figure 7.)

Figure 7 Recovery from fail state


diskpar and diskpart

Current versions of Microsoft Windows Server do not correctly detect the disk drive geometry of devices presented from storage area network (SAN) systems, such as EMC Symmetrix, VNX series, and CLARiiON. The detected geometry is 63 sectors per track, whereas the correct geometry is 64 sectors per track. Creating partitions, and the volumes that they represent, without correcting for this will result in suboptimal I/O.

In shared storage arrays, inefficient utilization of storage components may adversely affect not only the performance of the host generating the I/O, but also that of other hosts sharing common components. Optimally configured arrays ensure that all components are used efficiently. Misaligned Windows file systems can cause significant performance degradation; it is not uncommon to see a performance improvement of 20 percent or more from correcting alignment (depending on layout and I/O load). Resolving misaligned I/O and its overhead yields more efficient utilization of system resources, which can translate into improved I/O response for other servers on the shared array.

Two programs are available for aligning partitions. The diskpar utility has been available for some time and works on all Windows systems. The diskpart utility has been available since Windows 2000, but could not align partitions until an updated version was included with Windows 2003 SP1; the version of diskpart that can align partitions is 5.2.3790 or later.

For information on using diskpar and diskpart for the alignment of disk partitions, refer to the EMC Engineering white paper Using diskpar and diskpart to Align Partitions on Windows Basic and Dynamic Disks, available on Powerlink. This white paper describes how to use diskpar and diskpart to align partitions on Basic Disks and Dynamic Disks under both Windows 2000 and Windows 2003. The paper provides extensive analysis to demonstrate how diskpar works and the limitations of using it.
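As a sketch only — the disk number, alignment value, and drive letter below are hypothetical, and the white paper above is the authoritative reference — a diskpart script that creates an aligned partition might look like:

```
rem Hypothetical script for diskpart 5.2.3790 or later.
rem Run with: diskpart /s align.txt
select disk 2
create partition primary align=64
assign letter=G
```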


Microsoft Cluster Server

Microsoft Cluster Service (MSCS) is the native server clustering solution included with Windows 2000 Advanced Server and Datacenter Server, and with Windows Server 2003 Enterprise and Datacenter Editions. A server cluster is used to provide failover support for applications and services.

A server cluster can consist of a varying number of nodes, based on the OS version:

◆ Windows 2000 Advanced Server — 2 nodes

◆ Windows 2000 Datacenter Server — 4 nodes

◆ Windows Server 2003 Enterprise and Datacenter — 8 nodes

Typically, each node is attached to one or more cluster storage devices. Cluster storage devices allow different servers to share the same data; because each node can read this data, the nodes can provide failover for resources.

All EMC arrays provide support for Microsoft Cluster Services over both Fibre Channel and iSCSI attach interfaces.

For a detailed overview and description of MSCS implementations in Windows, refer to:

http://www.microsoft.com/windowsserver2003/techinfo/overview/clustering.mspx.

Windows 2008 clustering changes

In Windows Server 2008, Microsoft Cluster Services (MSCS) has been revamped and is now called Failover Clustering. Updates include a new management interface, improved configuration processes, an embedded validation procedure, enhanced security features, expanded networking functionality, increased reliability when interacting with storage, built-in recovery processes, new backup and restore functionality, and a new Quorum model, discussed next.

Quorum model for Windows 2008

The Quorum model has changed in Windows Server 2008 Failover Clustering. In older versions, the word quorum meant a shared disk holding the cluster configuration and some replicated files. This was a single point of failure in the cluster: if the quorum disk failed, the cluster service terminated and high availability was lost.


Windows Server 2003 Server Clusters offered a second quorum type called the Majority Node Set quorum, discussed further in “Majority node set (MNS)” on page 62. This type of quorum was typically implemented in multi-site clusters and required no shared storage. The Majority Node Set quorum consisted of a file share that resided on the system drive on each cluster node. Connections to this quorum type were by way of Server Message Block (SMB) connections. Once again, in order for the cluster to function, a majority of nodes had to be participating.

With the introduction of Exchange Server 2007 cluster continuous replication (CCR), File Share Witness (FSW) capability was added to Windows Server 2003 Server Clusters. This allowed for a single Exchange 2007 CCR cluster node (or any multi-site cluster) to continue to provide services as long as a connection to the FSW resulted in a majority being achieved.

In Windows Server 2008 Failover Clustering, the concept of quorum now truly means consensus. Quorum, or consensus, is now achieved by having enough votes to bring the cluster into service. Enough votes can be obtained in several ways, depending on the quorum configuration. There are four quorum modes available in a Windows Server 2008 Failover Cluster, as shown in Figure 8 on page 60. Of the four modes listed, only the first two (Node Majority, and Node and Disk Majority) can be automatically selected during the create cluster process.

The following logic should be used:

◆ If an odd number of nodes are being configured in the cluster, select Node Majority mode.

◆ If an even number of nodes are being configured in the cluster and shared storage is connected and accessible, then select Node and Disk Majority.
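The selection logic above can be sketched as follows. The function name and the file-share-witness fallback are illustrative assumptions, not documented wizard behavior:

```python
def recommended_quorum_mode(node_count: int, shared_storage: bool) -> str:
    """Sketch of the automatic quorum-mode selection described above."""
    if node_count % 2 == 1:
        # An odd number of node votes can always form a majority on its own.
        return "Node Majority"
    if shared_storage:
        # An even node count gets a tie-breaking vote from a witness disk.
        return "Node and Disk Majority"
    # Remaining modes must be chosen manually via the Configure Cluster
    # Quorum Wizard; this fallback is an illustrative assumption.
    return "Node and File Share Majority"

print(recommended_quorum_mode(3, True))   # Node Majority
print(recommended_quorum_mode(4, True))   # Node and Disk Majority
```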


Figure 8 Configure Cluster Quorum Wizard

When selecting a witness disk from available storage, the wizard chooses the first disk that is at least 500 megabytes in size and has an NTFS partition configured. The remaining quorum modes can only be selected manually by running the Configure Cluster Quorum Wizard. The Node and File Share Majority option is typically used in a multi-site cluster configuration or in an Exchange 2007 CCR cluster. The last option, No Majority: Disk Only mode, is equivalent to the shared quorum model in legacy clusters; it is a single point of failure and generally should not be used.

There are only two types of witness resources, a physical disk and a file share, that can be configured in the cluster to help achieve consensus.


A witness disk is a piece of storage that the cluster service can bring online. This disk is located in the Cluster Core Resource Group along with the cluster Network Name and associated IP address resources. When the witness disk is configured, a Cluster folder is placed on the disk and a full copy of the cluster configuration (a cluster hive or replica) is placed on the disk.

An FSW is a network share that is located, in an ideal situation, on a server on the network that is not part of the cluster. An SMB connection is made to the FSW, and the FSW maintains a copy of the witness log file, which contains versioning information for the cluster configuration.

There can only be one witness resource configured in a cluster. This resource provides an extra vote should the cluster need it to achieve quorum. In other words, if the cluster is one vote (and therefore one node) short of achieving a consensus, the witness resource is brought online so quorum can be achieved. If the cluster should be more than one vote short of achieving quorum, the witness resource is left alone and the cluster remains in a dormant state, waiting for another cluster node to join.

Quorum schemes

This section describes the differences between the shared disk and majority node set (MNS) quorum schemes.

Shared disk

Prior to Windows Server 2003, MSCS implemented a single quorum scheme, known as shared disk quorum. In this quorum scheme, the quorum device is a configuration database for MSCS, using a quorum log file that is located on a LUN residing on a shared storage interconnect, accessible by all member hosts of the cluster (shown in Figure 9 on page 62).


Figure 9 MSCS using legacy quorum scheme

Majority node set (MNS)

Introduced in Windows Server 2003, majority node set (MNS) quorum is a quorum scheme that can be used, instead of the classic shared disk quorum scheme, in cluster configurations with at least three nodes. The MNS quorum scheme uses quorum data that is stored locally on the system disk of each member host of the cluster, so each member host has a local copy of the quorum data. From a cluster-wide perspective, the individual quorum data stores residing on the member hosts are transparent; the quorum data appears as a single resource.

In an MNS configuration, a dedicated MNS resource manages the quorum data residing locally on each member host and verifies that each locally-kept copy of the quorum data is consistent across all member nodes (shown in Figure 10 on page 63).


Figure 10 MSCS using MNS quorum scheme

While the disks that make up the MNS can (in theory) be disks on a shared storage fabric, the MNS implementation uses a directory on each node’s local system disk to store the quorum data. If the configuration of the cluster changes, that change is reflected across the different disks.

The change is only considered to have been "committed" (that is, made persistent) if that change is made to (<Number of nodes configured in the cluster>/2) + 1 nodes of the cluster.

This ensures that a majority (>50%) of the nodes have an up-to-date copy of the quorum data. The cluster service itself will only start up (and bring resources online) if a majority of the nodes configured as part of the cluster are up and running the cluster service. If there are fewer nodes, the cluster is said not to have quorum, and waits (based on the restart properties of the Cluster service) until more nodes attempt to join. Only when a majority (or quorum) of nodes are available, will the cluster service start and bring the resources online. In this way, because the up-to-date configuration is written to a majority of the nodes regardless of node failures, the cluster will always guarantee that it starts up with the latest, most up-to-date configuration.
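The commit rule above reduces to simple integer arithmetic. A small sketch follows; the function names are ours, not part of MSCS:

```python
def mns_commit_threshold(node_count: int) -> int:
    # (<Number of nodes configured in the cluster> / 2) + 1, using integer
    # division: the smallest node count that is a strict majority (> 50%).
    return node_count // 2 + 1

def cluster_has_quorum(nodes_running: int, node_count: int) -> bool:
    # The cluster service starts (and brings resources online) only when
    # a majority of the configured nodes are running the cluster service.
    return nodes_running >= mns_commit_threshold(node_count)

print(mns_commit_threshold(5))    # 3
print(cluster_has_quorum(2, 4))   # False
```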

In the case of a failure or split-brain in an MNS-based cluster, all MSCS instances on member nodes that are not part of a majority are terminated. This ensures that if there is an MSCS instance running that contains a majority of the nodes, it can safely start up any resources that are not running on that host.


(It can be the only partition in the cluster that is running resources because all other MSCS instances on other hosts are terminated).

Using shared disk versus MNS quorum schemes

Standard shared disk quorum schemes are suitable for the majority of situations, such as applications that require data to be highly available at a single site (generally utilizing a shared storage array).

MNS-based clusters are ideal for customers with specialized requirements, such as the following:

◆ Geographically dispersed cluster environments which do not use third-party replication schemes (such as SRDF/CE for MSCS) to remotely mirror a shared disk quorum and/or data resource. This environment requires a specialized implementation or application specific method to keep data consistent across member hosts.

◆ Cluster environments which do not need shared storage, such as applications that do not store persistent data, but need to provide consistent volatile data that is replicated across all member hosts. (Most applications used in a Windows environment do not fall into this category.)

Shared-disk data resources can still be used in an MNS-based cluster; MNS removes only the requirement for a shared disk quorum resource. For example, clustered applications such as SQL Server or Exchange can use (and, in fact, require) shared disk storage for data.

Installing and configuring MSCS using shared disk and MNS quorum schemes

The quorum scheme used in an MSCS configuration is chosen during MSCS installation. Installing a shared disk quorum and installing a majority node set quorum are nearly identical procedures; the only difference is the Quorum resource type selected during the install.

Use the following procedure to configure MSCS on a Windows Server 2003 host:

1. Start Cluster Administrator.

2. Under File, select New… Cluster, and press Next in the New Server Cluster Wizard.

3. Select the domain name in which the cluster will be created and type a unique name for the cluster.

4. Press Next to continue.


5. Enter the name of the system which will be the first node in the cluster. By default, the local server name is entered.

• If this is correct, press Next to continue.

• If this is not correct, type or select the system name to be the first cluster node and press Next to continue.

The New Server Cluster Wizard begins analyzing the system configuration for cluster feasibility. (This may take several minutes.) If an issue is found, its details are displayed on this page, and the issue must be corrected before the Wizard allows you to continue. When a Tasks Completed message with a green progress bar displays, the analysis has completed. Press Next to continue.

6. Enter an IP address that the cluster name will use for network access and press Next to continue.

7. Enter the username and password of an authorized account which will be used to manage the cluster. This account must be a member of the domain selected earlier.

8. Press Next to continue.

9. The next page reports a Proposed Cluster Configuration. Press the Quorum button to select the quorum resource or resource type:

• For a shared-disk quorum scheme, a shared quorum disk is specified (by drive letter/mount point).

• For a majority node set quorum scheme, select Majority Node Set in the drop-down menu. Press OK to exit the quorum dialog and press Next to continue.

The Wizard begins the cluster configuration. (This may take several minutes.) If an issue is found, its details are displayed on this page, and the issue must be corrected before the Wizard allows you to continue. When a Tasks Completed message with a green progress bar displays, the cluster configuration has completed. Press Next to continue.

10. The final screen reports that the New Server Cluster Wizard has completed successfully. Press Finish to exit the wizard.

The cluster, and its associated resources, can now be managed normally via Cluster Administrator.


MSCS in a boot from SAN environment

This section contains information that should be considered when operating in a "boot from SAN" environment.

By default, MSCS does not allow configurations where the host is booted from a SAN-attached device and the boot device (and, in most cases, the pagefile) and shared cluster devices reside on the same logical bus. For example, installing MSCS on a host where both the SAN-based boot disk and the shared disks to be clustered are visible over the same HBA(s) may fail with the following error in Clcfgsrv.log:

The cluster cannot manage physical disks that are on the same storage bus as the volume that contains the operating system because other nodes connected to the storage bus cannot distinguish between these volumes and volumes used for data.

This limitation is by design, as cluster-specific disk ownership/arbitration routines may use a variety of LUN, target, or bus resets that can cause delays on any devices attached to the bus. When the boot device is attached to the same bus as the shared cluster disks, such resets could be catastrophic to the OS since they could potentially delay access to the boot device and/or pagefile, resulting in a host crash.

Microsoft provides a registry-based modification for Windows Server 2003 environments which allows the boot device and shared cluster devices to reside on the same logical bus. This modification is provided for those customers who understand the reasons for the original limitation and have mitigated much of the risk of having a boot device and shared cluster disk devices on the same logical bus by implementing STORPort-based HBA miniport drivers.

The STORPort-based miniport drivers allow targeted LUN resets, affecting only the shared cluster devices. This leaves the boot device unaffected, as opposed to bus-wide resets which occur in SCSIPort-based miniport or full-port HBA driver implementations.

Note: STORPort miniport drivers are unavailable in Windows 2000 Server environments.


Refer to the following Microsoft Knowledge Base article for details on using this registry modification in Windows Server 2003 environments:

http://support.microsoft.com/kb/886569/en-us

Note that even in environments using STORPort-based HBA miniport drivers, a bus-wide reset may still occur as a "last resort" in rare circumstances, if both the LUN-specific reset and the subsequent array target reset fail for any reason. Refer to the following Microsoft Knowledge Base articles for details regarding the MSCS disk arbitration reset scheme in SCSIPort-based and STORPort-based miniport environments, respectively:

http://support.microsoft.com/kb/309186/en-us

http://support.microsoft.com/kb/301647/en-us
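The escalation described above (LUN-specific reset first, then array target reset, then bus-wide reset only as a last resort) can be modeled as ordinary control flow. This is an illustrative sketch of the ordering, not driver code; the function and operation names are hypothetical:

```python
def arbitrate_disk(reset_ops):
    """Model of the STORPort reset escalation during cluster disk
    arbitration: reset_ops is a list of (name, callable) pairs ordered
    from least to most disruptive. Each callable returns True on
    success; the name of the first successful operation is returned."""
    for name, op in reset_ops:
        if op():
            return name
    raise RuntimeError("all reset attempts failed")

# Hypothetical outcome: the LUN reset fails, the target reset succeeds,
# so the bus-wide reset (which would disturb the boot device) is never issued.
ops = [
    ("lun_reset",    lambda: False),
    ("target_reset", lambda: True),
    ("bus_reset",    lambda: True),
]
print(arbitrate_disk(ops))  # target_reset
```

The point of the ordering is that the most disruptive operation, the bus-wide reset, is only ever reached when every narrower reset has already failed.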

IMPORTANT! EMC strongly recommends using the STORPort miniport drivers available for all supported HBAs, in addition to the latest EMC-qualified Microsoft STORPort QFE, for all Windows Server 2003-based installations, including both clustered and non-clustered environments.


Running a cluster with iSCSI-attached Symmetrix devices

The discovery of devices may take several minutes during a system reboot on hosts with a large number of Symmetrix iSCSI devices. When the node is part of a cluster, this delay can impact cluster functionality. Make sure you set the Cluster service recovery properties as shown in Figure 11.

Figure 11 Cluster Service Properties dialog box, Recovery tab


Troubleshooting Microsoft cluster issues

This section contains the following information about troubleshooting Microsoft cluster issues:

◆ “Setting up your networks for the server cluster” on page 69

◆ “Configuring TCP/IP settings” on page 70

◆ “Using a different disk for the quorum resource” on page 73

◆ “Recovering from a corrupted quorum log or quorum disk” on page 75

◆ “Checklist: Installing a physical disk resource” on page 77

◆ “Creating a new group in a cluster” on page 78

◆ “Creating a new resource” on page 79

◆ “Best practices for configuring and operating server clusters” on page 81

◆ “How to troubleshoot cluster service startup issues” on page 87

◆ “Cluster disk and drive connection problems” on page 93

◆ “Client-to-cluster connectivity problems” on page 98

◆ “General administrative problems” on page 102

◆ “Windows 2008 Failover Clustering and Symmetrix” on page 109

Setting up your networks for the server cluster

Follow the guidelines in this section to reduce network problems in your server cluster:

◆ Use identical network adapters in all cluster nodes; that is, make sure each adapter is the same make, model, and firmware version.

◆ Use at least two interconnects. Although a server cluster can function with only one interconnect, at least two interconnects are necessary to eliminate a single point of failure and are required for the verification of original equipment manufacturer (OEM) clusters.

◆ Reserve one network exclusively for internal node-to-node communication (the private network). Do not use teaming network adapters on the private networks.


◆ Set the order of the network adapter binding as follows:

1. External public network

2. Internal private network (Heartbeat)

3. [Remote Access Connections]

For more information, refer to Modify the Protocol Bindings Order in the TechNet section of the Microsoft website.

◆ Set the speed and duplex mode for multiple speed adapters to the same values and settings. If the adapters are connected to a switch, ensure that the port settings of the switch match those of the adapters.

For more information, refer to Change Network Adapter Settings in the TechNet section of the Microsoft website.

Configuring TCP/IP settings

To configure TCP/IP settings:

1. Open Network Connections (Start > Settings > Control Panel > Network and Dial-up Connections from the Windows desktop).

2. Right-click the connection you want to configure, and click Properties.

3. Do one of the following:

• If the connection is a local area connection, on the General tab, in This connection uses the following items, click Internet Protocol (TCP/IP), and then click Properties.

• If the connection is a dial-up or VPN connection, on the Networking tab, in This connection uses the following items, click Internet Protocol (TCP/IP), and then click Properties.

• If the connection is an incoming connection, on the Networking tab, in Network components, click Internet Protocol (TCP/IP), and then click Properties.

4. Do one of the following:

• If you want IP settings to be assigned automatically, click Obtain an IP address automatically, and then click OK.

• If you want to specify an IP address or a DNS server address:


a. Click Use the following IP address, and type the IP address in IP address.

b. Click Use the following DNS server addresses, and in Preferred DNS server and Alternate DNS server, type the addresses of the primary and secondary DNS servers.

5. To configure DNS, WINS, and IP settings, click Advanced.

6. In a local area connection, selecting the Obtain an IP address automatically option enables the Alternate Configuration tab. Use this to enter alternate IP settings if your computer is used on more than one network. To configure DNS, WINS, and IP settings, click User configured on the Alternate Configuration tab.

Notes

Note the following:

◆ You should use automated IP settings (DHCP) whenever possible for all connections because they eliminate the need to configure settings such as IP address, DNS server address, and WINS server address.

◆ The Alternate Configuration settings specify a second set of IP settings that are used when a DHCP server is not available. This is very useful for laptop users who often switch between two different network environments such as DHCP and static IP network environments.

◆ Use static IP addresses for each network adapter on each node.

◆ For private networks, define the TCP/IP properties for static IP addresses following the guidelines under Private Network Addressing Options in the TechNet section of the Microsoft website. (That is, specify a class A, B, or C private address.)

◆ Do not configure a default gateway or DNS or WINS server on the private network adapters. Also, do not configure private network adapters to use name resolution servers on the public network; otherwise, a name resolution server on the public network might map a name to an IP address on the private network. If a client then received that IP address from the name resolution server, it may fail to reach the address because no route from the client to the private network address exists.


◆ Configure WINS and/or DNS servers on the public network adapters. If Network Name resources are used on the public networks, set up the DNS servers to support dynamic updates; otherwise, the Network Name resources may not fail over correctly.

For more information, refer to Configure TCP/IP settings in the TechNet section of the Microsoft website.

◆ Configure a default gateway on the public network adapters. If there are multiple public networks in the cluster, configure a default gateway on only one of these.

For more information, refer to Configure TCP/IP settings in the TechNet section of the Microsoft website.

◆ Clearly identify each network by changing the default name.

For example, you could change the name of the private network connection from the default Local Area Connection to Private Cluster Network.

◆ Change the role of the private network from the default setting of All communications (mixed network) to Internal cluster communications only (private network), and verify that each public network is set to All communications (mixed network).

For more information, refer to Change How the Cluster uses a Network in the TechNet section of the Microsoft website.

◆ Place the private network at the top of the Network Priority list for internal node-to-node communication in the cluster.

For more information, refer to Change Network Priority for Communication between Nodes in the TechNet section of the Microsoft website.
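One of the guidelines above — private network adapters should use a class A, B, or C private (RFC 1918) address, with no default gateway or name resolution servers — can be sanity-checked with Python's standard ipaddress module. The addresses below are illustrative examples only:

```python
import ipaddress

# RFC 1918 private ranges recommended for the private cluster interconnect.
_PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),      # class A private
    ipaddress.ip_network("172.16.0.0/12"),   # class B private
    ipaddress.ip_network("192.168.0.0/16"),  # class C private
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in one of the RFC 1918 ranges suggested
    above for the heartbeat (private) network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in _PRIVATE_NETS)

print(is_rfc1918("10.10.10.1"))     # True
print(is_rfc1918("192.168.0.5"))    # True
print(is_rfc1918("131.107.2.200"))  # False: public, belongs on the public network
```

Explicit range checks are used here rather than `ip.is_private`, because Python's `is_private` also accepts link-local (169.254.0.0/16) addresses, which indicate a failed DHCP lease rather than a deliberately configured heartbeat address.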

Related information

For more information, also refer to the following in the TechNet section of the Microsoft website:

◆ Network Connections Best Practices

◆ TCP/IP

◆ New Ways to do Network Connections Tasks


Using a different disk for the quorum resource

To use a different disk for the quorum resource:

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure.

1. Select the disk you will use for the quorum resource. If the disk has two or more NTFS partitions, ensure that all partitions on the disk are assigned drive letters.

2. Open Cluster Administrator (Start > Settings > Control Panel > Administrative Tools > Cluster Administrator from the Windows desktop).

3. If one does not already exist, create a Physical Disk or other storage-class resource for the new disk.

For more information, refer to Checklist: Installing a Physical Disk Resource in the TechNet section of the Microsoft website.

4. In the console tree, click the Resources folder.

5. In the details pane, click the resource you will use for the quorum resource.

6. On the File menu, click Take Offline.

7. In the console tree, click the cluster name.

8. On the File menu, click Properties.

9. On the Quorum tab, click Quorum resource, and select a new disk or storage-class resource you want to use as the quorum resource for the cluster.

10. In Partition, if the disk has more than one partition, click the partition where you want the cluster-specific data kept.

11. In Root path, type the path to the folder on the partition.

Example: \MSCS\

12. In the console tree, click the Resources folder.

13. In the details pane, click the new quorum resource.

14. On the File menu, click Bring Online.


Cautions

Be aware of the following:

◆ If Cluster Administrator does not open, it may be because the Cluster service is not started.

To start the Cluster service, refer to Set the Cluster Service to only Start Manually in the TechNet section of the Microsoft website.

◆ If the quorum resource becomes corrupted, you cannot move it until it is repaired.

For more information, refer to Node-to-Node Connectivity Problems in the TechNet section of the Microsoft website.

◆ If the Cluster service is stopped on any other node, or the node is shut down, that node will not be able to form a cluster. Only nodes that are online when this change is made can form the cluster. However, nodes that are offline can still join the cluster and will be able to form a cluster after they have joined the cluster at least once.

Notes

Note the following:

◆ As a security best practice, consider using Run as to perform this procedure. For more information, refer to Default Local Groups, Default Groups, and Using Run as in the TechNet section of the Microsoft website.

◆ Of the default resource types, only the Physical Disk, Local Quorum, or Majority Node Set resource can be a quorum resource. However, third party vendors can supply other storage class resource types that are quorum-capable.

◆ When you are making changes to the quorum, do not add or evict nodes at the same time.

◆ If you designate a resource as the quorum resource without first taking that resource offline, restart a node afterwards. This ensures that normal checkpointing is initiated for the quorum resource. Checkpointing is the saving of an extra copy of cluster configuration data and is normally done at four-hour intervals.

Related information

In addition to the information referenced earlier in this section, refer to the following in the TechNet section of the Microsoft website:

◆ Assign, Change, or Remove a Drive Letter

◆ Quorum Resource


Recovering from a corrupted quorum log or quorum disk

To recover from a corrupted quorum log or quorum disk:

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure.

1. If the Cluster service is running, open Computer Management (Start > Settings > Control Panel > Administrative Tools > Computer Management from the Windows desktop).

2. In the console tree, double-click Services and Applications, and then click Services.

3. In the details pane, click Cluster Service.

4. On the Action menu, click Stop.

5. Repeat steps 1 through 4 for each remaining node.

6. If you have a backup of the quorum log, restore the log by following the instructions under Backing Up and Restoring Server Clusters in the TechNet section of the Microsoft website.

7. If you do not have a backup, select any node. Make sure Cluster Service is highlighted in the details pane, and click Properties on the Action menu.

Under Service status, in Start parameters, specify /fixquorum and click Start.

8. Switch from the problematic quorum disk to another quorum resource.

For more information, refer to Use a Different Disk for the Quorum Resource in the TechNet section of the Microsoft website.

9. In Cluster Administrator, bring the new quorum resource disk online.

For information about how to do this, refer to Bring a Resource Online in the TechNet section of the Microsoft website.

10. Click Start > Run, and type a command with the following syntax:

cluster ClusterName res QuorumDiskResourceName /maint:on


11. Run chkdsk, using the switches /f and /r, on the quorum resource disk to determine whether the disk is corrupted.

For more information on running chkdsk, refer to chkdsk in the TechNet section of the Microsoft website.

12. Click Start > Run, and type a command with the following syntax:

cluster ClusterName res QuorumDiskResourceName /maint:off

13. The next step depends on whether chkdsk detects any corruption on the disk:

• If no corruption is detected, it is likely that the log was corrupted. Proceed to step 15.

• If corruption is detected, check the system log in Event Viewer for possible hardware errors.

Resolve any hardware errors before continuing.

You can use the ClusterRecovery tool, available in the Microsoft Windows Server 2003 Resource Kit, to restore the registry checkpoint files.

14. After chkdsk is complete, repeat steps 1 through 4 to stop the Cluster service.

15. Make sure that Cluster Service is highlighted in the details pane. Then click Properties on the Action menu.

Under Service status, in Start parameters, specify /resetquorumlog, and click Start. This restores the quorum log from the node's local database.

Important: You must start the Cluster service by clicking Start on the service control panel. Do not click OK or Apply to commit these changes, as this does not preserve the /resetquorumlog parameter.

16. Restart the Cluster service on all other nodes.

Notes

Note the following:

◆ As a security best practice, consider using Run as to perform this procedure. For more information, refer to Default Local Groups, Default Groups, and Using Run as in the TechNet section of the Microsoft website.


◆ The quorum disk must be formatted with the NTFS file system.

◆ If any node is not running, or a node fails while you are changing the quorum resource, only the nodes that were running are able to form the cluster, and the offline node is only able to join the cluster.

After the offline node has joined the cluster, all nodes are again able to form or join the cluster. This design prevents the offline node from forming the cluster using the old quorum resource.

Related information

In addition to the information referenced earlier in this section, refer to the following in the TechNet section of the Microsoft website:

◆ Start, stop, pause, resume, or restart a service

◆ Quorum Resource

◆ Backing Up and Restoring Server Clusters

◆ Use a Different Disk for the Quorum Resource

◆ Run a Disk Maintenance Tool, such as Chkdsk on a Physical Disk resource

Checklist: Installing a physical disk resource

Use the following checklist when installing a physical disk resource. Each step lists its reference in the TechNet section of the Microsoft website.

◆ Review the concepts behind cluster resources. (Reference: Server Cluster Resources)

◆ Review resource groups and resource dependencies. (Reference: Server Cluster Groups)

◆ Review disk storage concepts. (Reference: Disk Management)

◆ Plan common resource settings. (Reference: Checklist: Creating a New Resource)

◆ Make sure the disk has a signature. (Reference: Disk Management)

◆ Make sure the disk is configured as basic and not dynamic. (Reference: Disk Management)


◆ Make sure all partitions on the disk are formatted using the NTFS file system. (Reference: Disk Management)

◆ Make sure that all partitions are assigned a drive letter or are set up as mounted drives. (Reference: Disk Management; Install Local Storage Buses and Devices)

Important: If you use drive letters, assign drive labels that match the drive letters, and make sure that you assign the same drive letter to each node in the cluster.

◆ Use the New Resource Wizard to create the resource. (Reference: Create a New Resource)

Creating a new group in a cluster

To create a new group, follow the steps in this section.

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure.

1. Open Cluster Administrator (Start > Settings > Control Panel > Administrative Tools > Cluster Administrator from the Windows desktop).

2. On the File menu, point to New, and click Group.

3. In the New Group Wizard, in Name, type a name for the new group.

4. In Description, type any comments you want, and then click Next.

5. Under Available nodes, click the nodes you want to be the preferred owners for the group, and then click Add.

You can also leave Preferred owners empty. This means that it does not matter to which node the group fails back after failover takes place. For more information, refer to Failover and Failback in the TechNet section of the Microsoft website.

6. To change the priority of an owner, select the owner, click Move Up or Move Down, and then click Finish.

Notes

Note the following:

◆ As a security best practice, consider using Run as to perform this procedure. For more information, refer to Default Local Groups, Default Groups, and Using Run as in the TechNet section of the Microsoft website.

◆ The name you give the group is for administrative purposes only. It is not the same as a network name, which is the name that clients can use to connect to the group. However, make sure this name is different from any other group name in the cluster.

◆ Cluster Administrator lists the group description and the group name in the details pane.

◆ All resources within the new group fail over together. To have the group fail back to a certain node, specify that node as the preferred owner and enable failback. You can balance groups among all nodes to maximize the performance of the cluster. However, you can choose not to have a preferred owner if the location of the group does not greatly affect performance.

Related information

In addition to the information referenced earlier in this section, refer to the following in the TechNet section of the Microsoft website:

◆ Server Cluster Groups

Creating a new resource

To create a new resource:

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure.

1. Open Cluster Administrator (Start > Settings > Control Panel > Administrative Tools > Cluster Administrator from the Windows desktop).

2. In the console tree, double-click the Groups folder.

3. In the details pane, click the group to which you want the resource to belong.


4. On the File menu, point to New, and then click Resource.

5. In the New Resource Wizard, type the appropriate information in Name and Description, click the appropriate information in Resource type and Group, and then click Next.

6. Add or remove possible owners of the resource, and then click Next.

7. To add or remove dependencies:

• To add dependencies, under Available resources, click a resource, and then click Add.

• To remove dependencies, under Resource dependencies, click a resource, and then click Remove.

8. Repeat step 7 for any other resource dependencies, and then click Next.

9. Set resource properties in the Resource Parameters dialog box, where resource is the name of the resource type.

Different resource types contain different configuration information in their respective dialog boxes.

For more information on setting resource properties, refer to “Related information” on page 81.

Notes

Note the following:

◆ As a security best practice, consider using Run as to perform this procedure. For more information, refer to Default Local Groups, Default Groups, and Using Run as in the TechNet section of the Microsoft website.

◆ Before adding a resource to your cluster, you must verify that the following are true:

• The type of resource is either one of the basic types provided with the Windows Server 2003 family or a custom resource type provided by the resource vendor.

• A group already exists within the cluster to which your resource will belong.

• All dependent resources have been created.


◆ A resource that fails frequently can affect its associated Resource Monitor and other resources interacting with that Resource Monitor. For this reason, a separate Resource Monitor is recommended for any resource that has failed repeatedly in the past.

Related information

In addition to the information referenced earlier in this section, refer to the following in the TechNet section of the Microsoft website:

◆ Server Cluster Resources

◆ Resource Types

◆ Standard Resource Types

◆ View or Set Resource Properties

◆ Setting Resource Properties

◆ Resource Dependencies

◆ Resource Monitors

Best practices for configuring and operating server clusters

The guidelines in this section can help you effectively use a server cluster.

Secure your server cluster

To prevent your Windows 2003 server cluster from being adversely affected by denial-of-service attacks, data tampering, and other malicious attacks, it is highly recommended that you plan for and implement the security measures detailed in Best practices for securing server clusters in the TechNet section of the Microsoft website.

Note: For Windows 2008/2008 R2, see the documents on TechNet: Secure Windows 2008 and Secure Windows 2008 R2.

Windows 2003: Verify that your server cluster hardware is listed in the Windows Catalog

For Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition, Microsoft supports only complete server cluster systems chosen from the Windows Catalog.

To see if your system or hardware components, including your cluster disks, are compatible, refer to Support Resources in the TechNet section of the Microsoft website.


For a geographically dispersed cluster, both the hardware and software configuration must be certified and listed in the Windows Catalog.

The network interface controllers (NICs) used in certified cluster configurations must be selected from the Windows Catalog.

It is recommended that your cluster configuration consist of identical storage hardware on all cluster nodes to simplify configuration and eliminate potential compatibility problems.

Windows Server 2008 Cluster solutions will not be listed in the Windows Server Catalog.

For more information, visit the following Failover Cluster Configuration Program website:

http://www.microsoft.com/windowsserver2008/en/us/clustering-program.aspx

For more information, see Microsoft Knowledge Base article 943984, The Microsoft Support Policy for Windows Server 2008 Failover Clusters:

http://support.microsoft.com/kb/943984/

Windows 2008: Verification with the Microsoft Failover Cluster Configuration Program

Ensure that your hardware is validated by the Microsoft Failover Cluster Configuration Program. A validated configuration is a specific combination of tested hardware, software, and storage that delivers a high-availability experience and uses Windows Server 2008 Clustering Technology.

See the Windows Server 2008 Failover Cluster Configuration Program (FCCP) page on the Microsoft website.

Windows 2008 and 2008 R2 Cluster Validation ToolWindows 2008 and 2008 R2 includes the Cluster Validation Tool. You can perform tests to determine whether your system, storage, and network configuration is suitable for a cluster. The Cluster Validation Tool verifies that the nodes meet all of the operating system requirements, that the networks are configured correctly, that there are at least two separate networks on each node for redundancy, and that the storage subsystem supports the necessary Small Computer System Interface (SCSI) commands to handle cluster actions. Windows Server 2008 R2 also now includes a Best Practices Analyzer (BPA) for all major server roles, including Failover Clustering. This

EMC Host Connectivity Guide for Windows

Page 83: EMC Host Connectivity Guide for Windows · PDF fileFederated Live Migration ... DMX, VMAX 40K, ... Connectivity Guide for Windows. EMC Host Connectivity Guide for Windows. EMC Host

General Procedures and Information

This analyzer examines the best-practice configuration settings for a cluster and cluster nodes.

Partition and format disks before adding the first node to your cluster

Partition and format all disks on the cluster storage device before adding the first node to your cluster. You must format the disk that will be the quorum resource. All partitions on the cluster storage device must be formatted with NTFS (they can be either compressed or uncompressed), and all partitions on one disk are managed as one resource and move as a unit between nodes.

IMPORTANT! If you are running a non-SP2 version of Windows 2003, cluster disks must be partitioned as master boot record (MBR) disks, not as GUID partition table (GPT) disks.

Windows 2008 and 2008 R2 support GPT disks in cluster storage

GUID partition table (GPT) disks are supported in failover cluster storage. GPT disks provide increased disk size and robustness: they can have partitions larger than two terabytes and, unlike master boot record (MBR) disks, have built-in redundancy in the way partition information is stored. With failover clusters, you can use either type of disk.

Correctly set up your server cluster's networks

Follow these guidelines to reduce network problems in your server cluster:

◆ Use identical network adapters in all cluster nodes, that is, make sure each adapter is the same make, model, and firmware version.

◆ Use at least two interconnects. Although a server cluster can function with only one interconnect, at least two interconnects are necessary to eliminate a single point of failure and are required for the verification of original equipment manufacturer (OEM) clusters.

◆ Reserve one network exclusively for internal node-to-node communication (the private network). Do not use teaming network adapters on the private networks.

Troubleshooting Microsoft cluster issues 83


◆ Set the order of the network adapter binding as follows:

1. External public network

2. Internal private network (Heartbeat)

3. [Remote Access Connections]

For more information, refer to Modify the Protocol Bindings Order in the TechNet section of the Microsoft website.

◆ Set the speed and duplex mode for multiple speed adapters to the same values and settings. If the adapters are connected to a switch, ensure that the port settings of the switch match those of the adapters.

For more information, refer to Change Network Adapter Settings in the TechNet section of the Microsoft website.

◆ Use static IP addresses for each network adapter on each node.

◆ For private networks, define the TCP/IP properties for static IP addresses following the guidelines under Private Network Addressing Options in the TechNet section of the Microsoft website. (That is, specify a class A, B, or C private address.)
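As an illustration, the private-range (RFC 1918 class A, B, or C) check described above can be automated. The following is a minimal sketch using Python's standard ipaddress module; the sample addresses are illustrative, not taken from this guide:

```python
import ipaddress

# RFC 1918 private ranges: one block each from class A, B, and C space.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls in a private (RFC 1918) range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

# A 10.x.x.x heartbeat address is private; a public address is not.
print(is_rfc1918("10.10.10.1"))    # True  - heartbeat candidate
print(is_rfc1918("198.51.100.7"))  # False - public (documentation) address
```

A check like this can be run against each private (heartbeat) adapter's configured address before bringing a cluster online.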

◆ Do not configure a default gateway or DNS or WINS server on the private network adapters. Also, do not configure private network adapters to use name resolution servers on the public network; otherwise, a name resolution server on the public network might map a name to an IP address on the private network. If a client then received that IP address from the name resolution server, it may fail to reach the address because no route from the client to the private network address exists.

◆ Configure WINS and/or DNS servers on the public network adapters. If Network Name resources are used on the public networks, set up the DNS servers to support dynamic updates; otherwise, the Network Name resources may not fail over correctly.

For more information, refer to Configure TCP/IP settings in the TechNet section of the Microsoft website.

◆ Configure a default gateway on the public network adapters. If there are multiple public networks in the cluster, configure a default gateway on only one of these.

For more information, refer to Configure TCP/IP settings in the TechNet section of the Microsoft website.


◆ Clearly identify each network by changing the default name.

For example, you could change the name of the private network connection from the default Local Area Connection to Private Cluster Network.

◆ Change the role of the private network from the default setting of All communications (mixed network) to Internal cluster communications only (private network), and verify that each public network is set to All communications (mixed network).

For more information, refer to Change How the Cluster uses a Network in the TechNet section of the Microsoft website.

◆ Place the private network at the top of the Network Priority list for internal node-to-node communication in the cluster.

For more information, refer to Change Network Priority for Communication between Nodes in the TechNet section of the Microsoft website.

Do not install applications into the default Cluster Group

Do not delete or rename the default Cluster Group or remove any resources from that resource group

The default Cluster Group contains the settings for the cluster and some typical resources that provide generic information and failover policies. This group is essential for connectivity to the cluster. It is therefore very important to keep application resources out of the default Cluster Group, preventing clients from connecting to the Cluster Group's IP address and network name resources. If a resource for an application is added to this group and the resource fails, it may cause the Cluster Group to fail as well, reducing the overall availability of the entire cluster. It is highly recommended that you create separate resource groups for application resources.

For more information, refer to Planning your Groups and Checklist: Planning and Creating a Server Cluster in the TechNet section of the Microsoft website.

Back up your server cluster

To be able to effectively restore your server cluster in the event of application data or quorum loss, or individual node or complete cluster failure, follow these steps when preparing backups:

1. Perform an Automated System Recovery (ASR) backup on each node in the cluster.


2. Back up the cluster disks from each node.

3. Back up each individual application (for example, Microsoft Exchange Server or Microsoft SQL Server) running on the nodes.

By default, Backup Operators do not have the user rights necessary to create an Automated System Recovery (ASR) backup on a cluster node. However, Backup Operators can perform this procedure if that group is added to the security descriptor for the Cluster service. You can do that using Cluster Administrator or cluster.exe. For more information, refer to Give a User Permissions to Administer a Cluster and Cluster in the TechNet section of the Microsoft website.

For more information, refer to Backing Up and Restoring Server Clusters in the TechNet section of the Microsoft website. For more information on backing up applications in a cluster, see the documentation for that application.

Give the cluster service account full rights to administer computer objects if Kerberos authentication is enabled for virtual servers

If you enable Kerberos authentication for a virtual server's Network Name resource, the Cluster service account does not need full access rights to the computer object associated with that Network Name resource. The Cluster service can use the default access rights given to members of the authenticated users group, but certain operations (for example, renaming the computer object) will be restricted.

It is recommended that you work with your domain administrator to set up appropriate administration rights and permissions for the Cluster service account.

For more information, refer to information about Kerberos authentication under Virtual Servers in the TechNet section of the Microsoft website.

Do not install scripts used by Generic Script resources on cluster disks

It is recommended that you install script files used by Generic Script resources on local disks, not on cluster disks. Incorrectly written script files can cause the cluster to stop responding. Installing the script files on a local disk makes it easier to recover from this scenario.

For guidelines on writing scripts for the Generic Script resource, refer to the Microsoft Platform Software Development Kit (SDK).


For information on troubleshooting Generic Script resource issues, refer to article Q811685, A Server Cluster with a Generic Script Resource Stops Responding, in the Microsoft Knowledge Base.

How to troubleshoot cluster service startup issues

This section describes basic troubleshooting steps you can use to diagnose Cluster service startup issues. Although this is not a comprehensive list of all the issues that can cause the Cluster service not to start, it does address about 90 percent of startup issues.

Note: This information was taken from Microsoft Article ID: 266274, revision 3.0.

CAUTION! This section contains information about modifying the registry. Before you modify the registry, back it up, and make sure that you understand how to restore the registry if a problem occurs.

Note: For information about how to back up, restore, and edit the registry, refer to Description of the Microsoft Windows Registry in the Microsoft Knowledge Base.

When the Cluster service initially starts, it attempts to join an existing cluster. For this to occur, the Cluster service must be able to contact an existing cluster node. If the join procedure does not succeed, the cluster continues to the form stage; the main requirement of this stage is the ability to mount the quorum device.

Here are the steps in the startup process:

1. Authenticate the Service account.

2. Load the local copy of the cluster database.

3. Use information in the local database to try to contact other nodes to begin the join procedure. If a node is contacted and authentication is successful, the join procedure is successful.

4. If no other node is available, the Cluster service uses the information in the local database to mount the quorum device and updates the local copy of the database by loading the latest checkpoint file and replaying the quorum log.
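The join-then-form sequence above can be sketched as a small decision function. This is an illustrative model only; the function and parameter names are hypothetical, not actual Cluster service code:

```python
def cluster_startup(account_ok: bool, reachable_nodes: list,
                    quorum_mountable: bool) -> str:
    """Model the Cluster service startup decision described above.

    Returns 'join' if another node can be contacted, 'form' if the
    quorum device can be mounted instead, or 'fail' otherwise.
    """
    if not account_ok:        # step 1: authenticate the service account
        return "fail"
    if reachable_nodes:       # step 3: join an existing cluster
        return "join"
    if quorum_mountable:      # step 4: form the cluster from the quorum
        return "form"
    return "fail"

print(cluster_startup(True, ["NODE2"], True))  # join - another node answered
print(cluster_startup(True, [], True))         # form - quorum mounted instead
```

The troubleshooting steps that follow essentially walk this decision chain from the top: authentication first, then database, then connectivity, then quorum.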


WARNING

If you use Registry Editor incorrectly, you may cause serious problems that may require you to reinstall your operating system. Microsoft cannot guarantee that you can solve problems that result from using Registry Editor incorrectly. Use Registry Editor at your own risk.

Troubleshooting cluster service startup issues

Follow these steps to troubleshoot issues:

1. Verify that the cluster node that is having problems is able to properly authenticate the Service account. You can determine this by logging on to the computer with the Cluster service account, or by checking the System event log for Cluster service logon problem event messages.

2. Verify that the %SystemRoot%\Cluster folder contains a valid Clusdb file and that the Cluster service attempted to start.

Start the Registry Editor (Regedt32.exe) and verify that the following registry key is valid and loaded:

HKEY_LOCAL_MACHINE\Cluster

The cluster hive should have a structure very similar to what Cluster Administrator displays. Make note of the network and quorum keys. If the database is not valid, you can copy and use the cluster database from a live node. If no node has a valid cluster database, refer to How to Use the Cluster TMP file to Replace a Damaged Clusdb File in the Microsoft Knowledge Base.

3. If the node is not the first node in the cluster, check connectivity to other cluster nodes across all available networks. Use the ping.exe tool to verify TCP/IP connectivity, and use Cluster Administrator to verify that the Cluster service can be contacted.

Use the TCP/IP addresses of the network adapters in the other nodes in the Connect to dialog box in Cluster Administrator.

4. If the service cannot contact any other node, it continues with the form phase. It attempts to locate information about the quorum in the local cluster database and then tries to mount the disk. If the quorum disk cannot be mounted, the service does not start.


If another node has successfully started and has ownership of the quorum, the service does not start. This is usually caused by connectivity or authentication issues. If this is not the case, you can check the status of the quorum device by starting the service with the -fixquorum switch and attempting to bring the quorum disk online, or changing the quorum location for the service. Also check the System event log for disk errors.

If the quorum disk successfully comes online, it is likely that the quorum is corrupted. To correct this issue, refer to the following Microsoft Knowledge Base article:

Windows 2000: 245762, Recovering from a Lost or Corrupted Quorum Log

5. Check the attributes of the Cluster.log file to make sure that it is not read-only, and make sure that no policy is in effect that prevents modification of the Cluster.log file. If either of these conditions exists, the Cluster service cannot start. If these steps do not resolve the problem, take additional troubleshooting steps; the cluster log file can be valuable in further troubleshooting.

Recovering from an Event ID 1034 on a server cluster

Note: This information was taken from Microsoft Article ID: 280425, revision 3.3.

Symptoms

A physical disk resource may fail to come online, or the Cluster service may fail to start. The following message is generated in the system event log:

Event ID: 1034
Source: ClusDisk
Description: The disk associated with cluster disk resource DriveLetter could not be found. The expected signature of the disk was DiskSignature.
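If you script event-log diagnostics, the DriveLetter and DiskSignature placeholders can be pulled out of the description text with a regular expression. A sketch in Python; the sample message text and the pattern are illustrative assumptions, not an official format specification:

```python
import re

# Pattern matching the Event ID 1034 description shown above.
EVENT_1034_RE = re.compile(
    r"cluster disk resource '?(?P<resource>[^']+?)'? could not be found\."
    r".*expected signature of the disk was (?P<signature>[0-9A-Fa-f]+)",
    re.DOTALL,
)

sample = ("The disk associated with cluster disk resource 'Disk Q:' "
          "could not be found. The expected signature of the disk was 12345678.")

m = EVENT_1034_RE.search(sample)
print(m.group("resource"))   # Disk Q:
print(m.group("signature"))  # 12345678
```

The extracted signature is the value you will later compare against the signatures actually present on the disks.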

Cause

These issues typically occur if either of the following conditions is true:

◆ A disk has become unavailable or inaccessible, and therefore, the Cluster service cannot find it.

◆ The signature on the disk has been changed.


The Cluster service recognizes and identifies disks by their disk signatures. A disk signature is stored on the physical disk in the master boot record (MBR). The Cluster service keeps a record of the signatures of all the disks that it manages and uses those signatures to track the disks. During the course of Cluster service operations (start, restart, failover, and so forth), if the Cluster service cannot find a disk that is identified by a particular signature, it will fail to bring the disk online. The cluster component that specifically detects this condition and logs the error is the cluster disk filter driver (Clusdisk.sys). The error message provides information on the "missing disk" but does not indicate the reasons that this condition may have occurred.
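The signature itself is a 32-bit little-endian value at byte offset 440 (0x1B8) of the MBR sector, just before the partition table. A minimal sketch that reads it from a raw 512-byte sector image; the synthetic sector built here is for illustration only:

```python
import struct

def mbr_signature(sector: bytes) -> int:
    """Return the 32-bit disk signature from a 512-byte MBR sector.

    The signature is stored little-endian at offset 0x1B8 (440).
    """
    if len(sector) != 512:
        raise ValueError("expected a full 512-byte sector")
    return struct.unpack_from("<I", sector, 0x1B8)[0]

# Build a synthetic sector carrying signature 0x12345678 and the
# 0x55AA boot marker, then read the signature back.
sector = bytearray(512)
struct.pack_into("<I", sector, 0x1B8, 0x12345678)
sector[510:512] = b"\x55\xaa"
print(f"{mbr_signature(bytes(sector)):08X}")  # 12345678
```

This is the same value that tools such as dumpcfg.exe report per disk and that Event ID 1034 prints as the expected signature.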

Resolution

To resolve this problem:

1. Make sure the disk is actually exposed through the shared interconnects and is visible to the operating system. To do this:

a. Click Start.

b. Click Run.

c. Type CompMgmt.msc.

d. Click OK.

e. In Computer Management under System Tools, Device Manager, look under Disk Drives, and you can view all the logical disks that are being presented to the node.

All nodes in a cluster should see the same number of disk drives for disks that are managed by the cluster. For example, if there are 10 disks that are managed by the cluster, all 10 are visible to all nodes in the cluster. If you know the Target ID and LUN of the disk, you can validate them by clicking Properties for each disk.

– If the count does not match, the disk is not accessible to that node. Troubleshoot your storage solution to make sure that the disk is accessible and can be mounted by the operating system. When the storage solution is functioning correctly, you can rescan the bus in Device Manager (right-click the computer node and select Scan for hardware changes).


– If the count does match, and if the Cluster service is up and running, reduce the complexity, if possible, by moving all the disk resources (the groups that host the resources) to a single node. If the Cluster service has failed, shut down all nodes and restart one node.

2. If the disk signatures have changed, use dumpcfg.exe to write the expected signature back to the disk. The signatures of the disks as enumerated by dumpcfg should match the list that is derived from the following registry subkey:

HKLM\System\CurrentControlSet\Services\Clusdisk\Parameters

Clusdisk uses this information to bind to disks that are managed by the Cluster service.
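The comparison in step 2 amounts to a set difference between the signatures recorded in the registry and those enumerated on the bus. A sketch with hypothetical signature values; the helper function is an illustration, not a real tool:

```python
def signature_mismatches(expected: set, found: set) -> dict:
    """Compare signatures recorded under the Clusdisk Parameters key
    (expected) with those enumerated on the bus, e.g. by dumpcfg (found)."""
    return {
        "missing": sorted(expected - found),      # recorded but not on any disk
        "unexpected": sorted(found - expected),   # on a disk but not recorded
    }

expected = {0x12345678, 0x0BADF00D}
found = {0x0BADF00D, 0xDEADBEEF}  # 0x12345678 was overwritten on its disk
result = signature_mismatches(expected, found)
print([f"{s:08X}" for s in result["missing"]])     # ['12345678']
print([f"{s:08X}" for s in result["unexpected"]])  # ['DEADBEEF']
```

A "missing" signature paired with an "unexpected" one on the same physical disk is the classic symptom of a rewritten signature, which step 3 below corrects.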

3. If the signatures in the list do not match the registry subkey list, you must correctly identify the disks that have had their signatures changed and reset them to the expected signatures. To do this:

a. Power down all but one node.

b. Open Computer Management, double-click Storage, and then click Disk Management.

In Logical Disk Manager, note the disk number and label that is associated with the failing disk. This information is to the left of the partition information. For example: Disk 0.

Compare the information that is displayed with the message in the Description section of the Event ID 1034.

Example: The disk associated with cluster disk resource 'Disk Q:\'. The disk label should not change even if the signature has. The disk label will help you correctly identify the problem disk. Once the disk has been correctly identified, its signature can be checked again to validate the mismatch.

c. If you cannot see the disks in DiskMgmt.msc, set the Cluster service and the Cluster Disk device startup type to Disabled, and then restart the node. (All other nodes should remain shut down.)

To do this (noting that step i. may not be necessary):

i. Click Start, point to Programs, point to Administrative Tools, and then click Computer Management.


ii. Click Device Manager in the left pane, and then click Show Hidden Devices on the View menu.

iii. In the right pane, view the non-Plug and Play drives section, and then double-click the Clusdisk driver.

iv. On the Driver tab, change the Startup type option from System to Disabled.

v. In the left pane, double-click Services and Applications, and then click Services.

vi. In the right pane, double-click the Cluster service, and then click Disabled in the Startup type box.

vii. Restart the node, and then repeat step ii if necessary.

d. Write the signature that the Cluster service expects to the disk:

i. Obtain the expected signature from the Description section of the Event ID 1034 error message. For example: The expected signature of the disk was 12345678.

ii. Copy DumpCfg.exe from the Windows 2000 Resource Kit to the local node. At the command prompt, type dumpcfg.exe. Under the [DISKS] section, the disk number and signature for all available disks are displayed. Validate the actual disk signature against the signature the Cluster service expects (from the Event ID 1034 message).

iii. Write the expected signature to the disk by using the following command, where 12345678 is the disk signature in hexadecimal and 0 is the number of the disk whose signature you are replacing (obtained in the previous step):

dumpcfg.exe -s 12345678 0

For more information about using Dumpcfg.exe, type dumpcfg /? at the command prompt.

e. Set the Cluster service back to Automatic, and set the Cluster Disk device back to System on the node. Start the Cluster Disk device, and then start the Cluster service.

f. Open Cluster Administrator, and then bring the disk online.

g. Turn on all other nodes, one at a time, and then test failover.


Cluster disk and drive connection problems

This section describes some problems specific to cluster disk and drive connections.

Problem
When the physical disks are not powering up or spinning, Cluster service cannot initialize any quorum resources.

Cause
Cables are not correctly connected, or the physical disks are not configured to spin when they receive power.

Solution
After checking that the cables are correctly connected, check that the physical disks are configured to spin when they receive power.

Problem
The Cluster service fails to start and generates an Event ID 1034 in the Event log after you replace a failed hard disk or change drives for the quorum resource.

Cause
If a hard disk is replaced, or the bus is re-enumerated, the Cluster service may not find the expected disk signatures, and consequently may fail to mount the disk.

Solution
Write down the expected signature from the Description section of the Event ID 1034 error message. Then follow these steps:

1. Back up the server cluster.

2. Set the Cluster service to start manually on all nodes, and then turn off all but one node.

3. If necessary, partition the new disk and assign a drive letter.

4. Use the confdisk.exe tool (available in the Microsoft Windows Server 2003 Resource Kit) to write that signature to the disk.

5. Start the Cluster service and bring the disk online.

6. If necessary, restore the cluster configuration information.

7. Turn on each node, one at a time.

For information on replacing disks in a server cluster, refer to Microsoft Knowledge Base article Q305793, How to Replace a Disk with Windows 2000 or Windows Server 2003 family Clusters.


Problem
A drive on the shared storage bus is not recognized.

Cause
Scanning for storage devices is not disabled on each controller on the shared storage bus.

Solution
Verify that scanning for storage devices is disabled on each controller on the shared storage bus. Often, the second computer you turn on does not recognize the shared storage bus during the BIOS scan if the first computer is running. This situation can manifest itself as a Device not ready error generated by the controller, or as substantial delays during startup. To correct this, disable the option to scan for devices on the shared controller.

Note: This symptom can manifest itself as one of several errors, depending on the attached controller. It is normally accompanied with a one- to two-minute start delay and an error indicating the failure of some device.

Problem
Configuration cannot be accessed through Disk Management. Under normal cluster operations, the node that owns a quorum resource locks the drive storing the quorum resource, preventing the other nodes from using the device.

If you find that the cluster node that owns a quorum resource cannot access configuration information through Disk Management, the cause of the problem might be either of the following.

Possible Cause 1
A device does not have physical connectivity and power.

Solution
Reseat controller cards, reseat cables, and make sure the drive spins up when you start.

Possible Cause 2
You attached the cluster storage device to all nodes and started all the nodes before installing the Cluster service on any node.

Solution
After you attach all servers to the cluster drives, you must install the Cluster service on one node before starting all the nodes. Attaching the drive to all the nodes before you have the cluster installed can corrupt the file system on the disk resources on the shared storage bus.


Problem
SCSI or Fibre Channel storage devices do not respond.

Possible Cause 1
The SCSI bus is not properly terminated.

Solution
Make sure that the SCSI bus is not terminated early and that the SCSI bus is terminated at both ends.

Possible Cause 2
The SCSI/Fibre Channel cable is longer than the cable specification allows.

Solution
Make sure that the SCSI/Fibre Channel cable is not longer than the cable specification allows.

Possible Cause 3
The SCSI/Fibre Channel cable is damaged.

Solution
Make sure that the SCSI/Fibre Channel cable is not damaged. (For example, check for bent pins and loose connectors on the cable, and replace it if necessary.)

Problem
Disk groups do not move, or remain in an online pending state after a move.

Cause
Cables are damaged or not properly installed.

Solution
Check for bent pins on cables and make sure that all cables are firmly anchored to the chassis of the server and drive cabinet.

Problem
Disks do not come online, or the Cluster service does not start, when a node is turned on.

Cause
If the quorum log is corrupted, the Cluster service cannot start.

Solution
If you suspect that the quorum resource is corrupted, refer to the information on the problem Quorum log becomes corrupted in Node-to-node connectivity problems in the TechNet section of the Microsoft website.

Problem
Drives do not fail over or come online.


Possible Cause 1
The drive is not on a shared storage bus.

Solution
If drives on the shared storage bus do not fail over or come online, make sure the disk is on a shared storage bus, not on a local (nonshared) bus.

Possible Cause 2
If you have more than one local storage bus, some drives listed in Shared cluster disks may not be on a shared storage bus.

Solution
If you do not remove these drives from Shared cluster disks (in the Cluster Application Wizard), the drives do not fail over, even though you can configure them as resources.

Problem
Mounted drives disappear, do not fail over, or do not come online.

Cause
The clustered mounted drive was not configured correctly.

Solution
Look at the Cluster service errors in the Event Log (ClusSvc in the Source column). You need to recreate or reconfigure the clustered mounted drive if the description of any Cluster service error is similar to the following:

Cluster disk resource disk_resource: Mount point mount_drive for target volume target_volume is not acceptable for a clustered disk because reason. This mount point will not be maintained by the disk resource.


When recreating or reconfiguring the mounted drive(s), follow these guidelines:

◆ Make sure that you create unique mounted drives so that they do not conflict with existing local drives on any node in the cluster.

◆ Do not create mounted drives between disks on the cluster storage device (cluster disks) and local disks.

◆ Do not create a mounted drive from a clustered disk to the cluster disk that contains the quorum resource (the quorum disk). You can, however, create a mounted drive from the quorum disk to a clustered disk.

◆ Mounted drives from one cluster disk to another must be in the same cluster resource group, and must be dependent on the root disk.
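The guidelines above (except the resource-dependency requirement, which is not modeled) can be expressed as a small validation routine. This is an illustrative sketch; the disk names and the function itself are hypothetical:

```python
def mount_point_allowed(source: str, target: str,
                        cluster_disks: set, quorum_disk: str,
                        group_of: dict) -> bool:
    """Check a proposed mounted drive (source hosts the mount point,
    target is the volume being mounted) against the guidelines above."""
    src_clustered = source in cluster_disks
    tgt_clustered = target in cluster_disks
    # No mounted drives between cluster disks and local disks.
    if src_clustered != tgt_clustered:
        return False
    # A clustered disk may not mount the quorum disk
    # (quorum -> clustered is fine).
    if tgt_clustered and target == quorum_disk:
        return False
    # Cluster-to-cluster mounts must live in the same resource group.
    if src_clustered and tgt_clustered and group_of[source] != group_of[target]:
        return False
    return True

disks = {"Q", "S", "T"}
groups = {"Q": "ClusterGroup", "S": "AppGroup", "T": "AppGroup"}
print(mount_point_allowed("S", "T", disks, "Q", groups))  # True: same group
print(mount_point_allowed("S", "Q", disks, "Q", groups))  # False: mounts quorum
```

A pre-flight check along these lines can catch misconfigured mount points before the disk resource logs the error shown above.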

For more information on viewing the Event Log, refer to View Event Logs in the TechNet section of the Microsoft website. For more information on creating mounted drives in a server cluster, refer to Add drives on the shared storage bus in the TechNet section of the Microsoft website.

Problem
The cluster quorum disk (containing the quorum resource) becomes disconnected from all nodes in a cluster, and you are later unable to add the nodes back to the cluster.

Cause and solution
Refer to the identically titled problem in Node-to-node connectivity problems in the TechNet section of the Microsoft website.

For information about how to obtain product support, refer to Technical support options in the TechNet section of the Microsoft website.

It is important that you correctly configure the storage topology (for example, SCSI, Fibre Channel, Storage Area Networks) and the storage interconnects (for example, multiple paths) used in your server cluster. Before deploying your server cluster, contact your hardware vendors to ensure that your particular cluster storage configuration is supported at the hardware level.

For descriptions of supported cluster storage topologies, best practices for deploying and managing cluster storage, and a list of cluster storage-related Knowledge Base articles, refer to the cluster storage information in the TechNet section of the Microsoft website.


Client-to-cluster connectivity problems

This section describes some problems specific to client-to-cluster connectivity.

Problem
Clients cannot connect to virtual servers.

Possible cause 1
The client might not be using the correct network name or IP address to access the cluster.

Solution
Make sure that the client is accessing the cluster using the correct network name or IP address.

Possible cause 2
The client might not have the TCP/IP protocol correctly installed and configured.

Solution
Make sure that the client has the TCP/IP protocol correctly installed and configured. Depending on the application being accessed, the client can address the cluster by specifying either the resource network name or the IP address. In the case of the network name, you can verify proper name resolution by checking the NetBT cache (using the Nbtstat.exe utility) to determine whether the name had been previously resolved. Also, confirm proper WINS configuration, both at the client and through the WINS Administrator.

If the client is accessing the resource through a specific IP address, ping the IP address of the cluster resource and cluster nodes from a command prompt.

For more information on Nbtstat.exe, refer to Nbtstat in the TechNet section of the Microsoft website. For instructions on how to troubleshoot connectivity problems, refer to Troubleshoot client-to-virtual server connectivity in the TechNet section of the Microsoft website.
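The name-resolution and connectivity checks above can be run from a command prompt on the client. The virtual server name and addresses below are placeholders for your own values:

```bat
:: Show the client's NetBT name cache to see whether the cluster
:: network name has already been resolved (Nbtstat.exe).
nbtstat -c

:: Query NetBIOS registrations for the virtual server name
:: (CLUSVS1 is a hypothetical network name).
nbtstat -a CLUSVS1

:: Ping the virtual server's IP address and a cluster node directly
:: (addresses are examples only).
ping 192.168.10.50
ping 192.168.10.11
```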

Possible cause 3: The client may be attempting to connect to an incorrect node after a failover.

EMC Host Connectivity Guide for Windows


Solution: Make sure that the routers are configured correctly. For more information, refer to article Q244331, MAC Address Changes for Virtual Server During a Failover with Clustering, in the Microsoft Knowledge Base.

Problem: Clients cannot access resources.

Possible cause 1: Both your private network and the primary network are down.

Solution: If your server cluster nodes are multihomed and use a private network to communicate, the primary network can be down and the Cluster service can still function normally. If the private network is lost between nodes, resources that are configured to do so will fail over to the node that has ownership of the quorum resource. Each of the other nodes will stop its Cluster service.

Possible cause 2: A node might have lost connectivity with the network.

Solution: If you suspect a node has lost connectivity with the network:

◆ Confirm that the connection configuration settings are correct for each connection. To do this, open Control Panel and double-click Network Connections. Right-click a connection and click Properties.

◆ Check WINS or DNS to make sure that they are properly configured.

◆ Confirm that the static IP addresses being used by the Cluster service are still in place and are not being used by other resources on the network.

Problem: A node cannot communicate on a network.

Cause: If a node cannot communicate on a network, it might lack a physical connection to the other nodes, or the other nodes might not be connected to one another. If the cabling has failed, the Cluster service might not have received the node's heartbeat and therefore failed over the resources owned by that node to other nodes in the cluster. The node can no longer communicate with clients on the network.


Solution: Check the hub and local cabling, both to the network and to the cluster disks.

Problem: Clients cannot access a group that has failed over.

Cause: There might not be a physical connection between the cluster nodes, and the group that failed over has also failed to come online.

Solution:

1. Confirm that you have a physical connection between the cluster nodes:

• Make sure that the network cabling has not failed or been damaged. If the cabling has failed, the Cluster service might have failed over the resources to another node.

• Ensure that the storage device cabling has not failed.

• If the nodes are separated by one or more hubs, check the connectivity through all hubs.

2. Bring the group back online.

Problem: Clients cannot attach to a cluster file share resource.

Possible cause 1: WINS or DNS might not be correctly configured.

Solution: Make sure that WINS or DNS is correctly configured. If you are using WINS, run WINS Manager on the WINS server; then, on the Mappings menu, click Show Database. Make sure that each node and the cluster are registered in the WINS database and that the registrations are active. For more information on WINS and WINS Manager, refer to the Networking Guide in the Microsoft Windows Server 2003 Resource Kit.

Possible cause 2: Your security policies might not allow the client to access the file share.

Solution: Make sure that your security policies allow the client to access the file share. The client must have the right to log on to the file share, or the Guest account must be enabled for the client to have access.


Problem: Clients cannot access a cluster resource.

Possible cause 1: Either the IP Address resource or the Network Name resource for the group that contains the resource is not online.

Solution: Check the group dependencies; the resource should depend on either the IP Address or the Network Name resource. Ensure that the IP Address and Network Name resources are online in the resource group. From the client computer, try to ping the IP addresses of the virtual server and the individual nodes.

Possible cause 2: Either the client or the cluster computer is not configured for WINS or DNS.

Solution: Make sure that the cluster nodes are configured properly for name resolution using either WINS or DNS, and make sure that clients are configured to use the same form of name resolution.

Possible cause 3: The client is attempting to access the cluster from a different subnet, and DNS is not correctly configured.

Solution: Configure the cluster nodes and the client computer to use WINS or DNS. If you use DNS, add a DNS address record for the cluster in the DNS database.
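If you manage DNS from the command line, a host (A) record for the cluster can be added with the dnscmd utility. The server name, zone, record name, and address below are all hypothetical:

```bat
:: Add an A record for virtual server "clusvs1" in zone example.com
:: on DNS server DNS01 (all names and addresses are placeholders).
dnscmd DNS01 /RecordAdd example.com clusvs1 A 192.168.10.50
```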

Problem: A client can detect all nodes but cannot detect a virtual server.

Possible cause 1: The virtual server might not have its own IP Address and Network Name resources.

Solution: Make sure that the virtual server has its own IP Address and Network Name resources, and that both resources are online. A server must have an address and a name on the network before any other server or client can properly recognize that it is on the network.

Possible cause 2: One or more nodes are not correctly configured to use WINS or DNS.


Solution: Make sure that all nodes are correctly configured to use WINS or DNS.

Problem: Clients cannot connect to a virtual server, and the system event logs on the client computers contain Kerberos authentication errors.

Cause: The cluster might have used a previously created computer object for the virtual server. When the virtual server's Network Name resource is brought online for the first time after Kerberos support has been enabled, the cluster changes the object's password. Active Directory then replicates this change to all domain controllers in the domain. However, because of replication latencies, clients connecting to this virtual server might receive a Kerberos ticket encrypted with the old password. This can result in Kerberos authentication errors.

Solution: Wait until the new password has been replicated to all domain controllers, or force replication using the repadmin and replmon command-line tools, available in the \Support\Tools folder on your operating system installation CD. For information on monitoring and troubleshooting replication issues, refer to Troubleshooting replication in the TechNet section of the Microsoft website. For information about how to obtain product support, refer to Technical support options in the TechNet section of the Microsoft website.
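As one example of forcing replication, repadmin can synchronize a naming context from a given domain controller to its partners. The controller name below is a placeholder, and the right arguments depend on your topology:

```bat
:: Push the change outward from DC01 to its replication partners.
:: /e spans all sites in the enterprise; /d identifies servers by
:: distinguished name in the output (DC01 is a placeholder name).
repadmin /syncall DC01 /e /d
```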

General administrative problems

This section describes some general problems not covered in other sections.

Problem: The Cluster service fails and the node cannot detect the network.

Cause: You probably have a configuration problem.

Solution:

◆ If the node was recently configured, or if you have installed some resource that required you to restart the computer, make sure that the node is still properly configured for the network.


◆ Check that the server is properly configured for TCP/IP, and that the appropriate services are running. If one node fails, its resources fail over to another node; but if the other nodes are misconfigured as well, the failover will be inadequate and client access will fail.

Problem: An IP address added to a group in the cluster fails.

Possible cause 1: The IP address is not unique.

Solution: The IP address must be different from every other group IP address and every other IP address on the network.

Possible cause 2: The IP address is not a static IP address.

Solution: The IP addresses must be statically assigned outside of a DHCP scope, or they must be reserved by the network administrator.

Problem: An IP Address resource is unresponsive when taken offline; for example, you are unable to query its properties.

Cause: You may not have waited long enough for the resource to go offline.

Solution: If an IP Address resource is unresponsive when taken offline, make sure that you wait long enough for the resource to go offline. Certain resources take time to go offline; for example, it can take up to three minutes for an IP Address resource to go fully offline.

Problem: You receive the error "RPC server is unavailable."

Cause: The server may not be operational, or the Cluster service and the RPC services may not be running.

Solution: Make sure the server is operational and that both the Cluster service and the RPC services are running. Also, check the name resolution of the cluster; it is possible that you are using the wrong name or that the name is not being properly resolved by WINS or DNS.

Problem: Cluster Administrator cannot open a connection to a node.


Cause: The node may not be running.

Solution: If Cluster Administrator cannot open a connection to a node, make sure that the node is running. If it is, confirm that both the Cluster service and the RPC services are running.

Problem: An application starts but cannot be closed.

Cause: You may not have taken a resource offline using Cluster Administrator.

Solution: When you bring resources online using Cluster Administrator, you must also take those resources offline using Cluster Administrator; do not attempt to close or exit the application from the application interface.

Problem: A resource group has failed over but will not fail back.

Possible cause 1: The hardware and network configurations may not be valid.

Solution: Make sure that the hardware and network configurations are valid. If any interconnect fails, failover can occur because the Cluster service does not detect a heartbeat, or it may not even register that the node was ever online. In this case, the Cluster service fails over the resources to the other nodes in the server cluster, but it cannot fail back because that node is still down.

Possible cause 2: The resource group may not be configured to fail back immediately, or you are not troubleshooting the problem within the allowable failback hours for the resource group.

Solution: Make sure that the resource group is configured to fail back immediately, or that you are troubleshooting the problem within the allowable failback hours for the resource group. A group can be configured to fail back only during specified hours; often, administrators prevent failback during peak business hours. To check this, use Cluster Administrator to view the resource failback policy.


Possible cause 3: You restarted the node to test the failover policy for the group instead of pressing the reset button.

Solution: Make sure that you press the reset button on the node. The resource group will not fail back to the preferred node if you shut down and then restart the node. For more information on testing failback policies, refer to Test node failure in the TechNet section of the Microsoft website.

Problem: All nodes appear to be functioning correctly, but you cannot access all of the drives from one node.

Possible cause 1: The shared drive may not be functioning.

Solution: Confirm that the shared drive is still functioning. Try to access the drive from another node. If you can, check the cable from the device to the node that cannot access the drive. If the cable is not the problem, restart the computer and then try again to access the device. If you still cannot access the drive, check your configuration.

Possible cause 2: The drive has completely failed.

Solution: Determine (from another node) whether the drive is functioning at all. You may have to restart the drive (by restarting the computer) or replace it. The hard disk that holds the resource, or a dependency of the resource, may have failed; you may have to replace the hard disk. You may also have to reinstall the cluster.

Problem: Cluster Administrator update delays.

Cause: If you run Cluster Administrator from a remote computer, it may not display the correct (updated) cluster state when the cluster network name fails over from one node to another node. This can result in Cluster Administrator displaying a node as being online when it is actually offline.

Solution: To work around this problem, restart Cluster Administrator. You can avoid this problem by connecting to clusters through node names.


However, if the node you are connected to fails, Cluster Administrator stops responding until the RPC connection times out.

Problem: Cluster Administrator stops responding when a node fails.

Cause: Cluster Administrator may simply be slow in performing dynamic updates.

Solution: If Cluster Administrator stops responding when a node fails, make sure that it is not just slow in performing dynamic updates. If the Cluster service is still running on a remaining node, Cluster Administrator is most likely not hung but updating very slowly.

There are two ways to see if the Cluster service is running on a remaining node:

◆ Use the TCP/IP Ping utility to ping the cluster name on a remaining node.

◆ In Control Panel, double-click Services, and check whether the Cluster service is running.
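Both checks above can also be performed from a command prompt. The cluster name below is a placeholder; sc queries the service control manager directly:

```bat
:: Ping the cluster network name to see whether a remaining node
:: still answers for it (CLUSTER1 is a hypothetical cluster name).
ping CLUSTER1

:: Query the state of the Cluster service on the local node.
sc query clussvc
```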

Problem: Cannot connect to a cluster from the recent file list.

Cause: Files listed in the Cluster Administrator recent file list (on the File menu and in the Open Connection to Cluster dialog box) have the cluster name appended to the path. For example, instead of Webclust1, the recent file list may list C:\Windows\Cluster\Webclust1. This problem occurs when Microsoft Visual C++ version 5.0 is installed.

Solution: To work around this problem, manually type the cluster name when you open the connection.

Problem: Node performance is sluggish and the node fails.

Cause: The CPU may be overloaded.

Solution: Check that your node is not processor-bound; that is, that the CPU is not running at 100 percent utilization. If you try to run more resources than the node has capacity for, you can overload the CPU.


Also review the size of your paging file. If the paging file is too small, the Cluster service can detect this as a node failure and fail over the groups.
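You can sample CPU utilization and inspect the paging file from the command line; the counter path below is the standard Performance Monitor processor counter:

```bat
:: Take five one-second samples of total CPU utilization.
typeperf "\Processor(_Total)\% Processor Time" -sc 5

:: Display the current paging file configuration and usage.
wmic pagefile list full
```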

Problem: The cluster log contains numerous resource informational messages (for example: Entered LooksAlive, Entered Open, Entered Offline).

Cause: One or more of your Generic Script resources is filling the cluster log with multiple copies of Entered LooksAlive, Entered Open, and Entered Offline messages.

Solution: When creating a script for a Generic Script resource, do not use the LogInformation method when calling the LooksAlive function.

For more information, refer to the Microsoft Platform Software Development Kit (SDK).

Problem: The Cluster service fails to start and returns an error code of ERROR_SHARING_VIOLATION (32) with event ID 1144 (NM_EVENT_REGISTER_NETWORK_FAILED).

Cause: The Internet Assigned Numbers Authority (IANA)-assigned port (3343) used by the cluster network driver (ClusNet) is bound to another process, preventing the Cluster service from starting.

Solution: Use port-scanning and process-termination utilities to identify and end the process that is bound to port 3343.

To do this:

1. Open a command prompt.

2. Navigate to the %systemroot%\system32 directory.

3. Type netstat -a -o and press ENTER.

This displays all listening and connected ports and the process ID of each process bound to that port. Port 3343 appears for each cluster network on the node.


The -a option displays all connections and listening ports. Server clusters use UDP, so the ports are normally in the listening state rather than connected.

The -o option displays the owning process ID for each connection.

4. Type tasklist.

This displays the IDs for all the processes running on the node, including the process ID that matches the Cluster service (ClusSvc.exe).

5. Type taskkill /pid ID and press ENTER to terminate the process(es) bound to port 3343 that do not match the ID for the Cluster service.
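Steps 3 through 5 above might look like the following at the command prompt; the process ID shown is illustrative only:

```bat
:: List all listening/connected ports with their owning process IDs,
:: and look for the entries on port 3343.
netstat -a -o

:: List running processes so the PID bound to port 3343 can be
:: matched against the Cluster service (ClusSvc.exe).
tasklist

:: Terminate a process (example PID 1234) that is bound to port 3343
:: but is not the Cluster service.
taskkill /pid 1234
```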

Problem: You cannot manually restore the cluster database on a local node by copying the systemroot\cluster\CLUSDB file from another node.

Cause: If the cluster registry hive is locked and loaded by the Cluster service, the operating system prevents you from copying in a CLUSDB file from another node or overwriting the existing local CLUSDB file.

Solution: Stop the Cluster service, and then unload the HKEY_LOCAL_MACHINE\Cluster hive before restoring the cluster database file.

To do this:

1. Open a command prompt.

2. Type net stop clussvc and press Enter to stop the Cluster service.

3. Use the Registry Editor to unload the hive under HKEY_LOCAL_MACHINE\Cluster. For more information, refer to Unload a hive from the registry in the TechNet section of the Microsoft website.

The operating system will now allow you to copy the CLUSDB file from a node and manually restore it to another node. For information about how to obtain product support, refer to Technical support options in the TechNet section of the Microsoft website.
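Assuming the reg.exe command-line tool is available, the procedure can also be done entirely from a command prompt. The source node name and paths below are placeholders:

```bat
:: Stop the Cluster service so the CLUSDB hive can be unloaded.
net stop clussvc

:: Unload the cluster registry hive (the command-line equivalent of
:: unloading HKEY_LOCAL_MACHINE\Cluster in Registry Editor).
reg unload HKLM\Cluster

:: Copy the cluster database from another node (NODE2 and the share
:: path are placeholders for your environment).
copy \\NODE2\c$\Windows\Cluster\CLUSDB %systemroot%\cluster\CLUSDB
```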


Windows 2008 Failover Clustering and Symmetrix

For Windows 2008 Failover Clustering support, the SCSI-3 PER bit must be enabled on each volume when attaching a host to a Symmetrix. Contact your EMC account team to have the SCSI-3 PER bit enabled for all shared disk LUNs. If you have an appropriate license for Solutions Enabler, you can refer to the EMC Solutions Enabler Symmetrix Array Management CLI Product Guide, available on Powerlink, to set the bit.
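If you hold a Solutions Enabler license, setting the bit yourself typically uses the symconfigure set dev syntax. The Symmetrix ID and device number below are hypothetical, so confirm the exact command against the Solutions Enabler product guide:

```bat
:: Enable the SCSI-3 persistent reservation attribute on device 0A1B
:: of Symmetrix 000194901234 (both values are placeholders).
symconfigure -sid 000194901234 -cmd "set dev 0A1B attribute=SCSI3_persist_reserv;" commit
```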


Windows 2008 Server Core operating system option

Windows Server 2008 includes an installation option called Server Core. Server Core is a "scaled-back" installation in which no Windows Explorer shell is installed. All configuration and maintenance is done either through the command-line interface or by connecting to the machine remotely using Microsoft Management Console.

Figure 12 Server Core installation example
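For example, initial network and identity configuration on a Server Core node is done with command-line tools such as netsh and netdom. The interface name, addresses, computer name, and domain below are all placeholders:

```bat
:: Assign a static IP address to the NIC from the Server Core console
:: (interface name and addresses are examples only).
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1

:: Rename the computer and join a domain (placeholders throughout).
netdom renamecomputer %computername% /newname:CORE01
netdom join CORE01 /domain:example.com /userd:admin /passwordd:*
```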


Limitations

Since Server Core is a special installation of Windows Server 2008, the following limitations exist:

◆ You cannot upgrade from a previous version of the Windows Server operating system to a Server Core installation. Only a clean installation is supported.

◆ You cannot upgrade from a full installation of Windows Server 2008 to a Server Core installation. Only a clean installation is supported.

◆ You cannot upgrade from a Server Core installation to a full installation of Windows Server 2008. If you need the Windows user interface or a server role that is not supported in a Server Core installation, you must install a full installation of Windows Server 2008.

Windows 2008 Server Core operating system option 111


2

Fibre Channel Attach Environments

This chapter provides information specific to Windows hosts over Fibre Channel.

◆ Windows Fibre Channel environment .......................................... 114
◆ Planning for fabric zoning and connections ................................ 115
◆ Host configuration with Emulex HBAs ........................................ 116
◆ Host configuration with QLogic HBAs ........................................ 131
◆ Host configuration with Brocade HBAs ....................................... 132
◆ Fibre Channel over Ethernet (FCoE) environments .................... 133
◆ Cisco Unified Computing System ................................................. 135

Fibre Channel Attach Environments 113


Windows Fibre Channel environment

This section lists some Fibre Channel support information specific to the Windows environment.

Refer to Chapter 1 for information that is common to both Fibre Channel and SCSI configurations.

Hardware connectivity

The Fibre Channel implementation supports up to eight Fibre Channel HBAs per host (depending on the number of available PCI slots).

Boot device support

Windows hosts have been qualified for booting from devices interfaced over Fibre Channel as described in the EMC Support Matrix and in the following sections:

◆ “Host configuration with Emulex HBAs” on page 116

◆ “Host configuration with QLogic HBAs” on page 131


Planning for fabric zoning and connections

Note: This section applies only to fabric configurations, as opposed to direct-attach configurations.

Plan the switch topology, the target-to-host mapping, and the zoning. Draw the connectivity between the hosts, the switch, and the Symmetrix system to verify the fabric configuration. After you have decided on the desired connections, you must configure zoning in the switch before proceeding. Refer to the EMC Networked Storage Topology Guide, located on Powerlink, for information on zone configuration.

Planning for fabric zoning and connections 115


Host configuration with Emulex HBAs

To install one or more EMC-qualified Emulex host bus adapters into a Windows host and configure the host for connection to EMC Storage Arrays over Fibre Channel, follow the procedures in the EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document, available in the EMC OEM section of the Emulex website http://www.emulex.com or on Powerlink.

This section provides information on the following:

◆ “Configuring Emulex OneConnect 10 GbE iSCSI BIOS/boot LUN settings for OCe10102-IM iSCSI adapters” on page 116

◆ “Installing Emulex LPSe12002 8 Gb PCIe EmulexSecure Fibre Channel adapter” on page 123

Configuring Emulex OneConnect 10 GbE iSCSI BIOS/boot LUN settings for OCe10102-IM iSCSI adapters

This section describes the steps required to configure an Emulex OneConnect 10 GbE iSCSI boot BIOS to allow an array-attached LUN to be used as a boot disk for the server.

To configure an Emulex OneConnect 10 GbE iSCSI adapter boot BIOS, complete the following steps.

1. When the Emulex OneConnect 10 GbE iSCSI BIOS banner displays during power-on self test (POST), as shown in Figure 13 on page 117, press Ctrl S to enter the Emulex OneConnect 10 GbE iSCSI BIOS Configuration utility.


Figure 13 Emulex OneConnect 10 GbE iSCSI BIOS banner

2. The utility opens to the main Emulex OneConnect iSCSI Select Utility page, as shown in Figure 14.

Figure 14 Emulex OneConnect iSCSI Select Utility page

Host configuration with Emulex HBAs 117


There is one iSCSI Initiator Name displayed, the host's IQN name. Use the Tab key to select the Controller Configuration.

3. The next screen displays the number of controllers under the Controller Selection Menu, as shown in Figure 15.

Figure 15 Emulex OneConnect iSCSI BIOS Controller Configuration Selection Menu

Details of the individual controller configuration display, as shown in Figure 16.

Figure 16 Individual controller configuration details

4. Select Controller Properties and press Enter.


A message displays showing the Controller Model Number, BIOS and Firmware Version, and Boot Support, as shown in Figure 17.

Figure 17 Enable Boot Support

5. Use the Tab key to highlight Boot Support. A drop-down menu displays.

a. Choose Enable to enable the adapter Boot Support for the specific controller and press Enter.

b. Use the Tab key to highlight Save and press Enter.

6. Press Esc to return to the previous Controller Configuration screen, as shown in Figure 16 on page 118.

7. From this screen, scroll down and select Network Configuration and press Enter.


This will display the Controller MAC Address, Port Speed, Link Status, and other information, as shown in Figure 18.

Figure 18 Controller Network Configuration screen

8. Press the Tab key to select Configure Static IP Address and press Enter. The Controller Static IP Address screen displays.

Figure 19 Controller Static IP Address


9. As shown in Figure 19, you can key in the IP address, Subnet Mask, and Default Gateway. To save the IP address, use the Tab key to highlight Save and press Enter.

Figure 20 Controller Static IP Address

10. Press Esc to return to the Controller Configuration screen as shown in Figure 16 on page 118. Scroll down and select iSCSI Target Configuration and press Enter.

Figure 21 shows the list of Targets which are already connected to the host.

Figure 21 Controller iSCSI Target Configuration

11. Select Add New iSCSI Target if you need to add more devices.


Key in the iSCSI Target Name (the desired array's IQN name) and the iSCSI Target IP address, as shown in Figure 22.

Figure 22 Adding iSCSI Target

12. After you have completed all the setup and boot selections, press Esc key to return to the Emulex OneConnect iSCSI Select Utility page, as shown in Figure 14 on page 117.

13. To save all the configurations, use the Tab key to select Save and then press Esc.

14. You are asked whether you want to exit the utility; press Y or N. After you press Y, the system reboots.


15. During the subsequent reboot, the Emulex OneConnect 10 GbE iSCSI BIOS banner screen shows the array and LUN that has been specified as a boot-capable LUN, as shown in Figure 23. At this point, the OS installation can begin using this LUN as the boot volume.

Figure 23 Emulex OneConnect 10 GbE iSCSI BIOS banner

Installing Emulex LPSe12002 8 Gb PCIe EmulexSecure Fibre Channel adapter

This section includes the following information needed to install an Emulex LPSe12002 8 Gb PCIe EmulexSecure Fibre Channel adapter:

◆ “Setting up an Emulex encrypted HBA” on page 124

◆ “Installing PowerPath with encryption with RSA enabled” on page 127

◆ “Installing an existing encryption HBA and EHAPI software” on page 128

◆ “Configuring PowerPath encryption with RKM server” on page 128


Setting up an Emulex encrypted HBA

Complete the following steps to set up an Emulex encrypted HBA.

1. Install the latest version of the Emulex One Command Manager (OCM) software.

OCM is required to manage the encryption HBA and will also manage all other Emulex HBAs in the system, replacing HBAnyware as the Emulex adapter management tool.

The Emulex One Command Manager software installation window is shown in Figure 24.

Figure 24 Emulex One Command Manager software installation window

2. Click Next until the installation is complete.


3. Install the EHAPI software. Figure 25 shows the status of the eHBA in the Device Manager before installing EHAPI.

Figure 25 eHBA Status before installing EHAPI

Note: An internet connection is needed to install the EHAPI software. Be sure to configure the gateway connection correctly.

If there is a problem running the EHAPI setup, install .NET Framework 3.5. After installing .NET, run the EHAPI setup again.


The ElxSec Setup Wizard displays, as shown in Figure 26.

Figure 26 ElxSec Setup Wizard


Figure 27 shows the status of your eHBAs in the Device Manager after installing EHAPI.

Figure 27 eHBA Status after installing EHAPI

Installing PowerPath with encryption with RSA enabled

1. Follow the instructions in the PowerPath and PowerPath/VE for Windows Installation and Administration Guide, located on Powerlink, for general installation steps and registration.


2. Complete this additional step to register the PowerPath license key, which is specifically for encryption, as shown in Figure 28.

Figure 28 Register PowerPath license key for encryption

Installing an existing encryption HBA and EHAPI software

1. Remove the existing EHAPI software by using the ElxSec entry in Add/Remove Programs.

2. Install the new EHAPI software by running setup.exe in the EHAPI kit and reboot the system.

3. Install the new OCM driver kit if it is not yet installed.

Configuring PowerPath encryption with RKM server

1. Copy the credentials to the host.

Once PowerPath is installed and registered, copy the .cer and .pfx client credential files generated by the RKM server to the PowerPath encryption configuration directory: C:\Program Files\EMC\RSA\Rkm_Client\config.

This directory should also contain four client configuration file templates (.tmpl files) by default.
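The credential-copy step can be scripted. The sketch below is a hypothetical helper (only the destination directory comes from this guide) that copies every .cer and .pfx file from a staging directory into the PowerPath encryption configuration directory:

```python
import shutil
from pathlib import Path

# Destination from the guide; the staging-directory argument is an example.
RKM_CONFIG_DIR = Path(r"C:\Program Files\EMC\RSA\Rkm_Client\config")

def copy_rkm_credentials(source_dir, config_dir=RKM_CONFIG_DIR):
    """Copy every .cer and .pfx file from source_dir into config_dir."""
    source_dir = Path(source_dir)
    config_dir = Path(config_dir)
    copied = []
    for pattern in ("*.cer", "*.pfx"):
        for cred in sorted(source_dir.glob(pattern)):
            shutil.copy2(cred, config_dir / cred.name)
            copied.append(cred.name)
    return copied
```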


2. Edit the Key Manager Client Configuration files.

Check the content of the following files:

a. Rkm_init.conf

b. Rkm_keyclass.conf

c. Rkm_registration.conf

d. Rkm_svc.conf

IMPORTANT! Accurate information about the RKM server IP address, client credentials file path, key class, and so on is needed for successful configuration. Any misconfiguration here may cause a failure when starting the encryption daemon or when turning on device encryption in a later step.

3. Run the following batch file to start the xcrypt configuration procedure. Example output is shown in Figure 29:

a. Run C:\Program Files\EMC\RSA\CST\lib\RKM_Config.bat.

b. When prompted, enter a lockbox passphrase (for example, Teleph0ne#). The passphrase must meet certain password requirements.

c. Enter the Client Credential ID and Password when prompted.

Figure 29 Config.bat


4. Use the following PowerPath CLI commands to manage and query encryption:

   powervt xcrypt -info -dev all          (status inquiry on all disks)
   powervt xcrypt -on -dev harddisk1 -no  (turn encryption on)
   powervt xcrypt -off -dev harddisk1 -no (turn encryption off)
   powervt xcrypt -info -dev harddisk1    (query encryption status)
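If you need to drive these commands from a script, a thin wrapper can assemble the same command lines. This is an illustrative sketch; the exact powervt flags accepted by your PowerPath version should be confirmed against its CLI reference:

```python
import subprocess

def xcrypt_cmd(action, device, prompt=True):
    """Build a powervt xcrypt command line for the given action and device.

    action: "info", "on", or "off"; device: for example "harddisk1" or "all".
    The -no flag (shown in the examples above) is assumed to suppress the
    confirmation prompt for -on/-off.
    """
    if action not in ("info", "on", "off"):
        raise ValueError("unknown xcrypt action: %s" % action)
    cmd = ["powervt", "xcrypt", "-" + action, "-dev", device]
    if action in ("on", "off") and not prompt:
        cmd.append("-no")
    return cmd

def run_xcrypt(action, device, prompt=True):
    # Invoke the PowerPath CLI; requires PowerPath to be installed on the host.
    return subprocess.run(xcrypt_cmd(action, device, prompt),
                          capture_output=True, text=True)
```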


Host configuration with QLogic HBAs

To install one or more EMC-approved QLogic host bus adapters (HBAs) into a Windows host and configure the host for connection to EMC storage arrays over Fibre Channel, follow the procedures in the EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document, available in the EMC OEM section of the QLogic website at http://www.qlogic.com or on Powerlink.


Host configuration with Brocade HBAs

To install one or more EMC-approved Brocade host bus adapters (HBAs) into a Windows host and configure the host for connection to EMC storage arrays over Fibre Channel, follow the procedures in the Host Connectivity with Brocade Fibre Channel and Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document, available in the EMC OEM section of the Brocade website at http://www.brocade.com or on Powerlink.


Fibre Channel over Ethernet (FCoE) environments

EMC supports the Emulex Fibre Channel over Ethernet (FCoE) Converged Network Adapter (CNA). FCoE adapters converge both Fibre Channel and Ethernet traffic over a single physical link to a switch infrastructure that manages both storage (SAN) and network (IP) connectivity within a single unit.

The benefits of FCoE technology become apparent in large data centers:

◆ Where dense, rack-mounted and blade server chassis exist.

◆ Where physical cable topology simplification is a priority.

◆ In virtualization environments, where several physical storage and network links are commonly required.

The installation of the Emulex FCoE CNA provides the host with an Intel-based 10 Gb Ethernet interface (using the existing in-box drivers), and an Emulex Fibre Channel adapter interface. Upon installation of the proper driver for the FCoE CNA, the Fibre Channel interface will function identically to that of a standard Emulex Fibre Channel HBA. The FCoE CNA simply encapsulates Fibre Channel traffic within Ethernet frames. As such, FC-based content within this guide also applies directly to Emulex FCoE CNAs.
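Conceptually, the encapsulation is nothing more than placing an unmodified FC frame inside an Ethernet frame carrying the FCoE EtherType (0x8906). The sketch below is a simplified illustration, not a spec-complete FCoE frame; the real format, defined in FC-BB-5, also carries a version field, reserved padding, encoded SOF/EOF delimiter values, and the Ethernet FCS:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fc_frame(dst_mac, src_mac, fc_frame, sof=0x2E, eof=0x41):
    """Simplified sketch: wrap a raw FC frame in an Ethernet frame.

    sof/eof are placeholder one-byte delimiter values; the actual
    encodings come from the FC-BB-5 standard.
    """
    # Ethernet header: destination MAC, source MAC, EtherType (big-endian).
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + bytes([sof]) + fc_frame + bytes([eof])
```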

In-depth information about FCoE and its supported features and topologies can be found in the "Fibre Channel over Ethernet (FCoE)" chapter in the EMC Networked Storage Topology Guide, available through EMC E-Lab™ Interoperability Navigator at: http://elabnavigator.EMC.com.

To install one or more EMC-qualified Emulex, QLogic, or Brocade CNAs into a Windows host and configure the host for connection to EMC storage arrays over FCoE, follow the procedures listed in the appropriate guide:

◆ EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment, available in the EMC OEM section of the Emulex website at http://www.emulex.com or on Powerlink.

◆ EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment, available in the EMC OEM section of the QLogic website at http://www.qlogic.com or on Powerlink.


◆ EMC Host Connectivity with Brocade Fibre Channel and Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document, available in the EMC OEM section of the Brocade website at http://www.brocade.com or on Powerlink.


Cisco Unified Computing System

The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites compute, network, storage access, and virtualization into a single system configuration. As shown in Figure 30 on page 136, configurations consist of a familiar chassis and blade server combination that works with Cisco's Fabric Interconnect switches to attach to NPIV-enabled fabrics. This allows for a centralized solution combining high-speed server blades designed for virtualization, FCoE connectivity, and centralized management. Fibre Channel ports on Fabric Interconnect switches must be configured as NP ports, which requires the connected Fabric switch to be NPIV-capable. Refer to the latest EMC Support Matrix for currently supported switch configurations.

In each server blade, an Emulex- or QLogic-based converged network adapter (CNA) mezzanine board is used to provide Ethernet and Fibre Channel connectivity for that blade to an attached network or SAN. These CNAs are based on currently supported PCI Express CNAs that EMC supports in standard servers and use the same drivers, firmware, and BIOS to provide connectivity to both EMC Fibre Channel and iSCSI storage array ports through the UCS Fabric Extenders and Fabric Interconnect switches that provide both 10 Gb Ethernet and/or Fibre Channel.


Figure 30 Cisco Unified Computing System example

In-depth information about UCS and how it utilizes FCoE technology for its blade servers can be found in the Cisco UCS documentation at http://www.cisco.com.

The UCS Fabric Interconnect switches are supported in the same configurations as the Cisco Nexus 5020. Refer to the "Fibre Channel over Ethernet (FCoE)" chapter in the EMC Networked Storage Topology Guide, located on Powerlink, for information on supported features and topologies.


3

iSCSI Attach Environments

This chapter provides information on the Microsoft iSCSI Initiator and the Microsoft Cluster Server.

◆ Introduction ...... 138
◆ Installing the Microsoft iSCSI Initiator ...... 140
◆ Using MS iSNS server software with iSCSI configurations ...... 163
◆ iSCSI Boot with the Intel PRO/1000 family of adapters ...... 173
◆ Notes on Microsoft iSCSI Initiator ...... 180


Introduction

The Microsoft iSCSI Software Initiator package adds support to the Windows operating system for using iSCSI targets that support version 1.0 of the iSCSI specification. The Microsoft iSCSI Software Initiator package runs on Windows 2000 SP4 or later, Windows XP SP1 or later, and Windows Server 2003 or later.

To use the iSCSI software Initiator kernel mode driver, the system must include a qualified Designed for Windows NIC or an iSCSI HBA.

Terminology

You should understand these terms:

◆ Challenge Handshake Authentication Protocol (CHAP) — An authentication method that is used during the iSCSI login, in both target discovery and the normal login.

◆ iSCSI Network Portal — The host NIC IP address that is used for the iSCSI driver to create a session with the storage.

Software node-names

The Microsoft iSCSI Initiator supports an iSCSI target that is configured with a node-name according to the following rules:

◆ Node-names are encoded in UTF8 Character set.

◆ The length of a node-name should be 223 characters or less.

◆ A node-name can include any of these characters:

• a through z (upper or lower case; uppercase characters are always mapped to lowercase)

• 0 through 9

• . (period)

• - (dash)

• : (colon)

Refer to the Microsoft iSCSI Initiator x.0 User’s Guide for the complete set of rules for setting up the valid initiator and target node-names.
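As a rough illustration of the rules above, the following Python sketch checks a node-name against the case-mapping, length, and character-set constraints listed here. The helper names are our own, and the authoritative rules remain those in the Microsoft iSCSI Initiator User's Guide:

```python
import re

# Allowed characters per the list above: a-z, 0-9, period, dash, colon.
_NODE_NAME_RE = re.compile(r"^[a-z0-9.:-]+$")

def normalize_node_name(name):
    """Map uppercase characters to lowercase, as the initiator does."""
    return name.lower()

def is_valid_node_name(name):
    """Check the length and character-set rules for an iSCSI node-name."""
    name = normalize_node_name(name)
    return len(name) <= 223 and bool(_NODE_NAME_RE.match(name))
```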


Boot device support

The Microsoft iSCSI Initiator does not support booting the iSCSI host from iSCSI storage. Refer to the EMC Support Matrix for the latest information about boot device support.


Installing the Microsoft iSCSI Initiator

The Initiator can be found at:

http://www.microsoft.com/downloads

At the beginning of the install process, the Software Update Installation Wizard screen displays (Figure 31).

Figure 31 Software Update Installation Wizard


Four installation options are available:

◆ Virtual Port Driver (iscsiprt)

This is always checked and cannot be unchecked. All configurations require the port driver and thus it is always installed.

◆ Initiator Service (iscsiexe.exe)

This is the user-mode service that manages iSCSI discovery, sessions, and configuration for both the software Initiator and iSCSI HBAs. It is required by the other components.

◆ Software Initiator (msiscsi.exe)

This is the kernel mode iSCSI software Initiator driver and is used to connect to iSCSI devices via the Windows TCP/IP stack using NICs. If this option is selected then the Initiator Service option is also selected automatically.

◆ Microsoft MPIO Multipathing Support for iSCSI

This installs the core MS MPIO framework files and the Microsoft iSCSI Device Specific Module (DSM). This enables the MS iSCSI software Initiator and HBA to perform session-based multipathing to a target that supports multiple sessions. If the version of the MS MPIO core files in the installation package is later than the version installed on the computer, the core MS MPIO files are upgraded to the latest version.

Adding or removing components

To add or remove specific MS iSCSI Software Initiator components, run the setup package executable and configure the checkboxes to match the desired installation. The application should auto-check the boxes for components that are already installed. For example, if you want to add the MS MPIO component then you would leave the other checkboxes alone and just check the MS MPIO checkbox.

Note: If the MS MPIO checkbox is not checked, the installer will attempt to uninstall the Microsoft iSCSI DSM and the core MS MPIO files. However, if another DSM is installed, the core MS MPIO files will not be uninstalled. The setup application determines whether another DSM is installed by checking the MS MPIO supported device list.
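The uninstall decision described in this note can be summarized as a small predicate. The DSM name used below is hypothetical; the logic simply mirrors the note above:

```python
MS_ISCSI_DSM = "msiscdsm"  # hypothetical identifier for the Microsoft iSCSI DSM

def should_remove_mpio_core(mpio_checkbox_checked, installed_dsms):
    """Mirror the installer logic: when the MS MPIO checkbox is cleared,
    the Microsoft iSCSI DSM is removed, and the core MS MPIO files are
    removed only if no other DSM remains installed."""
    if mpio_checkbox_checked:
        return False
    other_dsms = [d for d in installed_dsms if d != MS_ISCSI_DSM]
    return len(other_dsms) == 0
```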


The installation will put an icon on your desktop labeled Microsoft iSCSI Initiator. To open the iSCSI Initiator Properties window (Figure 32 on page 142), double-click on the icon.

Figure 32 iSCSI Initiator Properties window

The General tab (shown in Figure 32) lists the IQN name of your host. This is the name presented to your storage arrays; you can think of it as the iSCSI equivalent of a Fibre Channel WWN.

Please note the following:

◆ There will be only one IQN name for each host, even if you have multiple NICs installed.


◆ Microsoft requires that the Initiator Service be installed if you are using an iSCSI HBA. Only the service needs to be installed when using an HBA, not the software Initiator.

◆ If you are installing the Initiator Service to be used in conjunction with an iSCSI HBA, then the IQN name of the HBA is changed to the IQN name shown on the General tab.

◆ If you have two HBAs installed in your host, both will get the same IQN name.

◆ Precautions should be taken when using an iSCSI HBA and the Initiator to boot from array.

For directions on how to boot from array, consult the EMC Fibre Channel and iSCSI with QLogic Host Bus Adapter Guide, available at http://www.qlogic.com or on Powerlink.


The Discovery tab (Figure 33) shows the two ways to discover targets with the Initiator.

Figure 33 iSCSI Initiator Properties, Discovery tab

The two ways to discover targets with the Initiator are:

◆ Explicitly enter the IP address of the target portal.

◆ Enter the IP address of the iSNS server.

iSNS is used as a way to register both targets and initiators on a common server. If you enter an IP address for the iSNS server, the host registers itself with the iSNS server and retrieves a list of targets that you can connect to.


More information on iSNS can be found at:

http://www.microsoft.com/downloads

Examples

In the following example, we log in to the target portal at 192.168.150.108 and also register with the iSNS server at 192.168.8.1.

Figure 34 Add Target Portal dialog box

Figure 35 Add iSNS Server dialog box

Through these two discovery mechanisms we get the following information on the Targets tab (Figure 36 on page 146).

◆ The first entry is a target we discovered by explicitly entering the target portal IP address.

◆ All of the other targets listed below are provided by the iSNS server.


Figure 36 iSCSI Initiator Properties, Targets tab

We will log in to the sixth target listed above by highlighting it and clicking Log On. The Log On to Target dialog box appears (Figure 37 on page 147).


Figure 37 Log On to Target dialog box

Checking Automatically restore this connection when the system boots causes the Initiator to log in to the target after each reboot. If you do not check this box, the host will not log in to the target after a reboot and you will lose access to the disks presented from that target.

Checking Enable multi-path allows you to create multiple sessions to the target using MPIO. Although you installed the Initiator with Microsoft MPIO Multipathing Support for iSCSI at install time, you still need to check this box if you want to use multipathing with this target. It is mandatory to check this option if you are using EMC PowerPath 4.6.x or later.


Clicking the Advanced button brings up the Advanced Settings window (Figure 38).

Figure 38 Advanced Settings window


In the Connect by using section you can select the Local Adapter, the Source IP and the Target Portal. The source IP is the IP address assigned to the NIC installed in the host. If you have multiple NICs, they will be listed here. The Target Portal is the address assigned to the target name you are trying to log in to.

In the CHAP logon information section, CHAP logon information is selected. CHAP is a protocol used to authenticate the peer of a connection and is based on the peers sharing a password or secret. CHAP requires the Initiator to have both a username and a secret in order to operate. The CHAP username is typically passed to the target, and the target looks up the secret for that username in its private table. By default, the Microsoft iSCSI Initiator service uses the initiator node name as the CHAP username, but this can be changed on the array. In this example, there is a CHAP username "hct" configured on the array, so we connect to the target using hct and the Target Secret.
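The challenge-response exchange that CHAP performs can be sketched in a few lines. Per RFC 1994 (the CHAP definition that iSCSI reuses with MD5 as algorithm 5), the response is the MD5 digest of the one-byte identifier, the shared secret, and the challenge. This is an illustrative sketch, not the Microsoft initiator's implementation:

```python
import hashlib

def chap_response(chap_id, secret, challenge):
    """Compute a CHAP response per RFC 1994: MD5 over the one-byte
    identifier, the shared secret, and the challenge value."""
    return hashlib.md5(bytes([chap_id]) + secret + challenge).digest()
```

The target computes the same digest from its copy of the secret and compares; the secret itself never crosses the wire.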

The IPSec tab is used to configure the parameters for IPSec. This protocol provides authentication and data encryption at the IP packet layer.

Note: At this time IPSec is not supported.

Click OK to connect to the target (Figure 39 on page 150).


Figure 39 iSCSI Initiator Properties, Targets tab


The Persistent Targets tab (Figure 40) will list all the targets that you have chosen to automatically connect to after a reboot.

Figure 40 iSCSI Initiator Properties, Persistent Targets tab


The disk startup sequence in Windows, when using the Microsoft iSCSI software Initiator kernel mode driver, is different from the startup sequence when using an iSCSI or other HBA. Disks exposed by the Microsoft iSCSI Initiator kernel mode driver are available for applications and services much later in the boot process, and in some cases might not be available until after the service control manager begins to start automatic start services. The Microsoft iSCSI Initiator service includes functionality to synchronize automatic start services with the appearance of iSCSI disks. The iSCSI service can be configured with a list of disk volumes that are required to be present before automatic start services are started.

Figure 41 iSCSI Initiator Properties, Bound Volumes/Devices tab


Determining the Initiator version on Windows 2003

To find the version of the Initiator currently running on your system, issue the following command from a command prompt and compare the result to the table below:

   iscsicli versioninfo

   Initiator version   Build number
   2.0                 1653
   2.02                1941
   2.03                3099
   2.04                3273
   2.05                3392
   2.06                3497
   2.07                3610
   2.08                3825

Note: There is no version number associated with the Initiator in Windows 2008, since the initiator is native to the operating system and is no longer a separately installed product.

Uninstalling the Initiator

To completely uninstall the MS iSCSI Software Initiator package, go to the Add/Remove Programs applet in Control Panel and click the Remove button for the MS iSCSI Software Initiator package. The uninstall completely removes the iSCSI Initiator package, including the kernel mode driver, Initiator service, and MS MPIO support, although if another DSM is installed, the core MS MPIO files will not be uninstalled. This means that if you have PowerPath 1.1 or 4.6 loaded at the time you uninstall the Initiator, the MPIO framework will remain on the host.


Windows 2008 R2 iSCSI Initiator manual procedure

Prior to configuring the iSCSI Initiator, ensure you have decided exactly which NIC will connect to which target.

For example:

NIC1 and SPA-0 and SPB-0 are on one network subnet. NIC2 and SPA-1 and SPB-1 are on a different subnet. This example connects NIC1 to SPA-0 and SPB-0, and NIC2 to SPA-1 and SPB-1.

Note: These could also be on the same subnet, but this is not recommended.

◆ NIC1

• SPA-0

• SPB-0

◆ NIC2

• SPA-1

• SPB-1
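The NIC-to-SP-port planning above is essentially a subnet-matching exercise; the small sketch below (a hypothetical helper with example addresses, not from this guide) makes the pairing explicit:

```python
import ipaddress

def pair_nics_to_targets(nic_ips, target_ips, prefix=24):
    """Group each NIC with the SP ports that share its subnet."""
    pairs = {}
    for nic in nic_ips:
        nic_net = ipaddress.ip_interface(f"{nic}/{prefix}").network
        pairs[nic] = [t for t in target_ips
                      if ipaddress.ip_address(t) in nic_net]
    return pairs
```

For the example layout above, NIC1's subnet yields SPA-0 and SPB-0, and NIC2's subnet yields SPA-1 and SPB-1.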

To configure the iSCSI Initiator manually, complete the following steps:

1. While logged in as an Administrator on the server, open the Microsoft iSCSI Initiator through Control Panel (showing All Control Panel Items) or Administrative Tools.

Note: Do not use Quick Connect on the Targets tab. (If you have used Quick Connect, see “Windows 2008 R2 iSCSI Initiator cleanup” on page 160).


2. In the iSCSI Initiator Properties window, select the Discovery tab, and then select Discover Portal.

The Discover Target Portal dialog box displays.

3. Enter the IP address of the target storage port and select Advanced.


The Advanced Settings dialog box displays.

4. Select Microsoft iSCSI Initiator in the Local adapter field.

5. Select the IP address of the NIC to be used.

6. Click OK and then OK again.


The iSCSI Initiator Properties window displays.

7. Select the Targets tab.

8. Highlight the first target iqn and select Connect.

The Connect to Target dialog box displays.

9. Select Enable multi-path if using PowerPath or Windows 2008 Native MPIO.

10. Click Advanced.


The Advanced Settings dialog box displays.

11. In the Local adapter field, select Microsoft iSCSI Initiator from the drop-down menu.

12. In the Initiator IP field, select the correct NIC IP address from the drop-down menu.

13. In the Target portal IP field, select the IP address from the drop-down menu.

14. Click OK and then OK again.

15. Connect each of the other three targets in the list following the same procedure listed in the previous steps.


16. In the iSCSI Initiator Properties window, select the Favorite Targets tab. This should show each of the targets that have been connected.

17. If the host has Unisphere/Navisphere Agent installed, you should now see it Logged in and Registered in Unisphere/Navisphere Manager. Otherwise you will need to manually register the NIC in Unisphere/Navisphere Manager.

18. Using Unisphere/Navisphere Manager, place the host in a Storage Group that contains LUNs, and then go back to the host and run Scan for Hardware Changes in Device Manager. After a few minutes, you should see the disk devices arrive in the PowerPath GUI and/or in Disk Management.

Note: PowerPath shows only one adapter in the PowerPath GUI, even though you may be using multiple NICs. The adapter shown does not represent the NICs installed in your system; it represents the Microsoft iSCSI software initiator.


Windows 2008 R2 iSCSI Initiator cleanup

Note: If running Windows 2008 Failover Cluster, make sure this host does not own any disk resources. Move resources to another node in the cluster or take the disk resources offline.

Similarly, any LUNs in use on a standalone Windows host must be taken offline. Use Disk Management to take the disks offline.

To clean up the iSCSI Initiator, complete the following steps.

1. While logged in as an Administrator on the server, open the Microsoft iSCSI Initiator through Control Panel (showing All Control Panel Items) or Administrative Tools.

2. Select the Discovery tab, select one of the addresses in the Target Portals field, and click Remove.


3. A warning appears. Click OK.

4. Remove all the other Target Portals.

5. In the iSCSI Initiator Properties window, select the Volumes and Devices tab, select the volume in the Volume List field, and click Remove. Do this for each volume you want to remove.

6. In the iSCSI Initiator Properties window, select the Favorite Targets tab, select the target from the Favorite Targets field, and click Remove. Do this for each target you want to remove.

7. In the iSCSI Initiator Properties window, select the Targets tab, select one of the targets in the Discovered targets field, and click Disconnect.


8. A warning message displays. Click Yes.

9. Repeat Step 7 and Step 8 for each of the targets to be disconnected.

If you are running PowerPath, all of the devices will show as dead in the PowerPath GUI. To clean up and remove them, complete the following steps:

1. Open a Command Prompt using Run as administrator.

2. Type powermt check and, when prompted to remove the dead devices, enter "a" for All.

3. Check the Discovery tab to ensure that there are no further targets connected.

4. Check each of the iSCSI initiator tabs and ensure they are all empty.
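
The powermt check cleanup above can be sanity-checked against saved command output. The following is a minimal sketch (ours, not an EMC tool; the function name and sample rows are illustrative) that tallies alive versus dead path rows in text captured from powermt display:

```python
# Sketch (hypothetical helper, not part of PowerPath): count alive vs. dead
# path rows in saved "powermt display dev=all" output, to confirm that the
# cleanup removed all dead paths. The sample text below is illustrative.

def count_path_states(powermt_output: str) -> dict:
    """Tally the State column ('alive'/'dead') of each path row."""
    counts = {"alive": 0, "dead": 0}
    for line in powermt_output.splitlines():
        fields = line.split()
        # Path rows look like:
        #   7 port7\path0\tgt0\lun1 c7t0d1 SP A0 active alive 0 0
        if fields and fields[0].isdigit() and len(fields) >= 8:
            state = fields[-3]
            if state in counts:
                counts[state] += 1
    return counts

sample = r"""
 7 port7\path0\tgt0\lun1 c7t0d1 SP A0 active alive 0 0
 7 port7\path0\tgt1\lun1 c7t1d1 SP A1 active dead  0 1
"""
print(count_path_states(sample))  # {'alive': 1, 'dead': 1}
```

After a successful cleanup, the dead count should be zero.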


Using MS iSNS server software with iSCSI configurations

The Microsoft iSNS Server is a Microsoft Windows service that processes iSNS registrations, deregistrations, and queries via TCP/IP from iSNS clients, and also maintains a database of these registrations. The Microsoft iSNS Server package consists of Windows service software, a control-panel applet, a command-line interface tool, and a set of WMI interfaces. Additionally, there are DLLs allowing Microsoft Cluster Server to manage Microsoft iSNS Server as a cluster resource.

When configured properly, the iSNS server allows iSCSI initiators to query for available iSCSI targets that are registered with the iSNS server. The iSNS server also allows administration of iSCSI networks by providing a form of “zoning” in order to allow initiators access only to targets designated by the administrator.

Installing iSNS server software

Microsoft iSNS server is available for download from Microsoft’s website in the downloads section. Currently, there is no direct link to the iSNS software.

Prior to running the installation, it is recommended that your iSCSI network interface controller (NIC) be configured to work with your iSCSI network. Symmetrix, VNX series, and CLARiiON iSCSI interfaces must be configured to recognize and register with the iSNS server software. Refer to the DMX MPCD For iSCSI Version 1.0.0 Technical Notes for information on configuring the MPCD for iSCSI in Symmetrix DMX systems. VNX series and CLARiiON configuration is done by an EMC Customer Engineer (CE) through Unisphere/Navisphere Manager. The CE will configure your CX-Series system settings for each iSCSI port.

After your storage array target ports are configured, install the iSNS server software by starting the installation package downloaded from the Microsoft website.


Figure 42 Microsoft iSNS Server Installation Wizard

Follow each step of the iSNS Server software installer (Figure 42). You are required to accept a user license agreement and to select a location to install the software. Choosing the defaults for each step on the wizard is recommended.

After files are copied, the installation options dialog appears. Here you have the option of installing the iSNS service, uninstalling the iSNS service, or doing nothing. For installation, leave the Install iSNS Service button checked, and click OK (Figure 43 on page 165).


Figure 43 Installation option dialog

Another license agreement dialog appears. You must agree to the license agreement before installation can finish. After the iSNS Service is installed, you are prompted to allow the iSNS DHCP option to be configured (Figure 44). EMC does not recommend using DHCP for iSCSI configurations, but rather using static IP addresses for iSCSI targets and initiators. Select No to continue installation.

Figure 44 iSNS DHCP configuration option dialog


The installation program displays a message (Figure 45) stating that the iSNS service was installed and started.

Figure 45 Installation confirmation message

Click OK to see the installation summary screen. Click Next to complete installation.


Configuring iSNS server

Using the installed desktop icon, start the iSNS Server GUI. When open, the General tab (Figure 46) shows any targets and initiators that have registered with the iSNS server.

Figure 46 iSNS General properties

By selecting any individual target or initiator and clicking Details, more information about the selected object is displayed (Figure 47 on page 168).


Figure 47 Target/Initiator Details dialog

Using discovery domains for iSNS

The idea of Discovery Domains (DD) in iSNS is to provide a means for partitioning and grouping registered storage nodes for the dual purposes of administrative control and discovery. Microsoft iSNS Server implements the optional Default DD and automatically adds all newly registered storage nodes into the Default DD. However, if a storage node is already a member of a non-Default DD, then it is not automatically placed in the Default DD when it is registered. By automatically adding newly registered storage nodes to the Default DD, the default behavior is that all newly registered storage nodes can discover each other. It is up to an administrator to subsequently partition storage nodes into non-Default Discovery Domains.
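
The discovery rule above can be sketched in a few lines. This is a hypothetical model (the function names and iqn strings are ours, not the iSNS API): an initiator can discover a target only when the two share at least one Discovery Domain, and a newly registered node lands in the Default DD unless it is registered directly into a non-default DD:

```python
# Illustrative model of iSNS Discovery Domain behavior (not the iSNS API).

def register(memberships: dict, node: str, dd: str = "Default") -> None:
    """Register a node; without an explicit DD it joins the Default DD."""
    memberships.setdefault(node, set()).add(dd)

def discoverable(memberships: dict, initiator: str, target: str) -> bool:
    """True if the two nodes share at least one Discovery Domain."""
    return bool(memberships.get(initiator, set()) & memberships.get(target, set()))

dds = {}
register(dds, "iqn.1991-05.com.microsoft:host1")         # goes to Default DD
register(dds, "iqn.1992-04.com.emc:cx.array.a0")         # goes to Default DD
register(dds, "iqn.1992-04.com.emc:cx.array.b0", "DD1")  # pre-assigned to DD1

print(discoverable(dds, "iqn.1991-05.com.microsoft:host1",
                   "iqn.1992-04.com.emc:cx.array.a0"))   # True  (both in Default)
print(discoverable(dds, "iqn.1991-05.com.microsoft:host1",
                   "iqn.1992-04.com.emc:cx.array.b0"))   # False (no shared DD)
```

Partitioning nodes into non-default DDs, as the administrator is expected to do, therefore restricts which targets each initiator can see.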


1. To create a new DD, click the Discovery Domains tab (Figure 48), and then click Create.

Figure 48 iSNS Server Properties, Discovery Domains tab

2. Type in the name of the DD you want to create (Figure 49), and click OK.

Figure 49 Create Discovery Domain dialog


3. With the DD name selected on the Discovery Domains tab (Figure 50), click Add to add target and initiator members to the DD (Figure 51 on page 171).

Figure 50 iSNS Server Properties, Discovery Domain with members added


Figure 51 Add registered Initiator or Target to Discovery Domain


4. After your DD is created, you must add it to a Discovery Domain set. Click the Discovery Domain Sets tab (Figure 52). Select the Default DDS (Discovery Domain Set) and click Add to add your created DD to the DDS.

Figure 52 iSNS Server Properties, Discovery Domain Sets tab

Be sure that the desired DDS is enabled by selecting it from the pull-down menu and checking the Enable box.

These are the basic features that most iSCSI/iSNS users will use in their iSCSI environments. For further features included in the iSNS Server package, refer to the Microsoft iSNS Server Users Guide available for download with the iSNS Server software.


iSCSI Boot with the Intel PRO/1000 family of adapters

Intel iSCSI Boot is designed for the Intel PRO/1000 family of PCI-Express Server Adapters. Intel iSCSI Boot provides the capability to boot from a remote iSCSI disk volume located on an iSCSI-based Storage Area Network (SAN).

The basic steps to configuring boot from SAN are:

1. Prepare your storage array for boot from SAN.

2. Install boot-capable hardware in your system.

3. Install the latest Intel iSCSI Boot firmware using the iscsicli DOS utility.

4. Connect the host to a network that contains the iSCSI target.

5. Configure the iSCSI boot firmware on the NIC to boot from a pre-configured iSCSI target disk.

6. Configure the host to boot from the iSCSI target.

This section will focus on preparing your array to boot from SAN. The steps listed above are documented on the Intel website and can be downloaded from this link:

http://downloadfinder.intel.com/scripts-df-external/Detail_Desc.aspx?agr=Y&ProductID=2249&DwnldID=12194&strOSs=All&OSFullName=All%20Operating%20Systems&lang=eng

You can also find a list of supported adapters at the above link.

Preparing your storage array for boot

This section explains how to prepare your array in order to successfully present a boot LUN to your host.

The first thing you need to consider is what the host name will be. Using the naming conventions explained in “Node-names” on page 138, record an appropriate iqn name for your host.

In the following example, we will use an iqn name of iqn.1992-05.com.microsoft:intel.hctlab.hct.
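
The iqn naming convention can be sanity-checked mechanically before you type the name into array tooling. The following is a rough sketch (a simplified pattern of ours, not a full RFC 3720 validator) for the iqn.<yyyy-mm>.<reversed-domain>:<identifier> form used above:

```python
import re

# Simplified check (not a complete RFC 3720 validator) that a name follows
# the iqn.<yyyy-mm>.<reversed-domain>:<identifier> convention.
IQN_RE = re.compile(r"^iqn\.\d{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+:[^\s]+$")

def looks_like_iqn(name: str) -> bool:
    return IQN_RE.match(name) is not None

print(looks_like_iqn("iqn.1992-05.com.microsoft:intel.hctlab.hct"))  # True
print(looks_like_iqn("1992-05.com.microsoft:intel.hctlab.hct"))      # False
```

A typo in the recorded iqn name will otherwise only surface later, when the initiator record fails to match the host.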

Configuring your CX3 for iSCSI boot

Note: The following example assumes that you are familiar with Unisphere/Navisphere Manager.


In order for boot from SAN to work properly, you first need to present a LUN to the host. To do this on a CX3 using Navisphere Manager:

1. Create new initiator records that identify the host to the array.

2. Create a record for each SP port to which you might potentially connect.

3. Create a storage group with the new server and boot LUN. This LUN should be sized so that the OS, and any other applications, fit properly.

Once this is complete, follow these steps:

1. Right-click on the Storage Array and then select Connectivity Status.


2. Click New to display the Create Initiator Record window.

3. In the Create Initiator Record window:

a. Enter the iqn name of the host in the Initiator Name field. For our example, use iqn.1992-05.com.microsoft:intel.hctlab.hct.

b. Select the SP - port to which the host will connect.

c. Enter a host name and IP address in the Host Information section.

d. Click OK.


Once these steps are complete, the host displays in the Connectivity Status window.

4. Remember that the above steps create an initiator record that has a path to B0. To create additional initiator records, repeat steps 2 and 3 on page 175.

a. Select a different SP - port in the drop-down menu.

b. Instead of entering a new host name, select Existing Host and choose the host created during the creation of the initial initiator record.

Note: This is critical to provide uninterrupted access if an array side path failure should occur.

5. Once you have created initiator records for each SP port, you will be able to create a storage group as you normally would and assign the newly created host and a boot LUN.

At this point you have configured the array to present a boot LUN to the host at boot up. You can continue with the instructions documented by Intel to install your OS to a local hard drive and then image your host OS to the boot LUN assigned above.


Post installation information

This section contains the following information:

◆ “Using two Intel NICs in a single host”, next

◆ “PowerPath 4.6”, on this page

◆ “Problem” on page 212

Using two Intel NICs in a single host

The above process in “Configuring your CX3 for iSCSI boot”, beginning on page 173, creates connections that can be accessed by the host on all ports available on the CX3 (in our example ports A0, A1, B0, and B1).

By using a dual-port PRO/1000, or two single-port PRO/1000s, the Intel BIOS will allow you to set up one port as the primary and another as the secondary. By configuring the primary login to connect to one SP, and the secondary login to connect to the other SP, your host will have access to both SPs.

Note: You do not need to configure Microsoft iSCSI Software Initiator for Windows to be able to detect the iSCSI Disk. Microsoft iSCSI Software Initiator automatically retrieves the iSCSI configurations from the PRO/1000 adapter iSCSI Boot firmware.

PowerPath 4.6

Once you are successfully booting from your array, you can install PowerPath 4.6.

Before installing PowerPath, you will need to make sure your paths between the NICs and the array ports are set up properly. To do this, use a combination of the Intel BIOS logins, documented in the "Firmware Setup" section of the Intel guide, along with additional target logins using the Microsoft Initiator. What you are trying to accomplish is a path setup that looks much like what is discussed in “Using the Initiator with EMC PowerPath v4.6.x or later” on page 203.

When complete, your paths will look like those shown in Figure 53 on page 178.


Figure 53 Four paths

Setting this up is slightly different from what is discussed in “Using the Initiator with EMC PowerPath v4.6.x or later” on page 203. Remember that you have already created two paths by configuring the Intel BIOS with primary and secondary logins. So, for example, if you configure the Intel BIOS to connect to A0 and B1, once you boot your host the Microsoft Initiator will show two connected logins, to ports A0 and B1, on the Targets tab.

In order to complete the path setup you need to use the process beginning with Step 1 on page 204. When complete, you will have something similar to the following:

◆ NIC1 → A0

◆ NIC2 → A1

◆ NIC1→ B0

◆ NIC2 → B1
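
The NIC-to-SP-port mapping listed above can be checked for redundancy with a small sketch (the NIC and port labels follow the example; the helper function is hypothetical): each storage processor should remain reachable even if either NIC fails.

```python
# Sketch: verify that a NIC-to-SP-port login map like the one above leaves
# both storage processors ('A' and 'B') reachable after a single NIC failure.

logins = {("NIC1", "A0"), ("NIC2", "A1"), ("NIC1", "B0"), ("NIC2", "B1")}

def sps_reachable(paths, failed_nic=None):
    """Set of SPs still reachable after an optional NIC failure."""
    return {port[0] for nic, port in paths if nic != failed_nic}

print(sorted(sps_reachable(logins)))          # ['A', 'B']
print(sorted(sps_reachable(logins, "NIC1")))  # ['A', 'B'] - NIC2 reaches both SPs
```

A mapping that pinned each NIC to a single SP would fail this check, which is exactly why the primary and secondary logins are spread across both SPs.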


You can now install PowerPath 4.6.x. Once installed, PowerPath Administrator will look similar to Figure 54. Failures on the array side (i.e., loss of path between SP port and switch, and SP failures) will be managed correctly by PowerPath.

Figure 54 EMC PowerPathAdmin

Note: PowerPath will sometimes show behavior that is not typically seen in non-boot implementations because of the design of the boot version of the Microsoft Initiator. The most notable difference is when a host side cable/NIC fault occurs. If the cable connected to the NIC that first found the LUN at boot time is disconnected, or if the NIC fails, PowerPath will show three dead paths instead of the two that would be expected. This behavior is expected with the Microsoft Initiator boot version. If the paths were set up as previously explained, a host side fault will not affect your system.


Notes on Microsoft iSCSI Initiator

This section contains important information about Microsoft iSCSI Initiator, including:

◆ “Microsoft Cluster Server” on page 201

◆ “Dynamic disks” on page 202

◆ “Boot” on page 202

◆ “NIC teaming” on page 202

◆ “Using the Initiator with EMC PowerPath v4.6.x or later” on page 203

◆ “Commonly seen issues” on page 208

iSCSI failover behavior with the Microsoft iSCSI initiator

When creating an iSCSI session using the Microsoft iSCSI Initiator, one of the choices a user has to make in the Advanced Settings dialog box (Figure 55 on page 181) is whether to:

◆ Have iSCSI traffic for that session travel over a specific NIC, or

◆ Allow the OS to choose which NIC will issue the iSCSI traffic.

This option also allows the Microsoft iSCSI Initiator to perform some failover (independent of EMC PowerPath) in the case of a NIC failure.

Note: Multiple subnet configurations are highly recommended as issues can arise in single subnet configurations.


Figure 55 Advanced Settings dialog box

The Source IP pull-down menu in the Advanced Settings dialog box lists the IP address of each NIC on the server, as well as an entry labeled Default. Default allows Windows to choose which NIC to use for iSCSI traffic.

The examples in this section describe the different failover behaviors that can occur when a NIC fails in both a single subnet configuration and a multiple subnet configuration after choosing either a specific NIC from the pull-down menu or Default:

◆ “Single subnet, Source IP is "Default"” on page 185

◆ “Single subnet, Source IPs use specific NIC IP addresses” on page 187

◆ “Multiple subnets, Source IP is "Default"” on page 192


◆ “Multiple subnets, Source IPs use specific NIC IP addresses” on page 195

Single iSCSI subnet configuration

Figure 56 illustrates a single iSCSI subnet configuration.

Figure 56 Single iSCSI subnet configuration

In this configuration, there is a single subnet used for all iSCSI traffic. This iSCSI subnet is routable with the corporate network, but only iSCSI ports on the array and server NICs sending/receiving iSCSI traffic are connected to switches on this subnet.

The Windows server has a total of three NICs:

◆ One connected to the corporate network, with a defined default gateway

◆ Two connected to the iSCSI subnet, for NIC redundancy

Partial output from an ipconfig /all command from the server returns:

Ethernet adapter NIC1:

Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
IP Address. . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :

Ethernet adapter NIC2:

Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
IP Address. . . . . . . . . : 10.14.108.79


Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :

Ethernet adapter Corporate:

Description . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
IP Address. . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 10.14.16.1

All four iSCSI ports on the VNX series and CLARiiON are also connected to the iSCSI subnet. Each iSCSI port has a default gateway configured (with an iSCSI subnet address). The management port on the VNX series and CLARiiON is connected to the corporate network, and also has a default gateway defined.

The VNX series and CLARiiON's network configuration is as follows:

Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1
iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.108.48, default gateway 10.14.108.1
iSCSI Port SP B0: IP Address 10.14.108.47, default gateway 10.14.108.1
iSCSI Port SP B1: IP Address 10.14.108.49, default gateway 10.14.108.1

Fully licensed PowerPath is installed for all examples.

ipconfig information

The corporate network is the 10.14.16 subnet. The server's Intel 8255x-based PCI Ethernet NIC connects to this subnet.

The iSCSI subnet is the 10.14.108 subnet. The Server's Intel Pro/1000 MT Dual Port NICs connect to this subnet.

An ipconfig /all command from the server returns:

Windows IP Configuration

Host Name . . . . . . . . : compaq8502
Primary Dns Suffix  . . . : YAMAHA.com
Node Type . . . . . . . . : Unknown
IP Routing Enabled. . . . : No
WINS Proxy Enabled. . . . : No

DNS Suffix Search List. . : YAMAHA.com

Ethernet adapter NIC1:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
Physical Address. . . . . . . . . : 00-04-23-AB-83-42
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :


Ethernet adapter NIC2:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
Physical Address. . . . . . . . . : 00-04-23-AB-83-43
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.108.79
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :

Ethernet adapter Corporate:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
Physical Address. . . . . . . . . : 08-00-09-DC-E3-9C
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.14.16.1
DNS Servers . . . . . . . . . . . : 10.14.36.200
                                    10.14.22.13

Routing table information

A route print command from the server returns:

IPv4 Route Table
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x10003 ...00 04 23 ab 83 42 ... Intel(R) PRO/1000 MT Dual Port Server Adapter
0x10004 ...00 04 23 ab 83 43 ... Intel(R) PRO/1000 MT Dual Port Server Adapter #2
0x10005 ...08 00 09 dc e3 9c ... Intel 8255x-based PCI Ethernet Adapter (10/100)
===========================================================================
Active Routes:
Network Destination          Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0       10.14.16.1    10.14.16.172      20
       10.14.16.0    255.255.255.0     10.14.16.172    10.14.16.172      20
     10.14.16.172  255.255.255.255        127.0.0.1       127.0.0.1      20
      10.14.108.0    255.255.255.0     10.14.108.78    10.14.108.78      10
      10.14.108.0    255.255.255.0     10.14.108.79    10.14.108.79      10
     10.14.108.78  255.255.255.255        127.0.0.1       127.0.0.1      10
     10.14.108.79  255.255.255.255        127.0.0.1       127.0.0.1      10
   10.255.255.255  255.255.255.255     10.14.16.172    10.14.16.172      20
   10.255.255.255  255.255.255.255     10.14.108.78    10.14.108.78      10
   10.255.255.255  255.255.255.255     10.14.108.79    10.14.108.79      10
        127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
        224.0.0.0        240.0.0.0     10.14.16.172    10.14.16.172      20
        224.0.0.0        240.0.0.0     10.14.108.78    10.14.108.78      10
        224.0.0.0        240.0.0.0     10.14.108.79    10.14.108.79      10
  255.255.255.255  255.255.255.255     10.14.16.172    10.14.16.172       1
  255.255.255.255  255.255.255.255     10.14.108.78    10.14.108.78       1
  255.255.255.255  255.255.255.255     10.14.108.79    10.14.108.79       1
Default Gateway:        10.14.16.1


===========================================================================
Persistent Routes:
None

Example 1 Single subnet, Source IP is "Default"

The Default setting can be verified through the iscsicli sessionlist command. The output shows an Initiator Portal of 0.0.0.0/<TCP port>, which is how the Default setting is displayed:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id             : ffffffff8ae2600c-4000013700000002
Initiator Node Name    : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name       : (null)
Target Name            : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID                   : 40 00 01 37 00 00
TSID                   : 1f 34
Number Connections     : 1

Connections:
    Connection Id      : ffffffff8ae2600c-1
    Initiator Portal   : 0.0.0.0/1049
    Target Portal      : 10.14.108.46/3260
    CID                : 01 00

Session Id             : ffffffff8ae2600c-4000013700000003
Initiator Node Name    : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name       : (null)
Target Name            : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID                   : 40 00 01 37 00 00
TSID                   : 1a 85
Number Connections     : 1

Connections:
    Connection Id      : ffffffff8ae2600c-2
    Initiator Portal   : 0.0.0.0/1050
    Target Portal      : 10.14.108.48/3260
    CID                : 01 00

Session Id             : ffffffff8ae2600c-4000013700000004
Initiator Node Name    : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name       : (null)
Target Name            : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID                   : 40 00 01 37 00 00
TSID                   : 5a 34
Number Connections     : 1

Connections:


    Connection Id      : ffffffff8ae2600c-3
    Initiator Portal   : 0.0.0.0/1051
    Target Portal      : 10.14.108.47/3260
    CID                : 01 00

Session Id             : ffffffff8ae2600c-4000013700000005
Initiator Node Name    : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name       : (null)
Target Name            : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID                   : 40 00 01 37 00 00
TSID                   : 1a 85
Number Connections     : 1

Connections:
    Connection Id      : ffffffff8ae2600c-4
    Initiator Portal   : 0.0.0.0/1052
    Target Portal      : 10.14.108.49/3260
    CID                : 01 00

The operation completed successfully.
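
The Initiator Portal field is the quickest way to distinguish the Default setting from a pinned NIC in saved output. The following is a small sketch (ours, not part of iscsicli; the sample lines are illustrative) that classifies each session's Initiator Portal:

```python
# Sketch (hypothetical helper): scan saved "iscsicli sessionlist" text and
# report whether each Initiator Portal is the 0.0.0.0 "Default" placeholder
# or a specific NIC IP address.

def initiator_portals(sessionlist_text: str):
    portals = []
    for line in sessionlist_text.splitlines():
        if "Initiator Portal" in line:
            addr = line.split(":", 1)[1].strip().split("/")[0]
            portals.append((addr, "Default" if addr == "0.0.0.0" else "specific NIC"))
    return portals

sample = """
    Initiator Portal   : 0.0.0.0/1049
    Initiator Portal   : 10.14.108.78/1394
"""
print(initiator_portals(sample))
# [('0.0.0.0', 'Default'), ('10.14.108.78', 'specific NIC')]
```

Output showing 0.0.0.0 portals corresponds to the Default behavior examined in this example; specific addresses correspond to Example 2.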

When two NICs are on the same iSCSI traffic subnet (10.14.108), Windows will only use one NIC for transmitting all iSCSI traffic. In this example, it uses NIC1 (IP address 10.14.108.78), which is the highest entry in the routing table for the 10.14.108 subnet.

NIC1 handles iSCSI traffic to all four VNX series and CLARiiON iSCSI SP ports, while NIC2 (10.14.108.79) is idle.
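
The selection behavior described above can be sketched as follows. This is a simplified model (ours, not Windows code) of route selection: longest matching prefix wins, then the lowest metric, with ties broken by table order, which is why one of two equal same-subnet routes carries all the traffic:

```python
import ipaddress

# Simplified model of route selection: among matching routes, prefer the
# longest prefix, then the lowest metric, then the earliest table entry.
routes = [  # (destination, netmask, interface, metric), in table order
    ("10.14.108.0", "255.255.255.0", "10.14.108.78", 10),  # NIC1
    ("10.14.108.0", "255.255.255.0", "10.14.108.79", 10),  # NIC2
]

def pick_interface(dest: str) -> str:
    matches = []
    for order, (net, mask, iface, metric) in enumerate(routes):
        network = ipaddress.ip_network(f"{net}/{mask}")
        if ipaddress.ip_address(dest) in network:
            matches.append((-network.prefixlen, metric, order, iface))
    return min(matches)[3]  # best (longest prefix, lowest metric, first listed)

print(pick_interface("10.14.108.46"))  # 10.14.108.78 - NIC1 for every target
```

With both routes equal in prefix and metric, the earlier table entry (NIC1) is chosen for all four target portals, matching the observed behavior.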

If NIC1 fails, Windows automatically fails over to NIC2 for all iSCSI traffic. This failure is transparent to PowerPath, as shown in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths   Interf.    Mode    State  Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1      SP A0      active  alive      0      0
   7 port7\path0\tgt1\lun1  c7t1d1      SP A1      active  alive      0      0
   7 port7\path0\tgt2\lun1  c7t2d1      SP B0      active  alive      1      0
   7 port7\path0\tgt3\lun1  c7t3d1      SP B1      active  alive      1      0

All paths remain listed as active, no errors are indicated, and the Q-IOs Stats indicate that both paths to SP B are still being used for traffic to this LUN.

EMC Host Connectivity Guide for Windows


If NIC1 is subsequently repaired, Windows does not return the iSCSI traffic to NIC1. NIC1 remains idle while NIC2 is used for all iSCSI traffic to all four VNX series and CLARiiON iSCSI SP ports. After NIC1 is repaired, it would take a NIC2 failure to move iSCSI traffic back to NIC1.

Example 2 Single subnet, Source IPs use specific NIC IP addresses

The following iscsicli sessionlist output shows that each NIC is used for two iSCSI sessions, four in total. Note the IP addresses used in the Initiator Portals.

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id : ffffffff8ae2700c-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID : 40 00 01 37 00 00
TSID : 5a 34
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2700c-1
Initiator Portal : 10.14.108.78/1394
Target Portal : 10.14.108.46/3260
CID : 01 00

Session Id : ffffffff8ae2700c-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID : 40 00 01 37 00 00
TSID : df 84
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2700c-2
Initiator Portal : 10.14.108.79/1395
Target Portal : 10.14.108.48/3260
CID : 01 00

Session Id : ffffffff8ae2700c-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID : 40 00 01 37 00 00
TSID : 1f 34
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2700c-3
Initiator Portal : 10.14.108.78/1396
Target Portal : 10.14.108.47/3260
CID : 01 00

Notes on Microsoft iSCSI Initiator 187

Session Id : ffffffff8ae2700c-4000013700000005
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID : 40 00 01 37 00 00
TSID : df 84
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2700c-4
Initiator Portal : 10.14.108.79/1397
Target Portal : 10.14.108.49/3260
CID : 01 00

The operation completed successfully.

In this configuration, both NICs are used for iSCSI traffic, even though they are on the same subnet. iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0 is directed through NIC1. iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A1 and B1 is directed through NIC2.
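To confirm a layout like this, the per-NIC session count can be tallied directly from the sessionlist output. The following Python sketch is illustrative (the helper name and the abridged sample text are not part of any Microsoft tool); it simply counts sessions by Initiator Portal address.

```python
import re
from collections import Counter

# Hypothetical helper: count iSCSI sessions per source NIC by scanning
# "Initiator Portal : <ip>/<port>" fields in iscsicli sessionlist output.
def sessions_per_nic(sessionlist_text):
    return Counter(re.findall(r"Initiator Portal\s*:\s*([\d.]+)/\d+",
                              sessionlist_text))

# Abridged sample mirroring the four sessions shown above.
sample = """
Initiator Portal : 10.14.108.78/1394 Target Portal : 10.14.108.46/3260
Initiator Portal : 10.14.108.79/1395 Target Portal : 10.14.108.48/3260
Initiator Portal : 10.14.108.78/1396 Target Portal : 10.14.108.47/3260
Initiator Portal : 10.14.108.79/1397 Target Portal : 10.14.108.49/3260
"""
counts = sessions_per_nic(sample)  # two sessions on each NIC
```

An even split (two sessions per NIC) indicates that both NICs are carrying iSCSI traffic, as described above.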

If NIC1 fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0, even though NIC2 can physically reach those ports. Instead, Windows will fail the iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark paths to those ports as "dead." The following powermt display output shows paths to A0 and B0 marked as dead. All iSCSI traffic for this LUN is directed to the single surviving path on SP B, the current owner of this LUN.

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  dead        0      1
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      active  alive       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  dead        0      1
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      active  alive       2      0
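A quick way to read this path state programmatically is to scan the powermt display rows for each SP port's mode and state. The sketch below is illustrative only and is not a PowerPath API; the parsing pattern assumes the row layout shown above.

```python
import re

# Illustrative parser, not a PowerPath API: pull each SP port's mode and
# state out of powermt display path rows.
def path_states(powermt_text):
    rows = re.findall(r"SP ([AB]\d)\s+(\w+)\s+(alive|dead)", powermt_text)
    return {port: (mode, state) for port, mode, state in rows}

# Abridged path rows matching the failed-NIC example above.
sample = r"""
 7 port7\path0\tgt0\lun1 c7t0d1 SP A0 active dead  0 1
 7 port7\path0\tgt1\lun1 c7t1d1 SP A1 active alive 0 0
 7 port7\path0\tgt2\lun1 c7t2d1 SP B0 active dead  0 1
 7 port7\path0\tgt3\lun1 c7t3d1 SP B1 active alive 2 0
"""
states = path_states(sample)
dead_ports = sorted(p for p, (_, s) in states.items() if s == "dead")
```

Here the dead set {A0, B0} confirms that both sessions on the failed NIC were lost, while the paths through the surviving NIC stayed alive.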

If NIC1 is subsequently repaired, Windows re-establishes iSCSI sessions to VNX series and CLARiiON SP ports A0 and B0, and PowerPath marks those paths as "alive" and once again uses the path to B0 (as well as the path to B1) for IO to this LUN.

Multiple iSCSI subnet configuration

Figure 57 illustrates a multiple iSCSI subnet configuration.

Figure 57 Multiple iSCSI subnet configuration

In this configuration, there are two subnets used for all iSCSI traffic:

◆ iSCSI Subnet 108 is the 10.14.108 subnet.

◆ iSCSI Subnet 109 is the 10.14.109 subnet.

These iSCSI subnets are routable to the corporate network and to each other, but only the array iSCSI ports and the server NICs that send and receive iSCSI traffic are connected to switches on these subnets.

The Windows server has a total of three NICs:

◆ One connected to the corporate network, with a defined default gateway

◆ One connected to each of the two iSCSI subnets


Partial output from an ipconfig /all command from the server returns:

Ethernet adapter iSCSI108:
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
   IP Address. . . . . . . . . . . . : 10.14.108.78
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Ethernet adapter iSCSI109:
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
   IP Address. . . . . . . . . . . . : 10.14.109.78
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Ethernet adapter Corporate:
   Description . . . . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
   IP Address. . . . . . . . . . . . : 10.14.16.172
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.14.16.1

An iSCSI port on each Storage Processor of the VNX series and CLARiiON is connected to each iSCSI subnet. This is done to create high availability in case of either a subnet failure or an SP failure.

If a subnet fails, the remaining iSCSI subnet can continue to send iSCSI traffic to each SP, allowing both SPs to continue to process IO, thereby spreading the IO load.

If an SP fails, PowerPath must trespass all LUNs to the surviving SP, but PowerPath is able to load balance across both subnets, spreading the network load across them.

Each iSCSI port has a default gateway configured (with an iSCSI subnet address). The management port on the VNX series and CLARiiON is connected to the corporate network and also has a defined default gateway.

The VNX series and CLARiiON's network configuration is as follows:

Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1
iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.109.48, default gateway 10.14.109.1
iSCSI Port SP B0: IP Address 10.14.108.47, default gateway 10.14.108.1
iSCSI Port SP B1: IP Address 10.14.109.49, default gateway 10.14.109.1

Fully licensed PowerPath is installed for all examples.

ipconfig information

The corporate network is the 10.14.16 subnet. The server's Intel 8255x-based PCI Ethernet NIC connects to this subnet.


The iSCSI subnets are the 10.14.108 and 10.14.109 subnets. The server's Intel PRO/1000 MT Dual Port NICs connect to these subnets.

An ipconfig /all command from the server returns:

Windows IP Configuration
   Host Name . . . . . . . . . . . . : compaq8502
   Primary Dns Suffix  . . . . . . . : YAMAHA.com
   Node Type . . . . . . . . . . . . : Unknown
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : YAMAHA.com

Ethernet adapter iSCSI108:
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
   Physical Address. . . . . . . . . : 00-04-23-AB-83-42
   DHCP Enabled. . . . . . . . . . . : No
   IP Address. . . . . . . . . . . . : 10.14.108.78
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Ethernet adapter iSCSI109:
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
   Physical Address. . . . . . . . . : 00-04-23-AB-83-43
   DHCP Enabled. . . . . . . . . . . : No
   IP Address. . . . . . . . . . . . : 10.14.109.78
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Ethernet adapter Corporate:
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
   Physical Address. . . . . . . . . : 08-00-09-DC-E3-9C
   DHCP Enabled. . . . . . . . . . . : No
   IP Address. . . . . . . . . . . . : 10.14.16.172
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.14.16.1
   DNS Servers . . . . . . . . . . . : 10.14.36.200
                                       10.14.22.13
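Because the blank Default Gateway on the iSCSI adapters is what keeps iSCSI traffic on its own subnet, it is worth verifying in the ipconfig output. The following Python sketch assumes the adapter-block layout shown above and is not a Windows API; it maps each adapter to its configured gateway.

```python
import re

# Sketch (adapter-block layout assumed from the ipconfig output above):
# map each Ethernet adapter to its default gateway, "" if none is set.
def default_gateways(ipconfig_text):
    gateways = {}
    for block in re.split(r"Ethernet adapter ", ipconfig_text)[1:]:
        name = block.split(":", 1)[0].strip()
        match = re.search(r"Default Gateway[ .]*:\s*([\d.]*)", block)
        gateways[name] = match.group(1) if match else ""
    return gateways

sample = """
Ethernet adapter iSCSI108:
   IP Address. . . . . . . . . . . . : 10.14.108.78
   Default Gateway . . . . . . . . . :
Ethernet adapter iSCSI109:
   IP Address. . . . . . . . . . . . : 10.14.109.78
   Default Gateway . . . . . . . . . :
Ethernet adapter Corporate:
   IP Address. . . . . . . . . . . . : 10.14.16.172
   Default Gateway . . . . . . . . . : 10.14.16.1
"""
gw = default_gateways(sample)  # only Corporate should have a gateway
```

In this configuration, only the Corporate adapter carries a gateway, so only it can route traffic off its own subnet.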


Routing table information

A route print command from the server returns:

IPv4 Route Table
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x10003 ...00 04 23 ab 83 42 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter
0x10004 ...00 04 23 ab 83 43 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter #2
0x10005 ...08 00 09 dc e3 9c ...... Intel 8255x-based PCI Ethernet Adapter (10/100)
===========================================================================
Active Routes:
Network Destination          Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0       10.14.16.1     10.14.16.172      20
       10.14.16.0    255.255.255.0     10.14.16.172     10.14.16.172      20
     10.14.16.172  255.255.255.255        127.0.0.1        127.0.0.1      20
      10.14.108.0    255.255.255.0     10.14.108.78     10.14.108.78      10
     10.14.108.78  255.255.255.255        127.0.0.1        127.0.0.1      10
      10.14.109.0    255.255.255.0     10.14.109.78     10.14.109.78      10
     10.14.109.78  255.255.255.255        127.0.0.1        127.0.0.1      10
   10.255.255.255  255.255.255.255     10.14.16.172     10.14.16.172      20
   10.255.255.255  255.255.255.255     10.14.108.78     10.14.108.78      10
   10.255.255.255  255.255.255.255     10.14.109.78     10.14.109.78      10
        127.0.0.0        255.0.0.0        127.0.0.1        127.0.0.1       1
        224.0.0.0        240.0.0.0     10.14.16.172     10.14.16.172      20
        224.0.0.0        240.0.0.0     10.14.108.78     10.14.108.78      10
        224.0.0.0        240.0.0.0     10.14.109.78     10.14.109.78      10
  255.255.255.255  255.255.255.255     10.14.16.172     10.14.16.172       1
  255.255.255.255  255.255.255.255     10.14.108.78     10.14.108.78       1
  255.255.255.255  255.255.255.255     10.14.109.78     10.14.109.78       1
Default Gateway:        10.14.16.1
===========================================================================
Persistent Routes:
  None

Example 3 Multiple subnets, Source IP is "Default"

The Default setting can be verified with the iscsicli sessionlist command. In the output below, each Initiator Portal appears as 0.0.0.0/<TCP port>:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id : ffffffff8b0aa904-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID : 40 00 01 37 00 00
TSID : 5a 34
Number Connections : 1

Connections:
Connection Id : ffffffff8b0aa904-1
Initiator Portal : 0.0.0.0/1066
Target Portal : 10.14.108.46/3260
CID : 01 00

Session Id : ffffffff8b0aa904-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID : 40 00 01 37 00 00
TSID : 1a 85
Number Connections : 1

Connections:
Connection Id : ffffffff8b0aa904-2
Initiator Portal : 0.0.0.0/1067
Target Portal : 10.14.109.48/3260
CID : 01 00

Session Id : ffffffff8b0aa904-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID : 40 00 01 37 00 00
TSID : 5a 34
Number Connections : 1

Connections:
Connection Id : ffffffff8b0aa904-3
Initiator Portal : 0.0.0.0/1068
Target Portal : 10.14.108.47/3260
CID : 01 00

Session Id : ffffffff8b0aa904-4000013700000005
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID : 40 00 01 37 00 00
TSID : 1a 85
Number Connections : 1

Connections:
Connection Id : ffffffff8b0aa904-4
Initiator Portal : 0.0.0.0/1071
Target Portal : 10.14.109.49/3260
CID : 01 00

The operation completed successfully.


In this configuration, NIC iSCSI108 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A0 and B0 since they are all on the same subnet (10.14.108). NIC iSCSI109 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A1 and B1 since they are all on the same subnet (10.14.109).

If NIC iSCSI108 fails, Windows automatically fails over iSCSI traffic targeted to the VNX series and CLARiiON SP ports on the 10.14.108 network. In this configuration, the Corporate NIC (10.14.16.172) is chosen, since that NIC has a default gateway defined and a routable network path to the 10.14.108 subnet.

This failure is transparent to PowerPath, as shown in the powermt display output below:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  alive       0      0
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      active  alive       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  alive       1      0
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      active  alive       1      0

All paths remain listed as active, no errors are indicated, and the Q-IOs Stats indicate that both paths to SP B are still being used for traffic to this LUN. Traffic to SP port B0 is being routed through the 10.14.16 subnet to the 10.14.108 subnet, while traffic to SP port B1 remains on the 10.14.109 subnet.

Since the Corporate network is now in use for iSCSI traffic, there may be performance implications from this failover. If the Corporate network or Corporate NIC is at a slower speed than the iSCSI subnet, throughput will be reduced. Additionally, iSCSI traffic is now competing on the same wire with non-iSCSI traffic, which may cause network congestion and/or reduced response times for both. If this is a concern, there are multiple ways to avoid this issue:

◆ Configure your iSCSI subnets so they are not routable with other subnets or the corporate network.

◆ Configure your iSCSI sessions to use a specific NIC, as described in “Multiple subnets, Source IPs use specific NIC IP addresses” on page 195.


◆ Do not configure a default gateway on the VNX series and CLARiiON iSCSI ports. This prevents iSCSI traffic on these ports from leaving their subnet.
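The trade-off behind these options can be modeled simply: an off-subnet destination is reachable from a NIC only if that NIC (or the array port answering it) has a default gateway, while on-subnet traffic needs no routing at all. A toy Python model of that rule (all names illustrative):

```python
import ipaddress

# Toy model of the rule discussed above (names illustrative): a NIC can
# carry traffic to an off-subnet destination only via a default gateway.
def can_reach(nic_ip, dest, gateway=None, prefix=24):
    subnet = ipaddress.ip_network(f"{nic_ip}/{prefix}", strict=False)
    if ipaddress.ip_address(dest) in subnet:
        return True             # on-link: no routing needed
    return gateway is not None  # off-subnet: requires a gateway

# Corporate NIC (has a gateway) can reach the 10.14.108 iSCSI subnet;
# the iSCSI109 NIC (no gateway) cannot, so its sessions simply fail.
```

Removing the gateways (the last bullet above) flips the first case to unreachable for off-subnet destinations, which is exactly what confines iSCSI traffic to its own subnet.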

If the Corporate NIC were to subsequently fail, Windows would not be able to fail over iSCSI traffic targeted to SP ports on the 10.14.108 subnet, since no default gateway exists for the lone surviving NIC, iSCSI109, on the 10.14.109 subnet. In this case, PowerPath would mark paths to SP ports A0 and B0 as "dead."

Assuming only NIC iSCSI108 failed, when NIC iSCSI108 is subsequently repaired, Windows does return the iSCSI traffic to NIC iSCSI108 since that is the shortest route to the 10.14.108 subnet.

Example 4 Multiple subnets, Source IPs use specific NIC IP addresses

The following iscsicli sessionlist output shows that each iSCSI NIC is used for two iSCSI sessions, four in total. The iSCSI sessions were created such that traffic directed to a VNX series and CLARiiON SP port is routed through the NIC on that SP port's subnet. Note the IP addresses used in the Initiator Portals:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id : ffffffff8ae2f424-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID : 40 00 01 37 00 00
TSID : 1f 34
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2f424-1
Initiator Portal : 10.14.108.78/1050
Target Portal : 10.14.108.46/3260
CID : 01 00

Session Id : ffffffff8ae2f424-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID : 40 00 01 37 00 00
TSID : 1a 85
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2f424-2
Initiator Portal : 10.14.109.78/1051
Target Portal : 10.14.109.48/3260
CID : 01 00

Session Id : ffffffff8ae2f424-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID : 40 00 01 37 00 00
TSID : 5a 34
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2f424-3
Initiator Portal : 10.14.108.78/1052
Target Portal : 10.14.108.47/3260
CID : 01 00

Session Id : ffffffff8ae2f424-4000013700000006
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name : (null)
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID : 40 00 01 37 00 00
TSID : 1a 85
Number Connections : 1

Connections:
Connection Id : ffffffff8ae2f424-5
Initiator Portal : 10.14.109.78/1054
Target Portal : 10.14.109.49/3260
CID : 01 00

The operation completed successfully.

In this configuration, NIC iSCSI108 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A0 and B0 since they are all on the same subnet (10.14.108). NIC iSCSI109 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A1 and B1 since they are all on the same subnet (10.14.109).

If NIC iSCSI108 fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0, even though the Corporate NIC can physically reach those ports through its default gateway. Instead, Windows will fail the iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark paths to those ports as "dead."

The following powermt display output shows paths to A0 and B0 marked as dead. All iSCSI traffic for this LUN is directed to the single surviving path on SP B, the current owner on this LUN.

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  dead        0      1
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      active  alive       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  dead        0      1
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      active  alive       2      0

If NIC iSCSI108 is subsequently repaired, Windows re-establishes iSCSI sessions to VNX series and CLARiiON SP ports A0 and B0, and PowerPath marks those paths as "alive" and once again uses the path to B0 (as well as the path to B1) for IO to this LUN.

Unlicensed PowerPath and iSCSI failover behavior

Unlicensed PowerPath provides basic failover for hosts connected to VNX series and CLARiiON systems. Unlicensed PowerPath will only use a single path from a host to each SP for IO. Any paths besides these two are labeled "unlicensed" and cannot be used for IO. The Microsoft iSCSI Initiator always logs in to its targets in a specific order, connecting all SP A paths before connecting any SP B paths, and always connecting to the ports in numeric order. For example, the MS iSCSI Initiator will log in to port A0 first, then A1, A2, etc., for all SP A ports, then port B0, B1, etc., until the last SP B port. Unlicensed PowerPath chooses the first path discovered on each SP for IO. In all of the examples in this section, PowerPath will only use paths to A0 and B0 for IO.

The following is an example of a powermt display command with unlicensed PowerPath:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  alive       0      0
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      active  unlic       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  alive       0      0
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      active  unlic       0      0
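The login ordering and path-licensing rule described above can be sketched as follows. This is illustrative Python only; the real selection logic is internal to the MS iSCSI Initiator and PowerPath.

```python
# Sketch of the ordering described above (illustrative; the real logic is
# internal to the MS iSCSI Initiator and PowerPath): log in to all SP A
# ports in numeric order, then all SP B ports, and license only the
# first-discovered path on each SP.
def login_order(ports_per_sp):
    return [f"{sp}{n}" for sp in ("A", "B") for n in range(ports_per_sp)]

def licensed_paths(order):
    seen, licensed = set(), []
    for port in order:
        if port[0] not in seen:     # first path seen on this SP
            seen.add(port[0])
            licensed.append(port)
    return licensed
```

With two ports per SP, the login order is A0, A1, B0, B1, and the licensed paths are A0 and B0, matching the powermt output above.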


Unlicensed PowerPath does not change any of the failover behavior in Example 1 on page 185 and Example 3 on page 192, where the Source IP address is "Default." In these examples, PowerPath does not know that a NIC has failed, because the MS iSCSI Initiator automatically fails over iSCSI sessions to a surviving NIC. The only impact of unlicensed PowerPath in these examples is that only a single path per SP (to SP ports A0 and B0) is used for IO.

Unlicensed PowerPath will have an impact on failover behavior in Example 2 on page 187 and Example 4 on page 195, where the Source IP address uses a specific NIC IP address.

Revisiting Example 2 on page 187 with unlicensed PowerPath, the following shows an abridged version of the iscsicli sessionlist command output:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id : ffffffff8ae2700c-4000013700000002
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
Connections:
Initiator Portal : 10.14.108.78/1394
Target Portal : 10.14.108.46/3260

Session Id : ffffffff8ae2700c-4000013700000003
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
Connections:
Initiator Portal : 10.14.108.79/1395
Target Portal : 10.14.108.48/3260

Session Id : ffffffff8ae2700c-4000013700000004
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
Connections:
Initiator Portal : 10.14.108.78/1396
Target Portal : 10.14.108.47/3260

Session Id : ffffffff8ae2700c-4000013700000005
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
Connections:
Initiator Portal : 10.14.108.79/1397
Target Portal : 10.14.108.49/3260

The operation completed successfully.


If NIC1 fails (10.14.108.78), Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI ports A0 and B0, even though NIC2 (10.14.108.79) can physically reach those ports. Instead, Windows will fail iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark those paths as dead. However, the surviving two paths (to SP ports A1 and B1) are not licensed and not used for IO, as shown in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  dead        0      1
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      unlic   alive       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  dead        0      1
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      unlic   alive       0      0

Since all paths are either dead or unlicensed, IO fails because no usable path exists from the host to the VNX series and CLARiiON.

However, a configuration can be designed that will avoid IO errors with a single NIC failure and unlicensed PowerPath. Revisiting Example 4 on page 195 with unlicensed PowerPath, the array IP addresses can be changed so that SP ports A0 and B0 do not reside on the same subnet. The VNX series and CLARiiON’s network configuration would be as follows:

Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1

iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.109.48, default gateway 10.14.109.1
iSCSI Port SP B0: IP Address 10.14.109.49, default gateway 10.14.109.1
iSCSI Port SP B1: IP Address 10.14.108.47, default gateway 10.14.108.1
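Whether a proposed port layout survives a single NIC failure under unlicensed PowerPath can be sanity-checked by confirming that A0 and B0 (the only two usable ports) sit on different subnets. A small Python sketch, assuming the /24 prefix used throughout this example:

```python
import ipaddress

# Sanity check for the cross-wired layout above (illustrative; /24 prefix
# assumed as in this example): the two paths unlicensed PowerPath can use
# (A0 and B0) must sit on different subnets so that a single NIC failure
# cannot kill both of them.
PORTS = {
    "A0": "10.14.108.46",
    "A1": "10.14.109.48",
    "B0": "10.14.109.49",
    "B1": "10.14.108.47",
}

def subnet_of(ip, prefix=24):
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

survives_single_nic_failure = subnet_of(PORTS["A0"]) != subnet_of(PORTS["B0"])
```

With the original (non-cross-wired) addressing, A0 and B0 share 10.14.108.0/24 and this check fails, which is exactly the IO-error scenario described earlier.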

Using this array IP configuration, the following shows an abridged version of the iscsicli sessionlist command output:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions


Session Id : ffffffff8a8d9204-4000013700000003
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
Connections:
Initiator Portal : 10.14.108.78/2787
Target Portal : 10.14.108.46/3260

Session Id : ffffffff8a8d9204-4000013700000004
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
Connections:
Initiator Portal : 10.14.109.78/2788
Target Portal : 10.14.109.48/3260

Session Id : ffffffff8a8d9204-4000013700000005
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
Connections:
Initiator Portal : 10.14.109.78/2789
Target Portal : 10.14.109.49/3260

Session Id : ffffffff8a8d9204-4000013700000006
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
Connections:
Initiator Portal : 10.14.108.78/2790
Target Portal : 10.14.108.47/3260

The operation completed successfully.

Output from a powermt display command shows that paths to VNX series and CLARiiON SP ports A0 and B0 are usable for IO:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=5
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  alive       0      0
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      unlic   alive       0      0
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  alive       5      0
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      unlic   alive       0      0

In this configuration, NIC "iSCSI109" (10.14.109.78) is directing all iSCSI traffic to VNX series and CLARiiON SP port B0 for this LUN, since this is the only path that is usable by unlicensed PowerPath to this LUN's owning SP. If NIC "iSCSI109" fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP port B0, even though the "Corporate" NIC can physically reach this port through its default gateway. Instead, Windows will fail the iSCSI session connected to SP port B0 (as well as the session connected to SP port A1), which in turn leads PowerPath to mark paths to those SP ports as dead, as indicated in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=5
Owner: default=SP B, current=SP A
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path                I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   7 port7\path0\tgt0\lun1  c7t0d1       SP A0      active  alive       5      0
   7 port7\path0\tgt1\lun1  c7t1d1       SP A1      unlic   dead        0      1
   7 port7\path0\tgt2\lun1  c7t2d1       SP B0      active  dead        0      1
   7 port7\path0\tgt3\lun1  c7t3d1       SP B1      unlic   alive       0      0

However, a usable path for unlicensed PowerPath to this LUN still exists: the path from NIC "iSCSI108" (10.14.108.78) to VNX series and CLARiiON SP port A0. Therefore, PowerPath will trespass the LUN to SP A and successfully direct IO to this LUN through this surviving path.

If NIC “iSCSI109” is subsequently repaired, Windows will re-establish iSCSI sessions to VNX series and CLARiiON SP ports B0 and A1, and unlicensed PowerPath will mark these paths as alive. Additionally, since unlicensed PowerPath now has a healthy path to this LUN’s default SP, PowerPath will auto-restore this LUN by trespassing it back to SP B and then directing all iSCSI traffic to the path from NIC “iSCSI109” to VNX series and CLARiiON SP port B0.

Microsoft Cluster Server

Microsoft Cluster Server (MSCS) shared storage (including the quorum disk) can be implemented using iSCSI disk volumes. No special iSCSI cluster or application configuration is needed to support this scenario. Because the cluster service manages application dependencies, it is not necessary to make any cluster-managed service (or the cluster service itself) dependent upon the Microsoft iSCSI service.

Microsoft MPIO and the Microsoft iSCSI DSM can be used with MSCS.

Note: Microsoft does not support the use of iSCSI Server clusters with Windows 2000.

File shares/DFS

There is a special requirement if file shares are exposed on iSCSI disk volumes using the Microsoft software iSCSI Initiator. For example, if you have an iSCSI disk volume exposed as drive I: with a file share point I:\Documents, then on Windows 2000 you must configure the LAN Manager Server service to have a dependency on the msiscsi (Microsoft iSCSI Initiator) service. When using the Microsoft iSCSI Software Initiator 2.0 with Windows 2003, this dependency is not needed.
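As a sketch, the dependency can be added with the `sc` utility. Note that `depend=` replaces the service's entire dependency list, so the existing dependencies must be included; the SamSS/Srv list below is a typical default and is an assumption, not taken from this guide. Query the current list first:

```
:: Show the Server service's current dependency list before changing it.
sc qc LanmanServer

:: Hypothetical sketch: "depend=" REPLACES the whole list, so keep the
:: existing entries (SamSS/Srv here are assumed defaults) and add MSiSCSI.
sc config LanmanServer depend= SamSS/Srv/MSiSCSI
```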

Dynamic disks

Note: Applies to Windows 2000 and Windows Server 2003.

Configuring volumes on iSCSI disks as Dynamic disk volumes using the Microsoft software iSCSI Initiator is not currently supported. It has been observed that timing issues may prevent dynamic disk volumes on iSCSI disks from being reactivated at system startup.

Boot

Currently, it is not possible to boot a Windows system from an iSCSI disk volume provided by the Microsoft software iSCSI Initiator kernel-mode driver. The only supported method for booting a Windows system from an iSCSI disk volume is via a supported iSCSI HBA.

NIC teaming

Microsoft does not support the use of NIC teaming on iSCSI interfaces.


Using the Initiator with EMC PowerPath v4.6.x or later

This section provides an overview of how to set up the Initiator for use with PowerPath 4.6 and describes how to use the Initiator to create the paths that PowerPath then takes control of. This setup can be compared to creating zones in a Fibre Channel environment.

Note: PowerPath version 5.1 SP2 is the minimum version required for support on Windows Server 2008. Upon installing PowerPath, the installer will load the MPIO feature and claim all disks associated with the Microsoft Initiator. During the install a DOS window will open; at this point, the MPIO feature is being loaded. Do not close this window. Once the MPIO feature is installed, PowerPath will close the window automatically.

There are no manual steps that need to be done to configure MPIO. PowerPath will perform all the required steps as part of the PowerPath install.

The Initiator allows you to log in multiple paths to the same target and aggregate the duplicate devices into a single device exposed to Windows. Each path to the target can be established using different NICs, network infrastructure, and target ports. If one path fails then another session can continue processing I/O without interruption to the application. It is PowerPath that aggregates these paths and manages them to provide uninterrupted access to the device.

PowerPath 4.6.x uses the Microsoft MPIO framework that is installed with the Initiator in conjunction with EMC's DSM to provide multipathing functionality.


This section is based on the following assumptions:

◆ The following two network cards are installed in the server:

• NIC1 (192.168.150.155)

• NIC2 (192.168.150.156)

◆ We will log into the target on four different ports:

• A0 (192.168.150.102)

• A1 (192.168.150.103)

• B0 (192.168.150.104)

Notes on Microsoft iSCSI Initiator 203

Page 204: EMC Host Connectivity Guide for Windows · PDF fileFederated Live Migration ... DMX, VMAX 40K, ... Connectivity Guide for Windows. EMC Host Connectivity Guide for Windows. EMC Host

204

iSCSI Attach Environments

• B1 (192.168.150.105).

Logging into a target is described in “Examples” on page 145 of this document.

◆ We will log into the target to create four separate paths:

• NIC1 → A0

• NIC2 → A1

• NIC1 → B0

• NIC2 → B1
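The four pairings above can be checked for SP coverage with a small sketch (the IP addresses come from the example assumptions earlier in this section):

```python
# NIC-to-target-port pairings listed above: (NIC, source IP, SP port, target IP).
paths = [
    ("NIC1", "192.168.150.155", "A0", "192.168.150.102"),
    ("NIC2", "192.168.150.156", "A1", "192.168.150.103"),
    ("NIC1", "192.168.150.155", "B0", "192.168.150.104"),
    ("NIC2", "192.168.150.156", "B1", "192.168.150.105"),
]

# Each NIC reaches one port on each SP, so the loss of either NIC still
# leaves a live path to both SP A and SP B:
for failed_nic in ("NIC1", "NIC2"):
    surviving_sps = {port[0] for nic, _, port, _ in paths if nic != failed_nic}
    assert surviving_sps == {"A", "B"}
```

This is the property PowerPath relies on later in this section when it fails over after the loss of NIC1.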

Follow these steps to log in to the target.

1. Select the port to log into and click Log On.

Figure 58 iSCSI Initiator Properties dialog box


The Log On to Target dialog box displays.

Figure 59 Log On to Target dialog box

2. Click Advanced.


The Advanced Settings dialog box displays.

Figure 60 Advanced Settings dialog box

3. For the source IP choose the IP address associated with NIC1 and the Target Portal (port) associated with port A0 on the target. Enter CHAP information, if needed, and click OK.

To create the other paths to the array repeat the steps above for each path you want to create.


In our example, you would choose a different port on the Targets tab (A1, B0, and so on).

Note: You can only log into a port once from a host. Multiple logins are not supported.

On the Advanced Settings page choose your source and Target IPs.

Once completed, you should have four paths as shown in Figure 61.

Figure 61 Four paths

You can now install PowerPath. PowerPath will aggregate all four paths and present one device to Windows Disk Management. From a host perspective, a failure of a NIC, or of the path between a NIC and the target, will be managed properly by PowerPath, which maintains access to the device on the target. For example, if NIC1 fails, the host still has active paths to both SPs, since NIC2 has a path to SP ports A1 and B1.


Commonly seen issues

This section lists some of the common issues E-Lab has seen during testing. For a more detailed list, refer to the Microsoft Initiator 2.0 Users Guide.

Multipathing

This section lists multipathing errors that may be recorded in the event logs and discusses potential solutions for these error conditions. Although EMC does not support multiple connections per session (MCS) on VNX series, CLARiiON, and Symmetrix, it is easy to confuse the configuration of MPIO with that of MCS. The following errors might indicate that you are actually configuring MCS instead of MPIO.

◆ Error: "Too many Connections" when you attempt to add a second connection to an existing session.

This issue can occur if the target does not support multiple connections per session (MCS). Consult with the target vendor to see if they plan on adding support for MCS.

◆ When you attempt to add a second connection to an existing session, you may notice that the Add button within the Session Connections window is grayed out.

This issue can occur if you logged onto the target using an iSCSI HBA that doesn't support MCS.

For more information on MCS vs. MPIO, please refer to the Microsoft iSCSI Software Initiator 2.x Users Guide located at http://microsoft.com.

Long boot time

If your system takes a long time to display the login prompt after booting, or takes a long time to log in after you enter your login ID and password, there may be an issue with the Microsoft iSCSI Initiator service starting. First see the "Running automatic start services on iSCSI disks" section in the Microsoft Initiator 2.x Users Guide for information about persistent volumes and the binding operation. Check the system event log for the event "Timeout waiting for iSCSI persistently bound volumes…". If this event is present, one or more of the persistently bound volumes did not reappear after reboot, which could be due to a network or target error.


Another error you may see on machines that are slow to boot is the event log message "Initiator Service failed to respond in time to a request to encrypt or decrypt data" if you have persistent logins that are configured to use CHAP. Additionally, the persistent login will fail to log in. This is due to a timing issue in the service startup order. To work around this issue, increase the IPSecConfigTimeout value in the registry under:

HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters

This has been seen in some cases where clusters are present.
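As a sketch, the change might look like the following .reg fragment. The instance number (0000) and the 60-second value are illustrative assumptions, since the guide does not give a recommended timeout:

```
Windows Registry Editor Version 5.00

; The instance subkey varies per system (often 0000); 0x3c = 60 seconds is
; an illustrative value, not a documented recommendation.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters]
"IPSecConfigTimeout"=dword:0000003c
```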

Logging out of target

The MS iSCSI Initiator service will not allow a session to be logged out if any handles are open to the device. If a logout is attempted in this case, the error "The session cannot be logged out since a device on that session is currently being used." is reported. This means that an application or device has an open handle to the physical disk on the target. The system event log should contain an event naming the device with the open handle.

Other event log errors of note

The source for system events logged by the Software Initiator will be iScsiPrt. The message in the log conveys the cause of the event.

Some of the common events are listed below. A complete list of events can be found in the Microsoft Initiator User’s Guide 2.x.

Event ID 1 Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.

This event is logged when the Initiator could not make a TCP connection to the given target portal. The dump data in this event will contain the target IP address and TCP port to which Initiator could not make a TCP connection.

Event ID 9 Target did not respond in time for a SCSI request. The CDB is given in the dump data.

This event is logged when the target did not complete a SCSI command within the timeout period specified by the SCSI layer. The dump data will contain the SCSI opcode corresponding to the SCSI command. Refer to the SCSI specification for more information about the command.


Event ID 20 Connection to the target was lost. The Initiator will attempt to retry the connection.

This event is logged when the Initiator loses its connection to the target while the connection was in the iSCSI Full Feature Phase. This event typically happens when there are network problems, a network cable is removed, a network switch is shut down, or the target resets the connection. In all cases, the Initiator will attempt to reestablish the TCP connection.

Event ID 34 A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.

This event is logged when the Initiator successfully reestablishes a TCP connection to the target.

Troubleshooting

Note the following potential problems and their solutions.

Problem Adding a Symmetrix IP address to the target portals returns initiator error.

Solution

◆ Verify that the iSCSI login parameter is correct.

◆ Verify that the Volume Logix setup is correct.

Problem Adding a Symmetrix IP address to the target portals returns authentication error.

Solution

◆ Verify that the CHAP feature is enabled, and that the user name and secret are correct.

◆ Verify that the Symmetrix system has the correct user name and secret.

Problem Login to the Symmetrix target returns The target had already been logged via a Symmetrix.

Solution

◆ Press the Refresh button to verify that only one iSCSI session is established.

◆ Log out of the current iSCSI session to the Symmetrix system, and log in again.


Problem Adding a target portal returns Connection Failed.

Solution

◆ Make sure the IP address of the Symmetrix system is correct.

◆ Verify connectivity by using the ping utility from the host to the Symmetrix GE port and vice versa.

Problem File shares on iSCSI devices may not be re-created when you restart your computer

Solution

This issue can occur when the iSCSI Initiator service is not yet initialized when the Server service initializes. The Server service creates file shares; because the iSCSI disk devices are not yet available, the Server service cannot create file shares for iSCSI devices until the iSCSI service is initialized. To resolve this issue, follow these steps on the affected server:

1. Make the Server service dependent on the iSCSI Initiator service.

2. Configure the BindPersistentVolumes option for the iSCSI Initiator service.

3. Configure persistent logons to the target. To do this, use one of the following methods.

Method 1

a. Double-click iSCSI Initiator in Control Panel.

b. Click the Available Targets tab.

c. Click a target in the Select a target list, and then click Log On.

d. Click to select the Automatically restore this connection when the system boots check box.

e. Click OK.

Method 2

a. Click Start, click Run, type cmd, and then click OK.

b. At the command prompt, type the following command, and then press ENTER:

iscsicli PersistentLoginTarget target_iqn T * * * * * * * * * * * * * * * 0

Note: target_iqn is the iSCSI qualified name (iqn) of the target.


Note: This resolution applies only when you specifically experience this issue with the iSCSI Initiator service. Refer to Microsoft Knowledge Base article 870964 for more information.

Problem On a Microsoft Windows Server 2003-based computer, if you install Microsoft iSCSI Initiator version 2.04, or an earlier version, and then you install a Windows Server 2003 service pack, you cannot use the Add/Remove Programs item in Control Panel to uninstall iSCSI Initiator.

Solution

Follow the information found at: http://support.microsoft.com/kb/939749/en-us

Problem Slow performance may occur during network congestion when RFC 1122-delayed acknowledgements extend the error recovery process. In these situations, the default 200 millisecond delay on the acknowledgement can significantly impact read bandwidth. PowerPath, by load-balancing read requests across multiple array ports, increases the likelihood that simultaneous read completions from multiple ports will result in network congestion. This increases the likelihood of experiencing the problem.

As specified in RFC 1122, Microsoft TCP uses delayed acknowledgments to reduce the number of packets that are sent on the media. Instead of sending an acknowledgment for each TCP segment received, TCP in Windows 2000 and later takes a common approach to implementing delayed acknowledgments. As data is received by TCP on a particular connection, it sends an acknowledgment back only if one of the following conditions is true:

◆ No acknowledgment was sent for the previous segment received.

◆ A segment is received, but no other segment arrives within 200 milliseconds for that connection.

Typically, an acknowledgment is sent for every other TCP segment that is received on a connection unless the delayed ACK timer (200 milliseconds) expires. You can adjust the delayed ACK timer by editing the registry as outlined in the workaround below.
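The delayed-ACK decision described above can be sketched as follows. This is a simplification for illustration; real TCP stacks apply additional conditions:

```python
def should_ack(prev_segment_unacked: bool, ms_since_segment: float) -> bool:
    """Delayed-ACK rule from RFC 1122 as described above (simplified):
    acknowledge immediately if the previous segment has not been ACKed yet,
    otherwise wait for the 200 ms delayed-ACK timer to expire."""
    return prev_segment_unacked or ms_since_segment >= 200.0

# Every other segment is ACKed immediately; a lone segment waits out the timer:
assert should_ack(True, 0.0)        # previous segment still unacknowledged
assert not should_ack(False, 50.0)  # timer has not expired yet
assert should_ack(False, 200.0)     # 200 ms timer expired
```

Setting TcpAckFrequency to 1, as in the workaround below, effectively removes the 200 ms wait for iSCSI interfaces.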

Solution

Modify the TCP/IP settings for the network interfaces carrying iSCSI traffic to immediately acknowledge incoming TCP segments. This workaround solves the read performance issue.


The procedure to modify the TCP/IP settings is different for Windows 2000 servers and Windows 2003 servers. Follow the directions appropriate for the version running on your servers. The procedure for Windows Server 2003 SP1 or later begins on page 213.

Note: These TCP/IP settings should not be modified for network interfaces not carrying iSCSI traffic as the increased acknowledgement traffic may negatively affect other applications.

Microsoft iSCSI clusters also require array software revision 3.22.xxx.5.511 or 3.24.xxx.5.007 or later.

CAUTION! This workaround contains information about modifying the registry. Before you modify the registry, be sure to back it up and make sure that you understand how to restore it if a problem occurs. For information about how to back up, restore, and edit the registry, see the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/256986/.

The registry change settings in the following steps are recommended for all Microsoft iSCSI configurations.

On a server that runs Windows Server 2003 SP1 or later, follow these steps:

1. Start Registry Editor (Regedit.exe).

2. Locate and then click the following registry subkey:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces

The interfaces are listed underneath by automatically generated GUIDs, such as {064A622F-850B-4C97-96B3-0F0E99162E56}.

3. Click each of the interface GUIDs and perform the following steps:

a. Check the IPAddress or DhcpIPAddress parameters to determine whether the interface is used for iSCSI traffic. If not, skip to the next interface.

b. On the Edit menu, point to New, and then click DWORD value.


c. Name the new value TcpAckFrequency, and assign it a value of 1.

4. Exit the Registry Editor.

5. Restart Windows for this change to take effect.
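Steps 1 through 5 can equivalently be captured in a .reg file. The interface GUID below is the illustrative one from step 2, not a value from your system:

```
Windows Registry Editor Version 5.00

; Apply only to interfaces carrying iSCSI traffic (see the Note above).
; The GUID below is the example from step 2; substitute your interface's GUID.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{064A622F-850B-4C97-96B3-0F0E99162E56}]
"TcpAckFrequency"=dword:00000001
```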

Problem Cannot uninstall the Initiator using Add/Remove programs.

Solution

Refer to the article "You cannot use Add/Remove Programs on a Windows Server 2003-based computer to uninstall iSCSI Initiator after you install a service pack," located at:

http://support.microsoft.com/kb/939749/en-us


4  EMC Symmetrix, VNX Series, CLARiiON, and Celerra Information

This chapter provides procedures and information specific to the EMC Symmetrix, VNX series, CLARiiON, and Celerra environments.

◆ Symmetrix environment ................................................................. 216
◆ VNX series and CLARiiON environment .................................... 222
◆ Celerra environment ....................................................................... 228


Symmetrix environment

This section contains the following information:

◆ “Initial Symmetrix configuration” on page 216

◆ “Arbitrated loop addressing” on page 217

◆ “Fabric addressing” on page 219

◆ “SCSI-3 FCP addressing” on page 220

Initial Symmetrix configuration

The Symmetrix system is configured by an EMC Customer Engineer via the Symmetrix service processor.

Refer to the EMC Support Matrix for required bit settings on Symmetrix channel directors.

The EMC Customer Engineer (CE) should contact the EMC Configuration Specialist for updated online information. This information is necessary to configure the Symmetrix system to support the customer’s host environment.

Symmetrix SPC-2 director bit considerations

EMC Enginuity™ code versions 5671.58.64 (and later) for DMX and DMX-2, and 5771.87.95 (and later) for DMX-3, provide support for compliance with newer SCSI protocol specifications; specifically, SCSI Primary Commands - 2 (SPC-2) as defined in the SCSI document at http://www.t10.org.

The SPC-2 implementation in Enginuity includes functionality that, depending on OS and application support, may enhance disk-attach behavior by using newer SCSI commands optimized for a SAN environment (as implemented in Fibre Channel). This contrasts with legacy (non-SPC-2) functionality, which targets older SCSI implementations using physical SCSI bus-based connectivity and cannot leverage the enhanced functionality of newer SCSI specifications.

In environments sharing director ports between hosts with multiple vendor operating systems, ensure that all hosts' operating systems are capable of supporting the SPC-2 functionality before enabling it on the port. If any OS sharing the affected director port does not support SPC-2 functionality, then the SPC-2 bit cannot be set on a per-port basis and must instead be set on a per-initiator basis using Solutions Enabler 6.4 CLI. Refer to the EMC Solutions Enabler Symmetrix Array Management CLI Product Guide, located on Powerlink, for details on setting the SPC-2 bit on a per-initiator basis.

SPC-2 must be enabled globally, for all initiators on a per-host basis: if SPC-2 conformance is enabled for a specific Symmetrix LUN visible to a specific host, SPC-2 conformance must be enabled for all paths to that same LUN from that same host.

SPC-2 director bit support for Windows

The SPC-2 director bit setting is required in all EMC-supported Windows OS versions.

The functionality provided by the SPC-2 director bit became a requirement for Windows 2003 Service Pack 1 (or later) in specific software environments that use Volume Shadow Copy Services. This functionality was not a requirement for Windows 2000 or the original "RTM" release of Windows 2003 (without an installed Service Pack). Refer to your software documentation regarding these requirements.

Note: While the SPC-2 functionality may affect only a subset of Windows functionality, all new Enginuity releases have been Microsoft certified for the "Windows Server Catalog" (formerly "Hardware Compatibility List") with the SPC-2 director bit functionality enabled.

Some Veritas Volume Manager and Veritas Storage Foundation configurations may show incorrect LUN information on devices numbered higher than 8000 when attached to Symmetrix arrays if the SPC-2 director bit is disabled. Refer to EMC Knowledgebase article emc179861 for more information on this occurrence.

Arbitrated loop addressing

The Fibre Channel arbitrated loop (FC-AL) topology defines a method of addressing ports, arbitrating for use of the loop, and establishing a connection between Fibre Channel NL_Ports (level FC-2) on HBAs in the host and Fibre Channel directors (via their adapter cards) in the Symmetrix system. Once loop communications are established between the two NL_Ports, device addressing proceeds in accordance with the SCSI-3 Fibre Channel protocol (SCSI-3 FCP, level FC-4).


The Loop Initialization Process (LIP) assigns a physical address (AL_PA) to each NL_Port in the loop. Ports that have a previously acquired AL_PA are allowed to keep it. If the address is not available, another address may be assigned, or the port may be set to non-participating mode.

Note: The AL_PA is the low-order 8 bits of the 24-bit address. (The upper 16 bits are used for Fibre Channel fabric addressing only; in FC-AL addresses, these bits are x'0000'.)
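As the note states, the AL_PA is simply the low byte of the 24-bit address; a one-line illustration:

```python
def al_pa(fc_address: int) -> int:
    """Return the AL_PA: the low-order 8 bits of a 24-bit Fibre Channel address."""
    return fc_address & 0xFF

# In FC-AL the upper 16 bits are x'0000', so the whole address equals the AL_PA:
assert al_pa(0x0000E8) == 0xE8
```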

Symmetrix Fibre Channel director parameter settings, shown in Table 1 on page 218, control how the Symmetrix system responds to the LIP.

After the loop initialization is complete, the Symmetrix port can participate in a logical connection using the hard-assigned or soft-assigned address as its unique AL_PA. If the Symmetrix port is in non-participating mode, it is effectively off line and cannot make a logical connection with any other port.

A host initiating I/O with the Symmetrix system uses the AL_PA to request an open loop between itself and the Symmetrix port. Once the arbitration process has established a logical connection between the Symmetrix system and the host, addressing specific logical devices is done through the SCSI-3 FCP.

FC-AL addressing parameters

Table 1 describes the FC-AL addressing parameters.

Table 1 FC-AL addressing parameters

◆ Disk Array (bit A; default: Enabled). If enabled, the Fibre Channel director presents the port as a disk array. Refer to Table 2 on page 221 for settings for each addressing mode.

◆ Volume Set (bit V; default: Enabled). If enabled and Disk Array is enabled, Volume Set addressing mode is enabled. Refer to Table 2 on page 221 for settings for each addressing mode.

◆ Use Hard Addressing (bit H; default: Enabled). If enabled, entering an address (00 through 7D) in the Loop ID field causes the port to attempt to get the AL_PA designated by the Loop ID. If the port does not acquire the AL_PA, the Symmetrix reacts based on the state of the non-participating (NP) bit: if the NP bit is set, the port switches to non-participating mode and is not assigned an address. If non-participating mode is not selected, or if the H bit was not set, the Symmetrix port accepts the new address soft-assigned by the host port.

◆ Hard Addressing Non-participating (bit NP; default: Disabled). If enabled and the H bit is set, the director uses only the hard address. If it cannot get this address, it re-initializes and changes its state to non-participating. If the NP bit is not set, the director accepts the soft-assigned address.

◆ Loop ID (no bit; default: 00). One-byte address (00 through 7D), valid only if the H bit is set.

◆ Third-party Logout Across the Port (bit TP; default: Disabled). Allows broadcast of the TPRLO extended link service through all of the FC-AL ports.

Fabric addressing

Each port on a device attached to a fabric is assigned a unique 64-bit identifier called a World Wide Port Name (WWPN). These names are factory-set on the HBAs in the hosts, and are generated on the Fibre Channel directors in the Symmetrix system.

Note: For comparison to Ethernet terminology, an HBA is analogous to a NIC, and a WWPN to a MAC address.

Note: The ANSI standard also defines a World Wide Node Name (WWNN), but this name has not been consistently defined by the industry.

When an N_Port (host server or storage device) connects to the fabric, a login process occurs between the N_Port and the F_Port on the fabric switch. During this process, the devices agree on such operating parameters as class of service, flow control rules, and fabric addressing. The N_Port's fabric address is assigned by the switch and sent to the N_Port. This value becomes the Source ID (SID) on the N_Port's outbound frames and the Destination ID (DID) on the N_Port's inbound frames.

The physical address is a pair of numbers that identify the switch and port, in the format s,p, where s is a domain ID and p is a value associated with a physical port in the domain. The physical address of the N_Port can change when a link is moved from one switch port to another switch port. The WWPN of the N_Port, however, does not change. A Name Server in the switch maintains a table of all logged-in devices, so N_Ports can automatically adjust to changes in the fabric address by keying off the WWPN.
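The 24-bit N_Port fabric address can be pictured as three one-byte fields, where the top byte is the domain ID (the s in the s,p physical address). The domain/area/port split is the standard Fibre Channel layout, stated here as an assumption rather than drawn from this guide:

```python
def split_fabric_address(addr: int) -> tuple:
    """Split a 24-bit fabric address into its (domain, area, port) bytes.
    The domain byte corresponds to the switch's domain ID."""
    return (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF

# Domain 1, area 4, port 0:
assert split_fabric_address(0x010400) == (1, 4, 0)
```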


The highest level of login that occurs is the process login. This is used to establish connectivity between the upper-level protocols on the nodes. An example is the login process that occurs at the SCSI FCP level between the HBA and the Symmetrix system.

SCSI-3 FCP addressing

The Symmetrix director extracts the SCSI Command Descriptor Blocks (CDBs) from the frames received through the Fibre Channel link. Standard SCSI-3 protocol is used to determine the addressing mode and to address specific devices.

The Symmetrix supports three addressing methods based on a single-layer hierarchy as defined by the SCSI-3 Controller Commands (SCC):

◆ Peripheral Device Addressing◆ Logical Unit Addressing◆ Volume Set Addressing

All three methods use the first two bytes (0 and 1) of the 8-byte LUN addressing structure. The remaining six bytes are set to 0s.
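The 8-byte LUN structure described above (address in bytes 0 and 1, zeros elsewhere) can be illustrated as follows; the example byte values are illustrative, not taken from this guide:

```python
def lun_field(byte0: int, byte1: int) -> bytes:
    """Build the 8-byte SCSI LUN field: bytes 0 and 1 carry the address,
    and the remaining six bytes are set to 0, as described above."""
    return bytes([byte0, byte1] + [0] * 6)

assert lun_field(0x40, 0x05) == b"\x40\x05" + b"\x00" * 6
assert len(lun_field(0x00, 0x00)) == 8
```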

For Logical Unit and Volume Set addressing, the Symmetrix port identifies itself as an Array Controller in response to a host’s Inquiry command sent to LUN 00. This identification is done by returning the byte 0x0C in the Peripheral Device Type field of the returned data for Inquiry. If the Symmetrix system returns the byte 0x00 in the first byte of the returned data for Inquiry, the Symmetrix system is identified as a Direct Access device.

Upon identifying the Symmetrix system as an Array Controller device, the host should issue a SCSI-3 Report LUNs command (opcode 0xA0) to discover the LUNs.

The three addressing modes, contrasted in Table 2 on page 221, differ in the addressing scheme (target ID, LUN and virtual bus) and number of addressable devices.
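As an illustration of the 8-byte LUN structure described above, the following minimal Python sketch packs the addressing-mode code into bits 7-6 of byte 0 and, as an assumption made here for illustration, treats the remaining 14 bits of bytes 0 and 1 as the device address (consistent with the 16,384 = 2^14 possible addresses noted in Table 2); bytes 2 through 7 are zero. The function name and any layout detail beyond the mode code are illustrative, not taken from the Symmetrix specification.

```python
# Illustrative sketch of the 8-byte SCSI-3 LUN field described in the text.
# Assumption: the 2-bit addressing-mode code occupies bits 7-6 of byte 0,
# the remaining 14 bits of bytes 0-1 carry the device address, and bytes
# 2-7 are zero.

MODE_CODES = {"peripheral": 0b00, "volume_set": 0b01, "logical_unit": 0b10}

def pack_lun(mode: str, address: int) -> bytes:
    """Pack an addressing mode and 14-bit address into an 8-byte LUN field."""
    if not 0 <= address < 1 << 14:
        raise ValueError("address must fit in 14 bits")
    word = (MODE_CODES[mode] << 14) | address
    # Bytes 0 and 1 carry the mode code and address; bytes 2-7 are zero.
    return bytes([word >> 8, word & 0xFF] + [0] * 6)
```

For example, `pack_lun("volume_set", 5)` yields bytes 0x40 0x05 followed by six zero bytes, since the Volume Set code 01 lands in the top two bits of byte 0.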

EMC Host Connectivity Guide for Windows


a. Bits 7-6 of byte 0 of the address.

b. The actual number of supported devices may be limited by the type of host or host bus adapter used.

Note: The addressing modes are provided to allow flexibility in interfacing with various hosts. In all three cases, the received address is converted to the internal Symmetrix addressing structure. Volume Set addressing is the default for Symmetrix systems. Select the addressing mode that is appropriate to your host.

Federated Live Migration (FLM)

Federated Live Migration (FLM) allows nondisruptive data movement from an older Symmetrix system to a Symmetrix VMAX™ or VMAXe™ array running Enginuity 5875 or later, or to a VMAX 40K or VMAX 20K array running Enginuity 5876, with no application downtime and no need to load migration or virtualization software on any connected host.

Federated Live Migration supports migrations with Zero Space Reclamation.

Federated Live Migration makes use of Open Replicator to move the data between the Symmetrix arrays and PowerPath, or other multipathing solutions, to manage host access to the arrays while the migration is taking place.

Table 2 Symmetrix SCSI-3 addressing modes

◆ Peripheral Device addressing: Code(a) = 00, "A" bit = 0, "V" bit = X. Responds to Inquiry with 0x00 (Direct Access). LUNs are accessed directly. 16,384 possible addresses; maximum of 256 logical devices.(b)

◆ Logical Unit addressing: Code = 10, "A" bit = 1, "V" bit = 0. Responds to Inquiry with 0x0C (Array Controller). The host issues a "Report LUNs" command to discover LUNs. 2,048 possible addresses; maximum of 128 logical devices.

◆ Volume Set addressing: Code = 01, "A" bit = 1, "V" bit = 1. Responds to Inquiry with 0x0C (Array Controller). The host issues a "Report LUNs" command to discover LUNs. 16,384 possible addresses; maximum of 512 logical devices.


VNX series and CLARiiON environment

This section describes VNX series-specific and CLARiiON-specific configuration and support information.

Storage components

The basic components of a storage system configuration for VNX series and CLARiiON systems are:

◆ One or more storage systems.

◆ One or more servers connected to the storage system(s), directly or through switches.

◆ For Navisphere 6.X, a host that is running an operating system that supports the Unisphere/Navisphere Manager browser-based client and is connected over a LAN to storage-system servers and CX-Series storage systems. For a current list of such operating systems, refer to the EMC Navisphere Manager 6.X Release Notes located on Powerlink.

Note: The procedures described in this document assume that all hardware equipment (for example: switches and storage systems) used in the documented configuration is already installed and connected.

Required storage system setup

VNX series and CLARiiON product documentation and installation procedures for connecting a VNX series and CLARiiON storage system to a server are available on Powerlink.

Note: EMC recommends that you download the latest information before you install any server.

Host connectivity

Refer to the EMC Support Matrix or contact your EMC representative for the most up-to-date information on qualified switches, hosts, host bus adapters, and software.


The latest EMC-approved HBA drivers and software are available for download at the following websites:

http://www.emulex.com

http://www.qlogic.com

http://www.brocade.com

The EMC HBA installation and configuration guides are available at the EMC-specific download pages of these websites.

LUNZ visibility to host

In the VNX series and CLARiiON context, LUNZ refers to a fake logical unit zero presented to the host to provide a path for host software to send configuration commands to the array when no physical logical unit zero is available to the host.

When EMC Access Logix™ is used on a VNX series and CLARiiON system, an agent runs on the host and communicates with the storage system through either LUNZ or a storage device. On a VNX series and CLARiiON system, the LUNZ device is replaced when a valid LUN is assigned to the HLU LUN0 by the storage group. The agent then communicates through the storage device.

LUNZ has been implemented on VNX series and CLARiiON systems to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array. When using a direct-connect configuration, and there is no Unisphere/Navisphere Management station to talk directly to the array over IP, the LUNZ can be used as a pathway for Navisphere CLI to send Bind commands to the array.

Refer to EMC Knowledgebase solution emc65060 for a discussion of this topic.

Unisphere/Navisphere Windows host agent and server utility

The type of storage system your Windows host is attached to determines whether you use the host agent or the server utility.

If you have an AX-series storage system running Unisphere/Navisphere Express, you must install the server utility. The host agent is supported only on CX3-Series, CX-Series, and AX-Series storage systems running Unisphere/Navisphere Manager.


About Unisphere/Navisphere host agent

The host agent registers the server's HBA (host bus adapter) with the attached storage system when the host agent service starts. This action sends the initiator records for each HBA to the storage system. Initiator records are used to control access to storage-system data. For legacy storage systems, the host agent will send the initiator records only if Access Logix software is installed. The agent can then automatically retrieve information from the storage system at startup or when requested by Manager or CLI.

The host agent can also:

◆ Send drive mapping information to the attached VNX series and CLARiiON storage systems.

◆ Monitor storage system events and notify personnel by email, page, or modem when any designated event occurs.

◆ Retrieve LUN WWN (world wide name) and capacity information from Symmetrix storage systems.

The host agent runs on servers attached to CX3-Series and CX-Series. It also runs on servers attached to AX-series storage systems that have been upgraded to Unisphere/Navisphere Manager, (that is, that have the Unisphere/Navisphere Manager enabler installed).

About Unisphere/Navisphere storage system initialization utility

For CX3-series and CX-Series Fibre Channel storage systems, use the utility to discover storage systems and set network parameters (IP address, subnet mask, and default gateway). In addition, for CX3-series storage systems with iSCSI data ports attached to Windows servers, use the utility to set network parameters for these ports.

Note: For CX-Series storage systems, an authorized service provider must install and run the Initialization Utility.

Detailed information about the Unisphere/Navisphere Host Agent and Unisphere/Navisphere Storage System Initialization Utility can be found in the EMC CLARiiON Server Support Products for Windows Server Installation Guide, available on Powerlink. This guide describes how to install and remove the Unisphere/Navisphere Host Agent, Unisphere/Navisphere Storage System Initialization Utility, Unisphere/Navisphere Server Utility, Navisphere Command Line Interface (CLI), VSS Provider (Windows Server 2003 only), admhost, and admsnap software on a Microsoft server running the Windows Server 2008, Windows Server 2003, or Windows 2000 operating system.

For information on supported operating system versions and server software with a CX4 Series, CX3 UltraScale Series, or CX Series storage system, refer to the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com for the most up-to-date information. For AX4-5 Series or AX150 Series storage systems, refer to the Support Matrix link on the Install page of the storage-system support website. For AX100 Series systems, refer to the Supported Configurations in the "Technical descriptions" section of the storage-system support website.

Unisphere/Navisphere Management Suite

The Unisphere/Navisphere Management Suite of integrated software tools allows customers to manage, provision, monitor, and configure VNX series and CLARiiON systems; bind LUNs; create RAID groups and storage groups; assign LUNs; and control all platform replication applications from an easy-to-use, secure, web-based management console. Unisphere/Navisphere-managed VNX series and CLARiiON platform applications include Navisphere Analyzer, EMC SnapView™, EMC MirrorView™, and EMC SAN Copy™.

More information about Unisphere and Navisphere is available on Powerlink.

LUN expansion recognition

LUN expansion is accomplished with Unisphere/Navisphere Manager or CLI using the VNX series and CLARiiON MetaLUN feature. A MetaLUN is a type of LUN whose capacity is the combined capacities of all the LUNs that comprise it. Currently, MetaLUNs are supported only on CX3-series and CX-Series storage systems.

The MetaLUN feature lets you dynamically expand the capacity of a single LUN (base LUN) into a larger unit called a MetaLUN. You do this by adding additional LUNs to the base LUN. You can also add additional LUNs to a MetaLUN to further increase its capacity. During the expansion operation, you can still access the existing data, but you cannot access the additional capacity until the expansion is complete. Like a LUN, a MetaLUN can belong to a Storage Group and can participate in SnapView, MirrorView, and SAN Copy sessions.
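The capacity rules above can be sketched as follows. This is an illustrative model only, not an EMC API: a MetaLUN's capacity is the sum of its component LUNs, and capacity added by an expansion that is still in progress is not yet accessible to the host.

```python
# Illustrative model of MetaLUN capacity as described in the text:
# total capacity is the sum of component LUNs, but capacity added by an
# in-progress expansion is not accessible until the expansion completes.
# Class and attribute names are assumptions made for this sketch.

class MetaLUN:
    def __init__(self, base_gb):
        self.components = [base_gb]   # base LUN plus completed additions
        self.pending = []             # LUNs added by an in-progress expansion

    def expand(self, *lun_gbs):
        """Add component LUNs; their capacity stays pending until complete."""
        self.pending.extend(lun_gbs)

    def complete_expansion(self):
        self.components.extend(self.pending)
        self.pending = []

    @property
    def accessible_gb(self):
        # Existing data remains accessible during expansion.
        return sum(self.components)

    @property
    def total_gb(self):
        return sum(self.components) + sum(self.pending)
```

For example, expanding a 100 GB base LUN with 50 GB and 25 GB components leaves 100 GB accessible until `complete_expansion()` runs, after which all 175 GB is available.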

After expanding one or more LUNs through Unisphere/Navisphere Manager, steps must be taken at the host operating system level and at the volume management level before Windows will recognize the increased LUN capacity. These steps have varying levels of impact on normal user operations.

In Windows, partitions are the building blocks for file system volumes. A LUN is viewed at the operating system level as a physical disk, and can contain partitions based on the classification of disk as assigned by the OS.

In Windows, there are two classifications of disks:

◆ Basic: Basic disks have a maximum of four partitions.

◆ Dynamic: Dynamic disks refer to partitions as volumes, and the number of volumes is theoretically unlimited.

These two classifications of disks have different properties with respect to LUN expansion (called Data Volume Extension in Windows documentation).

Refer to the following articles for more information:

◆ Microsoft Knowledge Base article 325590, How to use Diskpart.exe to extend a data volume in Windows Server 2003, in Windows XP, and in Windows 2000, available at http://support.microsoft.com/kb/325590/en-us.

◆ Microsoft Knowledge Base article 175761, Dynamic vs. Basic Storage in Windows 2000 for more information, available at http://support.microsoft.com/kb/175761.

◆ Microsoft Knowledge Base article 323442, How to use the Disk Management Snap-in to manage Basic and Dynamic Disks in Windows Server 2003, available at http://support.microsoft.com/kb/323442/.

◆ Microsoft Knowledge Base article 222189, Description of Disk Groups in Windows Disk Management, available at http://support.microsoft.com/kb/222189.


CAUTION! Windows 2000 systems must be upgraded to Service Pack 4 before using VNX series and CLARiiON MetaLUN expansion with Microsoft Dynamic Disks. Failure to do so can result in loss of dynamic volumes and/or volume data, and can cause system STOP errors. Refer to Microsoft Knowledge Base article 327020 (formerly Q327020), Error Message Occurs When You Start Disk Management After Extending a Hardware Array, for more information.

Configuring Veritas Volume Manager for VNX series and CLARiiON

When using the Veritas Volume Manager (VxVM) with Windows servers and VNX series and CLARiiON systems, keep in mind the following:

◆ The Veritas Volume Manager replaces the native disk manager that is accessed from the system management console. Refer to Veritas documentation for installation and operating instructions for your version.

◆ If the Veritas DMP (Dynamic Multi-Pathing) feature is to be used with a VNX series and CLARiiON system, the minimum supported version of FLARE that supports DMP must be installed on the array. Refer to the EMC Support Matrix for minimum FLARE versions. Also, the Dynamic Multi-Pathing driver update for the EMC VNX series and CLARiiON systems must be installed on the host server. This driver is available for download from http://www.symantec.com.

◆ HBA and array settings for DMP are the same as those that are configured for use with EMC PowerPath. Refer to EMC documentation for your HBA on which settings are used for PowerPath configurations.

Refer to the EMC Support Matrix for supported VxVM versions, service packs, and hotfixes.


Celerra environment

For information on configuring iSCSI targets with an EMC Celerra Network Server, refer to the Configuring iSCSI Targets on EMC Celerra Technical Module, available on Powerlink.


Virtual Provisioning

This chapter provides information about Virtual Provisioning in a Windows environment.

Note: For further information regarding the correct implementation of Virtual Provisioning, refer to the Symmetrix Virtual Provisioning Implementation and Best Practices Technical Note, available on http://Powerlink.EMC.com.

◆ Virtual Provisioning on Symmetrix ... 230

◆ Implementation considerations ... 235

◆ Operating system characteristics ... 240


Virtual Provisioning on Symmetrix

EMC Virtual Provisioning™ enables organizations to improve speed and ease of use, enhance performance, and increase capacity utilization for certain applications and workloads. EMC Symmetrix Virtual Provisioning integrates with existing device management, replication, and management tools, enabling customers to easily build Virtual Provisioning into their existing storage management processes. Figure 62 shows an example of Virtual Provisioning on Symmetrix.

Virtual Provisioning, which marks a significant advancement over technologies commonly known in the industry as “thin provisioning,” adds a new dimension to tiered storage in the array, without disrupting organizational processes.

Figure 62 Virtual Provisioning on Symmetrix


Terminology

This section provides common terminology and definitions for Symmetrix and thin provisioning.

Basic Symmetrix terms include:

◆ Device: A logical unit of storage defined within an array.

◆ Device Capacity: The storage capacity of a 'Device'.

◆ Device Extent: Specifies a quantum of logically contiguous blocks of storage.

◆ Host Accessible Device: A device that can be made available for host use.

◆ Internal Device: A device used for a Symmetrix internal function that cannot be made accessible to a host.

◆ Storage Pool: A collection of 'Internal Devices' for some specific purpose.

Basic thin provisioning terms include:

◆ Thin Device: A 'Host Accessible Device' that has no storage directly associated with it.

◆ Data Device: An 'Internal Device' that provides storage capacity to be used by 'Thin Devices'.

◆ Thin Pool: A collection of 'Data Devices' that provide storage capacity for 'Thin Devices'.

◆ Thin Pool Capacity: The sum of the capacities of the member 'Data Devices'.

◆ Thin Pool Allocated Capacity: A subset of 'Thin Pool Enabled Capacity' that has been allocated for the exclusive use of all 'Thin Devices' bound to that 'Thin Pool'.

◆ Thin Device User Pre-Allocated Capacity: The initial amount of capacity that is allocated when a 'Thin Device' is bound to a 'Thin Pool'. This property is under user control.

◆ Bind: Refers to the act of associating one or more 'Thin Devices' with a 'Thin Pool'.


Management tools

Configuring, replicating, managing, and monitoring thin devices and thin pools involves the same tools and the same or similar functions as those used to manage traditional arrays.

Use Symmetrix Management Console or Solutions Enabler to configure and manage Virtual Provisioning.

Thin device

Symmetrix Virtual Provisioning introduces a new type of host-accessible device called a thin device that can be used in many of the same ways that regular host-accessible Symmetrix devices have traditionally been used. Unlike regular Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the devices are created and presented to a host. The physical storage that is used to supply disk space for a thin device comes from a shared thin storage pool that has been associated with the thin device.

A thin storage pool is comprised of a new type of internal Symmetrix device called a data device that is dedicated to the purpose of providing the actual physical storage used by thin devices. When they are first created, thin devices are not associated with any particular thin pool. An operation referred to as binding must be performed to associate a thin device with a thin pool.

Additional thin provisioning terms include:

◆ Pre-Provisioning: An approach sometimes used to reduce the operational impact of provisioning storage. The approach consists of satisfying provisioning operations with larger devices than needed initially, so that future cycles of the storage provisioning process can be deferred or avoided.

◆ Over-Subscribed Thin Pool: A thin pool whose thin pool capacity is less than the sum of the reported sizes of the thin devices using the pool.

◆ Thin Device Extent: The minimum quantum of storage that must be mapped at a time to a thin device.

◆ Data Device Extent: The minimum quantum of storage that is allocated at a time when dedicating storage from a thin pool for use with a specific thin device.


When a write is performed to a portion of the thin device, the Symmetrix allocates a minimum allotment of physical storage from the pool and maps that storage to a region of the thin device, including the area targeted by the write. The storage allocation operations are performed in small units of storage called data device extents. A round-robin mechanism is used to balance the allocation of data device extents across all of the data devices in the pool that have remaining unused capacity.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the storage pool to which the thin device is bound. Reads directed to an area of a thin device that has not been mapped do not trigger allocation operations; reading an unmapped block returns a block in which every byte is zero. When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage pools. New thin devices can also be created and associated with existing thin pools.
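The round-robin allocation behavior described above can be sketched as follows. This is an illustrative model only: device names, per-device extent counting, and the exception raised on pool exhaustion are assumptions for the sketch, not Symmetrix internals.

```python
# Illustrative sketch of round-robin data-device-extent allocation: each
# new extent comes from the next data device in the pool that still has
# unused capacity, which tends to stripe data across the pool's devices.

class ThinPool:
    def __init__(self, device_extents):
        # device_extents: {data_device_name: number of free extents}
        self.free = dict(device_extents)
        self.order = list(device_extents)
        self.next = 0

    def allocate_extent(self):
        """Return the data device supplying the next extent, round-robin
        over devices with remaining unused capacity."""
        for _ in range(len(self.order)):
            dev = self.order[self.next]
            self.next = (self.next + 1) % len(self.order)
            if self.free[dev] > 0:
                self.free[dev] -= 1
                return dev
        # Every device is fully consumed: the out-of-space condition.
        raise RuntimeError("thin pool exhausted (out-of-space condition)")
```

With devices A (two free extents) and B (one free extent), successive allocations land on A, B, A; a fourth allocation finds no free capacity and raises.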

It is possible for a thin device to be presented for host use before all of the reported capacity of the device has been mapped. It is also possible for the sum of the reported capacities of the thin devices using a given pool to exceed the available storage capacity of the pool. Such a thin device configuration is said to be over-subscribed.

Figure 63 Thin device and thin storage pool containing data devices

In Figure 63, as host writes to a thin device are serviced by the Symmetrix array, storage is allocated to the thin device from the data devices in the associated storage pool. The storage is allocated from the pool using a round-robin approach that tends to stripe data across the data devices in the pool.


Implementation considerations

When implementing Virtual Provisioning, it is important that realistic utilization objectives are set. Generally, organizations should target no higher than 60 percent to 80 percent capacity utilization per pool. A buffer should be provided for unexpected growth or a "runaway" application that consumes more physical capacity than was originally planned for. There should be sufficient free space in the storage pool, equal to the capacity of the largest unallocated thin device.

Organizations should also balance growth against storage acquisition and installation timeframes. It is recommended that the storage pool be expanded before the last 20 percent of the storage pool is utilized, to allow for adequate striping across the existing data devices and the newly added data devices in the storage pool.

Thin devices can be deleted once they are unbound from the thin storage pool. When thin devices are unbound, the space consumed by those thin devices on the associated data devices is reclaimed.

Note: Users should first replicate the data elsewhere to ensure it remains available for use.

Data devices can also be disabled and/or removed from a storage pool. Prior to disabling a data device, all allocated tracks must be removed (by unbinding the associated thin devices). This means that all thin devices in a pool must be unbound before any data devices can be disabled.

The following information is provided in this section:

◆ “Over-subscribed thin pools” on page 236

◆ “Thin-hostile environments” on page 236

◆ “Pre-provisioning with thin devices in a thin hostile environment” on page 237

◆ “Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices” on page 238

◆ “Cluster configurations” on page 239


Over-subscribed thin pools

It is permissible for the amount of storage mapped to a thin device to be less than the reported size of the device. It is also permissible for the sum of the reported sizes of the thin devices using a given thin pool to exceed the total capacity of the data devices comprising the thin pool. In this case, the thin pool is said to be over-subscribed. Over-subscribing allows the organization to present larger-than-needed devices to hosts and applications without having to purchase enough physical disks to fully allocate all of the space represented by the thin devices.

The capacity utilization of over-subscribed pools must be monitored to determine when space must be added to the thin pool to avoid out-of-space conditions.
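A minimal sketch of the monitoring arithmetic: the subscription ratio (sum of reported thin-device sizes over pool capacity) tells you whether a pool is over-subscribed, and the utilization ratio tells you when capacity must be added. The 80 percent expansion threshold follows the general guidance earlier in this chapter; the function name, field names, and units are illustrative assumptions.

```python
# Illustrative sketch of the pool metrics the text says must be monitored.
# Capacity values are in arbitrary units; the 0.80 expansion threshold
# reflects this chapter's "expand before the last 20 percent" guidance.

def pool_metrics(pool_capacity, thin_device_sizes, allocated_capacity):
    subscription = sum(thin_device_sizes) / pool_capacity
    utilization = allocated_capacity / pool_capacity
    return {
        # Over-subscribed: reported thin-device sizes exceed pool capacity.
        "over_subscribed": subscription > 1.0,
        "subscription_ratio": subscription,
        "utilization": utilization,
        # Expand the pool before the last 20 percent is consumed.
        "needs_expansion": utilization >= 0.80,
    }
```

For example, a 100-unit pool backing two 60-unit thin devices with 85 units allocated is over-subscribed (ratio 1.2) and already past the expansion threshold.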

Not all operating systems, filesystems, logical volume managers, multipathing software, and application environments are appropriate for use with over-subscribed thin pools. If the application, or any part of the software stack underlying the application, has a tendency to produce dense patterns of writes to all available storage, thin devices will tend to become fully allocated quickly. If thin devices belonging to an over-subscribed pool are used in this type of environment, undesired out-of-space conditions may be encountered before an administrator can take steps to add storage capacity to the thin pool. Such environments are called thin-hostile.

Thin-hostile environments

There are a variety of factors that can contribute to making a given application environment thin-hostile, including:

◆ One step, or a combination of steps, involved in simply preparing storage for use by the application may force all of the storage that is being presented to become fully allocated.

◆ If the storage space management policies of the application and underlying software components do not tend to reuse storage that was previously used and released, the rate at which underlying thin devices become fully allocated will increase.

◆ Whether any data copy operations (including disk balancing operations and de-fragmentation operations) are carried out as part of the administration of the environment.


◆ If there are administrative operations, such as bad block detection operations or file system check commands, that perform dense patterns of writes on all reported storage.

◆ If an over-subscribed thin device configuration is used with a thin-hostile application environment, the likely result is that the capacity of the thin pool will become exhausted before the storage administrator can add capacity unless measures are taken at the host level to restrict the amount of capacity that is actually placed in control of the application.

Pre-provisioning with thin devices in a thin-hostile environment

In some cases, many of the benefits of pre-provisioning with thin devices can be exploited in a thin-hostile environment. This requires that the host administrator cooperate with the storage administrator by enforcing restrictions on how much storage is placed under the control of the thin-hostile application.

For example:

◆ The storage administrator pre-provisions thin devices larger than initially needed to the hosts, but configures the thin pools with only the storage needed initially. The various steps required to create, map, and mask the devices and make the target host operating systems recognize the devices are performed.

◆ The host administrator uses a host logical volume manager to carve out portions of the devices into logical volumes to be used by the thin-hostile applications.

◆ The host administrator may want to fully preallocate the thin devices underlying these logical volumes before handing them off to the thin-hostile application, so that any storage capacity shortfall is discovered as quickly as possible rather than by way of a failed host write.

◆ When more storage needs to be made available to the application, the host administrator extends the logical volumes out of the thin devices that have already been presented. Many databases can absorb an additional disk partition non-disruptively, as can most file systems and logical volume managers.

◆ Again, the host administrator may want to fully allocate the thin devices underlying these volumes before assigning them to the thin-hostile application.


In this example it is still necessary for the storage administrator to closely monitor the over-subscribed pools. This procedure will not work if the host administrators do not observe restrictions on how much of the storage presented is actually assigned to the application.

Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices

A boot/root/swap/dump device positioned on Symmetrix Virtual Provisioning (thin) devices is supported with Enginuity 5773 and later. However, some specific processes involving boot/root/swap/dump devices positioned on thin devices should not be exposed to the out-of-space condition. Host-based processes such as kernel rebuilds, swap, dump, save crash, and Volume Manager configuration operations can all be affected by the thin provisioning out-of-space condition. This exposure is not specific to EMC's implementation of thin provisioning. EMC strongly recommends that the customer avoid encountering the out-of-space condition involving boot/root/swap/dump devices positioned on Symmetrix VP (thin) devices by following these recommendations:

◆ It is strongly recommended that Virtual Provisioning devices used for boot/root/dump/swap volumes be fully allocated, or that the VP devices not be over-subscribed.

Should the customer use an over-subscribed thin pool, they should understand that they need to take the necessary precautions to ensure that they do not encounter the out-of-space condition.

◆ It is not recommended to implement space reclamation, available with Enginuity 5874 and later, with pre-allocated or over-subscribed Symmetrix VP (thin) devices that are used for host boot/root/swap/dump volumes. Although not recommended, space reclamation is supported on these types of volumes.

Should the customer use space reclamation on this thin device, they need to be aware that this freed space may ultimately be claimed by other thin devices in the same pool and may not be available to that particular thin device in the future.
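To make the monitoring requirement above concrete, the following Python sketch computes the two figures an administrator would watch for each thin pool: the subscription ratio and the percentage of the pool actually allocated. The function, its parameters, and the 80% alert threshold are illustrative assumptions, not part of this guide; in practice these figures would come from array queries (for example, through EMC Solutions Enabler).

```python
def pool_report(pool_gb, subscribed_gb, allocated_gb, alert_pct=80):
    """Summarize a thin pool's over-subscription and utilization.

    pool_gb       -- usable capacity enabled in the thin pool
    subscribed_gb -- total capacity presented to hosts (sum of tdev sizes)
    allocated_gb  -- capacity hosts have actually written (track-allocated)
    """
    ratio = subscribed_gb / pool_gb          # > 1.0 means over-subscribed
    used_pct = 100.0 * allocated_gb / pool_gb
    return {
        "oversubscribed": ratio > 1.0,
        "subscription_ratio": round(ratio, 2),
        "pool_used_pct": round(used_pct, 1),
        # an over-subscribed pool nearing full allocation risks the
        # out-of-space condition described above
        "alert": ratio > 1.0 and used_pct >= alert_pct,
    }

# Hypothetical pool: 10 TB enabled, 16 TB subscribed, 8.5 TB allocated
report = pool_report(10240, 16384, 8704)
```

A pool that is over-subscribed and above the threshold would be flagged for the storage administrator to add capacity or restrict further binding.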

EMC Host Connectivity Guide for Windows


Cluster configurations

When using high availability in a cluster configuration, it is expected that no single point of failure exists within the cluster and that a single failure will not result in data unavailability, data loss, or any significant application becoming unavailable within the cluster. Virtual Provisioning (thin) devices are supported in cluster configurations; however, over-subscription of virtual devices may constitute a single point of failure if an out-of-space condition is encountered. Take appropriate steps to avoid implementing under-provisioned virtual devices within high-availability cluster configurations.



Operating system characteristics

Most host applications behave the same when writing to thin devices as they do when writing to normal devices, as long as the thin device's written capacity is less than its subscribed capacity. However, issues can arise when an application writes beyond the provisioned boundaries. With the current behavior of the Windows operating system, exhaustion of the thin pool causes undesired results. Specifics are included below:

◆ Logical Volume Manager software (SVM and VxVM)

Cannot write to any volumes that are built on the exhausted pool.

◆ Windows NTFS File System

• The host reports the errors "File System is full" to the Windows system event log. The larger the data file size that is being written to the thin device, the more ‘file system is full’ error messages will be reported.

• The data file being written contains corrupted data.

• Cannot create a file system on the exhausted pool.

• Cannot write a data file to the exhausted pool.

If the host is exposed to pre-provisioned thin devices that have not been bound to a thin pool, the host may take longer to boot.
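The NTFS symptoms listed above suggest a defensive pattern for applications writing to thin devices: write to a temporary file and rename it into place, so a "file system is full" failure never leaves a partially written (corrupted) data file behind. This Python sketch is a generic illustration, not something from the guide; note also that the file system's reported free space reflects the thin device's subscribed size, not the pool's remaining capacity, so checking free space in advance cannot rule out pool exhaustion.

```python
import errno
import os

def safe_write(path, data: bytes) -> bool:
    """Write data to a temporary file, then rename it into place.

    Returns False (leaving no partial file) if the write fails with
    ENOSPC, which surfaces as "File System is full" on Windows.
    """
    tmp = path + ".part"
    try:
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())     # force the data down to the device
    except OSError as e:
        if os.path.exists(tmp):
            os.remove(tmp)           # discard the partial file
        if e.errno == errno.ENOSPC:  # pool/file system exhausted
            return False
        raise
    os.replace(tmp, path)            # atomic rename into place
    return True
```

The target file is only ever replaced by a fully written copy, so a pool-exhaustion failure mid-write cannot corrupt existing data.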


Invista

This chapter describes Invista-specific configuration in the Windows environment and contains support information.

◆ EMC Invista overview (page 242)
◆ Prerequisites (page 246)
◆ Storage components (page 247)
◆ Configuration guidelines (page 249)
◆ Required storage system setup (page 250)
◆ Host connectivity (page 252)
◆ Front-end paths (page 253)
◆ Making volumes in an Invista Virtual Frame visible to a Windows host (page 259)
◆ LUNZ visibility to host (page 260)
◆ Guidelines for optimizing the configuration (page 261)
◆ Installing AdmReplicate (page 262)
◆ ICRV requirements (page 263)
◆ Using PowerPath Migration Enabler (page 266)
◆ Invista and Veritas Volume manager interaction (page 268)



EMC Invista overview

EMC Invista® is a high-performance, block-storage, SAN-based storage virtualization solution that runs on intelligent SAN switches. Invista’s split-path architecture leverages intelligent SAN-switch hardware from EMC’s Connectrix partners.

Invista can be deployed in small and enterprise-class organizations. It is flexible enough to virtualize a small storage environment with a few arrays, yet it is resilient and robust enough to virtualize a large datacenter.

This section contains basic information on the following:

◆ “Invista architecture” on page 242

◆ “EMC Invista advantages” on page 243

◆ “EMC Invista documentation” on page 244

◆ “EMC Invista offerings” on page 245

Invista architecture

Invista uses a split-path architecture to supply virtualization services to a host. Split-path combines intelligent Fibre Channel switches (using purpose-built ASICs) and EMC Invista software to separate the control path from the data path. Key storage services and virtualization services are performed at the network level. (The network in this case refers to the Fibre Channel SAN, not the IP LAN/MAN/WAN.) Because IO processing occurs in the intelligent FC switch at the network level, the latency associated with "in-band" solutions is avoided. By using purpose-built switches with port-level processing (ASICs), the Invista architecture delivers wire-speed performance with negligible latency, meeting the demands of the most intensive, random I/O applications with consistently scalable performance. Latency introduced by the ASIC does not impact IO on the host or storage.

An Invista instance is a network-based entity that mediates between storage arrays, which supply storage as a resource, and storage consumers (hosts, servers, backup applications, and so forth) that consume the storage. An Invista instance is defined as a single managed entity that consists of one or more intelligent switches, the Control Path Cluster (CPC), and the back-end storage provisioned for use by Invista. For a given Invista instance, visualize the storage


arrays being behind the instance, and the storage consumers being in front of the instance as illustrated in Figure 64.

Figure 64 Invista instance

Note that the intelligent switches in any Layer 2 SANs in the front-end or back-end path of an Invista instance must be from the same switch vendor as the intelligent switches in the instance.

EMC Invista advantages

The advantages of the Invista virtualization technology include:

◆ Increase flexibility of your storage infrastructure

◆ Move data seamlessly and minimize storage-related downtime across multi-tiered, heterogeneous SANs

◆ Reduce management complexity and simplify storage allocation in heterogeneous SANs

◆ Choose from a wide range of intelligent SAN switches with an open, standards-based solution

Invista provides three key virtualization services:

◆ Data mobility — Allows administrators to move primary volumes between heterogeneous storage arrays while the application remains online. This enabler of information lifecycle


management allows one to move applications non-disruptively to the appropriate storage tier based on application requirements and service levels.

◆ Network-based volume management — The basis for what many people think of as virtualization. Invista enables one to create and configure virtual volumes from a heterogeneous storage pool and present them to hosts. It makes sense for the network to be the control point for this: abstracting and aggregating the back-end storage, configuring it, and making it available to all of the connected hosts.

◆ Heterogeneous cloning — Allows one to extend the use of clones to areas where their use may have previously been impossible due to compatibility issues. For example, customers who have implemented an ILM (Information Lifecycle Management) strategy can create a clone on a Tier 1 storage array and extend it to a Tier 3, lower-cost storage array.

Tiered storage is the assignment of different categories of data to different types of storage arrays in order to reduce total storage cost. Categories may be based on levels of protection, performance, usage, and more.

EMC Invista documentation

EMC Invista documentation is available on Powerlink. Refer to the following documents for configuration and administration operations.

Invista Element Manager Administration Guide

This guide describes how to use the Invista Element Manager to manage an Invista instance.

Invista Element Manager Command Line Interface Reference

This manual describes the command line interface (CLI) for EMC Invista. You should read this manual if you will use typed or scripted commands instead of (or in addition to) the Invista Element Manager graphical user interface (GUI) to configure and manage an Invista instance.


A number of release notes with more specific, up-to-date Invista-related information can also be found on Powerlink. Information in the release notes includes:

◆ Support parameters
◆ Supported configurations
◆ New features and functions
◆ SAN compatibility
◆ Bug fixes
◆ Expected behaviors
◆ Technical notes
◆ Documentation issues
◆ Upgrade information

For the most up-to-date support information, you should always refer to the EMC Support Matrix, PDFs and Guides > Invista.

EMC Invista offerings

The Invista virtualization solution is available on the Connectrix series products. Refer to the EMC Support Matrix for required versions.

Note: End of support life (EOSL) is September 30, 2009 for Invista 1.0. Refer to the EMC Support Matrix for the most current support information.


Prerequisites

Before configuring Invista in the Windows environment, complete the following on each host:

◆ Confirm that all necessary remediation has been completed.

This ensures that OS-specific patches and software on all hosts in the Invista environment are at supported levels according to the EMC Support Matrix.

◆ Confirm that each host is running Invista-supported failover software and has at least one available path to each Invista fabric.

Note: Always refer to the EMC Support Matrix for the most up-to-date support information and prerequisites.

◆ If a host is running EMC PowerPath, confirm that the load-balancing and failover policy is set to Adaptive.

CAUTION! Failure to do this will cause data unavailability during a DPC upgrade or a DPC reboot.


Storage components

This section contains Invista component terminology and the basic Invista components used when configuring Invista in the Windows environment.

Invista component terminology

VI Virtual Initiators

VT Virtual Targets

CPC Control Path Cluster

DPCs Data Path Controllers. DPCs are located in intelligent EMC Connectrix Fibre Channel switches or directors.

Invista components

The basic components of a storage system configuration for Invista are:

◆ One or more storage systems connected to Invista Instance through intelligent switches (VIs)

◆ One or more servers connected to Invista Instances through Intelligent switches (VTs)

Note: The procedures described in this chapter assume that all hardware equipment (for example: Invista CPCs, Intelligent switches, hosts, storage systems, etc.) used in the documented configuration is already installed and connected.

Invista instance components

There are four major components to an Invista instance:

◆ Control Path Cluster (CPC) — The CPC is a dual-node Intel architecture cluster that runs EMC’s storage application and manages the virtualization metadata (mapping tables). The CPC supports active-active failover between nodes. Because Invista is a true active/active design, performance is not impacted on a


failover. The metadata is stored on a non-virtualized Symmetrix, VNX series, or CLARiiON LUN on the SAN. The CPCs can span two racks spaced up to 300 m (1000 ft) apart.

◆ Data Path Controller (DPC) — This is the intelligent-switch hardware that contains specialized port-level processors (ASICs) to perform virtualization operations on IO at line speed. The DPC is available from two vendors: Brocade and Cisco. Brocade’s intelligent switch is the Connectrix B AP-7600B which can be connected through an ISL to a new or existing SAN.

Cisco’s intelligent blade is the Connectrix MDS Storage Services Module (SSM) which can be installed in a Connectrix MDS 9513, 9509, 9506, 9216i, 9216A, or 9222i. Invista can be configured using two or four DPCs that span at least two fabrics. This creates an additional level of redundancy.

◆ Invista software — This runs on the CPC, communicates with the DPC, and provides application functionality.

◆ IP routers — These IP devices support the communication between the CPC and the DPC. The vendor is Allied Telesis. They provide a firewall to isolate the control path communication from a customer’s public network while also providing management access to the Invista instance’s hardware components. The routers are sold in pairs to provide redundancy on the LAN. Using VRRP (Virtual Router Redundancy Protocol), only one router is active on the network at any one time, preventing IP loops from forming.


Configuration guidelines

The following configuration guidelines are applicable for all arrays and switches when installing Invista in the Windows environment:

◆ Follow the array manufacturer's guidelines to provision storage devices (LUNs) for use by Invista.

◆ To avoid data corruption, LUNs that are managed by Invista cannot also be managed by other hosts or storage management entities. Configure LUN masking on the array to enforce this rule.

◆ Each Invista Virtual Initiator must be zoned to all storage ports that expose storage to Invista.

◆ Each Virtual Initiator should have access to all storage elements. Otherwise, hosts will have asymmetric access to the Virtual Volumes created from the storage elements.

◆ Configure no more than eight paths between a DPC and a storage element.

◆ Before exposing a back-end array to Invista, clear any SCSI reservations on the array: scrub the SCSI reservations on the back-end devices before importing the devices into the Invista instance.
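Two of the guidelines above lend themselves to a quick sanity check. The following Python sketch is illustrative only (the data structures are invented; a real check would pull zoning and path information from the switches and the Invista Element Manager). It flags Virtual Initiators that are not zoned to every storage port, which would cause asymmetric host access to Virtual Volumes, and DPCs with more than eight paths to a storage element:

```python
def check_invista_backend(vi_zoned_ports, storage_ports, dpc_paths):
    """Validate back-end zoning and path-count guidelines.

    vi_zoned_ports -- ports each Virtual Initiator is zoned to,
                      e.g. {"VI_1": {"FA_1A", "FA_2A"}, ...}
    storage_ports  -- set of all array ports exposing storage to Invista
    dpc_paths      -- path count between each DPC and this storage element
    """
    problems = []
    for vi, ports in vi_zoned_ports.items():
        missing = storage_ports - ports
        if missing:
            # hosts would see asymmetric access to Virtual Volumes
            problems.append(f"{vi} is not zoned to: {sorted(missing)}")
    for dpc, count in dpc_paths.items():
        if count > 8:  # guideline: no more than eight paths
            problems.append(f"{dpc} has {count} paths (maximum is 8)")
    return problems
```

An empty result means both guidelines are satisfied for that storage element.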


Required storage system setup

Symmetrix, VNX series, and CLARiiON product documentation and installation procedures for connecting a Symmetrix, VNX series, or CLARiiON storage system to an Invista instance are available on Powerlink.

Note: EMC recommends that you download the latest information before installing any server.

For a Symmetrix configuration, set the UWN, PP, and EAN flags on the Symmetrix Fibre Channel director. If you are using the VCM database, you can also set the VCM flag.

Note: For optimum performance, provision up to 512 Symmetrix devices per Fibre Channel director port for Invista use.

Note: The FA bit settings listed in Table 3 are for connectivity of Invista to EMC Symmetrix only. For host-to-EMC Symmetrix FA bit settings, refer to the EMC Support Matrix.

Table 3 Required Symmetrix FA bit settings for connection to Invista

Set (a):
◆ SPC-2 Compliance (SPC2)
◆ SCSI-3 Compliance (SC3)
◆ Enable Point-to-Point (PP)
◆ Unique Worldwide Name (UWN)
◆ Common Serial Number (C)

Do not set:
◆ Disable Queue Reset on Unit Attention (D)
◆ AS/400 Ports Only (AS4)
◆ Avoid Reset Broadcast (ARB)
◆ Environment Reports to Host (E)
◆ Soft Reset (S)
◆ Open VMS (OVMS)
◆ Return Busy (B)
◆ Enable Sunapee (SCL)
◆ Sequent Bit (SEQ)
◆ Non Participant (N)
◆ OS-2007 (OS compliance)

Optional:
◆ Linkspeed
◆ Enable Auto-Negotiation (EAN)
◆ VCM/ACLX (b)

a. For the Symmetrix 8000 series, the flags should be Unique Worldwide Name (UWN), Common Serial Number, and Enable Point-to-Point (PP).

b. Must be set if VPLEX is sharing Symmetrix directors with hosts that require conflicting bit settings. For any other configuration, the VCM/ACLX bit can be either set or not set.


Host connectivity

For the most up-to-date information on qualified switches, hosts, host bus adapters, and software, always consult the EMC Support Matrix (ESM), available through the E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com under the PDFs and Guides tab, or contact your EMC Customer Representative.

The latest EMC-approved HBA drivers and software are available for download at the following websites:

◆ http://www.emulex.com

◆ http://www.qlogic.com

◆ http://www.brocade.com

The EMC HBA installation and configuration guides are available at the EMC-specific download pages of these websites.


Front-end paths

A front-end path is a connection between an HBA port on the host and a Virtual Target in the Invista instance. A host accesses the volumes in Virtual Frames through a front-end path. An example is shown in Figure 65.

Figure 65 Front-end path example

The following information is included in this section:

◆ “Guidelines for optimizing the configuration” on page 253

◆ “Viewing the World Wide Name for an HBA port” on page 254

◆ “InvistaServerUtilCLI (PushApp)” on page 254

◆ “Manually registering a front-end path” on page 254

◆ “Verifying the status of a front-end path” on page 256

Guidelines for optimizing the configuration

Use the following guidelines to optimize your configuration:

◆ Ensure that the host initiators are well distributed across the front-end ports (Virtual Targets).

◆ Assign at least one VT from each DPC to form a highly-available fabric.

Note: Maximum number of ITLs supported per Virtual Target is 256.


Viewing the World Wide Name for an HBA port

Each HBA port has a World Wide Name (WWN) associated with it. WWNs are unique identifiers that the Invista instance uses to identify its Virtual Targets and Virtual Initiators. You can use one of the following ways to view WWNs:

◆ Switch’s name server output

◆ EMC ControlCenter or Solutions Enabler

◆ syminq command (Symmetrix users)

InvistaServerUtilCLI (PushApp)

InvistaServerUtilCLI lists volumes mapped to the hosts and writes host initiator information to the Invista CPC, which makes it available to the management layers for reporting to users. You can use InvistaServerUtilCLI on any host supported by Invista.

Note: Installing InvistaServerUtilCLI overwrites the previous version.

Installing InvistaServerUtilCLI will not run the utility automatically. It must be run manually.

To install InvistaServerUtilCLI onto a Windows host, complete the following steps:

1. Ensure that host HBAs are installed and configured.

2. Insert the Invista host software CD, and copy the folder EMC-INV-Host-<version>\PushApp\windows to the \tmp directory on the host.

3. In the \tmp\windows directory on the host, double-click setup.bat.

This creates a folder named C:\Program Files\EMC\PushApp, and copies InvistaServerUtilCLI.exe to that folder.

Manually registering a front-end path

Front-end paths are created in the Front-end Paths tab of the Virtual Frame Properties window. They are automatically registered in the Connectivity Status window once you select them in the Virtual Frames Properties window. If, for some reason, the front-end path is


not registered in the Connectivity Status window, you must manually register the front-end path as described in the following steps.

1. Right-click the Virtual Frames folder and select Connectivity Status.

The Connectivity Status window displays.

Figure 66 Connectivity status window

2. Select the path to register, and click Change HBA Port.

The Change HBA Port dialog box opens.

3. In the HBA Port WWN field, verify that the port WWN is correct.

4. In the HBA Port Type field, select the type that corresponds to the host operating system (for a Windows host, Type 1):

• Type 1 (SPC-2): Windows, MS-CS Clusters, Linux, VMware
• Type 2: HP
• Type 3 (SPC-2): Sun, VCS
• Type 4: AIX

5. Select LunZ Comm Path.


This checkbox enables or disables a communication path from the host to the CPC.

CAUTION! Entering incorrect values could cause the instance to be unmanageable or unreachable by the host and could cause the failover software to operate incorrectly.

Note: Setting the arraycommpath to 1 (enabled) creates LUNZ devices. A LUNZ device allows Unisphere/Navisphere to communicate with the storage system when no LUNs in the storage system are connected to the server.

6. If the Invista Instance is connected to a Windows server, set Unit Serial Number to Virtual Volume. Otherwise, set it to Array.

7. Enter the vendor and model for the HBA port.

8. Under Host Information, select Existing Host if any other path for the same host initiator is already registered; otherwise, provide the host name and IP address. Then, from the drop-down list, select the host in which the HBA port resides.

9. Save your changes and close the dialog box to return to the Connectivity Status window.

IMPORTANT! All HBAs belonging to hosts that are connected to a Brocade switch must be configured for point-to-point Fibre Channel mode. If they are set to loop mode, the switch will hang and not allow Invista SPs to connect to it.

Verifying the status of a front-end path

To verify the status of a front-end path, complete the following step:

Right-click the Virtual Frames folder and select Connectivity Status.


The Connectivity Status window displays.

Figure 67 Connectivity status window

The Connectivity Status window includes an entry for each front-end path (the HBA port and the Virtual Target to which it is logged in, or the HBA port and the Virtual Target with which it is registered). It also shows the Logged In status of the port and the WWN of the host server in which the HBA resides.

◆ A path can have a Registered state of Yes or No.

• When a path’s registered state is Yes, the path is automatically registered using the pushapp. You can change the setting later by using the Group HBA Ports Change option.

• If the registered state is No, the path is still not registered. You must manually register it as described in “Manually registering a front-end path” on page 254.

◆ An HBA port can have a Logged In state of Yes or No.

• When a port’s Logged In state is Yes:

– The port is powered up.– There is a connection from the host to the Invista instance.– The Invista software recognizes the port.

• When an HBA port’s Logged In state is No, ensure that:

– The port is working.

Front-end paths 257

Page 258: EMC Host Connectivity Guide for Windows · PDF fileFederated Live Migration ... DMX, VMAX 40K, ... Connectivity Guide for Windows. EMC Host Connectivity Guide for Windows. EMC Host

258

Invista

– The switch is working, and it is configured to connect the port to the SP.

– The initiator is logged into the switch, the initiator is in the same VSAN as the Virtual Target, and the initiator is zoned with the Virtual Target.

– Switches and DPCs are up/online.

Note: IO can happen through a path only if the path is Logged-in = YES and Registered = YES.
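The status logic above can be summarized in a small triage routine. This Python sketch is illustrative only (the entry format is invented); it sorts Connectivity Status entries into paths that can carry IO (Logged In and Registered both Yes), paths that need manual registration, and paths whose physical connectivity should be checked:

```python
def triage_paths(paths):
    """Classify Connectivity Status entries for follow-up.

    paths -- list of dicts such as
             {"hba": "10:00:00:00:c9:xx:xx:01", "vt": "VT_1",
              "logged_in": True, "registered": False}
    Returns (io_capable, needs_registration, check_connectivity).
    """
    ok, register, connect = [], [], []
    for p in paths:
        if p["logged_in"] and p["registered"]:
            ok.append(p)        # IO can flow on this path
        elif p["logged_in"]:
            register.append(p)  # register the path manually
        else:
            connect.append(p)   # check port, switch, VSAN, and zoning
    return ok, register, connect
```

Only paths in the first group satisfy the Logged-in = YES and Registered = YES condition from the note above.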


Making volumes in an Invista Virtual Frame visible to a Windows host

Volumes are only visible to the host if they are in a Virtual Frame. Once you configure HBA ports and front-end paths for a Virtual Frame, you must take one or more steps at the host so that it can see the volumes in that Virtual Frame. The steps depend on the host’s operating system.

Some host types are unable to see the volumes in a Virtual Frame if one of the volumes belongs to LUN 0 on the array and that LUN fails.

Note: If a host is exposed to a Virtual Frame with no volumes (for example, the Virtual Frame has Virtual Targets and Initiator(s), but no volumes), you will see an EMC LUNZ SCSI Disk Device entry.

This is a temporary device used for initial SCSI-level communications between arrays and hosts. This device contains no actual storage resources, and disappears when you add volumes to the Virtual Frame.

However, if you add volumes to the Virtual Frame first, and then add the Initiators and Virtual Targets, you will not see the EMC LUNZ SCSI Disk Device entry.

Log in as the StorageAdmin when performing the following step at the host:

◆ Open Disk Manager and rescan several times.


LUNZ visibility to host

LUNZ refers to a fake logical unit zero presented to the host to provide a path for host software to send configuration commands to the array when no physical logical unit zero is available to the host. On an Invista instance, the LUNZ device is replaced when a valid LUN is assigned to the HLU LUN0 by the virtual frame. LUNZ has been implemented on Invista to make the array visible to the host OS and PowerPath when no LUNs are bound on that array.

Note: Refer to EMC Knowledgebase solution emc65060 for a discussion of this topic.


Guidelines for optimizing the configuration

Use the following guidelines to optimize your configuration:

◆ To improve redundancy to logical devices, distribute switch connections across as many array ports as possible.

◆ Tune the storage array to maximize performance for the application.

◆ Ensure that the host initiators are well distributed across the front-end ports (Virtual Targets).

Note: Maximum number of ITLs supported per Virtual Target is 256.
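A sketch of the distribution guideline above: assign host initiators to Virtual Targets round-robin while respecting the 256-ITL limit, where one ITL is an initiator/target/LUN triple. The function and its inputs are hypothetical; real assignments are made through the Invista Element Manager.

```python
from itertools import cycle

def assign_initiators(initiators, virtual_targets, luns_per_host,
                      max_itls=256):
    """Round-robin host initiators across Virtual Targets.

    Distributes front-end load evenly and refuses any assignment that
    would push a VT past the 256-ITL limit noted above.
    """
    itl_counts = {vt: 0 for vt in virtual_targets}
    assignment = {}
    vts = cycle(virtual_targets)
    for init in initiators:
        vt = next(vts)
        if itl_counts[vt] + luns_per_host[init] > max_itls:
            raise ValueError(f"{vt} would exceed {max_itls} ITLs")
        itl_counts[vt] += luns_per_host[init]
        assignment[init] = vt
    return assignment, itl_counts
```

With four initiators of ten LUNs each spread over two VTs, each VT ends up carrying 20 ITLs, well inside the limit.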


Installing AdmReplicate

AdmReplicate allows a host to control fracturing and resynchronizing of its clones without the full management access that would be required to use the Invista Element Manager. You can use AdmReplicate on any host supported by Invista.

Note: Installing AdmReplicate overwrites the previous version.

To install AdmReplicate onto a Windows host, complete the following steps:

1. Insert the Invista host software CD and copy the folder EMC-INV-Host-<version>/AdmReplicate/windows/ to a /tmp directory on the host.

2. On the host, execute \tmp\setup.exe.

3. In the EMC Invista Local Replication CLI Setup window, click Next.

4. Read the License Agreement and click Yes.

5. When prompted for a username and company name, accept the default values or type in new information and then click Next.

6. When prompted to choose a destination, accept the default folder or browse to select a different one, and then click Next.

7. When prompted to select a program folder, accept the default or select a different folder, and then click Next.

8. When the InstallShield Wizard Complete dialog box displays, click Finish.

AdmReplicate is installed into the directory C:\Program Files\EMC\EMC Invista Local Replication CLI.


ICRV requirements

This section contains the following Invista Configuration Repository Volume (ICRV) requirements and recommendations for a:

◆ “Symmetrix-specific array” on page 263

◆ “VNX series- and CLARiiON-specific system” on page 264

Note: The information in this section is presented to customers in the EMC Invista Release 2.3 Element Manager Administration Guide and is included here for reference.

Preservation of the Invista metadata requires maintaining it on ICRVs. All back-end LUNs of 5 GB or larger that are accessible from the Invista SPs are considered candidate ICRVs. Any two candidates can be added to the Invista instance for metadata storage and replication.

CAUTION! Do not use disks 0 through 4 in an EMC VNX series or CLARiiON system as Invista ICRVs.

CAUTION! To avoid data corruption, make sure that ICRV LUNs are not consumed for any other storage purpose.
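The candidacy rules above lend themselves to a simple filter. The sketch below is illustrative Python, not an EMC tool; the LUN field names are hypothetical:

```python
GIB = 1024 ** 3

def candidate_icrvs(luns, reserved_clariion_disks=range(5)):
    """Return names of LUNs eligible as ICRVs per the rules above:
    at least 5 GB, visible to the Invista SPs, and not one of disks
    0 through 4 on a VNX series or CLARiiON system."""
    eligible = []
    for lun in luns:
        if lun["size_bytes"] < 5 * GIB:
            continue                        # too small for metadata storage
        if not lun["visible_to_sps"]:
            continue                        # the Invista SPs cannot reach it
        if lun.get("clariion_disk") in reserved_clariion_disks:
            continue                        # disks 0-4 are off limits
        eligible.append(lun["name"])
    return eligible

luns = [
    {"name": "lun_a", "size_bytes": 6 * GIB, "visible_to_sps": True},
    {"name": "lun_b", "size_bytes": 2 * GIB, "visible_to_sps": True},
    {"name": "lun_c", "size_bytes": 8 * GIB, "visible_to_sps": True,
     "clariion_disk": 2},
]
print(candidate_icrvs(luns))  # ['lun_a']
```

Any two of the returned candidates could then be added to the instance for metadata storage and replication.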

For more information, refer to the Invista Element Manager Administration Guide located on http://Powerlink.EMC.com.

Symmetrix-specific array

This section contains Symmetrix-specific requirements and recommendations when an ICRV is on a Symmetrix array:

◆ Review and adhere to the array-specific requirements described in the Invista administration guide.

◆ Provide the back-end LUNs that will be the ICRVs:

• Ensure that each ICRV LUN is at least 5 GB.

• Ensure that no ICRV LUN is provisioned for use as an Invista Virtual Volume.


• Map/mask the ICRV LUNs to allow the Invista SPs to access only those LUNs.

◆ Zone the layer 2 ports that connect the ICRVs to the Invista SPs.

IMPORTANT! EMC recommends zoning by WWN.

◆ Provide the layer 2 switch(es) between the Invista instance and the ICRVs, unless non-virtualizing ports on Invista MDS series (Cisco) switches will be used:

• The layer 2 switches must be EMC-qualified models.

• The layer 2 ports must be operating at a minimum of 2 Gb/s.

• Layer 2 ports on a switch’s supervisor or a layer 2 module can be used instead of separate layer 2 switches.

CAUTION! Do not route the ICRV connection through an Invista MDS SSM port.

◆ Provide the Fibre Channel cable(s) to connect the ICRV(s) to the Invista SPs.

If an ICRV is on a Symmetrix array, ensure that the Symmetrix Fibre Channel director (FA) bits are set (as listed in Table 3 on page 250) on all directors being used to connect Symmetrix devices to the Invista instance.

Note: These settings are required for all FAs connecting to the Invista instance, whether the devices are being used as ICRVs or Storage Elements.

VNX series- and CLARiiON-specific system

This section contains VNX series- and CLARiiON-specific requirements and recommendations when an ICRV is on a VNX series or CLARiiON system:

◆ ICRV LUNs must be carved out of a RAID 1 (mirrored) or higher RAID group.

ICRV performance is critical for correct Invista operation.


IMPORTANT! EMC recommends that these be the only LUNs in the RAID group. If other LUNs are configured in the RAID group, avoid any load on them that will cause significant increases in response time for the ICRV LUN.

◆ If the array has less than 1 GB of cache memory per SP, the read cache must be at least 20 percent of the available cache space.

In larger arrays, reserve at least 200–500 MB for read cache, and the rest for write cache.

◆ Consider the I/O characteristics of Invista metadata: single-threaded, mostly sequential, 1 MB I/O with small response time requirements. Follow the recommended practices for the array when configuring the ICRV LUNs.
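The cache guidance above can be expressed numerically. This hypothetical helper encodes only the thresholds stated in the text:

```python
def recommended_read_cache_mb(cache_per_sp_mb):
    """Suggest a read-cache size (MB) for an array hosting ICRV LUNs."""
    if cache_per_sp_mb < 1024:
        # Under 1 GB per SP: read cache must be at least 20 percent
        # of the available cache space.
        return int(cache_per_sp_mb * 0.20)
    # Larger arrays: reserve 200-500 MB for read cache and leave the
    # rest for write cache; 500 MB is used as the upper bound here.
    return 500

print(recommended_read_cache_mb(512))   # 102
print(recommended_read_cache_mb(4096))  # 500
```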


Using PowerPath Migration Enabler

This section explains how to use PowerPath Migration Enabler to migrate data in the Windows environment.

Note: If a host is running EMC PowerPath, confirm that the load-balancing and failover policy is set to Adaptive.

This section provides information on the following:

◆ “Storage Elements” on page 266

◆ “Preparing a Storage Element for PowerPath Migration Enabler” on page 266

◆ “Making an unimported Storage Element unavailable” on page 267

Storage Elements

Back-end storage arrays are configured to provide storage to the Invista instance in the form of unimported Storage Elements, which are designated for exclusive use and management by the Invista instance. Unimported Storage Elements must be imported into the Invista instance, where they become imported Storage Elements. You can then use the imported Storage Elements to create Virtual Volumes that can be used by hosts and servers.

Preparing a Storage Element for PowerPath Migration Enabler

When using PowerPath Migration Enabler to migrate data, the data remains accessible to host applications during the migration. For this reason, an Invista Storage Element used in such a migration must not be sliced or used in a Clone operation, a Data Mobility job, or a volume expansion operation during the migration.

◆ When PowerPath Migration Enabler is used with a Brocade-based Invista instance, a single host adapter can be connected to both a front-end host port and a back-end storage port.

◆ When PowerPath Migration Enabler is used with a Cisco-based Invista instance with different VSANs in the front-end and back-end, IVR zoning must be utilized.


◆ The Invista Storage Element (SE) should be set to Not Ready before proceeding with PowerPath Migration Enabler.

Making an unimported Storage Element unavailable

To make an unimported Storage Element unavailable for Invista operations, complete the following steps:

1. Filter Storage Elements as necessary.

2. Right-click the unimported Storage Element.

3. Select Set Not Ready.

4. After setting a Storage Element to Not Ready, right-click the instance and select Update Now.

5. Import the Storage Element.

6. Create a Virtual Volume from the Storage Element.

Note: Choose the maximum capacity when creating the volume.

7. Add the volume to a Virtual Frame and expose it to the host as you would any other Virtual Volume.

Refer to "Implementing PowerPath Migration Enabler in Open Replicator and Invista Environments" in the PowerPath Migration Enabler User Guide available on Powerlink.


Invista and Veritas Volume Manager interaction

Using Veritas Volume Manager with Invista requires that PowerPath or DMP be installed on the host before you create Invista Virtual Frames that will be accessed by a host running PowerPath and Veritas Volume Manager.

IMPORTANT! MPIO-based versions of EMC PowerPath cannot be installed at the same time as the Veritas MPIO multipath solution. Veritas MPIO must be disabled when PowerPath is installed.

If PowerPath and Veritas MPIO are installed together, you may not see EMC disk devices appear in the Storage Foundation management application.

Note: Refer to the EMC Support Matrix for supported VxVM versions, service packs, and hotfixes.


EMC VPLEX

This chapter describes EMC VPLEX. Topics include:

◆ EMC VPLEX overview
◆ Prerequisites
◆ Provisioning and exporting storage
◆ Storage volumes
◆ System volumes
◆ Required storage system setup
◆ Host connectivity
◆ Exporting virtual volumes to hosts
◆ Front-end paths
◆ Configuring Windows hosts to recognize VPLEX volumes
◆ Configuring quorum on Windows Failover Cluster for VPLEX Metro or Geo clusters



EMC VPLEX overview

This section contains basic information on EMC VPLEX™:

◆ “Product description” on page 270

◆ “Product offerings” on page 271

◆ “GeoSynchrony” on page 271

◆ “VPLEX advantages” on page 274

◆ “VPLEX management” on page 274

◆ “SAN switches” on page 275

◆ “VPLEX limitations” on page 275

◆ “VPLEX documentation” on page 275

Product description

The EMC VPLEX family is a solution for federating EMC and non-EMC storage. The VPLEX family is a hardware and software platform that resides between the servers and heterogeneous storage assets, supporting a variety of arrays from various vendors. VPLEX can extend data over distance within, between, and across data centers. VPLEX simplifies storage management by allowing LUNs, provisioned from various arrays, to be managed through a centralized management interface.

With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency; automatic sharing, balancing, and failover of storage domains; and both local and remote data access with predictable service levels.

Note: A VPLEX cabinet can accommodate up to four engines, making it easy to convert from a small configuration to a medium or large cluster configuration.

VPLEX delivers distributed, dynamic, and smart functionality into existing or new data centers to provide storage virtualization across existing boundaries.


◆ VPLEX is distributed, because it is a single interface for multi-vendor storage and it delivers dynamic data mobility, the ability to move applications and data in real-time, with no outage required.

◆ VPLEX is dynamic, because it provides data availability and flexibility and maintains business continuity through failures that traditionally required outages or manual restore procedures.

◆ VPLEX is smart, because its unique AccessAnywhere™ technology can present and keep the same data consistent within and between sites and enable distributed data collaboration.

Product offerings

VPLEX first meets high-availability and data mobility requirements and then scales up to the I/O throughput you require for the front-end applications and back-end storage. The three available VPLEX product offerings are:

◆ VPLEX Local™

Provides seamless data mobility and high availability within a data center. It also allows you to manage multiple heterogeneous arrays from a single interface.

◆ VPLEX Metro™

Provides data mobility, enhanced high availability, and collaboration between two sites within synchronous distances.

◆ VPLEX Geo™

Provides data mobility, high availability, and collaboration between two sites within asynchronous distances.

More details on these offerings can be found in the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.

GeoSynchrony

GeoSynchrony™ is the operating system running on VPLEX directors. GeoSynchrony is an intelligent, multitasking, locality-aware operating environment that controls the data flow for virtual storage. GeoSynchrony is:

◆ Optimized for mobility, availability, and collaboration


◆ Designed for highly available, robust operation in geographically distributed environments

◆ Driven by real-time I/O operations

◆ Intelligent about locality of access

◆ Provides the global directory that supports AccessAnywhere

GeoSynchrony supports your mobility, availability, and collaboration needs.

EMC VPLEX with GeoSynchrony 5.0 is a scalable, distributed storage-federation solution that provides non-disruptive, heterogeneous data movement and volume management functionality. GeoSynchrony 5.0 runs on both the VS1 and VS2 hardware offerings. A VPLEX cluster (VS1 or VS2) consists of one, two, or four engines and a management server. Each engine contains two directors. A dual-engine or quad-engine cluster also contains a pair of Fibre Channel switches for communication between directors and a pair of uninterruptible power supplies (UPSs) that provide battery backup for the Fibre Channel switches.
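The cluster composition described above can be summarized as a small inventory function (an illustrative sketch, not VPLEX software; the dictionary keys are assumed names):

```python
def cluster_inventory(engines):
    """Component counts for a single-, dual-, or quad-engine VPLEX cluster."""
    if engines not in (1, 2, 4):
        raise ValueError("a VPLEX cluster has one, two, or four engines")
    multi_engine = engines > 1
    return {
        "engines": engines,
        "directors": engines * 2,        # each engine contains two directors
        "management_servers": 1,
        # Dual- and quad-engine clusters add a pair of Fibre Channel
        # switches for inter-director communication, backed by a pair
        # of UPS units.
        "fc_switches": 2 if multi_engine else 0,
        "ups_units": 2 if multi_engine else 0,
    }

print(cluster_inventory(4)["directors"])  # 8
```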

The management server has a public Ethernet port, which provides cluster management services when connected to the customer network.

GeoSynchrony 5.0 provides support for some features already provided by existing array and software packages that might be in use in your storage configuration.

Specifically, GeoSynchrony 5.0 now supports the following features:

◆ Cross connect

You can deploy a VPLEX Metro high-availability Cross Connect when two sites are within campus distance of each other (up to 1 ms round-trip-time latency) and the sites are running VMware HA and VMware Distributed Resource Scheduler (DRS). You can then deploy a VPLEX Metro distributed volume across the two sites using a cross-connect front-end configuration and install a VPLEX Witness server in a different failure domain.

◆ ALUA

GeoSynchrony 5.0 supports Asymmetric Logical Unit Access (ALUA), a feature provided by some newer active/passive arrays, and VPLEX with GeoSynchrony 5.0 can take advantage of arrays that support it. In active/passive arrays, logical units (LUNs) are normally exposed through several array ports on different paths, and the characteristics of those paths might differ. ALUA calls these path characteristics access states and provides a framework for managing them.

Note: For more information on supported arrays, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.

The most important access states are active/optimized and active/non-optimized.

◆ Active/optimized paths usually provide higher bandwidth than active/non-optimized paths. Active/optimized paths are paths that go to the service processor of the array that owns the LUN.

◆ I/O that goes to active/non-optimized ports must be transferred internally to the service processor that owns the LUN. This transfer increases latency and places additional load on the array.

VPLEX detects the active/optimized and active/non-optimized paths and performs round-robin load balancing across all of the active/optimized paths. Because VPLEX is aware of the active/optimized paths, it can provide better performance to the LUN.

With implicit ALUA, the array is in control of changing the access states of the paths. Therefore, if the controller that owns the LUN being accessed fails, the array changes the status of active/non-optimized ports into active/optimized ports and trespasses the LUN from the failed controller to the other controller.

With explicit ALUA, the host (or VPLEX) can change the ALUA path states. If the active/optimized paths fail, VPLEX causes the active/non-optimized paths to become active/optimized paths, restoring performance. I/O can travel between the controllers over a very fast internal bus to access the LUN, so there is no need to trespass the LUN in this case.
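The path-selection behavior described above can be modeled conceptually. This is not VPLEX code; it only illustrates round-robin over active/optimized paths and the explicit-ALUA promotion of surviving paths, with invented path names:

```python
import itertools

class AluaLun:
    """Toy model of ALUA path handling for one LUN."""

    def __init__(self, paths):
        # paths: path name -> "active/optimized" or "active/non-optimized"
        self.paths = dict(paths)
        self._rr = None

    def _optimized(self):
        return [p for p, s in self.paths.items() if s == "active/optimized"]

    def next_path(self):
        """Round-robin load balancing across active/optimized paths only."""
        if self._rr is None:
            self._rr = itertools.cycle(self._optimized())
        return next(self._rr)

    def fail_path(self, name, explicit=True):
        del self.paths[name]
        if explicit and not self._optimized():
            # Explicit ALUA: promote the remaining paths to
            # active/optimized instead of trespassing the LUN.
            for p in self.paths:
                self.paths[p] = "active/optimized"
        self._rr = None   # rebuild the rotation on the next I/O

lun = AluaLun({"path_a": "active/optimized",
               "path_b": "active/non-optimized"})
print(lun.next_path())  # path_a -- only optimized paths carry I/O
```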

For more information on VPLEX with GeoSynchrony, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.


VPLEX advantages

VPLEX delivers unique and differentiated value to address three distinct needs:

◆ Mobility: The ability to dynamically move applications and data across different compute and storage installations, whether within the same data center, across a campus, within a geographical region, or now, with VPLEX Geo, across even greater distances.

◆ Availability: The ability to create high-availability storage and a compute infrastructure across these same varied geographies with unmatched resiliency.

◆ Collaboration: The ability to provide efficient real-time data collaboration over distance for such "big data" applications as video, geographic/oceanographic research, and more.

VPLEX management

VPLEX supports a web-based graphical user interface (GUI) and a command line interface (CLI) for managing your VPLEX implementation. For more information on using these interfaces, refer to the EMC VPLEX Management Console Help or the EMC VPLEX CLI Guide, available on Powerlink.

GeoSynchrony supports multiple methods of management and monitoring for the VPLEX cluster:

◆ Web-based GUI

For graphical ease of management from a centralized location.

◆ VPLEX CLI

For command line management of clusters.

◆ VPLEX Element Manager API

Software developers and other users use the API to create scripts to run VPLEX CLI commands.

◆ SNMP Support for performance statistics:

Supports retrieval of performance-related statistics as published in the VPLEX-MIB.mib.


◆ LDAP/AD support

VPLEX offers Lightweight Directory Access Protocol (LDAP) or Active Directory as an authentication directory service.

◆ Call home

The Call Home feature in GeoSynchrony alerts EMC support personnel of warnings in VPLEX so they can arrange for proactive remote or on-site service. Certain events trigger the Call Home feature. Once a call-home event is triggered, all informational events are blocked from calling home for 8 hours.
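The 8-hour suppression window can be sketched as follows (a toy model; the real trigger conditions are internal to GeoSynchrony, and the class and method names are invented):

```python
SUPPRESS_SECONDS = 8 * 3600   # informational events blocked for 8 hours

class CallHome:
    def __init__(self):
        self.blocked_until = 0.0

    def report(self, severity, now):
        """Return True if an event at time `now` (seconds) calls home."""
        if severity == "informational" and now < self.blocked_until:
            return False   # still inside the suppression window
        if severity != "informational":
            # A triggering call-home event opens the 8-hour window.
            self.blocked_until = now + SUPPRESS_SECONDS
        return True

ch = CallHome()
print(ch.report("warning", now=0))               # True: triggers call home
print(ch.report("informational", now=3600))      # False: blocked
print(ch.report("informational", now=9 * 3600))  # True: window expired
```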

SAN switches

All EMC-recommended Fibre Channel SAN switches are supported, including Brocade, Cisco, and QLogic.

VPLEX limitations

Always refer to the VPLEX Simple Support Matrix for the most up-to-date support information. Refer to the EMC VPLEX Release Notes, available on EMC Powerlink, for the most up-to-date capacity limitations.

VPLEX documentation

EMC VPLEX documentation is available on Powerlink. Refer to the following documents for configuration and administration operations:

◆ EMC VPLEX with GeoSynchrony 5.0 Product Guide

◆ EMC VPLEX with GeoSynchrony 5.0 CLI Guide

◆ EMC VPLEX with GeoSynchrony 5.0 Configuration Guide

◆ EMC VPLEX Hardware Installation Guide

◆ EMC VPLEX Release Notes

◆ Implementation and Planning Best Practices for EMC VPLEX Technical Notes

◆ VPLEX online help, available on the Management Console GUI

◆ VPLEX Procedure Generator, available on EMC Powerlink


◆ EMC Simple Support Matrix, EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com under the Simple Support Matrix tab.

For the most up-to-date support information, you should always refer to the EMC Support Matrix, PDFs and Guides > VPLEX.


Prerequisites

Before configuring VPLEX in the Windows environment, complete the following on each host:

◆ Confirm that all necessary remediation has been completed.

This ensures that OS-specific patches and software on all hosts in the VPLEX environment are at supported levels according to the EMC Support Matrix.

◆ Confirm that each host is running VPLEX-supported failover software and has at least one available path to each VPLEX fabric.

Note: Always refer to the EMC Support Matrix for the most up-to-date support information and prerequisites.

◆ If a host is running EMC PowerPath, confirm that the load-balancing and failover policy is set to Adaptive.

IMPORTANT! For optimal performance in an application or database environment, ensure that your host operating system partitions are aligned to a 32 KB block boundary.
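A 32 KB boundary means the partition's starting offset is a multiple of 32,768 bytes. The check below is a simple sketch; the classic misaligned case is the legacy 63-sector offset of 32,256 bytes:

```python
ALIGN = 32 * 1024   # 32 KB block boundary

def is_aligned(offset_bytes):
    return offset_bytes % ALIGN == 0

def next_aligned(offset_bytes):
    """Round an offset up to the next 32 KB boundary."""
    return -(-offset_bytes // ALIGN) * ALIGN

print(is_aligned(1048576))     # True: 1 MB offsets are 32 KB aligned
print(is_aligned(63 * 512))    # False: legacy 63-sector offset
print(next_aligned(63 * 512))  # 32768
```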


Provisioning and exporting storage

This section provides information for the following:

◆ “VPLEX with GeoSynchrony v4.x” on page 278

◆ “VPLEX with GeoSynchrony v5.x” on page 279

VPLEX with GeoSynchrony v4.x

To begin using VPLEX, you must provision and export storage so that hosts and applications can use it. Storage provisioning and exporting refers to the following tasks, which take a storage volume from a storage array and make it visible to a host:

1. Discover available storage.

2. Claim and name storage volumes.

3. Create extents from the storage volumes.

4. Create devices from the extents.

5. Create virtual volumes on the devices.

6. Create storage views to allow hosts to view specific virtual volumes.

7. Register initiators with VPLEX.

8. Add initiators (hosts), virtual volumes, and VPLEX ports to the storage view.

You can provision storage using the GUI or the CLI. Refer to the EMC VPLEX Management Console Help or the EMC VPLEX CLI Guide, located on Powerlink, for more information.
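The eight steps can be traced with a minimal data model (hypothetical names throughout; the real objects are created with the VPLEX GUI or CLI):

```python
def provision(storage_volume, initiators, vplex_ports):
    """Trace a storage volume through claim -> extent -> device ->
    virtual volume -> storage view (steps 1-8 above, simplified)."""
    claimed = {"name": f"claimed_{storage_volume}"}       # steps 1-2
    extent = {"from": claimed["name"]}                    # step 3
    device = {"extents": [extent]}                        # step 4
    virtual_volume = {"device": device}                   # step 5
    return {                                              # steps 6-8
        "initiators": list(initiators),
        "ports": list(vplex_ports),
        "volumes": [virtual_volume],
    }

view = provision("symm_vol_001", ["host1_hba0"], ["director_a_fe_port0"])
print(view["volumes"][0]["device"]["extents"][0]["from"])  # claimed_symm_vol_001
```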

Figure 68 on page 279 illustrates the provisioning and exporting process.


Figure 68 VPLEX provisioning and exporting storage process

VPLEX with GeoSynchrony v5.x

VPLEX allows easy storage provisioning among heterogeneous storage arrays. After a storage array LUN volume is encapsulated within VPLEX, all of its block-level storage is available in a global directory and coherent cache. Any front-end device that is zoned properly can access the storage blocks.

Two methods are available for provisioning: EZ provisioning and Advanced provisioning. For more information, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.


Storage volumes

A storage volume is a LUN exported from an array. When an array is discovered, the storage volumes view shows all exported LUNs on that array. You must claim, and optionally name, these storage volumes before you can use them in a VPLEX cluster. Once claimed, you can divide a storage volume into multiple extents (up to 128), or you can create a single full-size extent using the entire capacity of the storage volume.

Note: To claim storage volumes, the GUI supports only the Claim Storage wizard, which assigns a meaningful name to the storage volume. Meaningful names help you associate a storage volume with a specific storage array and LUN on that array, and help during troubleshooting and performance analysis.

This section contains the following information:

◆ “Claiming and naming storage volumes” on page 280

◆ “Extents” on page 281

◆ “Devices” on page 281

◆ “Distributed devices” on page 281

◆ “Rule sets” on page 281

◆ “Virtual volumes” on page 282

Claiming and naming storage volumes

You must claim storage volumes before you can use them in the cluster (with the exception of the metadata volume, which is created from an unclaimed storage volume). Only after claiming a storage volume can you use it to create extents, devices, and then virtual volumes.


Extents

An extent is a slice (range of blocks) of a storage volume. You can create a full-size extent using the entire capacity of the storage volume, or you can carve the storage volume up into several contiguous slices. Extents are used to create devices, and then virtual volumes.
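The slicing rule can be illustrated with some simple arithmetic (the block counts are arbitrary; only the 1-to-128 limit comes from the text):

```python
MAX_EXTENTS = 128

def slice_volume(total_blocks, num_extents=1):
    """Split a claimed storage volume into contiguous extents."""
    if not 1 <= num_extents <= MAX_EXTENTS:
        raise ValueError("a storage volume supports 1 to 128 extents")
    base, extra = divmod(total_blocks, num_extents)
    extents, start = [], 0
    for i in range(num_extents):
        size = base + (1 if i < extra else 0)
        extents.append((start, size))   # (first block, block count)
        start += size
    return extents

print(slice_volume(1000))     # [(0, 1000)]: one full-size extent
print(slice_volume(1000, 3))  # three contiguous slices
```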

Devices

Devices combine extents or other devices into one larger device, using specific RAID techniques such as mirroring or striping. Devices can be created only from extents or other devices. A device's storage capacity is not available until you create a virtual volume on the device and export that virtual volume to a host.

You can create only one virtual volume per device. There are two types of devices:

◆ Simple device — A simple device is configured using one component, which is an extent.

◆ Complex device — A complex device has more than one component, combined using a specific RAID type. The components can be extents or other devices (both simple and complex).
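The capacity consequence of combining components can be sketched as follows (a simplified illustration: mirrored components yield the smallest member's capacity, striping or concatenation yields the sum; real RAID geometry involves more detail, and the RAID labels are assumed):

```python
def device_capacity(raid_type, component_sizes):
    """Usable capacity of a device built from the given components."""
    if raid_type == "raid-1":               # mirror: data is duplicated
        return min(component_sizes)
    if raid_type in ("raid-0", "raid-c"):   # stripe or concatenation
        return sum(component_sizes)
    raise ValueError(f"unsupported RAID type: {raid_type}")

print(device_capacity("raid-1", [100, 120]))  # 100
print(device_capacity("raid-c", [100, 120]))  # 220
```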

Distributed devices

Distributed devices are configured using storage from both clusters and therefore are used only in multi-cluster plexes. A distributed device's components must be other devices, and those devices must be created from storage in different clusters in the plex.

Note: These devices are required for Windows Clustering for Metro and Geo environments.

Rule sets

Rule sets are predefined rules that determine how a cluster behaves when it loses communication with the other cluster, for example, during an inter-cluster link failure or cluster failure. In these situations, until communication is restored, most I/O workloads require specific sets of virtual volumes to resume on one cluster and remain suspended on the other cluster.

VPLEX provides a Management Console on the management server in each cluster. You can create distributed devices using the GUI or CLI on either management server. The default rule set used by the GUI causes the cluster that was used to create the distributed device to detach during communication problems, allowing I/O to resume at that cluster. For more information on creating and applying rule sets, refer to the EMC VPLEX CLI Guide, available on Powerlink.

There are cases in which all I/O must be suspended, resulting in data unavailability. VPLEX with GeoSynchrony 5.0 introduces the new VPLEX Witness functionality. When a VPLEX Metro or VPLEX Geo configuration is augmented by VPLEX Witness, the resulting configuration provides the following features:

◆ High availability for applications in a VPLEX Metro configuration (no single points of storage failure)

◆ Fully automatic failure handling in a VPLEX Metro configuration

◆ Significantly improved failure handling in a VPLEX Geo configuration

◆ Better resource utilization

For information on VPLEX Witness, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.

Virtual volumes

Virtual volumes are created on devices or distributed devices and presented to a host via a storage view. Virtual volumes can be created only on top-level devices and always use the full capacity of the device.


System volumes

VPLEX stores configuration and metadata on system volumes created from storage devices. There are two types of system volumes, each briefly discussed in this section:

◆ “Metadata volumes” on page 283

◆ “Logging volumes” on page 283

Metadata volumes

VPLEX maintains its configuration state, referred to as metadata, on storage volumes provided by storage arrays. Each VPLEX cluster maintains its own metadata, which describes the local configuration information for the cluster as well as any distributed configuration information shared between clusters.

For more information about metadata volumes for VPLEX with GeoSynchrony v4.x, refer to the EMC VPLEX CLI Guide, available on Powerlink.

For more information about metadata volumes for VPLEX with GeoSynchrony v5.x, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.

Logging volumes

Logging volumes are created during initial system setup and are required in each cluster to keep track of any blocks written during a loss of connectivity between clusters. After an inter-cluster link is restored, the logging volume is used to synchronize distributed devices by sending only changed blocks over the inter-cluster link.
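The mechanism amounts to a dirty-block log, sketched below (a toy model, not GeoSynchrony code; the class and method names are invented):

```python
class LoggingVolume:
    """Track blocks written while the inter-cluster link is down."""

    def __init__(self):
        self.dirty = set()
        self.link_up = True

    def write(self, block):
        if not self.link_up:
            self.dirty.add(block)   # remember blocks changed while split

    def restore_link(self):
        """Return the blocks to resynchronize, then clear the log."""
        self.link_up = True
        changed, self.dirty = sorted(self.dirty), set()
        return changed

log = LoggingVolume()
log.link_up = False              # simulate an inter-cluster link outage
for block in (7, 3, 7):
    log.write(block)
print(log.restore_link())        # [3, 7]: only changed blocks are sent
```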

For more information about logging volumes for VPLEX with GeoSynchrony v4.x, refer to the EMC VPLEX CLI Guide, available on Powerlink.

For more information about logging volumes for VPLEX with GeoSynchrony v5.x, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on Powerlink.


Required storage system setup

Product documentation and installation procedures for connecting a Symmetrix, VNX series, or CLARiiON storage system to a VPLEX instance are available on Powerlink.

Required Symmetrix FA bit settings

For Symmetrix-to-VPLEX connections, configure the Symmetrix Fibre Channel directors (FAs) as shown in Table 4.

Note: EMC recommends that you download the latest information before installing any server.

Table 4 Required Symmetrix FA bit settings for connection to VPLEX

Set (a):
◆ SPC-2 Compliance (SPC2)
◆ SCSI-3 Compliance (SC3)
◆ Enable Point-to-Point (PP)
◆ Unique Worldwide Name (UWN)
◆ Common Serial Number (C)

Do not set:
◆ Disable Queue Reset on Unit Attention (D)
◆ AS/400 Ports Only (AS4)
◆ Avoid Reset Broadcast (ARB)
◆ Environment Reports to Host (E)
◆ Soft Reset (S)
◆ Open VMS (OVMS)
◆ Return Busy (B)
◆ Enable Sunapee (SCL)
◆ Sequent Bit (SEQ)
◆ Non Participant (N)
◆ OS-2007 (OS compliance)

Optional:
◆ Linkspeed
◆ Enable Auto-Negotiation (EAN)
◆ VCM/ACLX (b)

a. For the Symmetrix 8000 series, the flags should be Unique Worldwide Name (UWN), Common Serial Number, and Enable Point-to-Point (PP).

b. Must be set if VPLEX is sharing Symmetrix directors with hosts that require conflicting bit settings. For any other configuration, the VCM/ACLX bit can be either set or not set.

Note: When setting up a VPLEX-attach version 4.x or earlier with a VNX series or CLARiiON system, the initiator type must be set to CLARiiON Open and the Failover Mode set to 1. ALUA is not supported.

When setting up a VPLEX-attach version 5.0 or later with a VNX series or CLARiiON system, the initiator type can be set to CLARiiON Open and the Failover Mode set to either 1 or 4, since ALUA is supported.
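The failover-mode rules in the notes above can be summarized as a small decision helper. This is an illustrative sketch only; the function name and version handling are hypothetical and not part of any EMC tool:

```python
# Illustrative sketch: which CLARiiON/VNX Failover Modes a given
# VPLEX-attach version supports, per the notes above. Mode 1 is the
# pre-ALUA failover mode; mode 4 is ALUA, supported only with
# VPLEX-attach 5.0 or later.
def supported_failover_modes(vplex_attach_version: float) -> list[int]:
    if vplex_attach_version >= 5.0:
        return [1, 4]   # ALUA (mode 4) is supported
    return [1]          # 4.x or earlier: ALUA not supported

print(supported_failover_modes(4.2))  # [1]
print(supported_failover_modes(5.0))  # [1, 4]
```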


If you are using LUN masking, you must set the VCM/ACLX flag. If you are sharing array directors with hosts that require conflicting flag settings, VCM/ACLX must be used.

Note: The FA bit settings listed in Table 4 are for connectivity of VPLEX to EMC Symmetrix arrays only. For host-to-EMC Symmetrix FA bit settings, refer to the EMC Support Matrix.

Supported storage arrays

The EMC VPLEX Simple Support Matrix lists the storage arrays that have been qualified for use with VPLEX.

Refer to the VPLEX Procedure Generator, available on Powerlink, to verify supported storage arrays.

VPLEX automatically discovers storage arrays that are connected to the back-end ports. All arrays connected to each director in the cluster are listed in the storage array view.

Initiator settings on back-end arrays

Refer to the VPLEX Procedure Generator, available on Powerlink, to verify the initiator settings for storage arrays when configuring the arrays for use with VPLEX.


Host connectivity

For the most up-to-date information on qualified switches, hosts, host bus adapters, and software, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com under the PDFs and Guides tab, or contact your EMC Customer Representative.

The latest EMC-approved HBA drivers and software are available for download at the following websites:

◆ http://www.emulex.com

◆ http://www.qlogic.com

◆ http://www.brocade.com

The EMC HBA installation and configuration guides are available at the EMC-specific download pages of these websites.

Note: Direct connect from a host bus adapter to a VPLEX engine is not supported.


Exporting virtual volumes to hosts

A virtual volume can be added to more than one storage view. All hosts included in the storage view will be able to access the virtual volume.

The virtual volumes created on a device or distributed device are not visible to hosts (or initiators) until you add them to a storage view. For failover purposes, two or more front-end VPLEX ports can be grouped together to export the same volumes.

A volume is exported to an initiator as a LUN on one or more front-end port WWNs. Typically, initiators are grouped into initiator groups; all initiators in such a group share the same view on the exported storage (they can see the same volumes by the same LUN numbers on the same WWNs).

An initiator must be registered with VPLEX to see any exported storage. The initiator must also be able to communicate with the front-end ports over a Fibre Channel switch fabric. Direct connect is not supported. Registering an initiator attaches a meaningful name to the WWN, typically the server’s DNS name. This allows you to audit the storage view settings to determine which virtual volumes a specific server can access.
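The export model described above (registered initiators, front-end ports, and virtual volumes grouped into a storage view) can be sketched as a minimal data model. This is illustrative only; the class and field names are hypothetical and do not correspond to any VPLEX API:

```python
# Minimal illustrative model of a VPLEX storage view: every registered
# initiator in the view sees every volume, by the same LUN number, on
# every front-end port in the view. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class StorageView:
    name: str
    initiators: set[str] = field(default_factory=set)   # registered host WWNs
    ports: set[str] = field(default_factory=set)        # front-end port WWNs
    luns: dict[int, str] = field(default_factory=dict)  # LUN number -> volume

    def visible_volumes(self, initiator_wwn: str) -> dict[int, str]:
        """Volumes an initiator sees: all of them, but only if registered."""
        return dict(self.luns) if initiator_wwn in self.initiators else {}

view = StorageView("win_cluster_view")
view.initiators.add("10:00:00:00:c9:aa:bb:cc")   # registered host HBA
view.ports.update({"50:00:14:42:60:00:00:01", "50:00:14:42:70:00:00:01"})
view.luns[0] = "virtual_vol_1"

print(view.visible_volumes("10:00:00:00:c9:aa:bb:cc"))  # {0: 'virtual_vol_1'}
print(view.visible_volumes("10:00:00:00:c9:dd:ee:ff"))  # {} (unregistered)
```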


Exporting virtual volumes consists of the following tasks:

1. Creating a storage view, as shown in Figure 69.

Figure 69 Create storage view


2. Registering initiators, as shown in Figure 70.

Figure 70 Register initiators

Note: When Windows hosts, including MSCS and MS Failover Clustering, are registered, use the initiator type "default".


3. Adding ports to the storage view, as shown in Figure 71.

Figure 71 Add ports to storage view


4. Adding virtual volumes to the storage view, as shown in Figure 72.

Figure 72 Add virtual volumes to storage view


Front-end paths

This section contains the following information:

◆ “Viewing the World Wide Name for an HBA port” on page 292

◆ “VPLEX ports” on page 292

◆ “Initiators” on page 292

Viewing the World Wide Name for an HBA port

Each HBA port has a World Wide Name (WWN) associated with it. WWNs are unique identifiers that the VPLEX engine uses to identify its ports and host initiators. You can use any of the following to view WWNs:

◆ Switch’s name server output

◆ EMC Ionix ControlCenter or Solutions Enabler

◆ syminq command (Symmetrix users)

VPLEX ports

The virtual volumes created on a device are not visible to hosts (initiators) until you export them. Virtual volumes are exported to a host through front-end ports on the VPLEX directors and HBA ports on the host/server. For failover purposes, two or more front-end VPLEX ports can be used to export the same volumes. Typically, to provide maximum redundancy, a storage view will have two VPLEX ports assigned to it, preferably from two different VPLEX directors. When volumes are added to a view, they are exported on all VPLEX ports in the view, using the same LUN numbers.

Initiators

For an initiator to see the virtual volumes in a storage view, it must be registered and included in the storage view's registered initiator list. The initiator must also be able to communicate with the front-end ports over Fibre Channel connections through a fabric.

A volume is exported to an initiator as a LUN on one or more front-end port WWNs. Typically, initiators are grouped so that all initiators in a group share the same view of the exported storage (they can see the same volumes by the same LUN numbers on the same WWNs).

Ensure that you specify the correct host type in the Host Type column as this attribute cannot be changed in the Initiator Properties dialog box once the registration is complete. To change the host type after registration, you must unregister the initiator and then register it again using the correct host type.

When Windows hosts, including MSCS and MS Failover Clustering, are registered, use the initiator type "default".


Configuring Windows hosts to recognize VPLEX volumes

Volumes are only visible to the host if they are in a Storage View. Once you configure HBA ports and front-end paths for a Virtual Frame, you must take one or more steps at the host so that it can see the volumes in that Storage View. The steps depend on the host's operating system.

IMPORTANT! For optimal performance in an application or database environment, ensure alignment of your host's operating system partitions to a 32 KB block boundary.
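The alignment rule above is simple arithmetic on the partition's starting offset. The following sketch (illustrative only; the function names are hypothetical) shows how to check an offset against the 32 KB boundary:

```python
# Check partition starting offsets against the 32 KB boundary
# recommended above. Offsets are in bytes.
ALIGNMENT = 32 * 1024  # 32 KB

def is_aligned(start_offset_bytes: int, boundary: int = ALIGNMENT) -> bool:
    """Return True if the starting offset falls exactly on the boundary."""
    return start_offset_bytes % boundary == 0

def align_up(start_offset_bytes: int, boundary: int = ALIGNMENT) -> int:
    """Round an offset up to the next boundary."""
    return -(-start_offset_bytes // boundary) * boundary

# The classic MBR default of 63 sectors x 512 bytes (31.5 KB) is misaligned:
print(is_aligned(63 * 512))   # False
print(align_up(63 * 512))     # 32768
```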

Some host types are unable to see the volumes in a Storage View if one of the volumes belongs to LUN 0 on the array and that LUN fails.

Note: If a host is exposed to a Storage View with no volumes (for example, the Storage View has Virtual Targets and Initiators but no volumes), you will see an EMC LUNZ SCSI Disk Device entry. This is a temporary device used for initial SCSI-level communications between arrays and hosts. It contains no actual storage resources and disappears when you add volumes to the Storage View. However, if you add volumes to the Storage View first, and then add the Initiators and Virtual Targets, you will not see the EMC LUNZ SCSI Disk Device entry.

Log in as the StorageAdmin when performing the following steps at the host:

1. Open Disk Manager and rescan several times.

2. If the volume is still invisible to the initiator, disable the switch port connected to the initiator and then reenable it. This forces a rescan. The initiator should then be able to identify the new volume to the operating system.

Note: When the switch port is disabled, targets should disappear from the Disk Manager. If they do not, when the port is enabled, the new volume may not be visible.

3. If the new volume is still invisible to the initiator, physically disconnect the host from the switch port and then reconnect it.


Note: When you disconnect, targets should disappear from the Disk Manager. If they do not, when the host is reconnected, the new volume may not be visible.

4. If the volume is still invisible to the initiator, reboot the host.


Configuring quorum on Windows Failover Cluster for VPLEX Metro or Geo clusters

This section contains the following information:

◆ “VPLEX Metro or Geo cluster configuration” on page 296

◆ “Prerequisites” on page 297

◆ “Setting up quorum on a Windows Failover Cluster for VPLEX Metro or Geo clusters” on page 298

VPLEX Metro or Geo cluster configuration

Two VPLEX Metro clusters connected within metro (synchronous) distances, approximately 60 miles (100 kilometers), form a Metro-Plex cluster. Figure 73 shows an example of a VPLEX Metro cluster configuration. A VPLEX Geo cluster configuration is the same, but adds the ability to dynamically move applications and data across different compute and storage installations over even greater distances.

Figure 73 VPLEX Metro cluster configuration example


Note: All connections shown in Figure 73 are Fibre Channel, except the network connections, as noted.

The environment in Figure 73 consists of the following:

◆ Node-1 – Windows 2008 or Windows 2008 R2 server connected to the VPLEX instance over Fibre Channel.

◆ Node-2 – Windows 2008 or Windows 2008 R2 server connected to the VPLEX instance over Fibre Channel.

◆ VPLEX instance – One or more VPLEX engines with connections through the L2 switch to back-end and front-end devices.

Prerequisites

Ensure the following before configuring the VPLEX Metro or Geo cluster:

◆ VPLEX firmware is installed properly and the minimum configuration is created.

◆ All volumes to be used during the cluster test should have multiple back-end and front-end paths.

Note: Refer to the Implementation and Planning Best Practices for EMC VPLEX Technical Notes, available on Powerlink, for best practices on the number of back-end and front-end paths.

◆ All hosts/servers/nodes are installed with the same configuration, version, and service pack of the operating system.

◆ All nodes are part of the same domain and are able to communicate with each other before installing Windows Failover Clustering.

◆ One free IP address is available for cluster IP in the network.

◆ EMC PowerPath or MPIO is installed and enabled on all the cluster hosts.

◆ The hosts are registered to the appropriate View and visible to VPLEX.

◆ All volumes to be used during the cluster test should be shared by all nodes and accessible from all nodes.

◆ A network fileshare is required for cluster quorum.


Setting up quorum on a Windows Failover Cluster for VPLEX Metro or Geo clusters

To set up a quorum on VPLEX Metro or Geo clusters for Windows Failover Cluster, complete the following steps.

1. Select the quorum settings. In the Failover Cluster Manager, right-click on the cluster name and select More Actions > Configure Cluster Quorum Settings > Node and File Share Majority. The Node and File Share Majority model is recommended for VPLEX Metro and Geo environments.


2. The Configure Cluster Quorum Wizard displays. Click Next.


3. The Select Quorum Configuration window displays. Ensure that the Node and File Share Majority radio button is selected and click Next.


4. The Configure File Share Witness window displays. Make sure that the Shared Folder Path specifies a \\sharedfolder on a Windows host in the domain other than the configured Windows Failover Cluster nodes, and click Next.


5. The Confirmation window displays. Click Next.


6. The Summary window displays. Go to the Failover Cluster Manager and verify the quorum configuration is set to \\sharedfolder.

7. Click Finish.


EMC PowerPath for Windows

This section provides information regarding EMC PowerPath for Windows:

◆ "PowerPath and PowerPath iSCSI" on page 306
◆ "PowerPath for Windows" on page 307
◆ "PowerPath verification and problem determination" on page 310


PowerPath and PowerPath iSCSI

EMC PowerPath for Windows software is available in two different packages: PowerPath for Windows and PowerPath iSCSI for Windows. It is important to know the differences between the two packages before deploying the software:

PowerPath for Windows: Supports both Fibre Channel and iSCSI environments. PowerPath for Windows is Microsoft digitally certified only for Fibre Channel environments. PowerPath for Windows supports failover path management and load-balancing for up to 32 paths in heterogeneous storage environments. PowerPath for Windows is not currently supported by Microsoft for iSCSI implementations, although it is supported by EMC for EMC iSCSI storage systems.

PowerPath iSCSI for Windows: Supports EMC VNX series and CLARiiON iSCSI storage systems. PowerPath iSCSI for Windows is Microsoft digitally certified and is built on the Microsoft MPIO framework. PowerPath iSCSI for Windows supports failover path management for up to 8 paths in iSCSI storage environments.


PowerPath for Windows

The following information is included in this section:

◆ “PowerPath and MSCS” on page 307

◆ “Integrating PowerPath into an existing MSCS cluster” on page 307

PowerPath and MSCS

If you are installing PowerPath and MSCS for the first time, install PowerPath first, and then install MSCS. Installing PowerPath first avoids having to disrupt cluster services at a later time.

Integrating PowerPath into an existing MSCS cluster

You can integrate PowerPath into an existing MSCS cluster without shutting down the cluster, if there is close coordination between the nodes and the storage system. Each node in a cluster can own a distinct set of resources. Node A is the primary node for its resources and the failover node for Node B's resources. Conversely, Node B is the primary node for its resources and the failover node for Node A's resources.

If, after installing PowerPath on the cluster, you test node failover by disconnecting all cables for a LUN or otherwise disrupting the path between the active host and the array, Windows logs event messages indicating hardware or network failure and possible data loss. If working correctly, the cluster fails over to a node with an active path, and you can ignore the messages from the original node logged in the event log. (Check the application generating I/O for failures; if there are none, everything is working normally.)

Installing PowerPath in a clustered environment requires the following steps:

◆ Move all resources to Node A

◆ Install PowerPath on Node B

◆ Configure additional paths between storage array and Node B

◆ Move all resources to Node B


◆ Install PowerPath on Node A

◆ Configure additional paths between storage array and Node A

◆ Return Node A’s resources back to Node A

Moving resources to Node A

To move all resources to Node A:

1. Start the MSCS Cluster Administrator utility: select Start, Programs, Administrative Tools, Cluster Administrator.

2. In the left pane of the window, select all groups owned by Node B.

3. To move the resources to Node A, select File, Move Group. Alternatively, select Move Group by right-clicking all group names under Groups in the left pane.

4. To pause Node B, click Node B and select File, Pause Node. This keeps the node from participating in the cluster during PowerPath installation.

Installing PowerPath onto Node B

To install PowerPath onto Node B:

1. Install PowerPath.

2. Shut down Node B. In a cluster with more than two nodes, install PowerPath on the other nodes as well. For example, in a four-node cluster, replace Node B with Nodes B, C, and D in step 4 of the previous section, "Moving resources to Node A," and also in steps 1 and 2, above.

Configuring additional paths between the storage system and Node B

To configure additional paths:

1. If necessary, reconfigure the storage system so its logical devices appear on multiple ports.

2. If necessary, install additional HBAs on Node B.

3. Connect cables for new paths between Node B and the storage system.

4. Power on Node B.

5. To resume Node B, click Node B and select File, Resume Node. In a cluster with more than two nodes, configure additional paths between the storage system and those other nodes. For example, in a four-node cluster, replace Node B with Nodes B, C, and D in steps 2, 3, 4, and 5 above.


Moving resources to Node B

To move all resources to Node B:

1. In the left pane of the Cluster Administrator window, select all groups.

2. To move the resources to Node B, select File, Move Group. In a cluster with more than two nodes, move the resources to any of the remaining nodes. For example, in a four-node cluster, you could move resources to Node B alone, to Nodes B and C, or to Nodes B, C, and D; any combination of the remaining nodes will work.

3. To pause Node A, click Node A and select File, Pause Node.

Installing PowerPath onto Node A

To install PowerPath onto Node A:

1. Install PowerPath.

2. Shut down Node A.

Configuring additional paths between the storage system and Node A

To configure additional paths:

1. If necessary, configure the storage system so its logical devices appear on multiple ports.

2. If necessary, install additional HBAs on Node A.

3. Connect cables for new paths between Node A and the storage system.

4. Power on Node A.

5. To resume Node A, click Node A and select File, Resume Node.

Returning Node A’s resources to Node A

To return Node A’s resources:

1. Using the MSCS Cluster Administrator utility, select all groups previously owned by Node A.

2. To move the resources back to Node A, select File, Move Group.


PowerPath verification and problem determination

The following section assumes that PowerPath has been installed properly. Refer to the appropriate PowerPath Installation and Administration Guide on Powerlink for instructions on how to install PowerPath. This section helps you verify that PowerPath was installed correctly and recognize some common failure points.

Click the circled icon shown in Figure 74 to access the PowerPath Administration.

Figure 74 PowerPath Administration icon

Figure 75 shows the PowerPath Monitor taskbar icons and the status each represents.

Figure 75 PowerPath Monitor Taskbar icons and status


Figure 76 shows what PowerPath Administrator looks like when installed correctly. Notice that in this case there is one path zoned between the HBA and one port on the storage device.

Figure 76 One path


When multiple paths are zoned to your storage device, PowerPath Administrator looks like Figure 77:

Figure 77 Multiple paths

Problem determination

Determining the cause of a loss of connectivity to the storage devices can be simplified by using the PowerPath Administrator. Offline array ports, defective HBAs, and broken paths show up in the Administrator GUI in various ways.

Table 5 on page 313 shows the known possible failure states. Referencing this table can greatly reduce problem determination time.


Table 5 Possible failure states


Examples of some failures follow:

An error with an array port, or the path leading to the array port, is displayed in Figure 78. This is symbolized by the red X through one of the array ports. Notice that while the array port is down, access to the disk device is still available; degraded access is noted by a red slash.

Figure 78 Error with an Array port


Figure 79 shows the result of a problem with one of the HBAs or the path leading to the HBA. The failed HBA/path is marked with a red X. Again, notice that access to the disk devices, while degraded, still exists.

Figure 79 Failed HBA path

Making changes to your environment

You must reconfigure PowerPath after making configuration changes that affect host-to-storage-system connectivity or logical device identification, for example:

◆ Fibre Channel switch zone changes

◆ Adding or removing Fibre Channel switches

◆ Adding or removing HBAs or storage-system ports

◆ Adding or removing logical devices


In most cases, changes to your environment are detected automatically by PowerPath. Depending on the type of HBA you are using, you may have to scan for new devices in Device Manager. On some occasions, depending on the operating system version you are running, you may need to reboot your system.

PowerPath messages

For a complete list of PowerPath messages and their meanings, refer to the "PowerPath Messages" chapter of the PowerPath Product Guide for the version of PowerPath you are running.


Using Microsoft Native MPIO with Windows Server 2008 and Windows Server 2008 R2

This section provides information for using Microsoft’s Native Multipath I/O (MPIO) with Windows Server 2008 and Windows Server 2008 R2:

◆ "Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2" on page 318
◆ "Installing and configuring Native MPIO" on page 319
◆ "Known issues" on page 324


Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2

Windows Server 2008 and Windows Server 2008 R2 include native multipathing (MPIO) support as a feature of the OS.

Native MPIO is supported with all EMC storage arrays.

Note the following:

◆ For Windows Server 2008 Core and Windows 2008 R2 Server Core installations, Native MPIO is failover only. There are no load balancing options available in the default DSM for EMC storage arrays.

◆ Default Microsoft MPIO Timer Counters are supported.

◆ Hosts running Windows Server 2008 and Windows Server 2008 R2 must be manually configured so that the initiators are registered using failover mode 4 [ALUA].

◆ CLARiiON systems need to be on FLARE 26 or above to support Native MPIO.

◆ VNX OE for Block v31 is the minimum version required to support Native MPIO.


Installing and configuring Native MPIO

This section explains how to install and configure native MPIO for EMC storage arrays. For Windows 2008 Server Core and Windows 2008 R2 Server Core, use the procedure described in "Enabling Native MPIO on Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core" on page 322.

Native MPIO is installed as an optional feature of Windows Server 2008 and Windows Server 2008 R2.

To complete installation:

1. Open the Server Manager Management Console.

2. Select Features > Features Summary > Add Features.

The Add Features Wizard opens.

3. Follow the wizard, selecting “MultiPath I/O” when available.

4. Reboot when prompted.

Upon restart, the wizard will complete installation of MPIO.

Native MPIO must be configured to manage VPLEX, Symmetrix DMX, VNX series, and CLARiiON systems. Open Control Panel, then the MPIO applet.

The claiming of array/device families can be done in one of two ways as described in “Method 1,” next, and in “Method 2” on page 320.

Method 1 Manually enter the Vendor and Device IDs of the arrays for native MPIO to claim and manage.

Note: This may be the preferred method if all arrays are not initially connected during configuration and subsequent reboots are to be avoided.

To manually enter the array vendor and product ID information:

1. Use the MPIO-ed Devices tab in the MPIO Properties control panel applet.

2. Select Add and enter the vendor and product IDs of the array devices to be claimed by native MPIO.

The vendor ID must be entered as a string of eight characters (padded with trailing spaces) and followed by the product ID entered as a string of sixteen characters (padded with trailing spaces).


For example, to claim a VNX series and CLARiiON RAID 1 device in MPIO, the string would be entered as

DGC*****RAID*1**********

where the asterisk is representative of a space.
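The 8+16 padding rule can be sketched in a few lines (an illustrative helper, not part of any EMC or Microsoft tooling):

```python
def mpio_hardware_id(vendor: str, product: str) -> str:
    """Build the 24-character hardware ID string MPIO expects:
    the vendor ID padded with trailing spaces to 8 characters,
    followed by the product ID padded to 16 characters."""
    if len(vendor) > 8 or len(product) > 16:
        raise ValueError("vendor ID max 8 chars, product ID max 16 chars")
    return vendor.ljust(8) + product.ljust(16)

# The VNX series / CLARiiON RAID 1 example from the text:
print(repr(mpio_hardware_id("DGC", "RAID 1")))
```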

The vendor and product IDs vary based on the type of array and device presented to the host, as shown in Table 6.

Method 2 Use the MPIO applet to discover, claim, and manage the arrays already connected during configuration.

Note: This may be the preferred method if ease-of-use is required and subsequent reboots are acceptable when each array is connected.

CAUTION! MPIO limits the number of paths per LUN to 32. Exceeding this number will cause the host to crash with a blue screen STOP message. Do not exceed 32 paths per LUN when configuring MPIO on your system.

Table 6 Array and device types

Array type                                      LUN type             Vendor ID   Product ID
VPLEX VS1/VS2                                   Any                  EMC         Invista
DMX, DMX-2, DMX-3, DMX-4, VMAX 40K,             Any                  EMC         SYMMETRIX
VMAX 20K/VMAX, and VMAX 10K/VMAXe
CX300, CX500, CX700, all CX3-based arrays,      JBOD (single disk)   DGC         DISK
AX4 series, CX4 series, VNX series, and         RAID 0               DGC         RAID 0
CLARiiON Virtual Provisioning                   RAID 1               DGC         RAID 1
                                                RAID 3               DGC         RAID 3
                                                RAID 5               DGC         RAID 5
                                                RAID 6               DGC         VRAID
                                                RAID 1/0             DGC         RAID 10


Automatic discovery is configured using the Discover Multi-Paths tab of the MPIO Properties control panel applet. Note that only arrays which are connected with at least two logical paths will be listed as available to be added in this tab, as follows:

◆ Devices from VNX OE for Block v31 and CLARiiON systems (running FLARE R26 or greater, configured for failover mode 4 [ALUA]) will be listed in the SPC-3 compliant section of the applet

◆ Devices from DMX, VMAX 40K, VMAX 20K/VMAX, and VMAX 10K/VMAXe arrays will be listed in the Others section of the applet

◆ Devices from VPLEX arrays will be listed in the Others section of the applet

Select the array/device types to be claimed and managed by MPIO by selecting the Device Hardware ID and clicking Add.

Note: The OS will prompt you to reboot for each device type added. A single reboot will suffice after multiple device types are added.

Path management in Multipath I/O for VPLEX, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, and VMAX 10K/VMAXe, VNX series, and CLARiiON systems

Following reboot, after all device types have been claimed by MPIO, each VPLEX-based, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX and VMAX 10K/VMAXe-based, VNX series-based, and CLARiiON-based disk will be shown in Device Manager as a Multi-Path Disk Device. When managed by MPIO, a new tab, named MPIO, will be available under Properties of the selected disk device. Under the MPIO tab, the number of logical paths configured between the host and array should be reported.

The default Load Balance Policy (as reported in the MPIO tab) for each disk device depends upon the type of disk device presented:

◆ In Windows Server 2008, DMX, VMAX 40K, VMAX 20K/VMAX, and VMAX 10K/VMAXe devices report a default Load Balance Policy of “Fail Over Only”, where the first reported path is listed as “Active/Optimized” and all other paths are listed as “Standby.” In Windows Server 2008 R2, the same devices report a default Load Balance Policy of “Round Robin,” where all paths are listed as “Active/Optimized.” The default policy can be overridden by changing the Load Balance Policy to any available policy. See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of the available Load Balance Policies.

DMX, VMAX 40K, VMAX 20K/VMAX, and VMAX 10K/VMAXe array devices attached to the host with a default Load Balance Policy of “Fail Over Only” can be changed to any available Load Balance Policy. Note that the default Load Balance Policy cannot be changed globally for all disk devices; the change must be made on a per-disk-device basis.

◆ VNX series and CLARiiON devices will report a default Load Balance Policy of “Round Robin With Subset”, where all paths to the SP owning the LUN are listed as “Active/Optimized” and all paths to the SP not owning the LUN are listed as “Active/Unoptimized”.

VNX series and CLARiiON devices attached to the host in ALUA mode (as is required when using native MPIO) report the path state, which is used directly by the host running native MPIO and cannot be overridden by changing the Load Balance Policy.

◆ VPLEX devices will report a default Load Balance Policy as "Round Robin" with all active paths as "Active/Optimized". The default policy can be overridden by changing the Load Balance Policy to any available, except "Fail Over Only". See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of available Load Balance policies.

Note: The default Load Balance Policy cannot be changed globally for all disk devices. The change must be done on a per-disk device basis.

Enabling Native MPIO on Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core

Because Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core are “scaled-back” installations with no Windows Explorer shell, MPIO and other features must be enabled from the command line.


To enable the native MPIO feature from the command line, type:

start /w ocsetup MultipathIo

After the system reboots, you can manage MPIO with the mpiocpl.exe utility. From the command prompt, type:

mpiocpl.exe

The MPIO Properties window displays.

Using the manual procedure described in “Method 1” on page 319, you can enter the Vendor and Product IDs of the EMC storage arrays for native MPIO on Server Core.


Known issues

The following are known issues:

◆ When a Windows 2008 host with Native MPIO managing VNX series and CLARiiON systems boots, MPIO will move all CLARiiON LUNs to a single Storage Processor on the VNX series and CLARiiON system.

◆ Windows 2008 Native MPIO does not auto-restore a VNX series and CLARiiON LUN to its default Storage Processor after any type of fault is repaired. For example, after a non-disruptive upgrade of VNX series and CLARiiON software, all VNX series and CLARiiON LUNs will be owned on a single Storage Processor.

• To address the above behavior, VNX series and CLARiiON management software (Unisphere/Navisphere Manager or Navisphere Secure CLI) can be used to manually restore LUNs to their default Storage Processor.

• Also, a VNX series and CLARiiON LUN can be assigned a Load Balance Policy of "Failover Only" with the "Preferred" box selected on a path connected to the default Storage Processor. Native MPIO will attempt to keep the preferred path as active/optimized and will use that path for IO.

Only this single, preferred path will be used for IO; there is failover, but no multipathing, under this Load Balance Policy. If the preferred path fails, Native MPIO will select an alternate, healthy path for IO.

IMPORTANT! The implications of doing this should be clearly understood. There will be no multipathing to this LUN if the above method is implemented – only failover.

◆ Windows 2008 Native MPIO statistics do not update the number of reads/writes for a LUN larger than 2 TB in size.


Appendix A: Persistent Binding

This appendix provides information on persistent binding.

◆ Understanding persistent binding ................................................ 326


Understanding persistent binding

Persistent binding is the mechanism that creates a continuous logical route from a storage device object in the Windows host, across the fabric, to a volume in the EMC storage array.

Without a persistent binding mechanism, the host cannot maintain a persistent logical route from a storage device object across the fabric to an EMC storage array volume. If the physical configuration changes (for example, a cable is swapped or the host is rebooted), the logical route can become inconsistent, and data corruption is possible if an application is modifying data through that inconsistent route.

The Windows operating system (OS) does not provide a satisfactory means of persistent binding. Most software applications access storage through file systems managed by the Windows OS. (File systems are represented as <drive letter><colon>; that is, C:, D:, and so forth.) For storage devices containing file systems, Windows writes a disk signature to the disk device. The OS can then identify the disk and associate it with a particular drive letter and file system.

Since the signature resides on the disk device, changes on the storage end (a cable swap, for example) can cause a disk device to become visible to the host server in a new location. However, the OS looks for the disk signature and, provided nothing on the disk has changed, associates the signature with the correct drive letter and file system. This mechanism is strictly an operating system feature and is not influenced by the Fibre Channel device driver.

Some software applications, however, do not use the Windows file systems or drive letters for their storage requirements. Instead they access storage drives directly, using their own built-in “file systems.” Devices accessed in this way are referred to as raw devices and are known as physical drives in Windows terminology.

The naming convention for physical drives is simple and is always the same for software applications using them. A raw device under Windows is accessed by the name \\.\PHYSICALDRIVEXXX, where XXX is the drive number.

EMC Host Connectivity Guide for Windows

Page 327: EMC Host Connectivity Guide for Windows · PDF fileFederated Live Migration ... DMX, VMAX 40K, ... Connectivity Guide for Windows. EMC Host Connectivity Guide for Windows. EMC Host

Persistent Binding

For example, a system with three hard disks attached using an Emulex Fibre Channel controller assigns the disks the names \\.\PHYSICALDRIVE0, \\.\PHYSICALDRIVE1, and \\.\PHYSICALDRIVE2. The number is assigned during the disk discovery part of the Windows boot process.

During boot-up, the Windows OS loads the driver for the storage HBAs. Once loaded, the OS issues a SCSI Inquiry command to obtain information about all attached storage devices. Each disk drive that it discovers is assigned a number in a semi-biased, first-come, first-served fashion based on HBA. Semi-biased means that Windows always begins with the controller in the lowest-numbered PCI slot where a storage controller resides. Once the driver for the storage controller is loaded, the OS selects the adapter in the lowest-numbered PCI slot and begins the drive discovery process.

It is this naming convention, and the process by which drives are discovered, that makes persistent binding (by definition) impossible for Windows. Persistent binding requires a continuous logical route from a storage device object in the Windows host to a volume in an EMC storage array across the fabric. As mentioned above, each disk drive is assigned a number on a first-come, first-served basis. This is where faults can occur.

Example Imagine this scenario: A host system contains controllers in slots 0, 1, and 2. Someone removes a cable from the Emulex controller in host PCI slot 0, then reboots the host.

During reboot, the Windows OS loads the Emulex driver and begins disk discovery. In this scenario, no devices are discovered on controller 0, so the OS moves to the controller in slot 1 and begins naming the disks it finds, starting with \\.\PHYSICALDRIVE0. Any software application accessing \\.\PHYSICALDRIVE0 before the reboot will be unable to locate its data on the device, because the name now refers to a different disk.
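The numbering shift in this scenario can be illustrated with a small simulation (illustrative only; Windows performs this assignment internally during boot):

```python
def assign_physicaldrive_names(disks_per_hba):
    """Simulate Windows disk discovery: walk HBAs in PCI-slot order
    and hand out \\.\PHYSICALDRIVEn names first-come, first-served."""
    names = {}
    n = 0
    for hba, disk_count in sorted(disks_per_hba.items()):
        for disk in range(disk_count):
            names[(hba, disk)] = r"\\.\PHYSICALDRIVE%d" % n
            n += 1
    return names

# Before the reboot: three HBAs with four disks each (as in Figure 80).
before = assign_physicaldrive_names({0: 4, 1: 4, 2: 4})
# After the cable on HBA 0 is pulled, HBA 0 presents no disks (Figure 81).
after = assign_physicaldrive_names({0: 0, 1: 4, 2: 4})

print(before[(1, 0)])  # first disk on HBA 1 before the reboot
print(after[(1, 0)])   # the same physical disk after the reboot
```

The same physical disk on HBA 1 is PHYSICALDRIVE4 before the reboot and PHYSICALDRIVE0 after it, which is exactly the shift that breaks applications using raw device names.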

Figure 80 on page 328 shows the original configuration before the reboot. HBA0 is in PCI slot 0 of the Windows host. Each HBA has four disk devices connected to it, so Windows has assigned the name \\.\PHYSICALDRIVE0 to the first disk on HBA0. Each disk after that is assigned a number in sequence as shown in Figure 80.


Figure 80 Original configuration before the reboot

Figure 81 shows the same host after the cable attached to HBA0 has been removed and the host rebooted. Since Windows was unable to do a discovery on HBA0, it assigned \\.\PHYSICALDRIVE0 to the first device it discovered. In this case, that first device is connected to HBA1. Due to the shift, any software application accessing \\.\PHYSICALDRIVE0 will not find data previously written on the original \\.\PHYSICALDRIVE0.

Figure 81 Host after the reboot

Note: Tape devices are treated the same as disk devices in Windows with respect to persistent binding. Refer to your tape device documentation for more information.

(Figure labels: in Figure 80, the first disk on HBA 0 is PHYSICALDRIVE0, on HBA 1 is PHYSICALDRIVE4, and on HBA 2 is PHYSICALDRIVE8; in Figure 81, the first disk on HBA 1 is PHYSICALDRIVE0 and on HBA 2 is PHYSICALDRIVE4.)


Methods of persistent binding

The Windows device naming convention and disk discovery process do not allow the Windows operating system to establish persistent binding. Therefore, one of the following methods must be used:

◆ EMC Volume Logix — Provides persistent binding through centralized control by the Symmetrix Fibre Channel fabric ports.

For more information, refer to the EMC Volume Logix Product Guide.

◆ Switch zone mapping — Provides persistent binding through centralized control by the Fibre Channel switch.

For more information, refer to the EMC Connectrix Enterprise Storage Network System Planning Guide.

◆ (Emulex HBAs only) Emulex configuration tool mapping — Provides persistent binding of targets through centralized control by the Emulex host adapter. This requires the user to modify the mapping manually.

For more information, refer to EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment, available in the EMC OEM section of the Emulex website http://www.emulex.com or on Powerlink.

a. Click drivers, software and manuals at the left side of the screen.

b. Click EMC at the upper center of the next screen.

c. Click the link to your HBA at the left side of the screen.

d. Under EMC Drivers, Software and Manuals, click the Installation and Configuration link under Drivers for Windows <version>.


Appendix B: Solutions Enabler

This appendix describes the Solutions Enabler and migration considerations.

◆ EMC Solutions Enabler ................................................................... 332
◆ Migration considerations ................................................................ 333


EMC Solutions Enabler

The EMC Solutions Enabler SYMCLI is a specialized library of commands that can be invoked from the command line or within scripts. These commands can be used to:

◆ Monitor device configuration and status

◆ Perform control operations on devices and data objects within your managed storage complex.

The target storage environments are typically Symmetrix-based, though some features are supported for VNX series and CLARiiON systems as well.

For more information, please refer to the EMC Solutions Enabler Symmetrix Array Management CLI Product Guide, available on Powerlink.


Migration considerations

There are three phases to consider when migrating, each discussed further in this section:

◆ “Information collection phase,” next

◆ “Reconnection phase” on page 334

◆ “Troubleshooting phase” on page 334

Information collection phaseWhen collecting information, consider the following:

Question What device/host information should be collected prior to the array migration that may be needed after the migration to properly/seamlessly reconfigure the host environment?

Recommendations ◆ List all drive letters, LUN sizes, RAID types, and the number of LUNs. Note which data, applications, etc., are on which particular drives.

If there are applications for which the LUN enumeration cannot be changed from the HBA GUI, note the SCSI ID so it can be set as a persistent binding on the HBAs. See the appropriate host bus adapter guide, available on Powerlink, for instructions:

• EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with Brocade Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ Note the HBA WWNs for zoning, as well as the array port numbers on the new array.

◆ Note the zoning configuration and replicate it with the new target array.


Reconnection phase

In the reconnection phase, consider the following:

Question Upon host reconnection to the new array, which of the items from the information collection phase must be configured identically to before for the host to work seamlessly?

Answer All

Question What can be changed on the array or host, and what needs to be done if they are changed?

Answer ◆ The RAID type can be changed as long as the LUN is equal to or greater in size.

◆ If there is not a need to isolate an HBA, then the zoning configuration can be changed as long as all the new LUNs are seen by the host.

Troubleshooting phase

In the troubleshooting phase, consider the following:

Question What are some common configuration pitfalls or errors when reconnecting to an existing array (as opposed to a fresh install), and what basic troubleshooting recommendations help avoid these scenarios?

Recommendations ◆ A best practice is to reboot after reconnecting to a new array. A rescan of devices may not detect and enumerate the LUNs correctly.

◆ Note that with Windows 2003 the new LUNs will come online automatically, whereas Windows 2008 requires manual intervention from Disk Management to bring the disks online.

◆ If there are many LUNs, then PowerPath may take some time claiming all the devices.


References

More information can be found in the following guides, available on Powerlink:

◆ For HBA configurations, see the appropriate host bus adapter guides:

• EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with Brocade Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ For additional migration considerations, refer to EMC's Data Migration All Inclusive guide.


Appendix C: General Host Behavior and Limitations

This appendix describes known issues and limitations.

◆ Issues .................................................................................................. 338
◆ Capabilities and limitations ............................................................ 339
◆ How a server responds to a failure in the boot LUN path ......... 343


Issues

◆ Basic-to-Dynamic Disk conversion is slower in Windows 2003 with a large number of disks — The user interface is slower in Windows 2003 because it converts the disks to dynamic one by one, as opposed to the Windows 2000 user interface, which runs the conversions in parallel. The change was required by the introduction of the VDS service in Windows 2003. The behavior is also present when using the Diskpart utility. For additional information on this issue, refer to Microsoft Knowledge Base article 828220.

◆ Windows 2000 hosts running less than SP4 could be susceptible to data loss during LUN expansion operations — This issue is corrected by Microsoft in SP4, or by applying hotfix 327020. Refer to EMC Solution ID emc73538 and Microsoft Knowledge Base article 327020 for more information.

◆ To upgrade from Windows 2000 to Windows 2003, you must first uninstall PowerPath — After upgrading the host, you can reinstall a PowerPath version approved for Windows 2003. Refer to the EMC PowerPath documentation for further details.

◆ Windows Server 2003 logs Event ID 51 although no data has been lost from the hard disk — This message may appear during a StorageGroup change, and can be ignored. See Microsoft Knowledge Base article 834910 for details.


Capabilities and limitations

This section provides information on general host capabilities and limitations.

Operating system/driver capabilities and limitations

The following capabilities and limitations should be noted for Windows operating systems.

LUNs Theoretically, Windows supports up to 261,120 total LUNs. This figure is based on the listed support limitations for Windows Server from Microsoft: 8 buses per adapter, 128 targets per bus, and 255 LUNs per target. However, due to registry hive limitations, a Windows server will most likely run out of registry space to track such large LUN counts well before reaching this limit.
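The 261,120 figure is simply the product of those three per-adapter limits:

```python
buses_per_adapter = 8
targets_per_bus = 128
luns_per_target = 255  # LUN addresses 00-FE (the OS hides LUN FF)

total_luns = buses_per_adapter * targets_per_bus * luns_per_target
print(total_luns)  # 261120
```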

Microsoft Windows limits the number of LUNs per HBA to 255 (LUNs 00-FE) even though the HBAs are capable of presenting 256 (LUNs 00-FF). It is a function of the operating system that prevents that last LUN from being presented to the user, and not the HBA or driver. This limitation should be taken into consideration when planning your host configuration.

The Emulex SCSIPort driver allows for an increase in the number of LUNs per target that the HBA can report to the OS. Using the ELXCFG configuration tool, it is possible for the HBA to see LUNs with addresses greater than FE. To use these high LUN addresses, set the Max Number of LUNs to your target number (maximum 512), check the Lun Mapping box in the right pane of the configuration tool, and then check the Automatic Lun Mapping box.

Based on E-Lab testing, it has been determined that for EMC configurations Windows servers should be limited to a maximum of 500 LUNs. In most cases, this number will be more than sufficient. In others, it may be deemed too small. For configurations where a large amount of storage is necessary, but not necessarily a large number of disks, EMC storage can be configured to present LUNs of large sizes to the host.


Figure 82 LUN Mapping and Automatic LUN Mapping

Refer to the "Manually Installing the HBA Driver – Advanced Users" section of the EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document, available on Powerlink, and your Emulex user guide for more information on using the ELXCFG configuration tool.

The QLogic SCSIPort driver for Windows 2003 cannot address LUNs above FE, as this is a limitation of the operating system. Likewise, STORPort drivers under Windows 2003 cannot address LUNs above FE.


Note: This driver is no longer being developed for Windows 2003 configurations.

Note: EMC Storage arrays provide the ability to expand the size of a LUN presented to the host server. Refer to your EMC array's documentation for procedures on expanding LUN sizes. Windows has the ability to recognize the extra space on these expanded LUNs by performing a rescan via the Disk Administrator window.

Volume sizes Windows 2000 supports a maximum file system size of 2 TB (terabytes).

Windows 2003 and Windows 2008 support a maximum file system size of 2 TB unless Service Pack 1 or 2 is installed. With SP1 or SP2, the maximum supported physical disk size is 256 TB. (Note that volumes larger than 2 TB must use GPT partitions; refer to your Windows user's guide for information on GPT partitions.)

Note: Windows 2003 for x64 servers does not require SP1 or SP2 to create GPT partitions.


External booting Booting externally from an EMC storage array requires an NTFS filesystem if EMC PowerPath is installed on the host. EMC does not support FAT32 filesystems for external boot when using PowerPath.

iSCSI Microsoft iSCSI Initiator 2.0 known limitations:

◆ Dynamic disks on an iSCSI session are not supported.

◆ Initiator and target CHAP secrets should each be:

• 1 through 16 bytes if IPsec is being used.

• 12 through 16 bytes if IPsec is not being used.


◆ The default iSCSI node name is generated from the Windows computer name. If the Windows computer name contains a character that would be invalid in an iSCSI node name (such as the underscore character '_'), the Microsoft iSCSI Initiator service converts the invalid character to a dash (-).

◆ If the target returns CHECK CONDITION for a SCSI request but does not provide sense data, the initiator completes the request with a status target error. The initiator does not treat the target behavior as a protocol error.

◆ The iSCSI control panel applet does not create an icon in the system tray. If the applet is in the background, you can switch to it by using the ALT-TAB key combination or by double-clicking the icon that launches it.
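The computer-name conversion described in the first bullet can be sketched as below. The iqn.1991-05.com.microsoft: prefix is the initiator's usual default, and the exact set of allowed characters is an assumption for illustration, not the Microsoft implementation:

```python
def default_iscsi_node_name(computer_name: str) -> str:
    """Sketch of how a default iSCSI node name is derived: lowercase the
    Windows computer name and map characters that are invalid in an iSCSI
    name (such as '_') to '-'. Illustrative only."""
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789-.")
    cleaned = "".join(c if c in allowed else "-" for c in computer_name.lower())
    return "iqn.1991-05.com.microsoft:" + cleaned

# default_iscsi_node_name("SQL_NODE1") -> "iqn.1991-05.com.microsoft:sql-node1"
```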

EMC Host Connectivity Guide for Windows


How a server responds to a failure in the boot LUN path

A failure in the path to a SAN-based boot LUN can halt Windows in a fatal error condition. Depending on the failure, Windows may be able to transfer control to another path and continue.

Table 7 shows server reactions to failures in different components.

Table 7  Server response to failure in the boot LUN path (single fault)

Configuration          Server state     HBA failure   Switch failure   Boot SP/Director   Boot SP/Director   Catastrophic storage
                                                                       port failure       failure            system failure
---------------------  ---------------  ------------  ---------------  -----------------  -----------------  --------------------
2 or more HBAs,        Windows running  Multipath     STOP Error (a)   Multipath          Trespass (b)       STOP Error
failover software      Windows booting  Halt          Halt             Halt               Halt               Halt
                       Power up         Multipath     No boot (a)      Multipath          Manual             No boot

1 HBA,                 Windows running  STOP Error    STOP Error       Trespass           Trespass (b)       STOP Error
failover software      Windows booting  Halt          Halt             Halt               Halt               Halt
                       Power up         No boot       No boot          Manual             Manual             No boot

1 HBA,                 Windows running  STOP Error    STOP Error       STOP Error         STOP Error         STOP Error
no failover software   Windows booting  Halt          Halt             Halt               Halt               Halt
                       Power up         No boot       No boot          No boot            No boot            No boot

a. Depending on the fabric configuration, if multiple switches are used, this behavior qualifies under the Multipath category.
b. VNX series and CLARiiON only.

Explanations of entries:

◆ STOP Error (fatal blue screen) — Indicates host failure and a chance of data corruption.

◆ No boot — Cannot boot Windows.

◆ Halt — Windows cannot recover before the system has completed startup. (You must reboot and follow the power-up scenario.)

◆ Manual — Manual intervention is required to continue. (Typically, initiate a LUN trespass using CLI or Manager. With Manager, enable LUN Auto-Assignment in LUN properties.)


◆ Multipath or Trespass — This automatic operation causes no disruption of service. (The delay caused by this operation may affect Windows stability.)

Impact of failure

Table 8 shows the impact of failure and the remedy in the boot LUN path.

Table 8  Impact of failure in the boot LUN path

Failure: HBA failure
Remedy: Replace the HBA, remap the path to the boot device using the HBA BIOS utility, and mask the new WWN to EMC storage.
Server power cycle required: Yes
Impact: This should not require the OS to be re-installed.

Failure: PCI slot failure
Remedy: Move the HBA to a different slot.
Server power cycle required: Yes
Impact: Some servers will tolerate this without the OS having to be re-installed.

Failure: Server
Remedy: Replace the server.
Server power cycle required: Yes
Impact: Replacing a defective server with another may require the OS to be re-installed. Servers using PCI-X bus/HBAs require the OS to be re-installed. Some servers using PCI Express slots/HBAs can tolerate server replacement with the OS intact.

Failure: Intermittent or defective FC switch port
Remedy: Move the boot HBA cable to another port on the switch. This should not affect the SAN boot device, provided the OS is still up and running prior to moving the cable.
Server power cycle required: No
Impact: With the Windows OS booted, it is possible to move the cable to another port without any ill effects, provided the new port is configured in the same VSAN (if VSANs are configured).

Failure: FC switch
Remedy: Replace the switch.
Server power cycle required: Yes
Impact: Replacing the switch should not affect the boot-from-SAN device when the switch is replaced and the server is then booted.


Appendix D: Veritas Volume Management Software

This appendix contains additional information about Veritas Volume Management software used with Windows hosts.

◆ Veritas Volume Management software ......................................... 346
◆ Veritas Storage Foundation feature functionality ....................... 350


Veritas Volume Management software

This section contains information on Veritas Volume Management software for Windows operating systems.

Note: Refer to the latest EMC Support Matrix to determine which Veritas Volume Manager configurations are supported and what service packs may be required.

CAUTION! Configuring large numbers of device paths with Veritas Volume Manager can cause a Windows system to boot very slowly and, in some cases, overrun the NTLDR boot-time registry size and halt. Systems configured with more than 512 device paths (total paths to all LUNs) should check with EMC Customer Service before installing Veritas Volume Manager 3.x.
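The 512-path threshold in the caution is simple arithmetic over the host's LUNs (a sketch; the threshold value is taken from the caution above):

```python
def total_device_paths(paths_per_lun: dict) -> int:
    """Total host device paths: the sum of path counts over all LUNs."""
    return sum(paths_per_lun.values())

def exceeds_vxvm_path_guidance(paths_per_lun: dict, limit: int = 512) -> bool:
    """True when the configuration is over the 512-path guidance above."""
    return total_device_paths(paths_per_lun) > limit

# 200 LUNs with 4 paths each -> 800 total paths, over the 512-path guidance.
```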

Note: The C-bit is required on Symmetrix director ports connected to systems running Veritas DMP. Users of EMC ControlCenter 5.1 and greater should consult their ControlCenter documentation for directions on making this change. Other users must contact their EMC representative to make this change.

Note: For VNX series and CLARiiON systems, a failover mode is required for all DMP or MPIO multipathing configurations. For standard active/passive failover, failover mode one (1) should be selected. For CLARiiON FLARE R26 and later, select failover mode four (4) for ALUA (Asymmetric Logical Unit Access) mode.
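As a sketch only — command syntax varies across Navisphere CLI releases, so verify the flags against your CLI documentation — the failover mode for a host might be set along these lines (the SP address and host name are placeholders):

```
naviseccli -h <SP-address> storagegroup -sethost -host <hostname> -failovermode 4
```

Failover mode 4 corresponds to ALUA as noted above; use 1 for standard active/passive failover.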

Veritas Storage Foundation 5.0 and 5.1

Storage Foundation encompasses Veritas Volume Manager as well as other available volume management software utilities. Refer to the EMC Support Matrix for supported features of Foundation Suite.

For versions 5.0 and 5.1, Volume Manager and DMP are supported with SCSIPort drivers on Windows 2000 operating systems only. For Windows 2003, the Veritas MPIO multipathing solution is supported with STORPort drivers only and requires an EMC Symmetrix or CLARiiON DSM (device-specific module).


DSMs for existing Symmetrix and VNX series and CLARiiON systems are included in the native installation of Storage Foundation. Future storage arrays may require an updated DSM for MPIO compatibility. STORPort drivers are also supported in configurations where PowerPath is installed. For Windows 2003 and 2008 STORPort driver configurations, the Microsoft STORPort hotfix is necessary. Refer to the EMC Support Matrix for current STORPort hotfix versions as well as currently supported driver versions.

Some Veritas Volume Manager and Veritas Storage Foundation configurations may show incorrect LUN information on devices numbered higher than 8000 when attached to Symmetrix arrays if the SPC-2 director bit is disabled. Refer to EMC Knowledgebase article emc179861 for more information on this occurrence.

IMPORTANT! MPIO-based versions of EMC PowerPath cannot be installed at the same time as the Veritas MPIO multipath solution.

Veritas MPIO must be disabled when PowerPath is installed. If PowerPath and Veritas MPIO are installed together, you may not see EMC disk devices appear in the Storage Foundation management application.

Veritas Storage Foundation 4.3

Storage Foundation encompasses Veritas Volume Manager as well as other available volume management software utilities. Refer to the EMC Support Matrix for supported features of Foundation Suite. For version 4.3, Volume Manager and DMP are supported with SCSIPort drivers only. The Veritas MPIO multipathing solution is supported with STORPort drivers only and requires an EMC Symmetrix or VNX series and CLARiiON DSM (device-specific module) to be installed. STORPort drivers are also supported in configurations where PowerPath is installed. For Windows 2003 STORPort driver configurations, the Microsoft STORPort hotfix is necessary. Refer to the EMC Support Matrix for current STORPort hotfix versions as well as currently supported driver versions.

Note: Veritas MPIO configurations do not support load balancing and support a maximum of 16 paths to each device. DMP configurations do not have this 16 path limitation.


IMPORTANT! MPIO-based versions of EMC PowerPath cannot be installed at the same time as the Veritas MPIO multipath solution.

Veritas MPIO must be disabled when PowerPath is installed. If PowerPath and Veritas MPIO are installed together, you may not see EMC disk devices appear in the Storage Foundation management application.

Veritas Storage Foundation 4.2

Storage Foundation encompasses Veritas Volume Manager as well as other available volume management software utilities. Refer to the EMC Support Matrix for supported features of Storage Foundation. For version 4.2, Volume Manager and DMP are supported with SCSIPort drivers only. STORPort drivers are supported only in configurations where PowerPath is installed. For Windows 2003 STORPort driver configurations, the Microsoft STORPort hotfix is necessary. Refer to the EMC Support Matrix for current STORPort hotfix versions as well as currently supported driver versions.

Veritas Foundation Suite 4.1

Foundation Suite encompasses Veritas Volume Manager as well as other available volume management software utilities. Refer to the EMC Support Matrix for supported features of Foundation Suite.

For version 4.1, Volume Manager and DMP are supported with SCSIPort drivers only. Veritas does not support STORPort drivers for Windows 2003 configurations.

Veritas Volume Manager 3.1 and Veritas DMP

If using PowerPath with Veritas Volume Manager 3.1, you also need Veritas Volume Manager Service Pack 1 or 2.

EMC and Veritas now provide a Dynamic Multipathing (DMP) Driver Update for Veritas DMP to interface with CLARiiON CX-Series arrays, providing DMP high-availability capability. Refer to the EMC Support Matrix for the minimum supported revisions of


VxVM and DMP, as well as the CLARiiON Dynamic Multipathing Driver update.

Note: For more information about DMP and VNX series and CLARiiON, refer to page 227.

Veritas Volume Manager 3.0

If using PowerPath with Veritas Volume Manager 3.0 with Service Pack 1 or 2, you must also make the following registry modifications before PowerPath devices will be available to the Veritas Enterprise Manager:

Use regedt32.exe to set the registry as follows:

HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\VxSvc\CurrentVersion\VolumeManager
   value name = ShowGateKeeperDevices
   data type = REG_DWORD
   value = 0x1

HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\VxSvc\CurrentVersion\VolumeManager
   value name = ShowEmcHiddenDevices
   data type = REG_DWORD
   value = 0x1

After completing these changes, reboot the host system.
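The same two values can be applied from a .reg file instead of editing them by hand (a sketch of the settings above; the key path must match your installed version):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\VxSvc\CurrentVersion\VolumeManager]
"ShowGateKeeperDevices"=dword:00000001
"ShowEmcHiddenDevices"=dword:00000001
```

Import the file with Registry Editor, then reboot as described above.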


Veritas Storage Foundation feature functionality

This section contains general support rules and guidelines regarding special features and functionality available from the Veritas Storage Foundation product. Always consult the EMC Support Matrix for the latest supported versions of Veritas Storage Foundation for Microsoft Windows.

Please review the Veritas Storage Foundation Advanced Features Administrator's Guide for more details.

Thin Reclamation (VxVM)

Thin Reclamation is currently supported with VNX series and CLARiiON only.

EMC supports minimum VxVM 5.1 SP1 for Thin Reclamation.

EMC PowerPath is not supported with Thin Reclamation at this time.

SmartMove (VxVM)

VxVM SmartMove is supported with EMC VNX series and CLARiiON CX4 arrays and newer, and Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, and VMAX 10K/VMAXe arrays.

EMC supports SmartMove with minimum VxVM 5.1 SP1.


Index

A
addressing, Windows/Symmetrix
   arbitrated loop 217
   fabric 219
   SCSI-3 220

B
boot support
   Symmetrix 114

C
CCR 59
CHAP 138
cluster continuous replication 59
CNA (Converged Network Adapter) 133
comments 19
Converged Network Adapter (CNA) 133
customer support 19

D
devices, Symmetrix, adding on line
   Windows 2000 50
direct access device 220
Disk Administrator 22
disks, Symmetrix
   initializing 44

E
error messages, Windows 2000/Windows 2003 49
errors, drive, recovering from on Windows 2000 host 52

F
Failover Clustering 58
failure in boot LUN path, server response to 343
FCoE (Fibre Channel over Ethernet) 133
Fibre Channel over Ethernet (FCoE) 133
File Share Witness, FSW 59
file shares 211
functions, Windows 22

H
help 19

I
I/O timeout value, adjusting 51
initializing Symmetrix disks 44
inq 338
Inquiry utility 338
Invista
   overview 242
Invista advantages 243, 274
Invista documentation 244, 276
Invista offerings 245
IP address, Symmetrix 210
iSCSI Network Portal 138
issues 338

L
LIP 218
logical unit addressing 220
loop initialization process 218

P
peripheral device addressing 220
persistent binding 326

S
SCSI-3 FCP 217
server response to failure in boot LUN path 343
service 19
Solutions Enabler 331
status messages, Windows 2000/Windows 2003 49
storage system components
   VNX/CLARiiON 222
storage volume 280
support information
   Fibre Channel 222
   VNX/CLARiiON 222
Symmetrix configuration 216
Symmetrix IP address 210

T
technical support 19
terminology, Windows 22
timeout value, adjusting 51

U
Unified Computing System 135
utilities, Windows 22

V
VERITAS DMP (Dynamic Multi-Pathing)
   with VNX/CLARiiON 227
VERITAS Volume Manager (VxVM)
   with VNX/CLARiiON 227
VERITAS Volume Manager 3.0 349
VNX/CLARiiON
   support information 222
   VERITAS DMP 227
   VERITAS Volume Manager 227
VNX/CLARiiON setup 222
volume set addressing 220
volumes, Symmetrix, creating for Windows 2000 or Windows 2003 47
VPLEX, description 270

W
Windows I/O timeout value, adjusting 51

Z
zone, planning in Windows/Symmetrix environment 115
