
EMC® VNX™ Series
Release 7.0

Managing Volumes and File Systems with VNX™ AVM

P/N 300-011-806
REV A02

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com


Copyright © 1998 - 2011 EMC Corporation. All rights reserved.

Published September 2011

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.



Contents

Preface 7

Chapter 1: Introduction 9
    Overview 10
    System requirements 10
    Restrictions 11
        AVM restrictions 11
        Automatic file system extension restrictions 12
        Thin provisioning restrictions 13
        VNX for block system restrictions 14
    Cautions 14
    User interface choices 17
    Related information 22

Chapter 2: Concepts 23
    AVM overview 24
    System-defined storage pools overview 24
    Mapped storage pools overview 25
    User-defined storage pools overview 26
    File system and automatic file system extension overview 26
    AVM storage pool and disk type options 27
        AVM storage pools 27
        Disk types 27
        System-defined storage pools 30
        RAID groups and storage characteristics 33
        User-defined storage pools 35
    Storage pool attributes 35
    System-defined storage pool volume and storage profiles 39
        VNX for block system-defined storage pool algorithms 40
        VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support 43
        VNX for block system-defined storage pools for Flash support 45
        Symmetrix system-defined storage pools algorithm 46
        VNX for block primary pool-based file system algorithm 48
        VNX for block secondary pool-based file system algorithm 50
        Symmetrix mapped pool file systems 51
    File system and storage pool relationship 53
    Automatic file system extension 55
    Thin provisioning 59
    Planning considerations 59

Chapter 3: Configuring 65
    Configure disk volumes 66
        Provide storage from a VNX or legacy CLARiiON system to a gateway system 67
        Create pool-based provisioning for file storage systems 68
        Add disk volumes to an integrated system 70
    Create file systems with AVM 70
        Create file systems with system-defined storage pools 72
        Create file systems with user-defined storage pools 74
        Create the file system 78
        Create file systems with automatic file system extension 81
        Create file systems with the automatic file system extension option enabled 82
    Extend file systems with AVM 84
        Extend file systems by using storage pools 85
        Extend file systems by adding volumes to a storage pool 87
        Extend file systems by using a different storage pool 89
        Enable automatic file system extension and options 91
        Enable thin provisioning 96
        Enable automatic extension, thin provisioning, and all options simultaneously 98
    Create file system checkpoints with AVM 100

Chapter 4: Managing 103
    List existing storage pools 104
    Display storage pool details 105
    Display storage pool size information 106
        Display size information for Symmetrix storage pools 108
    Modify system-defined and user-defined storage pool attributes 109
        Modify system-defined storage pool attributes 110
        Modify user-defined storage pool attributes 113
    Extend a user-defined storage pool by volume 118
    Extend a user-defined storage pool by size 119
    Extend a system-defined storage pool 120
        Extend a system-defined storage pool by size 121
    Remove volumes from storage pools 122
    Delete user-defined storage pools 123
        Delete a user-defined storage pool and its volumes 124

Chapter 5: Troubleshooting 125
    AVM troubleshooting considerations 126
    EMC E-Lab Interoperability Navigator 126
    Known problems and limitations 126
    Error messages 127
    EMC Training and Professional Services 128

Glossary 129

Index 133


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.


Special notice conventions

EMC uses the following conventions for special notices:

Note: Emphasizes content that is of exceptional importance or interest but does not relate to personalinjury or business/data loss.

NOTICE: Identifies content that warns of potential business or data loss.

CAUTION: Indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

WARNING: Indicates a hazardous situation which, if not avoided, could result in death or serious injury.

DANGER: Indicates a hazardous situation which, if not avoided, will result in death or serious injury.

Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at http://Support.EMC.com.

Troubleshooting — Go to the EMC Online Support website. After logging in, locate the applicable Support by Product page.

Technical support — For technical support and service requests, go to EMC Customer Service on the EMC Online Support website. After logging in, locate the applicable Support by Product page, and choose either Live Chat or Create a service request. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications.

Please send your opinion of this document to:

[email protected]


Chapter 1: Introduction

Topics included are:

◆ Overview on page 10
◆ System requirements on page 10
◆ Restrictions on page 11
◆ Cautions on page 14
◆ User interface choices on page 17
◆ Related information on page 22


Overview

Automatic Volume Management (AVM) is an EMC® VNX™ feature that automates volume creation and management. By using the VNX command options and interfaces that support AVM, system administrators can create and extend file systems without creating and managing the underlying volumes.

The automatic file system extension feature automatically extends file systems created with AVM when the file systems reach their specified high water mark (HWM). Thin provisioning works with automatic file system extension and allows the file system to grow on demand. With thin provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.
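The simplification AVM provides can be sketched in two commands. This is a hedged, illustrative example: the file system name (ufs1), size, and pool name (clar_r5_performance) are placeholders, and the pool names available on a given system vary with its storage configuration.

```shell
# Create a 10 GB file system from a system-defined AVM storage pool.
# AVM selects, builds, and manages the underlying volumes automatically;
# the administrator never creates stripes or metavolumes by hand.
nas_fs -name ufs1 -create size=10G pool=clar_r5_performance

# Confirm the new file system and the pool that backs it.
nas_fs -info ufs1
```

Consult the VNX Command Line Interface Reference for File for the authoritative nas_fs syntax for your release.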

This document is part of the VNX documentation set and is intended for use by system administrators responsible for creating and managing volumes and file systems by using AVM.

System requirements

Table 1 on page 10 describes the EMC VNX series software, hardware, network, and storage configurations.

Table 1. System requirements

Software: VNX series version 7.0

Hardware: No specific hardware requirements

Network: No specific network requirements

Storage: Any VNX-qualified storage system


Restrictions

The restrictions listed in this section are applicable to AVM, automatic file system extension, the thin provisioning feature, and the EMC VNX for block system.

AVM restrictions

The restrictions applicable to AVM are as follows:

◆ Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or by using another compatible storage pool. Do not extend a file system across storage systems unless it is absolutely necessary.

◆ File systems might reside on multiple disk volumes. Ensure that all disk volumes used by a file system reside on the same storage system for file system creation and extension. This is to protect against storage system and data unavailability.

◆ RAID 3 is supported only with EMC VNX Capacity disk volumes.

◆ When building volumes on a VNX for file attached to an EMC Symmetrix® storage system, use regular Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.

◆ Use AVM to create the primary EMC TimeFinder®/FS (NearCopy or FarCopy) file system, if the storage pool attributes indicate that no sliced volumes are used in that storage pool. AVM does not support business continuance volumes (BCVs) in a storage pool with other disk types.

◆ AVM storage pools must contain only one disk type. Disk types cannot be mixed. Table 4 on page 28 provides a complete list of disk types. Table 5 on page 31 provides a list of storage pools and the description of the associated disk types.

◆ LUNs that have been added to the file-based storage group are discovered during the normal storage discovery (diskmark) and mapped to their corresponding storage pools on the VNX for file. If a pool is encountered with the same name as an existing user-defined pool or system-defined pool from the same VNX for block system, diskmark will fail. It is possible to have duplicate pool names on different VNX for block systems, but not on the same VNX for block system.

◆ Names of pools mapped from a VNX for block system to a VNX for file cannot be modified.

◆ A user cannot manually delete a mapped pool. Mapped storage pools overview on page 25 provides a description of a mapped storage pool.

◆ For VNX for file, a storage pool cannot contain both mirrored and non-mirrored LUNs. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail. Also, data may be unavailable or lost during failovers.


◆ The VNX for file control volumes (LUNs 0 through 5) must be thick devices and use the same data service policies. Otherwise, the NAS software installation will fail.
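As a hedged illustration of the discovery behavior described above, the following commands rescan attached storage and list the resulting pools. They restate operations this document names (diskmark and pool listing) rather than a specific transcript, and the output format varies by release.

```shell
# Rescan attached storage so diskmark can discover LUNs added to the
# file-based storage group and map them to storage pools.
nas_diskmark -mark -all

# List the system-defined, user-defined, and mapped pools now visible
# to the VNX for file. Mapped pool names cannot be changed here.
nas_pool -list
```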

Automatic file system extension restrictions

The restrictions applicable to automatic file system extension are as follows:

◆ Automatic file system extension does not work on a Migration File System (MGFS), which is the EMC file system type used while performing data migration from either a Common Internet File System (CIFS) or network file system (NFS) to the VNX system by using VNX File System Migration (also known as CDMS).

◆ Automatic extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on the file system only if it is created or extended by using an AVM storage pool.

◆ Automatic extension is not supported on file systems used with TimeFinder/FS NearCopy or FarCopy.

◆ While automatic file system extension is running, the Control Station blocks all other commands that apply to this file system. When the extension is complete, the Control Station allows the commands to run.

◆ The Control Station must be running and operating properly for automatic file system extension, or any other VNX feature, to work correctly.

◆ Automatic extension cannot be used for any file system that is part of a remote data facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF®.

◆ The options associated with automatic extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the automatic file system extension, HWM, or maximum size options.

◆ Enabling automatic file system extension and thin provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, the file system extends to use all the available storage.

For example, if automatic extension requires 6 GB but only 3 GB are available, the file system automatically extends to 3 GB. Although the file system was partially extended, an error message appears to indicate that there was not enough storage space available to perform automatic extension. When there is no available storage, automatic extension fails. You must manually extend the file system to recover from this issue.

◆ Automatic file system extension is supported with EMC VNX Replicator. Enable automatic extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic extension on the destination file system.

◆ When using automatic extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y).

◆ You cannot create iSCSI thick LUNs on file systems that have automatic extension enabled. You cannot enable automatic extension on a file system if there is a storage mode iSCSI LUN present on the file system. You will receive an error, "Error 2216: <fs_name>: item is currently in use by iSCSI." However, iSCSI virtually provisioned LUNs are supported on file systems with automatic extension enabled.

◆ Automatic extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).
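Enabling automatic extension on an eligible file system (AVM-created and mounted read/write, per the restrictions above) can be sketched as follows. The HWM and maximum size values are illustrative; verify the exact option spelling against the nas_fs reference for your release.

```shell
# Enable automatic extension: extend when usage crosses a 90% high
# water mark, but never grow the file system beyond 100 GB.
# ufs1 must have been created or extended from an AVM storage pool.
nas_fs -modify ufs1 -auto_extend yes -hwm 90% -max_size 100G
```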

Thin provisioning restrictions

The restrictions applicable to the thin provisioning feature are as follows:

◆ VNX for file supports thin provisioning on Symmetrix DMX™-4 and legacy CLARiiON® CX4™ and CX5 disk volumes.

◆ The options associated with thin provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the thin provisioning, HWM, or maximum size options.

◆ Do not use VNX for file thin provisioned objects (iSCSI LUNs or iSCSI file systems) with Symmetrix or VNX for block thin provisioned devices. A single file system should not span virtual and regular Symmetrix or VNX for block volumes. Use only one layer of thin provisioning, either on the Symmetrix or VNX for block storage system, or on the VNX for file, but not on both. If the user attempts to create VNX for file thin provisioned objects with Symmetrix or VNX for block thin provisioned devices, the Data Mover generates an error similar to the following: "VNX for File thin provisioning and VNX for Block or Symmetrix thin provisioning cannot coexist on a file system."

◆ Thin provisioning is supported with VNX Replicator. Enable thin provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable thin provisioning on the destination file system.

◆ When using automatic file system extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y).

◆ With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system while they see the virtually provisioned maximum size of the source file system. Interoperability considerations on page 59 provides more information on using automatic file system extension with VNX Replicator.


◆ Thin provisioning is supported on the primary file system, but not supported with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure™ checkpoint file system.

◆ If a file system is created by using a virtual storage pool, the -thin option of the nas_fs command cannot be enabled. VNX for file thin provisioning and VNX for block thin provisioning cannot coexist on a file system.
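Subject to the restrictions above, thin provisioning is enabled together with automatic extension at creation time. A hedged sketch follows; the names, sizes, and pool are illustrative, and (as noted above) -thin cannot be used with a virtual storage pool.

```shell
# Create a thin file system: clients see the 100 GB maximum size,
# but only 10 GB is actually allocated until extension is triggered.
nas_fs -name ufs2 -create size=10G pool=clar_r5_performance \
  -auto_extend yes -thin yes -hwm 90% -max_size 100G
```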

VNX for block system restrictions

The restrictions applicable to VNX for block systems are as follows:

◆ Use RAID group-based LUNs instead of pool-based LUNs to create system control LUNs. Pool-based LUNs can be created as thin LUNs or converted to thin LUNs at any time. A thin control LUN could run out of space and lead to a Data Mover panic.

◆ VNX for block mapped pools support only RAID 5, RAID 6, and RAID 1/0:

• RAID 5 is the default RAID type, with a minimum of three drives (2+1). Use multiples of five drives.

• RAID 6 has a minimum of four drives (2+2). Use multiples of eight drives.

• RAID 1/0 has a minimum of two drives (1+1).

◆ EMC Unisphere™ is required to provision virtual devices (thin and thick LUNs) on the VNX for block system. Any platforms that do not provide Unisphere access cannot use this feature.

◆ You cannot mix mirrored and non-mirrored LUNs in the same VNX for block system pool. You must separate mirrored and non-mirrored LUNs into different storage pools on VNX for block systems. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail.

Cautions

If any of this information is unclear, contact your EMC Customer Support Representative for assistance:

◆ Do not span a file system (including checkpoint file systems) across multiple storage systems. All parts of a file system must use the same disk volume type and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss, data unavailability, or both. One storage system could fail while the other continues, and thus make failover difficult. In this case, the targets might not be consistent. In addition, a spanned file system is subject to any performance and feature set differences between storage systems.


◆ If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Using Quotas on VNX contains instructions on turning on quotas and general quotas information.

◆ If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the VNX system to support this feature before creating file systems. Using International Character Sets with VNX contains instructions to support and configure international character support on a VNX system.

◆ If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the VNX system. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.

◆ Automatic file system extension is interrupted during VNX system software upgrades. If automatic extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the VNX system upgrade process completes.

◆ Closely monitor VNX for block pool space that contains pool LUNs to ensure that there is enough space available. Use the nas_pool -size <AVM pool name> command and look for the physical usage information. An alert is generated when a VNX for block pool reaches the user-defined threshold level.

◆ Deleting a thin file system or a thin disk volume does not release any space on a system.

• To release the space in a thin pool on the Symmetrix storage system, unbind the LUN by using the symconfigure command.

• To release the space in a thin pool on either a VNX or a legacy CLARiiON system, unbind the LUN by using the nas_disk -delete -perm -unbind command.

◆ Before removing a data service policy from a Fully Automated Storage Tiering (FAST) Symmetrix Storage Group that is already mapped to a VNX for file storage pool and is in use with multiple tiers, to prevent an error from occurring on the VNX for file, you must do one of the following:

• Configure a single tier policy with the disk type wanted and allow the FAST engine to move the disks. Once the disks are moved to the same tier, remove the data service policy from the Symmetrix Storage Group and run diskmark.

• Use the Symmetrix nondisruptive LUN migration utility to ensure that every file system is built on top of a single type of disk.

• Migrate data through NFS or CIFS by using either VNX Replicator, the CLI nas_copy command, file system migration, or a third-party vendor's migration software.

◆ The Flash BCV (BCVE), R1EFD, R2EFD, R1BCVE, or R2BCVE standalone disk types are not supported on a VNX for file. However, a VNX for file supports using a FAST policy that contains a Flash tier as long as the FAST policy contains multiple tiers. When you need to remove a FAST policy that contains a Flash tier from the VNX for file Storage Group, an error will occur if the Flash technology is used in BCV, R1, or R2 devices. The nas_diskmark -mark -all operation cannot set disk types of BCVE, R1EFD, R2EFD, R1BCVE, or R2BCVE. To prevent an error from occurring, do one of the following:

• Configure a single tier policy by using either FC or ATA disks, and allow the FAST engine to move the Flash disks to the selected type.

• Use the Symmetrix nondisruptive LUN migration utility to ensure that the file system is built on top of a single type of disk, either FC or SATA.

◆ VNX thin provisioning allows you to specify a value above the maximum supported storage capacity for the system. If an alert message indicates that you are running out of space, or if you reach the system's storage capacity limits and have virtually provisioned resources that are not fully allocated, you may need to do one of the following:

• Delete unnecessary data.

• Enable VNX File Deduplication and Compression to try to reduce file system storage usage.

• Migrate data to a different system that has space.

◆ Closely monitor Symmetrix pool space that contains pool LUNs to ensure that there is enough space available. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage.

◆ If the masking option is being used, moving LUNs between Symmetrix Storage Groups can cause file system disruption. If the LUNs need to be moved frequently between FAST Storage Groups for various performance requests, you can create separate FAST Storage Groups and Masking Storage Groups to avoid disruptions. A single LUN can belong to both a FAST Storage Group and a Masking Storage Group.

◆ The Symmetrix FAST capacity algorithm does not consider striping on the file system side. The algorithm may mix different technologies in the same striping volume, which can affect performance until the performance algorithm optimizes it. The initial configuration of the striping volumes is very important to ensure that the performance is maximized even before the initial data move is completed by the FAST engine. For example, a FAST policy contains 50 percent Performance disk volumes and 50 percent Capacity disk volumes, and the storage group has 16 disk volumes. The initial configuration should be 1 striping meta volume with 8 Performance disk volumes and 1 striping meta volume with 8 Capacity disk volumes, instead of 4 Performance disk volumes and 4 Capacity disk volumes in the same striping meta volume. The same point needs to be considered when the FAST policy is changed or devices are added to or removed from the FAST storage group. AVM will try to use the same technology in the striping meta volume.

◆ If you are using Symmetrix or legacy CLARiiON systems, and you need to migrate aLUN that is in a VNX for file storage group, the size of the target LUN must be the samesize as the source LUN or data unavailability and data loss may occur. For betterperformance and improved space usage, ensure that the target LUN is a newly-createdLUN with no existing data.

◆ Insufficient space on a Symmetrix pool that contains pool LUNs might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the pool. If you do not allocate 100 percent, there is the possibility of overallocation. Closely monitor the pool usage.

◆ Insufficient space on a VNX for block system pool that contains thin LUNs might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a VNX for file storage pool. Closely monitor the pool usage to avoid running out of space.

◆ You can use FAST thin LUNs to configure the SnapSure checkpoint SavVol. However, insufficient space in a storage pool might result in a Data Mover panic or data unavailability. Closely monitor the pool usage to avoid running out of space.
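The same-technology striping guideline above can be sketched as a simple grouping step. This is an illustrative sketch with hypothetical names, not the actual AVM or FAST code:

```python
from collections import defaultdict

def build_stripe_metas(disk_volumes, stripe_width):
    """Group disk volumes by drive technology, then cut each group into
    striping meta volumes of stripe_width members, so that no meta
    volume mixes technologies."""
    by_tech = defaultdict(list)
    for name, tech in disk_volumes:
        by_tech[tech].append(name)
    metas = []
    for tech, members in by_tech.items():
        # only whole stripes; leftover volumes stay unused here
        for i in range(0, len(members) - stripe_width + 1, stripe_width):
            metas.append((tech, members[i:i + stripe_width]))
    return metas

# The 50/50 FAST policy example: 16 disk volumes, stripe width 8.
vols = [("d%d" % i, "Performance") for i in range(8)] + \
       [("d%d" % i, "Capacity") for i in range(8, 16)]
metas = build_stripe_metas(vols, 8)
# one all-Performance meta volume and one all-Capacity meta volume
```

Grouping by technology first is what yields the recommended layout (one 8-member Performance meta and one 8-member Capacity meta) rather than mixed 4+4 meta volumes.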

User interface choices

The system offers flexibility in managing networked storage that is based on your support environment and interface preferences. This document describes how to use AVM by using the command line interface (CLI). You can also perform many of these tasks by using one of the system's management applications:

◆ EMC Unisphere software

◆ Celerra Monitor

◆ Microsoft Management Console (MMC) snap-ins

◆ Active Directory Users and Computers (ADUC) extensions

The Unisphere software online help contains additional information about managing your system.

Installing Management Applications on VNX for File includes instructions on launching the Unisphere software, and on installing the MMC snap-ins and the ADUC extensions.

The VNX Release Notes contain additional, late-breaking information about system management applications.

Table 2 on page 17 identifies the storage pool tasks that you can perform in each interface, and the command syntax or the path to the Unisphere software page to use to perform the task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The VNX Command Line Interface Reference for File contains more information on the commands described in Table 2 on page 17.

Table 2. Storage pool tasks supported by user interface

Task: Create a new user-defined storage pool by volumes.
Control Station CLI: nas_pool -create -name <name> -volumes <volumes>
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Create.
Note: This task applies to user-defined storage pools only.

Task: Create a new user-defined storage pool by size.
Control Station CLI: nas_pool -create -name <name> -size <integer>[M|G|T] -template <system_pool_name> -num_stripe_members <num> -stripe_size <num>
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Create.
Note: This task applies to user-defined storage pools only.

Task: Create a new user-defined storage pool with the is_greedy attribute.
Control Station CLI: nas_pool -create -name <name> -is_greedy {y|n}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Create.
Specifying n (default) tells the system to use space from the user-defined storage pool's existing member volumes, in the order that the volumes were added to the pool, to create a new file system or extend an existing file system.
Specifying y tells the system to use space from the least-used member volume of the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
Note: This task applies to user-defined storage pools only.

Task: List existing storage pools.
Control Station CLI: nas_pool -list
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File.

Task: Display storage pool details.
Control Station CLI: nas_pool -info <name>
Note: When you perform this operation in the CLI, the total_potential_mb option does not include the space in the storage pool in the output.
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties.
Note: When you perform this operation in Unisphere, the total_potential_mb option represents the total available storage, including the storage pool.

Task: Display storage pool size information.
Control Station CLI: nas_pool -size <name>
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and view the Storage Capacity and Storage Used (%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or extend a file system.
Control Station CLI: nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties. Select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space.
Control Station CLI: nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties. Select or clear Automatic Extension Enabled as required.
Note: This task applies to system-defined storage pools only.

Task: Modify a system-defined storage pool with the is_greedy attribute.
Control Station CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties. Select or clear Obtain Unused Disk Volumes as required.
Specifying y tells AVM to allocate new, unused disk volumes to the system-defined storage pool when creating or extending a file system, even if there is available space in the pool.
Specifying n tells AVM to allocate all available system-defined storage pool space to create or extend a file system before adding volumes to the pool.
When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
Note: This task applies to system-defined storage pools only.

Task: Modify a user-defined storage pool with the is_greedy attribute.
Control Station CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties. Select or clear Obtain Unused Disk Volumes as required.
Specifying n (default) tells the system to use space from the user-defined storage pool's existing member volumes, in the order that the volumes were added to the pool, to create a new file system.
Specifying y tells the system to use space from the least-used member volume in the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
Note: This task applies to user-defined storage pools only.

Task: Add volumes to a user-defined storage pool.
Control Station CLI: nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select one or more volumes to add to the pool.
Note: This task applies to user-defined storage pools only.

Task: Extend a storage pool by size and specify a storage system from which to allocate storage.
Control Station CLI: nas_pool -xtend {<name>|id=<id>} -size <integer>[M|G|T] -storage <system_name>
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB, GB, or TB.
Note: The drop-down list shows all the available storage systems. The volumes shown are only those created on the storage system that is highlighted.
Note: This task applies to system-defined storage pools only when the is_dynamic attribute for the storage pool is set to n.

Task: Remove volumes from a storage pool.
Control Station CLI: nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...] [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File. Select the storage pool that you want to shrink, and click Shrink. Select one or more volumes that are not in use to be removed from the pool.

Task: Delete a storage pool.
Control Station CLI: nas_pool -delete {<name>|id=<id>} [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File. Select the storage pool that you want to delete, and click Delete.
Note: This task applies to user-defined storage pools only.

Task: Change the name of a storage pool.
Control Station CLI: nas_pool -modify {<name>|id=<id>} -name <name>
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Properties. Type the new name in the Name text box.
Note: This task applies to user-defined storage pools only.

Task: Create a file system with automatic file system extension enabled.
Control Station CLI: $ nas_fs -name <name> -type <type> -create pool=<pool> storage=<system_name> {size=<integer>[T|G|M]} -auto_extend {no|yes}
Unisphere software: Select Storage ➤ Storage Configuration ➤ Storage Pools for File, and click Create. Select Automatic Extension Enabled.
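The is_greedy=y least-used member selection described above can be sketched as follows. The member records here are hypothetical, not an actual AVM data structure:

```python
def pick_greedy_member(members):
    """is_greedy=y: choose the least-used member volume; break ties by
    the most disk volumes, then by the lowest member ID."""
    return min(members, key=lambda m: (m["used_mb"], -m["disk_volumes"], m["id"]))

members = [
    {"id": 3, "used_mb": 0,   "disk_volumes": 4},
    {"id": 7, "used_mb": 0,   "disk_volumes": 8},   # least-used tie; more disks wins
    {"id": 9, "used_mb": 500, "disk_volumes": 8},
]
pick_greedy_member(members)  # selects the member with id=7
```

Encoding the three rules as a single sort key (usage ascending, disk-volume count descending, ID ascending) reproduces the documented tie-breaking order.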

Related information

Specific information related to the features and functionality described in this guide is included in:

◆ VNX Command Line Interface Reference for File

◆ Parameters Guide for VNX for File

◆ Configuring NDMP Backups to Disk on VNX

◆ Controlling Access to System Objects on VNX

◆ Managing Volumes and File Systems for VNX Manually

◆ Online VNX man pages

EMC VNX documentation on the EMC Online Support website

The complete set of EMC VNX series customer publications is available on the EMC Online Support website. To search for technical documentation, go to http://Support.EMC.com. After logging in to the website, click the VNX Support by Product page to locate information for the specific feature required.

VNX wizards

Unisphere software provides wizards for performing setup and configuration tasks. The Unisphere online help provides more details on the wizards.


Chapter 2: Concepts

Topics included are:
◆ AVM overview on page 24
◆ System-defined storage pools overview on page 24
◆ Mapped storage pools overview on page 25
◆ User-defined storage pools overview on page 26
◆ File system and automatic file system extension overview on page 26
◆ AVM storage pool and disk type options on page 27
◆ Storage pool attributes on page 35
◆ System-defined storage pool volume and storage profiles on page 39
◆ File system and storage pool relationship on page 53
◆ Automatic file system extension on page 55
◆ Thin provisioning on page 59
◆ Planning considerations on page 59


AVM overview

The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication).

You can configure file systems created with AVM to extend automatically, without system administrator intervention, to support file system operations. Automatic extension causes the file system to extend when it reaches the specified usage point, the HWM, as described in Automatic file system extension on page 55. You set the size for the file system you create, and also the maximum size to which you want the file system to grow. The thin provisioning option lets you present the maximum size of the file system to the user or application, of which only a portion is actually allocated. Thin provisioning allows the file system to slowly grow on demand as the data is written.

Note: Enabling the thin provisioning option with automatic extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, then automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free storage space in the file system.
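The interaction between the HWM trigger, the maximum size, and available pool space can be sketched as follows. This is illustrative only; the actual Data Mover logic differs:

```python
def auto_extend(size_mb, used_mb, hwm_pct, max_size_mb, pool_free_mb, step_mb):
    """Extend the file system by step_mb once usage crosses the HWM,
    capped at max_size_mb and limited by free space in the pool."""
    if used_mb * 100 < size_mb * hwm_pct:
        return size_mb                       # below the HWM: nothing to do
    grow = min(step_mb, max_size_mb - size_mb)
    if grow <= 0:
        return size_mb                       # already at the maximum size
    if pool_free_mb < grow:
        raise RuntimeError("automatic extension failed: not enough pool space")
    return size_mb + grow

# 10 GB file system, 90% HWM, 16 GB maximum, plenty of pool space:
auto_extend(10240, 9500, 90, 16384, 50000, 4096)   # grows to 14336 MB
```

The failure branch mirrors the note above: thin provisioning reserves nothing, so when pool free space cannot cover the next extension step, the extension fails even though the presented maximum size suggests room remains.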

File systems support the following FAST data service policies:

◆ For VNX for block systems: thin LUNs and thick LUNs, compression, auto-tiering, and mirroring (EMC MirrorView™ or RecoverPoint).

◆ For Symmetrix systems: thin LUNs and thick LUNs, auto-tiering, and R1, R2, or BCV disk volumes.

To create file systems, use one or more types of AVM storage pools:

◆ System-defined storage pools
◆ User-defined storage pools

System-defined storage pools overview

System-defined storage pools are predefined and available with the VNX system. You cannot create or delete these predefined storage pools. You can modify some of the attributes of the system-defined storage pools, but this is usually unnecessary.

AVM system-defined storage pools do not preclude the use of user-defined storage pools or manual volume and file system management, but instead give system administrators a simple volume and file system management tool. With command options and interfaces that support AVM, you can use system-defined storage pools to create and extend file systems without manually creating and managing stripe volumes, slice volumes, or metavolumes. If your applications do not require precise placement of file systems on particular disks or on particular locations on specific disks, using AVM is an efficient way for you to create file systems.

Flash drives behave differently than Performance or Capacity drives, so AVM uses different logic to configure file systems on Flash drives. To configure Flash drives for maximum performance, AVM may select more disk volumes than are needed to satisfy the requested capacity. While the individual disk volumes are no longer available for manual volume management, the unused Flash drive space is still available for creating additional file systems or extending existing file systems. VNX for block system-defined storage pools for Flash support on page 45 contains additional information about using Flash drives.

AVM system-defined storage pools are adequate for most high availability and performance considerations. Each system-defined storage pool manages the details of allocating storage to file systems. When you create a file system by using AVM system-defined storage pools, storage is automatically allocated from the pool to the new file system. After the storage is allocated from that pool, the storage pool can dynamically grow and shrink to meet the file system needs.

Mapped storage pools overview

A mapped pool is a storage pool that is dynamically created during the normal storage discovery (diskmark) process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of LUNs that use any combination of data services:

◆ thin
◆ thick
◆ auto-tiering
◆ mirrored
◆ VNX compression

However, for the best file system performance, ensure that the mapped pool contains only the same type of LUNs that use the same data services:

◆ all thick
◆ all thin
◆ all the same auto-tiering options
◆ all mirrored or none mirrored
◆ all compressed or none compressed

If a mapped pool is not in use and no LUNs exist in the file-based storage group that corresponds to the pool, the pool will be deleted automatically during diskmark.

VNX for block data services can be configured at the LUN level. When creating a file system with mapped pools, the default slice option is set to no to help prevent inconsistent data services on the file system.
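The homogeneity recommendation above can be expressed as a simple check. The LUN records here are hypothetical, not an actual VNX API:

```python
def pool_is_homogeneous(luns):
    """True when every LUN in the mapped pool has the same type and the
    same data-service settings (tiering, mirroring, compression)."""
    signatures = {
        (l["type"], l["tiering"], l["mirrored"], l["compressed"]) for l in luns
    }
    return len(signatures) <= 1

good = [{"type": "thick", "tiering": "auto", "mirrored": False, "compressed": False}] * 2
mixed = good + [{"type": "thin", "tiering": "auto", "mirrored": False, "compressed": False}]
pool_is_homogeneous(good)   # True: all thick, identical data services
pool_is_homogeneous(mixed)  # False: thick and thin LUNs are mixed
```

Collapsing each LUN to a signature tuple makes "same type, same data services" a one-line set-cardinality test.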


User-defined storage pools overview

User-defined storage pools allow you to create containers or pools of storage, filled with manually created volumes. When the applications require precise placement of file systems on particular disks or locations on specific disks, use AVM user-defined storage pools for more control. User-defined storage pools also allow you to reserve disk volumes so that the system-defined storage pools cannot use them.

User-defined storage pools provide a better option for those who want more control over their storage allocation while still using the more automated management tool. User-defined storage pools are not as automated as the system-defined storage pools. You must specify some attributes of the storage pool and the storage system from which the space is allocated to create file systems. While somewhat less involved than creating volumes and file systems manually, using these storage pools requires more manual involvement on your part than the system-defined storage pools. When you create a file system by using a user-defined storage pool, you must:

1. Create the storage pool.

2. Choose and add volumes to it, either by manually selecting and building the volume structure or by auto-selection.

3. Extend it with new volumes when required.

4. Remove volumes you no longer require in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool which describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. System-defined storage pool volume and storage profiles on page 39 describes the AVM algorithms used.
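Auto-selection, as described above, takes whole disk volumes from the system pool until the requested minimum size is met. A minimal sketch with hypothetical volume tuples (the real AVM selection also applies the profile logic referenced above):

```python
def auto_select(available, minimum_mb):
    """Pick whole disk volumes from the system pool until their total
    capacity covers minimum_mb; raise if the pool cannot satisfy it."""
    chosen, total = [], 0
    for name, size_mb in available:
        if total >= minimum_mb:
            break
        chosen.append(name)
        total += size_mb
    if total < minimum_mb:
        raise ValueError("not enough unused disk volumes in the system pool")
    return chosen

# Three 100 GB disk volumes available; request a 150 GB minimum:
auto_select([("d7", 100000), ("d8", 100000), ("d9", 100000)], 150000)
# -> ["d7", "d8"]
```

Note that whole volumes are consumed, so the selected total can exceed the requested minimum, which matches the whole-disk-volume behavior described above.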

File system and automatic file system extension overview

You can create or extend file systems with AVM storage pools and configure the file system to automatically extend as needed. You can do one of the following:

◆ Enable automatic extension on a file system when it is created.
◆ Enable and disable it at any later time by modifying the file system.

The options that work with automatic file system extension are as follows:

◆ HWM
◆ Maximum size
◆ Thin provisioning

The HWM and maximum size are described in Automatic file system extension on page 55. Thin provisioning is described in Thin provisioning on page 59.


The default supported maximum size for any file system is 16 TB.

With automatic extension, the maximum size is the size to which the file system can grow, up to the supported 16 TB. Setting the maximum size is optional with automatic extension, but mandatory with thin provisioning. With thin provisioning enabled, users and applications see the maximum size, while only a portion of that size is actually allocated to the file system.

Automatic extension allows the file system to grow as needed without system administrator intervention, and to meet system operations requirements continuously, without interruptions.

AVM storage pool and disk type options

AVM provides a range of options for managing volumes. The VNX system can choose the configuration and placement of the file systems by using system-defined storage pools, or you can create a user-defined storage pool and define its attributes.

This section contains the following:

◆ AVM storage pools on page 27
◆ Disk types on page 27
◆ System-defined storage pools on page 30
◆ RAID groups and storage characteristics on page 33
◆ User-defined storage pools on page 35

AVM storage pools

An AVM storage pool is a container or pool of volumes. Table 3 on page 27 lists the major difference between system-defined and user-defined storage pools.

Table 3. System-defined and user-defined storage pool difference

Functionality: Ability to grow and shrink
System-defined storage pools: Automatic, but the dynamic behavior can be disabled.
User-defined storage pools: Manual only. Administrators must manage the volume configuration, addition, and removal of storage from these storage pools.

Chapter 4 provides more detailed information.

Disk types

A storage pool must contain volumes from only one disk type.

AVM storage pool and disk type options 27

Concepts

Page 28: Docu31539 Managing Volumes and File Systems With VNX Automatic Volume Management 7.0

Table 4 on page 28 lists the available disk types associated with the storage pools and the disk type descriptions.

Table 4. Disk types

CLSTD: Standard VNX for block disk volumes.
CLATA: VNX for block Capacity disk volumes.
CLSAS: VNX for block Serial Attached SCSI (SAS) disk volumes.
CLEFD: VNX for block Performance and SATA II Flash drive disk volumes.
CMATA: VNX for block Capacity disk volumes for use with EMC MirrorView/Synchronous.
CMSTD: Standard VNX for block disk volumes for use with MirrorView/Synchronous.
CMEFD: VNX for block CLEFD disk volumes that are used with MirrorView/Synchronous.
CMSAS: VNX for block SAS disk volumes that are used with MirrorView/Synchronous.
STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.
R1STD: Symmetrix Performance disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.
EFD: High performance Symmetrix disk volumes built on Flash drives, typically RAID 5 configuration.
ATA: Standard Symmetrix disk volumes built on Capacity drives, typically RAID 1 configuration.
R1ATA: Symmetrix Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2ATA: Symmetrix Capacity disk volumes, set up as target for mirrored storage that uses SRDF functionality.
Performance: VNX for block Performance disk volumes that correspond to VNX for block pool-based LUNs.
Capacity: VNX for block Capacity disk volumes that correspond to VNX for block pool-based LUNs.
Extreme_performance: VNX for block Flash disk volumes that correspond to VNX for block pool-based LUNs.
Mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs. For Symmetrix, a mixture of Symmetrix Flash, Performance, or Capacity disk volumes that correspond to devices in FAST Storage Groups.
Mirrored_mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_performance: For VNX for block, Performance disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_capacity: For VNX for block, Capacity disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_extreme_performance: For VNX for block, Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.
BCVA: BCV, built from Capacity disks, for use by TimeFinder/FS operations.
R1BCA: BCV, built from Capacity disks, that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCA: BCV, built from Capacity disks, that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCV: BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
BCVMixed: BCV, built from a mixture of Symmetrix Flash, Performance, or Capacity disk volumes, and used by TimeFinder/FS operations.
R1Mixed: A mixture of Symmetrix Flash, Performance, or Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2Mixed: Mixed BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
R1BCVMixed: Mixed BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCVMixed: Mixed BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.

System-defined storage pools

Choosing system-defined storage pools to build the file system is an efficient way to manage volumes and file systems. System-defined storage pools are associated with the type of attached storage system you have. This means that:

◆ VNX for block storage pools are available for attached VNX for block storage systems.
◆ Symmetrix storage pools are available for attached Symmetrix storage systems.

System-defined storage pools are dynamic by default. The AVM feature adds and removes volumes automatically from the storage pool as needed. Table 5 on page 31 lists the system-defined storage pools supported on the VNX for file. RAID groups and storage characteristics on page 33 contains additional information about RAID group combinations for system-defined storage pools.

Note: A storage pool can include disk volumes of only one type.


Table 5. System-defined storage pools

DescriptionStorage pool name

Designed for high performance and availability at medium cost.This storagepool uses STD disk volumes (typically RAID 1).

symm_std

Designed for high performance and availability at low cost. This storagepool uses ATA disk volumes (typically RAID 1).

symm_ata

Designed for high performance and availability at medium cost, specificallyfor storage that will be mirrored to a remote VNX for file that uses SRDF, or

symm_std_rdf_src

to a local VNX for file that uses TimeFinder/FS. Using SRDF/S with VNXfor Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopyon VNX for File provide more information about the SRDF feature.

Designed for high performance and availability at medium cost, specificallyas a mirror of a remote VNX for file that uses SRDF. This storage pool uses

symm_std_rdf_tgt

Symmetrix R2STD disk volumes. Using SRDF/S with VNX for Disaster Re-covery provides more information about the SRDF feature.

Designed for archival performance and availability at low cost, specificallyfor storage mirrored to a remote VNX for file that uses SRDF. This storage

symm_ata_rdf_src

pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with VNX forDisaster Recovery provides more information about the SRDF feature.

Designed for archival performance and availability at low cost, specificallyas a mirror of a remote VNX for file that uses SRDF. This storage pool uses

symm_ata_rdf_tgt

Symmetrix R2ATA disk volumes. Using SRDF/S with VNX for Disaster Re-covery provides more information about the SRDF feature.

Designed for very high performance and availability at high cost.This storagepool uses Flash disk volumes (typically RAID 5).

symm_efd

Designed for high performance and availability at low cost. This storagepool uses CLSTD disk volumes created from RAID 1 mirrored-pair diskgroups.

clar_r1

Designed for high availability at low cost. This storage pool uses CLSTDdisk volumes created from RAID 6 disk groups.

clar_r6

Designed for medium performance and availability at low cost. This storagepool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.

clar_r5_performance

Designed for medium performance and availability at low cost. This storagepool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.

clar_r5_economy

Designed for use with infrequently accessed data, such as archive retrieval.This storage pool uses CLATA disk drives in a RAID 5 configuration.

clarata_archive

AVM storage pool and disk type options 31

Concepts

Page 32: Docu31539 Managing Volumes and File Systems With VNX Automatic Volume Management 7.0

Table 5. System-defined storage pools (continued)

clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.

clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.

clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLATA disk volumes in a RAID 1/0 configuration.

clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses VNX Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.

clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.

clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLSAS disk volumes in a RAID 1/0 configuration.

clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.

clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLEFD disk volumes in a RAID 1/0 configuration.

cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.

cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.


cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.

cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.

cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cmefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMEFD disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

RAID groups and storage characteristics

The following table correlates the storage array to the RAID groups for system-defined storage pools.


Table 6. RAID group combinations

Storage                         RAID 5                      RAID 6             RAID 1
NX4 SAS or SATA                 2+1, 3+1, 4+1, 5+1          4+2                1+1 RAID 1/0
NS20 / NS40 / NS80 FC           4+1, 8+1                    4+2, 6+2, 12+2     1+1 RAID 1
NS20 / NS40 / NS80 ATA          4+1, 6+1, 8+1               4+2, 6+2, 12+2     Not supported
NS-120 / NS-480 / NS-960 FC     4+1, 8+1                    4+2, 6+2, 12+2     1+1 RAID 1/0
NS-120 / NS-480 / NS-960 ATA    4+1, 6+1, 8+1               4+2, 6+2, 12+2     1+1 RAID 1/0
NS-120 / NS-480 / NS-960 EFD    4+1, 8+1                    Not supported      1+1 RAID 1/0
VNX SAS                         3+1, 4+1, 6+1, 8+1          4+2, 6+2           1+1 RAID 1/0
VNX NL SAS                      Not supported               4+2, 6+2           Not supported
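As a quick illustration (not part of the product), the combinations in Table 6 can be encoded as a lookup structure. The sketch below covers two of the rows; the platform labels and function name are informal choices made here, not product identifiers.

```python
# Illustrative encoding of a subset of Table 6 (RAID group combinations).
RAID_COMBINATIONS = {
    "VNX SAS": {
        "RAID 5": ["3+1", "4+1", "6+1", "8+1"],
        "RAID 6": ["4+2", "6+2"],
        "RAID 1": ["1+1 RAID 1/0"],
    },
    "VNX NL SAS": {
        "RAID 5": [],                # not supported
        "RAID 6": ["4+2", "6+2"],
        "RAID 1": [],                # not supported
    },
}

def is_supported(platform: str, raid_type: str, layout: str) -> bool:
    """Return True if the RAID layout appears in Table 6 for the platform."""
    return layout in RAID_COMBINATIONS.get(platform, {}).get(raid_type, [])
```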


User-defined storage pools

For some customer environments, more user control is required than the system-defined storage pools offer. One way for administrators to have more control is to create their own storage pools and define the attributes of the storage pool.

AVM user-defined storage pools allow you to have more control over how the storage is allocated to file systems. Administrators can create a storage pool, add volumes to it either by manually selecting and building the volume structure or by auto-selection, extend the storage pool with new volumes when required, and remove volumes that are no longer needed in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool that describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. When extending a user-defined storage pool, AVM references the last pool member's volume structure and makes a best effort to keep the underlying volume structures consistent. System-defined storage pool volume and storage profiles on page 39 contains additional information.

While user-defined storage pools have attributes similar to system-defined storage pools, user-defined storage pools are not dynamic: administrators must explicitly add and remove volumes.

If you define the storage pool, you must also explicitly add and remove storage from the storage pool and define the attributes for that storage pool. Use the nas_pool command to do the following:

◆ List, create, delete, extend, shrink, and view storage pools.
◆ Modify the attributes of storage pools.

Create file systems with AVM on page 70 and Chapter 4 provide more information.

Understanding how AVM storage pools work enables you to determine whether system-defined storage pools, user-defined storage pools, or both are appropriate for the environment. It is also important to understand the ways in which you can modify the storage-pool behavior to suit your file system requirements. Modify system-defined and user-defined storage pool attributes on page 109 provides a list of all the attributes and the procedures to modify them.

Storage pool attributes

System-defined and user-defined storage pools have attributes that control how they create volumes and file systems. Table 7 on page 36 lists the storage pool attributes, their values, whether an attribute is modifiable and for which storage pools, and a description of the attribute. The system-defined storage pools are shipped with the VNX system. They are designed to optimize performance based on the hardware configuration. Each of the


system-defined storage pools has associated profiles that define the kind of storage used, and how new storage is added to, or deleted from, the storage pool.

Table 7. Storage pool attributes

Attribute: name
Values: Quoted string
Modifiable: Yes (user-defined storage pools)
Description: Unique name. If a name is not specified during creation, one is automatically generated.

Attribute: description
Values: Quoted string
Modifiable: Yes (user-defined storage pools)
Description: A text description. Default is "" (blank string).

Attribute: acl
Values: Integer. For example, 0.
Modifiable: Yes (user-defined storage pools)
Description: Access control level. Controlling Access to System Objects on VNX contains instructions to manage access control levels.

Attribute: default_slice_flag
Values: "y" | "n"
Modifiable: Yes (system-defined and user-defined storage pools)
Description: Indicates whether AVM can slice member volumes to meet the file system request. A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

Attribute: is_dynamic
Values: "y" | "n"
Modifiable: Yes (system-defined storage pools)
Description: Indicates whether this storage pool is allowed to automatically add or remove member volumes. The default value is n.
Note: This attribute is applicable only if volume_profile is not blank.


Attribute: is_greedy
Values: "y" | "n"
Modifiable: Yes (system-defined and user-defined storage pools)
Description: Indicates whether a storage pool is greedy. This option works differently depending on whether you are using a system-defined storage pool or a user-defined storage pool.
Note: This attribute is applicable only if volume_profile is not blank.

System-defined storage pools: When a storage pool receives a request for space, a greedy storage pool attempts to create a new member volume before searching for free space in existing member volumes. The attribute value for this storage pool is y. A storage pool that is not greedy uses all available space in the storage pool before creating a new member volume. The attribute value for this storage pool is n.
Note: When extending a file system, AVM searches for free space on the existing volumes that the file system is currently using and ignores the is_greedy attribute value. If there is not enough free space available, AVM first uses the available space of the existing volumes of the file system, and then uses the is_greedy attribute value to determine where to look for the remaining space.


User-defined storage pools: If set to n (default), the system uses space from the user-defined storage pool's existing member volumes, in the order that the volumes were added to the pool, to create a new file system or extend an existing file system. If set to y, the system uses space from the least-used member volume in the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If two or more member volumes have the same number of disk volumes, AVM selects the one with the lowest ID.
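The member-volume choice described above for a user-defined pool with is_greedy set to y can be sketched as a single sort key. The MemberVolume shape below is hypothetical, invented here for illustration; it is not a VNX data structure.

```python
# Sketch of is_greedy=y selection for a user-defined pool: least-used member
# first, ties broken by most disk volumes, then by lowest ID.
from dataclasses import dataclass

@dataclass
class MemberVolume:
    vol_id: int
    used_gb: float            # space already allocated from this member
    disk_volume_count: int    # disk volumes backing this member

def pick_member(members):
    """Return the member volume AVM would select per the rules above."""
    return min(members, key=lambda m: (m.used_gb, -m.disk_volume_count, m.vol_id))
```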

The system-defined storage pools are designed for use with the Symmetrix and VNX for block storage systems. The structure of volumes created by AVM might differ greatly depending on the type of storage system that is used by the various storage pools. This difference allows AVM to exploit the architecture of current and future block storage devices that are attached to the VNX for file.

Figure 1 on page 39 shows how the different storage pools are associated with the disk volumes for each storage-system type attached. The nas_disk -list command lists the disk volumes, which represent the VNX for file LUNs that are exported from the attached storage system.


Note: Any given disk volume can be a member of only one storage pool.

[Diagram: disk volumes (d3, d4, ... dn) on attached Symmetrix and VNX for block storage systems, and the AVM storage pools built from them, such as symm_std, symm_std_rdf_src, clar_r1, clar_r5_performance, clar_r5_economy, clarata_archive, clarata_r3, cmata_archive, cmata_r3, and cmata_r6.]

Figure 1. AVM system-defined storage pools

System-defined storage pool volume and storage profiles

Volume profiles are the set of rules and parameters that define how new storage is added to a system-defined storage pool. A volume profile defines a standard method of building a large section of storage from a set of disk volumes. This large section of storage can be added to a storage pool that might contain similar large sections of storage. The system-defined storage pool is responsible for satisfying requests for any amount of storage.

Users cannot create or delete system-defined storage pools and their associated profiles. However, users can list, view, and extend the system-defined storage pools, and also modify storage pool attributes.

Volume profiles have an attribute named storage_profile. A volume profile's storage profile defines the rules and attributes that are used to aggregate some number of disk volumes (listed by the nas_disk -list command) into a volume that can be added to a system-defined storage pool. A volume profile uses its storage profile to determine the set of disk volumes to select (or match existing VNX disk volumes), where a given disk volume might match the rules and attributes of a storage profile.


The following sections explain how these profiles help system-defined storage pools aggregate the disk volumes into storage pool members, place the members into storage pools, and then build file systems for each storage-system type:

◆ VNX for block system-defined storage pool algorithms on page 40
◆ VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support on page 43
◆ VNX for block system-defined storage pools for Flash support on page 45
◆ Symmetrix system-defined storage pools algorithm on page 46
◆ VNX for block primary pool-based file system algorithm on page 48
◆ VNX for block secondary pool-based file system algorithm on page 50
◆ Symmetrix mapped pool file systems on page 51

When you use the system-defined storage pools without modifications, through the Unisphere software or the VNX CLI, this activity is transparent to users.

VNX for block system-defined storage pool algorithms

When you create a file system that requires new storage, AVM attempts to create the most optimal stripe volume for a VNX for block storage system. System-defined storage pools for VNX for block storage systems work with LUNs of a specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage pool.

VNX for block integrated models use storage templates to create the LUNs that the VNX for file recognizes as disk volumes. VNX for block storage templates are a combination of template definition files and scripts that create RAID groups and bind LUNs on VNX for block storage systems. You see only the scripts, not the templates. These storage templates are invoked by using the VNX for block root-only setup script or by using the Unisphere software.

Disk volumes exported from a VNX for block storage system are relatively large. A VNX for block system also has two storage processors (SPs). Most VNX for block storage templates create two LUNs per RAID group, one owned by SP A and the other by SP B. Only the VNX for block RAID 3 storage templates create both LUNs owned by one of the SPs.

If no disk volumes are found when a request for space is made, AVM considers the storage pool attributes and initiates the next step based on these settings:

◆ The is_greedy setting indicates whether the system-defined storage pool must add a new member volume to meet the request, or whether it must use all the available space in the storage pool before adding a new member volume. AVM then checks the is_dynamic setting.

Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

◆ The is_dynamic setting indicates if the storage pool can dynamically grow and shrink:


• If set to yes, it allows AVM to automatically add a member volume to meet the request.

• If set to no, and a member volume must be added to meet the request, the user must manually add the member volume to the storage pool.

◆ The flag that requests a file system slice indicates if the file system can be built on a slice volume from a member volume.

◆ The default_slice_flag setting indicates if AVM can slice storage pool member volumes to meet the request.
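The interaction of the is_greedy and is_dynamic settings above can be sketched as a small decision function. This is a rough illustration under the descriptions in Table 7, not VNX code; the function name and return labels are invented here.

```python
# Rough decision sketch: what AVM does next when a space request arrives,
# based on the is_greedy and is_dynamic pool attributes described above.
def next_step(is_greedy: bool, is_dynamic: bool,
              free_space_gb: float, requested_gb: float) -> str:
    # A greedy pool tries to add a member before using existing free space;
    # a non-greedy pool adds a member only when existing space is insufficient.
    wants_new_member = is_greedy or free_space_gb < requested_gb
    if not wants_new_member:
        return "use existing member volumes"
    if is_dynamic:
        return "AVM adds a member volume automatically"
    return "user must add a member volume manually"
```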

Most of the system-defined storage pools for VNX for block storage systems first search for four same-size disk volumes from different buses, different SPs, and different RAID groups.

The absolute criteria that the volumes must meet are as follows:

◆ Disk volume cannot exceed 14 TB.
◆ Disk volume must match the type specified in the storage profile of the storage pool.
◆ Disk volumes must be of the same size.
◆ No two disk volumes can come from the same RAID group.
◆ Disk volumes must be on a single storage system.

If found, AVM stripes the LUNs together and inserts the stripe into the storage pool.

If AVM cannot find four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups. If not found, AVM then searches for four same-size disk volumes from different RAID groups.

Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume. The criteria that the one disk volume must meet are as follows:

◆ Disk volume cannot exceed 14 TB.
◆ Disk volume must match the type specified in the storage profile of the storage pool.
◆ If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.
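The descending search described above (four volumes balanced across buses, then across SPs, then merely from distinct RAID groups, and so on down to one volume) can be sketched as nested loops. This is an illustrative simplification: the disk dictionary shape is invented here, "balanced" is reduced to "spans more than one bus/SP", and least-used RAID group selection is not modeled.

```python
# Sketch of the VNX for block disk-volume selection search described above.
from itertools import combinations

def select_disk_volumes(disks, max_tb=14):
    usable = [d for d in disks if d["size_tb"] <= max_tb]
    for count in (4, 3, 2):                    # prefer wider stripes
        for balance in ("bus", "sp", None):    # bus-balanced, SP-balanced, any
            for combo in combinations(usable, count):
                if len({d["size_tb"] for d in combo}) != 1:
                    continue  # all volumes must be the same size
                if len({d["raid_group"] for d in combo}) != count:
                    continue  # no two volumes from the same RAID group
                if balance and len({d[balance] for d in combo}) < 2:
                    continue  # must span more than one bus / SP
                return list(combo)
    # Last resort: a single disk volume (least-used RAID group not modeled).
    return usable[:1]
```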


Figure 2 on page 42 shows the algorithm used to select disk volumes to add to a pool member in an AVM VNX for block system-defined storage pool, which is either clar_r1, clar_r5_performance, or clar_r5_economy.

[Flowchart: selected volumes, balanced across buses and storage processors and drawn from the least-used RAID groups, are striped together with an 8 KB stripe size and the stripe is inserted into the storage pool; a metavolume is placed on the stripe and sliced to the smaller of the free space available or the file system request. "Least used" is defined as the number of disk volumes used in a RAID group divided by the number of disk volumes visible in that RAID group.]

Figure 2. clar_r1, clar_r5_performance, and clar_r5_economy storage pools algorithm

Figure 3 on page 42 shows the structure of a clar_r5_performance storage pool. The volumes in the storage pools are balanced between SP A and SP B.

[Diagram: two stripe volumes (stripe_volume1 and stripe_volume2) in the clar_r5_performance storage pool, each built on VNX 4+1 RAID 5 disk volumes, one owned by storage processor A and the other by storage processor B.]

Figure 3. clar_r5_performance storage pool structure


VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support

The three VNX for block system-defined storage pools that provide support for the SATA environment are clarata_archive (RAID 5), clarata_r3 (RAID 3), and clarata_r10 (RAID 1/0).

The clarata_r3 storage pool follows the basic VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 39, but uses only one disk volume and does not allow striping of volumes. One of the applications for this pool is backup to disk. Users can manage the RAID 3 disk volumes manually in a user-defined storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only VNX for block Capacity drives, not Performance drives.

The criteria that the one disk volume must meet are as follows:

◆ Disk volume cannot exceed 14 TB.
◆ Disk volume must match the type specified in the storage profile of the storage pool.
◆ If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.


Figure 4 on page 44 shows the clarata_r3 storage pool algorithm.

[Flowchart: if one disk volume is available and meets the absolute criteria, a metavolume is created on the disk volume and placed in the storage pool; otherwise the request fails with "Error. Unable to fill request."]

Figure 4. clarata_r3 storage pool algorithm

The storage pools clarata_archive and clarata_r10 differ from the basic VNX for block algorithm. These storage pools use two disk volumes or a single disk volume, and all Capacity drives are the same.


Figure 5 on page 45 shows the profile algorithm used to select disk volumes by using eitherthe clarata_archive or clarata_r10 storage pool.


VNX for block system-defined storage pools for Flash support

The VNX for file provides the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools for Flash drive support on the VNX for block storage system. AVM uses the same disk selection algorithm and volume structure for each Flash pool. However, the algorithm differs from the standard VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 39 and is outlined next. The algorithm adheres to EMC best practices to achieve the overall best performance and use of Flash drives. Users can also manually manage Flash drives in user-defined pools.

The AVM algorithm used for disk selection and volume structure for all Flash system-defined pools is as follows:

1. The LUN creation process is responsible for storage processor balancing. By default, run the setup_clariion command on integrated systems to set up storage processor balancing.

2. Use a default stripe width of 256 KB (provided in the profile). The stripe member count in the profile is ignored and should be left at 1.

3. When two or more LUNs of the same size are available, always stripe LUNs. Otherwise, concatenate LUNs.

4. No RAID group balancing or RAID group usage is considered.

5. No order is applied to the LUNs being striped together, except that all LUNs from the same RAID group in the stripe will be next to each other. For example, storage processor balanced order is not applied.

6. Use a maximum of two RAID groups from which to take LUNs:


a. If only one RAID group is available, use every same-size LUN in the RAID group. This maximizes the LUN count and meets the size requested.

b. If only two RAID groups are available, use every same-size LUN in each RAID group. This maximizes the LUN count and meets the size requested.
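Steps 3, 5, and 6 above can be sketched as follows. This is an illustrative simplification under the stated rules; the LUN dictionary shape and the "stripe"/"concat" labels are invented here, and bucket and size selection details of the real algorithm are not modeled.

```python
# Sketch of Flash pool LUN combination: stripe when two or more same-size
# LUNs exist (from at most two RAID groups, same-group LUNs kept adjacent);
# otherwise concatenate whatever is available.
from collections import Counter

def combine_flash_luns(luns):
    if not luns:
        return ("concat", [])
    sizes = Counter(l["size_gb"] for l in luns)
    size, count = sizes.most_common(1)[0]
    if count >= 2:
        same = [l for l in luns if l["size_gb"] == size]
        groups = sorted({l["raid_group"] for l in same})[:2]  # at most two RGs
        members = [l for l in same if l["raid_group"] in groups]
        members.sort(key=lambda l: l["raid_group"])  # same-RG LUNs adjacent
        return ("stripe", members)
    return ("concat", luns)
```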

Figure 6 on page 46 shows the profile algorithm used to select disk volumes by using either the clarefd_r5, clarefd_r10, cmefd_r5, or cmefd_r10 storage pool.

[Flowchart: free Flash disk volumes (dVols) are sorted into same-size buckets; for each bucket an optimum stripe element count of #dVols / round(#RGs / 2) is determined and all possible stripes are constructed; the best stripe is chosen by sorting candidates by capacity (descending), then by number of dVols (descending, preferring the widest), by number of RAID groups (preferring 2 over 1), and finally by size (ascending, using the smallest); a slice of the requested file system size is created, and stripes or existing stripes with available space are sliced and concatenated until the requested size is met or no free dVols remain.]

Figure 6. clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools algorithm

Symmetrix system-defined storage pools algorithm

AVM works differently with Symmetrix storage systems because of the size and uniformity of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix storage system are small and uniform in size. The aggregation strategy used by Symmetrix storage pools is primarily to combine many small disk volumes into larger volumes that can be used by file systems. AVM attempts to distribute the input/output (I/O) to as many Symmetrix directors as possible. The Symmetrix storage system can use slicing and striping to distribute I/O among the physical disks on the storage system, so this is less of a concern for the AVM aggregation strategy.

A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes, or creates a metavolume, as necessary to meet the request. The stripe or metavolume is added to the Symmetrix storage pool. When the administrator asks for a specific number of gigabytes of space from the Symmetrix storage pool, the requested size of space is allocated from this system-defined storage pool. AVM adds to and takes from the system-defined


storage pool as required. The stripe size is set in the system-defined profiles. You cannot modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix storage pool is 256 KB. Multipath file system (MPFS) requires a stripe depth of 32 KB or greater.

The algorithm that AVM uses looks for a set of eight disk volumes. If the set of eight is not found, then the algorithm looks for a set of four disk volumes. If the set of four is not found, then the algorithm looks for a set of two disk volumes. If the set of two disk volumes is not found, then the algorithm looks for one disk volume. AVM stripes the disk volumes together if the disk volumes are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool.
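The 8/4/2/1 search just described can be sketched in a few lines. This is an illustrative simplification: disk volumes are modeled only by size, and the "stripe"/"meta" labels are invented here.

```python
# Sketch of the Symmetrix aggregation search: try sets of 8, 4, 2, then 1
# disk volumes; stripe if all the same size, otherwise build a metavolume.
def symm_aggregate(disk_sizes_gb):
    for count in (8, 4, 2, 1):
        if len(disk_sizes_gb) >= count:
            chosen = disk_sizes_gb[:count]
            kind = "stripe" if len(set(chosen)) == 1 else "meta"
            return (kind, chosen)
    # No free disk volumes: AVM slices an existing metavolume instead.
    return None
```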

If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that has space, takes a slice from that metavolume, and makes a metavolume over that slice.

Figure 7 on page 47 shows the AVM algorithm used to select disk volumes by using a Symmetrix system-defined storage pool.

[Flowchart: on a file system request, a set of 8/4/2/1 disk volumes is striped together or built into a metavolume, concatenated to the end of the "in progress" metavolume, and the loop repeats until the disk space requirement is satisfied, at which point the file system is built on the metavolume; if no disk volumes are free on the first pass, a slice (the smaller of the free space available or the file system request) is taken from an existing metavolume in the pool and a metavolume is made on the slice; otherwise the request fails.]

Figure 7. Symmetrix storage pool algorithm


Figure 8 on page 48 shows the structure of a Symmetrix storage pool.

[Diagram: two stripe volumes (stripe_volume1 and stripe_volume2) in the Symmetrix storage pool, each built across four Symmetrix STD disk volumes (d3 through d10).]

Figure 8. Symmetrix storage pool structure

All this system-defined storage pool activity is transparent to users and provides an easy way to create and manage file systems. The system-defined storage pools do not allow users to have much control over how AVM aggregates storage to meet file system needs, but most users prefer ease of use over control.

When users make a request for a new file system that uses the system-defined storage pools, AVM does the following:

1. Determines if more volumes need to be added to the storage pool. If so, selects and adds volumes.

2. Selects an existing, available storage pool volume to use for the file system. The volume might also be sliced to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates the volumes to create the size required to meet the request.

3. Places a metavolume on the resulting volume and builds the file system within the metavolume.

4. Returns the file system information to the user.

All system-defined storage pools have specific, predictable rules for getting disk volumes into storage pools, provided by their associated profiles.

VNX for block primary pool-based file system algorithm

AVM uses the primary pool-based algorithm as follows:

1. Striping is tried first. If disk volumes cannot be striped, then concatenation is tried.

2. AVM checks for free disk volumes:

• If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.

• If there are free disk volumes:

a. AVM sorts them by thin and thick disk volumes.

b. AVM sorts the thin and thick disk volumes into size groups.


3. AVM first checks for thick disk volumes that satisfy the target number of disk volumes to stripe. Default = 5.

4. AVM tries to stripe five disk volumes together, with the same size, same data service policies, and in an SP-balanced manner. If five disk volumes cannot be found, AVM tries four, then three, and then two.

5. AVM selects SP-balanced disk volumes before selecting the same data service policies.

6. If no thick disk volumes are found, AVM then checks for thin disk volumes that satisfy the target number.

7. If the space needed is not found, AVM uses the VNX for block secondary pool-based filesystem algorithm on page 50 to look for the space.

Note:

◆ For file system extension, AVM always tries to expand onto the existing volumes of the file system. However, if there is not enough space to fulfill the size request on the existing volumes, additional storage is obtained using the above algorithm and AVM attempts to match the data service policies of the first used disk volume of the file system.

◆ All volumes mentioned above, whether a stripe or a concatenation, are sliced by default.

Figure 9 on page 49 shows the VNX for block primary pool-based file system algorithm.

[Flowchart: sort free dVols into thin and thick size buckets; search for the target number of same-size dVols (default 5) that meet the SP-balance and data-service-matching requirements; reduce the count to 2; relax data-service matching and then SP balancing; if the size request still cannot be met, use the secondary pool-based LUN strategy or fail the request and clean up the pool.]

Figure 9. VNX for block primary pool-based file systems
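The search order described in steps 3 through 6 can be sketched as follows. This is an illustrative model only, not the actual AVM implementation: the representation of a dVol as a (size_gb, sp, policy) tuple and the exact SP-balance test are assumptions.

```python
from itertools import combinations

def select_stripe_dvols(thick, thin, target=5):
    # Each dVol is modeled as a (size_gb, sp, policy) tuple -- an assumed
    # representation for illustration. Thick dVols are searched before thin.
    for bucket in (thick, thin):
        # Relax data-service matching first, then SP balancing.
        for match_policy, balance_sp in ((True, True), (False, True), (False, False)):
            # Try the target stripe width, then reduce it down to 2.
            for count in range(target, 1, -1):
                for group in combinations(bucket, count):
                    if len({d[0] for d in group}) != 1:
                        continue  # stripe members must be the same size
                    if match_policy and len({d[2] for d in group}) != 1:
                        continue  # same data service policy required
                    sps = [d[1] for d in group]
                    if balance_sp and abs(sps.count('A') - sps.count('B')) > 1:
                        continue  # roughly SP balanced
                    return list(group)
    return None  # no stripe set found: fall back to the secondary algorithm
```

With five same-size, same-policy dVols alternating between SP A and SP B, the sketch returns a full five-way stripe set; with mismatched policies it succeeds only on the relaxed passes.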


VNX for block secondary pool-based file system algorithm

AVM uses the secondary pool-based algorithm as follows:

1. Concatenation will be used. Striping will not be used.

2. Unless requested, slicing will not be used.

3. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes.

4. AVM checks for free disk volumes:

• If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.

• If there are free disk volumes:

a. AVM first checks for thick disk volumes that satisfy the size request (equal to or greater than the file system size).

b. If not found, AVM then checks for thin disk volumes that satisfy the size request.

c. If still not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.

5. If one disk volume satisfies the size request exactly, AVM takes the selected disk volume and uses the whole disk to build the file system.

6. If a larger disk volume is found that is a better fit than any set of smaller disks, then AVM uses the larger disk volume.

7. If multiple disk volumes satisfy the size request, AVM sorts the disk volumes from smallest to largest, and then sorts them into alternating SP A and SP B lists. Starting with the first disk volume, AVM searches through a list for matching data services until the size request is met. If the size request is not met, AVM searches again but ignores the data services.

Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.


Figure 10 on page 51 shows the VNX for block secondary pool-based file system algorithm.

[Flowchart: sort free dVols into thin and thick buckets; look for a best-fit single dVol; otherwise sort smaller dVols from smallest to largest into an alternating SP A/SP B list and concatenate, matching data service first, until the size is met; slice existing volumes if the slice option is specified as "Y"; fail the request and clean up the pool if the size cannot be met.]

Figure 10. VNX for block secondary pool-based file systems
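Steps 5 through 7 above can be modeled roughly as follows, assuming each dVol is a (size_gb, sp) tuple; the data-service matching pass and the larger-LUN better-fit comparison from step 6 are omitted for brevity.

```python
from itertools import zip_longest

def concat_for_size(dvols, size_gb):
    # dVols are modeled as (size_gb, sp) tuples -- an assumed representation.
    # Step 5/6 (simplified): a single dVol that meets the size request wins.
    singles = [d for d in dvols if d[0] >= size_gb]
    if singles:
        return [min(singles, key=lambda d: d[0])]
    # Step 7: sort the smaller dVols from smallest to largest, alternate
    # between the SP A and SP B lists, and accumulate until the size is met.
    spa = sorted(d for d in dvols if d[1] == 'A')
    spb = sorted(d for d in dvols if d[1] == 'B')
    picked, total = [], 0
    for pair in zip_longest(spa, spb):
        for d in pair:
            if d is None:
                continue
            picked.append(d)
            total += d[0]
            if total >= size_gb:
                return picked
    return None  # not enough free space: the request fails
```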

Symmetrix mapped pool file systems

AVM builds a Symmetrix mapped pool file system as follows:

1. Unless requested, slicing will not be used.

2. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes for the purpose of striping together the same type of disk volumes:

• If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.

• If there are free disk volumes:

a. AVM first checks for a set of eight disk volumes.

b. If a set of eight is not found, AVM then looks for a set of four disk volumes.

c. If a set of four is not found, AVM then looks for a set of two disk volumes.

d. If a set of two is not found, AVM finally looks for one disk volume.


3. When free disk volumes are found:

a. AVM first checks for thick disk volumes that satisfy the size request, which can be equal to or greater than the file system size. If thick disk volumes are available, AVM first tries to stripe the thick disk volumes that have the same disk type. Otherwise, AVM stripes together thick disk volumes that have different disk types.

b. If thick disks are not found, AVM then checks for thin disk volumes that satisfy the size request. If thin disk volumes are available, AVM first tries to stripe the thin disk volumes that have the same disk type, where "same" means the single disk type of the pool in which the volume resides. Otherwise, AVM stripes together thin disk volumes that have different disk types.

c. If thin disks are not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.

4. If neither thick nor thin disk volumes satisfy the size request, AVM then checks whether striping disk volumes of a single disk type will satisfy the size request, ignoring whether the disk volumes are thick or thin.

5. If still no matches are found, AVM checks whether slicing was requested:

a. If slicing was requested, then AVM checks whether any stripes exist that satisfy the size request. If yes, then AVM slices an existing stripe.

b. If slicing was not requested, AVM checks whether any free disk volumes can be concatenated to satisfy the size request. If yes, AVM concatenates disk volumes, matching data services if possible, and builds the file system.

6. If still no matches are found, there is not enough space available and the request fails.

Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.


Figure 11 on page 53 shows the Symmetrix mapped pool algorithm.

[Flowchart: sort free dVols into thick and thin buckets, further divided by disk type for striping; try thick stripes of one disk type, then thin stripes, then same-disk-type stripes ignoring thick/thin; if the slice option is "Y", slice an existing stripe; otherwise concatenate free dVols, matching data service if possible; fail the request and remove unused volumes if space cannot be found.]

Figure 11. Symmetrix mapped pool file systems
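The set-size search from step 2 reduces to a simple loop. This is a sketch under simplifying assumptions: the buckets are flat lists, preferring thick over thin at each set size is an interpretation of the documented order, and the grouping by disk type is omitted.

```python
def choose_stripe_set(thick, thin):
    # Search for a set of eight free dVols, then four, then two, then one.
    for want in (8, 4, 2, 1):
        for bucket in (thick, thin):  # assumed preference: thick before thin
            if len(bucket) >= want:
                return bucket[:want]
    return []  # no free dVols: the request fails
```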

File system and storage pool relationship

When you create a file system that uses a system-defined storage pool, AVM consumes disk volumes either by adding new members to the pool, or by using existing pool members.

To create a file system by using a user-defined storage pool, do one of the following:

◆ Create the storage pool and add the volumes you want to use manually before creating the file system.

◆ Let AVM create the user pool by size.

Deleting a file system associated with either a system-defined or user-defined storage pool returns the unused space to the storage pool. But the storage pool might continue to reserve


that space for future file system requests. Figure 12 on page 54 shows two file systems built from an AVM storage pool.

[Diagram: a storage pool with member volumes; two slices carry metavolumes holding file systems FS1 and FS2.]

Figure 12. File systems built by AVM

As Figure 13 on page 54 shows, if FS2 is deleted, the storage used for that file system is returned to the storage pool. AVM continues to reserve it, as well as any other member volumes that are available in the storage pool, for a future request. This practice is true of system-defined and user-defined storage pools.

[Diagram: the storage pool after FS2 is deleted; FS1 remains on its sliced metavolume and the freed member volumes stay in the pool.]

Figure 13. FS2 deletion returns storage to the storage pool

If FS1 is also deleted, the storage that was used for the file systems is no longer required.

A system-defined storage pool removes the volumes from the storage pool and returns the disk volumes to the storage system for use with other features or storage pools. You can change the attributes of a system-defined storage pool so that it is not dynamic, and will not grow and shrink dynamically. By making this change, you increase your direct involvement in managing the volume structure of the storage pool, including adding and removing volumes.

A user-defined storage pool does not have any capability to add and remove volumes. To use volumes contained in a user-defined storage pool for another purpose, you must remove the volumes. Remove volumes from storage pools on page 122 provides more information. Otherwise, the user-defined storage pool continues to reserve the space for use by that pool.


Figure 14 on page 55 shows that the storage pool container continues to exist after the file systems are deleted, and AVM continues to reserve the volumes for future requests of that storage pool.

[Diagram: the storage pool container with its member volumes, after both file systems are deleted.]

Figure 14. FS1 deletion leaves storage pool container with volumes

If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in Remove volumes from storage pools on page 122 to remove volumes from the system-defined storage pool.

To reuse the volumes of a user-defined storage pool for other purposes, remove the volumes or delete the storage pool.

Automatic file system extension

Automatic file system extension works only when an AVM storage pool is associated with a file system. You can enable or disable automatic extension when you create a file system, or modify the file system properties later.

Create file systems with AVM on page 70 provides the procedure to create file systems with AVM system-defined or user-defined storage pools and enable automatic extension on a newly created file system.

Enable automatic file system extension and options on page 91 provides the procedure to modify an existing file system and enable automatic extension.

You can set the HWM and maximum size for automatic file system extension. The Control Station might attempt to extend the file system several times, depending on these settings.

HWM

The HWM identifies the threshold for initiating automatic file system extension. The HWM value must be between 50 percent and 99 percent. The default HWM is 90 percent of the file system size.

Automatic extension guarantees that the file system usage is at least 3 percent below the HWM. Figure 15 on page 58 contains the algorithm for how the calculation is performed. For example, a 100 GB file system with an 80 percent HWM reaches the HWM at 80 GB. The file system then automatically extends to 110 GB and is at 72.73 percent usage (80 GB), which is well below the 80 percent HWM for the 110 GB file system.


◆ If automatic extension is disabled, when the file system reaches the 90 percent (internal) HWM, an event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.

◆ If automatic extension is enabled, when the file system reaches the HWM, an automatic extension event notification is sent to the sys_log and the file system automatically extends without any administrative action. The automatic extension size depends on the extend_size value and the current file system size:

extend_size = polling_interval * io_rate * 100 / (100 - HWM)

where:

polling_interval: default is 10 seconds

io_rate: default is 200 MB/s

HWM: value is set per file system

If a file system is smaller than the extend_size value, it extends by its own size when it reaches the HWM.

If a file system is larger than the extend_size value, it extends by 5 percent of its size or the extend_size, whichever is larger, when it reaches the HWM.

Examples

The following examples use file system sizes of 100 GB and 500 GB, and HWM values of 80 percent, 85 percent, 90 percent, and 95 percent:

◆ Example 1 — 100 GB file system, 85 percent HWM

extend_size = (10*200*100)/(100-85)

Result = 13.3 GB

13.3 GB is greater than 5 GB (which is 5 percent of 100 GB). Therefore, the file system is extended by 13.3 GB.

◆ Example 2 — 100 GB file system, 90 percent HWM

extend_size = (10*200*100)/(100-90)

Result = 20 GB

20 GB is greater than 5 GB (which is 5 percent of 100 GB). Therefore, the file system is extended by 20 GB.

◆ Example 3 — 500 GB file system, 90 percent HWM

extend_size = (10*200*100)/(100-90)

Result = 20 GB

20 GB is less than 25 GB (which is 5 percent of 500 GB). Therefore, the file system is extended by 25 GB.

◆ Example 4 — 500 GB file system, 95 percent HWM


extend_size = (10*200*100)/(100-95)

Result = 40 GB

40 GB is greater than 25 GB (which is 5 percent of 500 GB). Therefore, the file system is extended by 40 GB.

◆ Example 5 — 500 GB file system, 80 percent HWM

extend_size = (10*200*100)/(100-80)

Result = 10 GB

Because the total used space on the file system after a 10 GB extension would be 78.4 percent (400/510 * 100), which exceeds the (HWM - 3) limit of 77 percent, the file system is instead extended by a single 19.5 GB extension, bringing usage to 77 percent (400 * 100/77 = 519.5 GB total).
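The calculation in Examples 1 through 4 can be reproduced with a small helper. This is an illustrative sketch, not the actual fs_extend_handler code; sizes are converted at 1 GB = 1000 MB to match the 13.3 GB result above. Example 5 exercises a further adjustment from the Figure 15 algorithm, the 3-percent-below-HWM guarantee, which this sketch does not model.

```python
def auto_extend_size_gb(fs_size_gb, hwm_pct,
                        polling_interval_s=10, io_rate_mb_s=200):
    # extend_size = polling_interval * io_rate * 100 / (100 - HWM), in MB
    extend_gb = (polling_interval_s * io_rate_mb_s * 100
                 / (100 - hwm_pct)) / 1000.0
    if fs_size_gb < extend_gb:
        return fs_size_gb  # a small file system extends by its own size
    # otherwise the larger of extend_size and 5 percent of the current size
    return max(extend_gb, 0.05 * fs_size_gb)
```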

Maximum size

The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension is from 3 MB up to 16 TB. If thin provisioning is enabled and the selected storage pool is a traditional RAID group (non-virtual VNX for block thin) storage pool, the maximum size is required. Otherwise, this field is optional. The extension size is also dependent on having additional space in the storage pool associated with the file system.

Automatic file extension conditions

The conditions for automatically extending a file system are as follows:

◆ If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.

◆ If the available space is less than the extend size, the file system extends by the maximum available space.

◆ If only the HWM is set with automatic extension enabled, the file system automatically extends when that HWM is reached, if there is space available and the file system size is less than 16 TB.

◆ If only the maximum size is specified with automatic extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, if the file system has space available and the maximum size has not been reached.

◆ If the file system reaches or exceeds the set maximum size, automatic extension is rejected.

◆ If the HWM or maximum file size is not set, but either automatic extension or thin provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.
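Taken together, these conditions amount to clamping logic along the following lines (an illustrative sketch; the function name and GB units are assumptions, not VNX code):

```python
def extension_grant_gb(fs_size_gb, max_size_gb, avail_gb, extend_gb):
    # Clamp a requested extension to the documented limits.
    if fs_size_gb >= max_size_gb:
        return 0.0  # at or beyond the maximum size: extension is rejected
    grow = min(extend_gb, max_size_gb - fs_size_gb)  # never pass the maximum
    return min(grow, avail_gb)  # short on pool space: use what is available
```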


Calculating the size of an automatic file system extension

During each automatic file system extension, fs_extend_handler, located on the Control Station (/nas/sbin/fs_extend_handler), calculates the extension size by using the algorithm shown in Figure 15 on page 58.

Figure 15. Calculating the size of an automatic file system extension


Thin provisioning

The thin provisioning option allows you to allocate storage capacity based on anticipated needs, while you dedicate only the resources you currently need. Combining automatic file system extension and thin provisioning lets you grow the file system gradually as needed.

When thin provisioning is enabled and a virtual storage pool is not being used, the virtual maximum file system size is reported to NFS and CIFS clients. If a virtual storage pool is being used, the actual file system size is reported to NFS and CIFS clients.

Note: Enabling thin provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system.

Planning considerations

This section covers important volume and file system planning information and guidelines, interoperability considerations, storage pool characteristics, and upgrade considerations that you need to know when implementing AVM and automatic file system extension.

Review these topics:

◆ File system management and the nas_fs command
◆ The EMC SnapSure feature (checkpoints) and the fs_ckpt command
◆ VNX for file volume management concepts (metavolumes, slice volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands
◆ RAID technology
◆ Symmetrix storage systems
◆ VNX for block storage systems

Interoperability considerations

When using automatic file system extension with replication, consider these guidelines:

◆ Enable automatic extension and thin provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically.

◆ When the source file system reaches its HWM, the destination file system automatically extends first, and then the source file system automatically extends.

Do one of the following:


• Set up the source replication file system with automatic extension enabled, as explained in Create file systems with automatic file system extension on page 81.

• Modify an existing source file system to automatically extend by using the procedure Enable automatic file system extension and options on page 91.

◆ If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message indicating that the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size by using the nas_fs -xtend <fs_name> -option src_only command. Using VNX Replicator provides more detailed information on correcting the failure.

Other interoperability considerations are:

◆ The automatic extension and thin provisioning configuration is not moved over to the destination file system during replication failover. If you intend to reverse the replication, and the destination file system becomes the source, you must enable automatic extension on the new source file system.

◆ With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system, and the clients see the virtually provisioned maximum size on the source file system. Table 8 on page 60 describes this client view.

Table 8. Client view of VNX Replicator source and destination file systems

                 Source file system with    Source file system without    Destination
                 thin provisioning          thin provisioning             file system

Clients see:     Maximum size               Actual size                   Actual size

Using VNX Replicator contains more information on using automatic file system extension with VNX Replicator.

AVM storage pool considerations

Consider these AVM storage pool characteristics:

◆ System-defined storage pools have a set of rules that govern how the system manages storage. User-defined storage pools have attributes that you define for each storage pool.

◆ All system-defined storage pools (virtual and non-virtual) are dynamic. They acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.

◆ User-defined storage pools are not dynamic. They require administrators to add and remove volumes manually. You can choose disk volume storage from only one of the attached storage systems when creating a user-defined storage pool.


◆ Striping never occurs above the storage-pool level.

◆ The system-defined VNX for block storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered to be a “greedy” attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. Modify system-defined storage pool attributes on page 110 describes the procedure.

Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

Another option is to create user-defined storage pools to group disk volumes to keep system-defined storage pools from using them. Create file systems with user-defined storage pools on page 74 provides more information on creating user-defined storage pools. You can create a storage pool to reserve disk volumes, but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes.

◆ The system-defined Symmetrix storage pools maximize the use of disk volumes acquired by the storage pool before consuming more. This behavior is considered to be a "not greedy" attribute.

◆ When creating a user-defined storage pool, the default is a "not greedy" behavior. The system uses space from the user-defined storage pool's existing volume members, in the order that the volumes were added to the pool, to create a new file system or extend an existing file system.

If a “greedy” attribute is set when creating a user-defined storage pool, then the pool uses space from the least-used member volumes to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes.

For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
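The selection order described above can be expressed as a single sort key. The tuple representation of a member volume is assumed for illustration.

```python
def pick_greedy_member(members):
    # members: (member_id, used_gb, disk_count) tuples -- an assumed shape.
    # Least-used member wins; ties go to the member with the most disk
    # volumes; remaining ties go to the lowest ID.
    return min(members, key=lambda m: (m[1], -m[2], m[0]))
```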

◆ AVM does not perform the storage system operations necessary to create new disk volumes; it consumes only existing disk volumes. You might need to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.

◆ A file system might use many or all of the disk volumes that are members of a system-defined storage pool.

◆ You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on Capacity drives to the pool, add only other Capacity-based disk volumes to the pool to extend it.


◆ SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made.

◆ By default, a checkpoint SavVol is sliced so that a SavVol auto-extension will not use space unnecessarily.

◆ AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.

◆ Some AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of storage-processor-balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a VNX for block storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than was intended.

◆ To guarantee consistent file system performance, configure a storage pool on the VNX for block system that maps to an AVM pool using the same data services on the VNX for file.

◆ Because of the minimum storage requirement restriction for a VNX for block system's storage pool, if you must create a heterogeneous pool that uses multiple data services to satisfy different use cases, do the following:

1. Use a heterogeneous system-defined AVM pool to create user-defined pools that group disk volumes with matching data service policies.

2. Create file systems from the user-defined pools.

For example, for one use case you might need to create both a regular file system and an archive file system.

◆ The system allows you to control the data service configuration at the file system level. By default, disk volumes are not sliced unless you explicitly select that setting at file system creation time. By not slicing a disk volume, the system guarantees that a file system will not share disks with other file systems. There is a 1:n relationship between the file system and the disk volumes, where n is greater than or equal to 1.

You can go to the VNX for block or Symmetrix storage system and modify the data service policies of the set of LUNs underneath the same file system to change the data policy of the file system. This option may cause the file system that is created to exceed the specified storage capacity because the file system size will be disk volume-aligned. Choose the LUN size on the VNX for block or Symmetrix system storage pool carefully. The pool-based LUN overhead is 2 percent of the file system capacity plus 3 GB, for a Direct Logical Unit (DLU) and a fully provisioned Thin LUN (TLU).
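The stated overhead rule works out as follows (a hypothetical estimation helper, not a VNX utility):

```python
def pool_lun_overhead_gb(fs_capacity_gb):
    # Documented rule: 2 percent of the file system capacity plus 3 GB,
    # for a DLU or a fully provisioned TLU.
    return 0.02 * fs_capacity_gb + 3.0
```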

Create file systems with AVM on page 70 provides more information on creating file systems by using the different pool types.

Related information on page 22 provides a list of related documentation.


Upgrading VNX for file software

When you upgrade to VNX for file version 7.0 software, all system-defined storage pools are available.

The system-defined storage pools for the currently attached storage systems with available space appear in the output when you list storage pools, even if AVM is not used on the system. If you have not used AVM in the past, these storage pools are containers and do not consume storage until you create a file system by using AVM.

If you have used AVM in the past, any user-defined storage pools also appear when you list the storage pools, in addition to the system-defined storage pools.

Note: Automatic file system extension is interrupted during software upgrades. If automatic file system extension is enabled, the Control Station continues to capture HWM events. However, actual file system extension does not start until the upgrade process completes.

File system and automatic file system extension considerations

Before implementing AVM, consider your environment, most important file systems, file system sizes, and expected growth. Follow these general guidelines when planning to use AVM in your environment:

◆ Create the most important and most-used file systems first. AVM system-defined storage pools use free disk volumes to create a new file system. For example, there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, and metavolume1, and then creates the file system ufs1:

• Assuming the default behavior of the system-defined storage pool, AVM uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1.

• File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, for more efficient access.

◆ If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.

◆ For file systems with sequential I/O, two LUNs per file system are generally sufficient.

◆ If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing.

◆ If you would like to control the data service configuration at the file system level but still want automatic extension and thin provisioning, do one of the following:

• Create a VNX for block or Symmetrix storage pool with thin LUNs, and then create file systems from that pool.


• Set the slice option to Yes if you want to enable file system auto extension.

◆ Automatic file system extension does not alleviate the need for appropriate planning. Create the file systems with adequate space to accommodate the estimated usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails. Known problems and limitations on page 126 provides more information on how to identify and recover from this issue.

Note: When planning file system size and usage, consider setting the HWM so that the free space above the HWM setting is a certain percentage above the largest average file for that file system.
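As a sketch of that sizing rule, the largest HWM that still leaves one largest-average-file's worth of free space above it can be computed with shell arithmetic. The sizes below are example values, not taken from this guide:

```shell
# Sketch with example values: choose an HWM (percent) that leaves at
# least the largest average file's worth of free space above it:
#   fs_size * (100 - hwm) / 100 >= largest_avg_file
FS_SIZE_MB=102400          # 100 GB file system (example)
LARGEST_AVG_FILE_MB=5120   # 5 GB largest average file (example)
HWM=$(( (100 * (FS_SIZE_MB - LARGEST_AVG_FILE_MB)) / FS_SIZE_MB ))
echo "suggested HWM: ${HWM}%"
```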

◆ Use of AVM with a single-enclosure VNX for block storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.

◆ If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool. Create file systems with user-defined storage pools on page 74 provides more information.

◆ Take disk contention into account when creating a user-defined pool.

◆ If you have disk volumes to reserve so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it.


Chapter 3: Configuring

The tasks to configure volumes and file systems with AVM are as follows:

◆ Configure disk volumes on page 66
◆ Create file systems with AVM on page 70
◆ Extend file systems with AVM on page 84
◆ Create file system checkpoints with AVM on page 100


Configure disk volumes

System network servers that are gateway network-attached storage (NAS) systems and that connect to Symmetrix and VNX for block storage systems are as follows:

◆ VNX VG2
◆ VNX VG8

The gateway system stores data on VNX for block user LUNs or Symmetrix hypervolumes. If the user LUNs or hypervolumes are not configured correctly on the array, AVM and the Unisphere for File software cannot be used to manage the storage.

Typically, an EMC Customer Support Representative does the initial setup of disk volumes on these gateway storage systems.

However, if your VNX gateway system is attached to a VNX for block storage system and you want to add disk volumes to the configuration, use the procedures that follow:

1. Use the Unisphere for Block software or the VNX for block CLI to create VNX for block user LUNs.

2. Use either the Unisphere for File software or the VNX for file CLI to make the new user LUNs available to the VNX for file as disk volumes.

The user LUNs must be created before you create file systems.

To add user LUNs, you must be familiar with the following:

◆ Unisphere for Block software or the VNX for block CLI.
◆ Process of creating RAID groups and user LUNs for the VNX for file volumes.

The documentation for Unisphere for Block and VNX for block CLI describes how to create RAID groups and user LUNs.

If the disk volumes are configured by EMC experts, go to Create file systems with AVM on page 70.


Provide storage from a VNX or legacy CLARiiON system to a gateway system

1. Create RAID groups and LUNs (as needed for VNX for file volumes) by using the Unisphere software or VNX for block CLI:

• Always create the user LUNs in balanced pairs, one owned by SP A and one owned by SP B. The paired LUNs must be the same size.

• FC or SAS disks must be configured as RAID 1/0, RAID 5, or RAID 6. The paired LUNs do not need to be in the same RAID group but should be of the same RAID type. RAID groups and storage characteristics on page 33 lists the valid RAID group and storage array combinations. Gateway models use the same combinations as the NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).

• SATA disks must be configured as RAID 1/0, RAID 5, or RAID 6. All LUNs in a RAID group must belong to the same SP. Create pairs by using LUNs from two RAID groups. RAID groups and storage characteristics on page 33 lists the valid RAID group and storage array combinations. Gateway models use the same combinations as the NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).

• The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.

Use these settings when creating RAID group user LUNs:

• RAID Type: RAID 1/0, RAID 5, or RAID 6 for FC or SAS disks and RAID 1/0, RAID 5, or RAID 6 for SATA disks

• LUN ID: Select the first available value
• Rebuild Priority: ASAP
• Verify Priority: ASAP
• Enable Read Cache: Selected
• Enable Write Cache: Selected
• Enable Auto Assign: Cleared (off)
• Number of LUNs to Bind: 2
• Alignment Offset: 0
• LUN size: Must not exceed 14 TB

Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.
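The balanced-pair rule above can be sketched as a dry run that prints one bind command per storage processor. The SP address, LUN IDs, RAID-group IDs, capacity, and the exact set of bind options shown are illustrative assumptions; check them against your VNX for block CLI documentation before running anything:

```shell
# Dry-run sketch: print a balanced pair of bind commands -- one RAID 5
# user LUN owned by SP A and one by SP B, in two different RAID groups.
# All names and numbers here are example values, not from this guide.
SP="spa.example.com"
for i in 0 1; do
  LUN=$((50 + i))     # example LUN IDs
  RG=$((2 + i))       # two different RAID groups
  if [ "$i" -eq 0 ]; then OWNER=a; else OWNER=b; fi
  echo "naviseccli -h ${SP} bind r5 ${LUN} -rg ${RG} -sp ${OWNER} -cap 500 -sq gb -rc 1 -wc 1"
done
```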

2. Create a storage group to which to add the LUNs for the gateway system.

• Using the Unisphere software:

a. Select Hosts ➤ Storage Groups.


b. Click Create.

• Using the VNX for block CLI, type the following command:

naviseccli -h <system> storagegroup -create -gname <groupname>

3. Ensure that you add the LUNs to the gateway system's storage group. Set the HLU to 16 or greater.

• Using the Unisphere software:

a. Select Hosts ➤ Storage Groups.

b. In Storage Group Name, select the storage group that you created in step 2.

c. Click Connect LUNs.

d. Click the LUNs tab.

e. Expand SP A and SP B.

f. Select the LUNs to add and click Add.

• Using the VNX for block CLI, type the following command:

naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu <HLU number> -alu <LUN number>
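Substituting concrete values, the command above can be generated for a pair of LUNs with a short loop. This dry run only prints the commands it would issue; the system address and ALU numbers are example values:

```shell
# Dry-run sketch: print storagegroup -addhlu commands that map two array
# LUNs (ALUs) into the ~filestorage group at host LUN IDs (HLUs) >= 16.
SYSTEM="spa.example.com"   # example address
HLU=16
for ALU in 23 24; do       # example ALU numbers
  echo "naviseccli -h ${SYSTEM} storagegroup -addhlu -gname ~filestorage -hlu ${HLU} -alu ${ALU}"
  HLU=$((HLU + 1))
done
```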

4. Perform one of these steps to make the new user LUNs available to the VNX for file:

• Using the Unisphere for File software:

a. Select Storage ➤ Storage Configuration ➤ File Systems.

b. From the task list, select File Storage ➤ Rescan Storage Systems.

• Using the VNX for file CLI, type the following command:

nas_diskmark -mark -all

Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.

Create pool-based provisioning for file storage systems

1. Create storage pools and LUNs as needed for VNX for file volumes.

Use these settings when creating user LUNs for use with mapped pools:

• LUN ID: Use the default


• LUN Name: Use the default or supply a name
• Number of LUNs to create: 2
• Enable Auto Assign: Cleared (Off)
• Alignment Offset: 0
• LUN Size: Must not exceed 16 TB

2. Ensure that you add the LUNs to the file system's storage group. Set the HLU to 16 or greater.

• Using the Unisphere software:

a. Select Hosts ➤ Storage Groups.

b. In Storage Group Name, select ~filestorage.

c. Click Connect LUNs.

d. Click LUNs.

e. Expand SP A and SP B.

f. Select the LUNs you want to add and click Add.

• Using the VNX for block CLI, type the following command:

naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu <HLU number> -alu <LUN number>

3. Use one of these methods to make the new user LUNs available to the VNX for file:

• Using the Unisphere software:

a. Select Storage ➤ Storage Configuration ➤ File Systems.

b. From the task list, select File Storage ➤ Rescan Storage Systems.

Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.

• Using the VNX for file CLI, type the following command:

nas_diskmark -mark -all

Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.


Add disk volumes to an integrated system

Configure unused or new disk devices on a VNX for block storage system by using the Disk Provisioning Wizard for File. This wizard is available only for integrated VNX for file models (NX4 and NS non-gateway systems excluding NS80), including Fibre Channel-enabled models, attached to a single VNX for block storage system.

Note: For VNX systems, Advanced Data Service Policy features such as FAST and compression are supported on pool-based LUNs only. They are not supported on RAID-based LUNs.

To open the Disk Provisioning Wizard for File in the Unisphere software:

1. Select Storage ➤ Storage Configuration ➤ Storage Pools.

2. From the task list, select Wizards ➤ Disk Provisioning Wizard for File.

Note: To use the Disk Provisioning Wizard for File, you must log in to Unisphere by using the global sysadmin user account or a user account that has privileges to manage storage.

An alternative to the Disk Provisioning Wizard for File is available by using the VNX for file CLI at /nas/sbin/setup_clariion. This alternative is not available for unified VNX systems. The script performs the following actions:

◆ Provisions the disks on integrated (non-Performance) VNX for block storage systems when there are unbound disks to configure. This script binds the data LUNs on the xPEs and DAEs, and makes them accessible to the Data Movers.

◆ Ensures that your RAID groups and LUN settings are appropriate for your VNX for file server configuration.

The Unisphere for File software supports only the array templates for legacy EMC CLARiiON CX™ and CX3 storage systems. CX4 and VNX systems must use the User_Defined mode with the /nas/sbin/setup_clariion CLI script.

The setup_clariion script allows you to configure VNX for block storage systems on a shelf-by-shelf basis by using predefined configuration templates. For each enclosure (xPE or DAE), the script examines your specific hardware configuration and gives you a choice of appropriate templates. You can mix combinations of RAID configurations on the same storage system. The script then combines the shelf templates into a custom, User_Defined array template for each VNX for block system, and then configures your array.

Create file systems with AVM

This section describes the procedures to create a file system by using AVM storage pools, and also explains how to create file systems by using the automatic file system extension feature.


You can enable automatic file system extension on new or existing file systems if the file system has an associated AVM storage pool. When you enable automatic file system extension, use the nas_fs command options to adjust the HWM value, set a maximum file size to which the file system can be extended, and enable thin provisioning. Create file systems with automatic file system extension on page 81 provides more information.

You can create file systems by using storage pools with automatic file system extension enabled or disabled. Specify the storage system from which to allocate space for the type of storage pool that is being created.

Choose any of these procedures to create file systems:

◆ Create file systems with system-defined storage pools on page 72

Allows you to create file systems without having to also create the underlying volume structure.

◆ Create file systems with user-defined storage pools on page 74

Allows more administrative control of the underlying volumes and placement of the file system. Use these user-defined storage pools to prevent the system-defined storage pools from using certain volumes.

◆ Create file systems with automatic file system extension on page 81

Allows you to create a file system that automatically extends when it reaches a certain threshold by using space from either a system-defined or a user-defined storage pool.


Create file systems with system-defined storage pools

When you create a file system by using the system-defined storage pools, it is not necessary to create volumes before setting up the file system. AVM allocates space to the file system from the specified storage pool on the storage system associated with that storage pool. AVM automatically creates any required volumes when it creates the file system. This process ensures that the file system and its extensions are created from the same type of storage, with the same cost, performance, and availability characteristics.

The storage system name appears either as alphabetic characters before a set of integers or as a set of integers:

◆ VNX for block storage systems display as a prefix of alphabetic characters before a set of integers, for example, FCNTR074200038-0019.

◆ Symmetrix storage systems display as a set of integers, for example, 002804000190-003C.

To create a file system with system-defined storage pools:

1. Obtain the list of available system-defined storage pools and mapped storage pools by typing:

$ nas_pool -list

Output:

id   in_use  acl  name                 storage_system
3    n       0    clar_r5_performance  FCNTR074200038
40   y       0    TP1                  FCNTR074200038
41   y       0    FP1                  FCNTR074200038

2. Display the size of a specific storage pool by using this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size of the clar_r5_performance storage pool, type:

$ nas_pool -size clar_r5_performance

Output:

id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985

Note: To display the size of all storage pools, use the -all option instead of the <name> option.
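The used_mb and total_mb fields in the output above can be turned into a percent-used figure with shell arithmetic; this sketch simply reuses the values from the example output:

```shell
# Sketch: compute percent-used for a pool from the used_mb and total_mb
# fields reported by nas_pool -size (values copied from the example).
USED_MB=128000
TOTAL_MB=260985
PCT=$(( (USED_MB * 100) / TOTAL_MB ))
echo "clar_r5_performance is ${PCT}% used"
```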

3. Obtain the system name of an attached Symmetrix storage system by typing:


$ nas_storage -list

Output:

id   acl  name          serial number
1    0    000183501491  000183501491

4. Obtain information of a specific Symmetrix storage system in the list by using this command syntax:

$ nas_storage -info <system_name>

where:

<system_name> = name of the storage system

Example:

To obtain information about the Symmetrix storage system 000183501491, type:

$ nas_storage -info 000183501491

Output:

type  num  slot  ident   stat  scsi   vols  ports  p0_stat  p1_stat  p2_stat  p3_stat
R1    1    1     RA-1A   Off   NA     0     1      Off      NA       NA       NA
DA    2    2     DA-2A   On    WIDE   25    2      On       Off      NA       NA
DA    3    3     DA-3A   On    WIDE   25    2      On       Off      NA       NA
SA    5    5     SA-5A   On    ULTRA  0     2      On       On       NA       NA
SA    12   12    SA-12A  On    ULTRA  0     2      Off      On       NA       NA
DA    14   14    DA-14A  On    WIDE   27    2      On       Off      NA       NA
DA    15   15    DA-15A  On    WIDE   26    2      On       Off      NA       NA
R1    16   16    RA-16A  On    NA     0     1      On       NA       NA       NA
R2    17   1     RA-1B   Off   NA     0     1      Off      NA       NA       NA
DA    18   2     DA-2B   On    WIDE   26    2      On       Off      NA       NA
DA    19   3     DA-3B   On    WIDE   27    2      On       Off      NA       NA
SA    21   5     SA-5B   On    ULTRA  0     2      On       On       NA       NA
SA    28   13    SA-12B  On    ULTRA  0     2      On       On       NA       NA
DA    30   14    DA-14B  On    WIDE   25    2      On       Off      NA       NA
DA    31   15    DA-15B  On    WIDE   25    2      On       Off      NA       NA
R2    32   16    RA-16B  On    NA     0     1      On       NA       NA       NA

5. Create a file system by size with a system-defined storage pool by using this command syntax:

$ nas_fs -name <fs_name> -create size=<size> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system.

<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).

<pool> = name of the storage pool.

<system_name> = name of the storage system from which space for the file system is allocated.

Example:


To create a file system ufs1 of size 10 GB with a system-defined storage pool, type:

$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=00018350149

Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src storage pool. This directs AVM to allocate space from volumes configured when installing for remote mirroring by using SRDF. Using SRDF/S with VNX for Disaster Recovery contains more information.

Output:

id            = 1
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = avm1
pool          = symm_std
member_of     =
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,thin=no
deduplication = off
stor_devs     = 00018350149
disks         = d20,d12,d18,d10

Note: The VNX Command Line Interface Reference for File contains information on the options available for creating a file system with the nas_fs command.

Create file systems with user-defined storage pools

The AVM system-defined storage pools are available for use with the VNX for file. If you require more manual control than the system-defined storage pools allow, create a user-defined storage pool and then create the file system by using that pool.

Note: Create a user-defined storage pool and define its attributes to reserve disk volumes so that your system-defined storage pools cannot use them.

Before you begin

Prerequisites include:

◆ A user-defined storage pool can be created either by using manual volume management or by letting AVM create the storage pool with a specified size. If you use manual volume management, you must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing Volumes and File Systems for VNX Manually describes the steps to create and manage volumes.


◆ You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to System Objects on VNX contains more information on access control levels.

◆ AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of disk volumes that are storage-processor balanced and use the same RAID type, disk count, and size. Modify system-defined and user-defined storage pool attributes on page 109 provides more information.

◆ When creating a user-defined storage pool to reserve disk volumes from a VNX for block storage system, use disk volumes that are storage-processor balanced and have the same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.

To create a file system with a user-defined storage pool:

◆ Create a user-defined storage pool by volumes on page 76
◆ Create a user-defined storage pool by size on page 76
◆ Create the file system on page 78
◆ Create file systems with automatic file system extension on page 81
◆ Create file systems with the automatic file system extension option enabled on page 82


Create a user-defined storage pool by volumes

To create a user-defined storage pool (from which space for the file system is allocated) by volumes, add volumes to the storage pool and define the storage pool attributes.

Action

To create a user-defined storage pool by volumes, use this command syntax:

$ nas_pool -create -name <name> -acl <acl> -description <desc> -volumes <volume_name>[,<volume_name>,...] -default_slice_flag {y|n}

where:

<name> = name of the storage pool.

<acl> = designates an access control level for the new storage pool. Default value is 0.

<desc> = assigns a comment to the storage pool. Type the comment within quotes.

<volume_name> = names of the volumes to add to the storage pool. Can be a metavolume, slice volume, stripe volume, or disk volume. Use a comma to separate each volume name.

-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.

Example:

To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and d129 specified, and allow the volumes to be built on a slice, type:

$ nas_pool -create -name marketing -description "storage pool for marketing" -volumes d126,d127,d128,d129 -default_slice_flag y

Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Create a user-defined storage pool by size

To create a user-defined storage pool (from which space for the file system is allocated) by size, specify a template pool, size of the pool, minimum stripe size, and number of stripe members.


Action

To create a user-defined storage pool by size, use this command syntax:

$ nas_pool -create -name <name> -acl <acl> -description <desc> -default_slice_flag {y|n} -size <integer>[M|G|T] -storage <system_name> -template <system_pool_name> -num_stripe_members <num_stripe_mem> -stripe_size <num>

where:

<name> = name of the storage pool.

<acl> = designates an access control level for the new storage pool. Default value is 0.

<desc> = assigns a comment to the storage pool. Type the comment within quotes.

-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.

<integer> = size of the storage pool, an integer between 1 and 1024. Specify the size in GB (default) by typing <integer>G (for example, 250G), in MB by typing <integer>M (for example, 500M), or in TB by typing <integer>T (for example, 1T).

<system_name> = storage system on which one or more volumes will be created and added to the storage pool.

<system_pool_name> = system pool template used to create the user pool. Required when the -size option is specified. The user pool will be created by using the profile attributes of the specified system pool template.

<num_stripe_mem> = number of stripe members used to create the user pool. Works only when both the -size and -template options are also specified. It overrides the number of stripe members attribute of the specified system pool template.

<num> = stripe size used to create the user pool. Works only when both the -size and -template options are also specified. It overrides the stripe size attribute of the specified system pool template.

Example:

To create a 20 GB user-defined storage pool that is named marketing with a description by using the clar_r5_performance pool, and that contains 4 stripe members with a stripe size of 32768 KB, and allow the volumes to be built on a slice, type:

$ nas_pool -create -name marketing -description "storage pool for marketing" -default_slice_flag y -size 20G -template clar_r5_performance -num_stripe_members 4 -stripe_size 32768


Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = v213
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3
template_pool      = clar_r5_performance
num_stripe_members = 4
stripe_size        = 32768

Create the file system

To create a file system, you must first create a user-defined storage pool. Create a user-defined storage pool by volumes on page 76 and Create a user-defined storage pool by size on page 76 provide more information.

Use this procedure to create a file system by specifying a user-defined storage pool and an associated storage system:

1. List the storage system by typing:

$ nas_storage -list

Output:

id   acl  name            serial number
1    0    APM00033900125  APM00033900125

2. Get detailed information of a specific attached storage system in the list by using this command syntax:

$ nas_storage -info <system_name>

where:

<system_name> = name of the storage system

Example:

To get detailed information of the storage system APM00033900125, type:

$ nas_storage -info APM00033900125

Output:


id                = 1
arrayname         = APM00033900125
name              = APM00033900125
model_type        = RACKMOUNT
model_num         = 630
db_sync_time      = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks         = 30
num_devs          = 21
num_pdevs         = 1
num_storage_grps  = 0
num_raid_grps     = 10
cache_page_size   = 8
wr_cache_mirror   = True
low_watermark     = 70
high_watermark    = 90
unassigned_cache  = 0
failed_over       = False
captive_storage   = True

Active Software
Navisphere        = 6.6.0.1.43
ManagementServer  = 6.6.0.1.43
Base              = 02.06.630.4.001

Storage Processors
SP Identifier     = A
signature         = 926432
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500756
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
sys_buffer        = 749
read_cache        = 32
write_cache       = 3072
free_memory       = 115
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = spa
ip_address        = 128.221.252.200
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 11 - root_disk root_ldisk d3 d4 d5 d6 d8 d13 d14 d15 d16

SP Identifier     = B
signature         = 926493
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500508
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = OEM-XOO25IL9VL9
ip_address        = 128.221.252.201
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 4 - disk7 d9 d11 d12


Note: This is not a complete output.

3. Create the file system from a user-defined storage pool and designate the storage system on which you want the file system to reside by using this command syntax:

$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system

<type> = type of file system, such as uxfs (default), mgfs, or rawfs

<volume_name> = name of the volume

<pool> = name of the storage pool

<system_name> = name of the storage system on which the file system resides

Example:

To create the file system ufs1 from a user-defined storage pool and designate the APM00033900125 storage system on which you want the file system ufs1 to reside, type:

$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=APM00033900125

Output:

id            = 2
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = MTV1
pool          = marketing
member_of     = root_avm_fs_group_2
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,thin=no
deduplication = off
stor_devs     = APM00033900125-0111
disks         = d6,d8,d11,d12


Create file systems with automatic file system extension

Use the -auto_extend option of the nas_fs command to enable automatic file system extension on a new file system created with AVM. The option is disabled by default.

Note: Automatic file system extension does not alleviate the need for appropriate planning. Create the file systems with adequate space to accommodate the estimated usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails.

If automatic file system extension is disabled and the file system reaches 90 percent full, a warning message is written to the sys_log. Any action necessary is at the administrator's discretion.

Note: You do not need to set the maximum size for a newly created file system when you enable automatic extension. The default maximum size is 16 TB. With automatic extension enabled, even if the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available in the storage pool.

Use this procedure to create a file system by specifying a system-defined storage pool and a storage system, and enable automatic file system extension.

Action

To create a file system with automatic file system extension enabled, use this command syntax:

$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool> storage=<system_name> -auto_extend {no|yes}

where:

<fs_name> = name of the file system.

<type> = type of file system.

<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).

<pool> = name of the storage pool from which to allocate space to the file system.

<system_name> = name of the storage system associated with the storage pool.

Example:

To enable automatic file system extension on a new 10 GB file system created by specifying a system-defined storage pool and a VNX for block storage system, type:

$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance storage=APM00042000814 -auto_extend yes


Output

id            = 434
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
worm          = off
volume        = v1634
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = hwm=90%,thin=no
deduplication = off
stor_devs     = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks         = d20,d12,d18,d10

Create file systems with the automatic file system extension option enabled

When you create a file system with automatic extension enabled, you can set the point at which the file system automatically extends (the HWM) and the maximum size to which it can grow. You can also enable thin provisioning at the same time that you create or extend a file system. Enable automatic file system extension and options on page 91 provides information on modifying the automatic file system extension options.

If you set the slice=no option on the file system, the actual file system size might become bigger than the size specified for the file system, which would exceed the maximum size. In this case, you receive a warning, and the automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, it defaults to the setting of the storage pool. Modify system-defined and user-defined storage pool attributes on page 109 provides more information.

Note: If the actual file system size is above the HWM when thin provisioning is enabled, the client sees the actual file system size instead of the specified maximum size.

Enabling automatic file system extension and thin provisioning options does not automatically reserve the space from the storage pool for that file system. So that the automatic extension can succeed, administrators must ensure that adequate storage space exists. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.

Use this procedure to simultaneously set the automatic file system extension options when you are creating the file system:

1. Create a file system of a specified size, enable automatic file system extension and thin provisioning, and set the HWM and the maximum file system size simultaneously by using this command syntax:

$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M] pool=<pool> storage=<system_name> -auto_extend {no|yes} -thin {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system.

<type> = type of file system.

<integer> = size requested in MB, GB, or TB. The maximum size is 16 TB.

<pool> = name of the storage pool.

<system_name> = attached storage system on which the file system and storage pool reside.

<50-99> = percentage between 50 and 99, at which you want the file system to automatically extend.

Example:

To create a 10 MB file system of type UxFS from an AVM storage pool, with automatic extension enabled, and a maximum file system size of 200 MB, HWM of 90 percent, and thin provisioning enabled, type:

$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance -auto_extend yes -thin yes -hwm 90% -max_size 200M

Output:

id             = 27
name           = ufs2
acl            = 0
in_use         = True
type           = uxfs
worm           = off
volume         = v104
pool           = clar_r5_performance
member_of      = root_avm_fs_group_3
rw_servers     = server_2
ro_servers     =
rw_vdms        =
ro_vdms        =
auto_ext       = hwm=90%,max_size=200M,thin=yes
deduplication  = Off
thin_storage   = True
tiering_policy = Auto-tier
compressed     = False
mirrored       = False
ckpts          =

Note: When you enable thin provisioning on a new or existing file system, you must also specify the maximum size to which the file system can automatically extend.
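The pairing of -thin yes with -max_size can be checked before any command runs. The following is a hypothetical sketch (the build_create_cmd helper is not part of the VNX CLI); it only assembles the nas_fs command line and rejects a thin-provisioning request that lacks a maximum size:

```shell
#!/bin/sh
# Hypothetical helper: build a "nas_fs -create" command string and refuse
# to enable thin provisioning unless a maximum size is also supplied.
build_create_cmd() {
    name=$1; size=$2; pool=$3; thin=$4; max_size=$5
    if [ "$thin" = "yes" ] && [ -z "$max_size" ]; then
        echo "error: -thin yes requires -max_size" >&2
        return 1
    fi
    cmd="nas_fs -name $name -type uxfs -create size=$size pool=$pool -auto_extend yes"
    if [ "$thin" = "yes" ]; then
        cmd="$cmd -thin yes -max_size $max_size"
    fi
    echo "$cmd"
}

# Mirrors the ufs2 example above; prints the assembled command line.
build_create_cmd ufs2 10M clar_r5_performance yes 200M
```

The helper only prints the command so the constraint can be verified in advance; the printed line would still be run by an administrator with the appropriate permissions.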

2. Verify the settings for the specific file system after enabling automatic extension by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To verify the settings for file system ufs2 after enabling automatic extension, type:

$ nas_fs -info ufs2

Output:

id             = 27
name           = ufs2
acl            = 0
in_use         = True
type           = uxfs
worm           = off
volume         = v104
pool           = clar_r5_performance
member_of      = root_avm_fs_group_3
rw_servers     = server_2
ro_servers     =
rw_vdms        =
ro_vdms        =
backups        = ufs2_snap1,ufs2_snap2
auto_ext       = hwm=90%,max_size=200M,thin=yes
deduplication  = off
thin_storage   = True
tiering_policy = Auto-tier
compressed     = False
mirrored       = False
ckpts          =
stor_devs      = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks          = d20,d12,d18,d10

You can also set the options -hwm and -max_size on each file system with automatic extension enabled. When enabling thin provisioning on a file system, you must set the maximum size, but setting the high water mark is optional.

Extend file systems with AVM

Increase the size of a file system nearing its maximum capacity by extending the file system. You can:

◆ Extend the size of a file system to add space if it has an associated system-defined or user-defined storage pool. You can also specify the storage system from which to allocate space. Extend file systems by using storage pools on page 85 provides instructions.

◆ Extend the size of a file system by adding volumes if the file system has an associated system-defined or user-defined storage pool. Extend file systems by adding volumes to a storage pool on page 87 provides instructions.

◆ Extend the size of a file system by using a storage pool other than the one used to create the file system. Extend file systems by using a different storage pool on page 89 provides instructions.

◆ Extend an existing file system by enabling automatic extension on that file system. Enable automatic file system extension and options on page 91 provides instructions.

◆ Extend an existing file system by enabling thin provisioning on that file system. Enable thin provisioning on page 96 provides instructions.

Managing Volumes and File Systems on VNX Manually contains the instructions to extend file systems manually.

Extend file systems by using storage pools

All file systems created by using the AVM feature have an associated storage pool.

Extend a file system created with either a system-defined storage pool or a user-defined storage pool by specifying the size and the name of the file system. AVM allocates storage from the storage pool to the file system. You can also specify the storage system you want to use. If you do not specify one, the last storage system associated with the storage pool is used.

Note: A file system created by using a mapped storage pool can be extended on its existing pool or by using a compatible mapped storage pool that contains the same disk type.

Use this procedure to extend a file system by size:

1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Note: If you see a storage pool defined in the output, the file system was created with AVM and has an associated storage pool.

Example:

To check the file system configuration to confirm that file system ufs1 has an associated storage pool, type:

$ nas_fs -info ufs1

Output:

id             = 27
name           = ufs1
acl            = 0
in_use         = True
type           = uxfs
worm           = off
volume         = v104
pool           = FP1
member_of      = root_avm_fs_group_3
rw_servers     = server_2
ro_servers     =
rw_vdms        =
ro_vdms        =
deduplication  = Off
thin_storage   = True
tiering_policy = Auto-tier
compressed     = False
mirrored       = False
ckpts          =

2. Extend the size of the file system by using this command syntax:

$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system.

<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).

<pool> = name of the storage pool.

<system_name> = name of the storage system. If you do not specify a storage system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides.

Note: The first time you extend the file system without specifying a storage pool, the default storage pool is the one used to create the file system. If you specify a storage pool that is different from the one used to create the file system, the next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

Example:

To extend the size of file system ufs1 by 10 MB, type:

$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=APM00023700165

Output:

id         = 8
name       = ufs1
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM00023700165-0111
disks      = d7,d13,d19,d25,d30,d31,d32,d33

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs1 after extending it to confirm that the size increased, type:

$ nas_fs -size ufs1

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
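Scripts that poll file system usage can pull the percentage out of this output. A minimal sketch using awk (used_pct is a hypothetical helper; the sample line is the output shown above):

```shell
#!/bin/sh
# Extract the "used" percentage from a "nas_fs -size" output line.
# Splits the line on "(" and "%" so the second field is the percent value.
used_pct() {
    echo "$1" | awk -F'[(%]' '{gsub(/ /, "", $2); print $2}'
}

# Sample line taken from the output above; prints 0.
used_pct 'total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)'
```

In practice the line would come from command substitution around nas_fs -size rather than a literal string.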

Extend file systems by adding volumes to a storage pool

You can extend a file system manually by specifying the volumes to add.

Note: With user-defined storage pools, you can manually create the underlying volumes, including striping, before adding the volume to the storage pool. Managing Volumes and File Systems on VNX Manually describes the procedures needed to perform these tasks before creating or extending the file system.

If you do not specify a storage system when extending the file system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides.

Use this procedure to extend the file system by adding volumes to the same user-defined storage pool that was used to create the file system:

1. Check the configuration of the file system to confirm the associated user-defined storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the configuration of file system ufs3 to confirm the associated user-defined storage pool, type:

$ nas_fs -info ufs3

Output:

id             = 27
name           = ufs3
acl            = 0
in_use         = True
type           = uxfs
worm           = off
volume         = v104
pool           = marketing
member_of      =
rw_servers     =
ro_servers     =
rw_vdms        =
ro_vdms        =
deduplication  = Off
thin_storage   = True
tiering_policy = Auto-tier
compressed     = False
mirrored       = False
ckpts          =

Note: The user-defined storage pool used to create the file system is defined in the output as pool=marketing.

2. Add volumes to extend the size of a file system by using this command syntax:

$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system.

<volume_name> = name of the volume to add to the file system.

<pool> = storage pool associated with the file system. It can be user-defined or system-defined.

<system_name> = name of the storage system on which the file system resides.

Example:

To extend file system ufs3, type:

$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165

Output:

id         = 10
name       = ufs3
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = marketing
member_of  =
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM00023700165-0111
disks      = d7,d8,d13,d14

Note: The next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs3 after extending it to confirm that the size increased, type:

$ nas_fs -size ufs3

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)

Extend file systems by using a different storage pool

You can use more than one storage pool to extend a file system. Ensure that the storage pools have space allocated from the same storage system to prevent the file system from spanning more than one storage system.

Note: A file system created by using a mapped storage pool can be extended on its existing pool or by using a compatible mapped storage pool that contains the same disk type.

Use this procedure to extend the file system by using a storage pool other than the one used to create the file system:

1. Check the file system configuration to confirm that it has an associated storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the file system configuration to confirm that file system ufs2 has an associated storage pool, type:

$ nas_fs -info ufs2

Output:

id             = 9
name           = ufs2
acl            = 0
in_use         = True
type           = uxfs
worm           = off
volume         = v121
pool           = clar_r5_performance
member_of      = root_avm_fs_group_3
rw_servers     =
ro_servers     =
rw_vdms        =
ro_vdms        =
deduplication  = Off
thin_storage   = True
tiering_policy = Auto-tier
compressed     = False
mirrored       = False
ckpts          =

Note: The storage pool used earlier to create or extend the file system is shown in the output as associated with this file system.

2. Extend the file system by using a storage pool other than the one used to create it, with this command syntax:

$ nas_fs -xtend <fs_name> size=<size> pool=<pool>

where:

<fs_name> = name of the file system.

<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).

<pool> = name of the storage pool.

Example:

To extend file system ufs2 by using a storage pool other than the one used to create the file system, type:

$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy

Output:

id         = 9
name       = ufs2
acl        = 0
in_use     = False
type       = uxfs
volume     = v123
pool       = clar_r5_performance,clar_r5_economy
member_of  = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM00033900165-0112
disks      = d7,d13,d19,d25

Note: The storage pools used to create and extend the file system now appear in the output. There is only one storage system from which space for these storage pools is allocated.

3. Check the file system size after extending it to confirm the increase in size by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs2 after extending it to confirm the increase in size, type:

$ nas_fs -size ufs2

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)

Enable automatic file system extension and options

You can automatically extend an existing file system created with AVM system-defined or user-defined storage pools. The file system automatically extends by using space from the storage system and storage pool with which the file system is associated.

If you set the slice=no option on the file system, the actual file system size might become bigger than the size specified for the file system, which would exceed the maximum size. In this case, you receive a warning, and the automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, it defaults to the setting of the storage pool.

Modify system-defined and user-defined storage pool attributes on page 109 describes the procedure to modify the default_slice_flag attribute on the storage pool.

Use the -modify option to enable automatic extension on an existing file system. You can also set the HWM and maximum size.

To enable automatic file system extension and options:

◆ Enable automatic file system extension on page 92
◆ Set the HWM on page 94
◆ Set the maximum file system size on page 95

You can also enable thin provisioning at the same time that you create or extend a file system. Enable thin provisioning on page 96 describes the procedure to enable thin provisioning on an existing file system.

Enable automatic extension, thin provisioning, and all options simultaneously on page 98 describes the procedure to simultaneously enable automatic extension, thin provisioning, and all options on an existing file system.

Enable automatic file system extension

If the HWM or maximum size is not set, and if there is space available, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent.

An error message appears if you try to enable automatic extension on a file system that was created manually.

Note: The HWM is 90 percent by default when you enable automatic file system extension.

Action

To enable automatic extension on an existing file system, use this command syntax:

$ nas_fs -modify <fs_name> -auto_extend {no|yes}

where:

<fs_name> = name of the file system

Example:

To enable automatic extension on the existing file system ufs3, type:

$ nas_fs -modify ufs3 -auto_extend yes

Output

id         = 28
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=90%,thin=no
stor_devs  = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks      = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2

Set the HWM

With automatic file system extension enabled on an existing file system, use the -hwm option to set a threshold. To specify a threshold, type an integer between 50 and 99 percent. The default is 90 percent.

If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available. The value for the maximum size, if specified, has an upper limit of 16 TB.
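The trigger condition is plain integer arithmetic: extension is due once used space reaches the HWM fraction of total space. A sketch (check_hwm is a hypothetical helper, not a VNX command):

```shell
#!/bin/sh
# Decide whether a file system at used_mb of total_mb has crossed the HWM.
# Prints "extend" when usage has reached the HWM percentage, "ok" otherwise.
check_hwm() {
    used_mb=$1; total_mb=$2; hwm=$3
    # Compare used/total against hwm/100 without floating point:
    # used * 100 >= total * hwm
    if [ $((used_mb * 100)) -ge $((total_mb * hwm)) ]; then
        echo extend
    else
        echo ok
    fi
}

check_hwm 85 100 85   # usage exactly at the HWM -> extend
check_hwm 50 100 90   # well below a 90% HWM -> ok
```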

Action

To set the HWM on an existing file system, with automatic file system extension enabled, use this command syntax:

$ nas_fs –modify <fs_name> -hwm <50-99>%

where:

<fs_name> = name of the file system

<50-99> = an integer representing the file system usage point at which you want it to automatically extend

Example:

To set the HWM to 85 percent on the existing file system ufs3, with automatic extension already enabled, type:

$ nas_fs -modify ufs3 -hwm 85%

Output

id         = 28
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=85%,thin=no
stor_devs  = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks      = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2

Set the maximum file system size

Use the -max_size option to specify a maximum size to which a file system can grow. To specify the maximum size, type an integer and specify T for TB, G for GB (default), or M for MB.

To convert gigabytes to megabytes, multiply the number of gigabytes by 1024. To convert terabytes to gigabytes, multiply the number of terabytes by 1024. For example, to convert 450 gigabytes to megabytes, 450 x 1024 = 460800 MB.
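The same conversions in script form (binary units, multiplying by 1024 per step; the helper names are illustrative only):

```shell
#!/bin/sh
# Binary-unit conversions: 1 GB = 1024 MB, 1 TB = 1024 GB.
gb_to_mb() { echo $(($1 * 1024)); }
tb_to_gb() { echo $(($1 * 1024)); }
tb_to_mb() { echo $(($1 * 1024 * 1024)); }

gb_to_mb 450   # 460800, matching the example above
tb_to_mb 16    # 16777216; the outputs in this chapter report a slightly
               # smaller internal cap (16769024M) for the 16 TB limit
```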

When you enable automatic file system extension, the file system automatically extends up to the default maximum size of 16 TB. Set the HWM at which you want the file system to automatically extend. If the HWM is not set, the file system automatically extends up to 16 TB when the file system reaches the default HWM of 90 percent, if the space is available.

Action

To set the maximum file system size with automatic file system extension already enabled, use this command syntax:

$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system

<integer> = maximum size requested in MB, GB, or TB

Example:

To set the maximum file system size on the existing file system ufs3, type:

$ nas_fs -modify ufs3 -max_size 16T

Output

id         = 28
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=85%,max_size=16769024M,thin=no
stor_devs  = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks      = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2

Enable thin provisioning

You can also enable thin provisioning at the same time that you create or extend a file system. Use the -thin option to enable thin provisioning. You must also specify the maximum size to which the file system should automatically extend. An error message appears if you attempt to enable thin provisioning and do not set the maximum size. Set the maximum file system size on page 95 describes the procedure to set the maximum file system size.

The upper limit for the maximum size is 16 TB. The maximum size you set is the file system size that is presented to users, if the maximum size is larger than the actual file system size.

Note: Enabling automatic file system extension and thin provisioning options does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.
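Because no space is reserved, it can help to compare the pool's free space against the headroom a thin file system may still claim. A hypothetical sketch of that check (the inputs are sizes an administrator would read from nas_fs -size and the pool's avail_mb figure):

```shell
#!/bin/sh
# Can automatic extension carry the file system all the way to max_size_mb,
# given the pool's current free space? All sizes in MB.
can_reach_max() {
    fs_size_mb=$1; max_size_mb=$2; pool_avail_mb=$3
    needed=$((max_size_mb - fs_size_mb))
    if [ "$pool_avail_mb" -ge "$needed" ]; then
        echo yes
    else
        echo no
    fi
}

can_reach_max 10 200 500   # plenty of free space -> yes
can_reach_max 10 200 100   # only 100 MB free for 190 MB of growth -> no
```

A "no" result is the situation the note above warns about: clients see free space up to the maximum size, but extension will eventually fail.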

Enable thin provisioning on the source file system when the feature is used in a replication situation. With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, and the clients see the virtually provisioned maximum size of the Replicator source file system. Interoperability considerations on page 59 provides additional information.

Action

To enable thin provisioning with automatic extension enabled on the file system, use this command syntax:

$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -thin {yes|no}

where:

<fs_name> = name of the file system

<integer> = size requested in MB, GB, or TB

Example:

To enable thin provisioning, type:

$ nas_fs -modify ufs3 -max_size 16T -thin yes

Output

id         = 27
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=85%,max_size=16769024M,thin=yes
stor_devs  = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks      = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2

Enable automatic extension, thin provisioning, and all options simultaneously

Note: An error message appears if you try to enable automatic file system extension on a file system that was created without using a storage pool.

Action

To simultaneously enable automatic file system extension and thin provisioning on an existing file system, and to set the HWM and the maximum size, use this command syntax:

$ nas_fs -modify <fs_name> -auto_extend {no|yes} -thin {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system

<50-99> = an integer that represents the file system usage point at which you want it to automatically extend

<integer> = size requested in MB, GB, or TB

Example:

To modify a UxFS to enable automatic extension, enable thin provisioning, and set a maximum file system size of 16 TB with an HWM of 90 percent, type:

$ nas_fs -modify ufs4 -auto_extend yes -thin yes -hwm 90% -max_size 16T

Output

id         = 29
name       = ufs4
acl        = 0
in_use     = False
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=90%,max_size=16769024M,thin=yes
stor_devs  = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks      = d20,d18,d14,d11

Verify the maximum size of the file system

Automatic file system extension fails when the file system reaches the maximum size.

Action

To force an extension to determine whether the maximum size has been reached, use this command syntax:

$ nas_fs -xtend <fs_name> size=<size>

where:

<fs_name> = name of the file system

<size> = size to extend the file system by, in GB, MB, or TB

Example:

To force an extension to determine whether the maximum size has been reached, type:

$ nas_fs -xtend ufs1 size=4M

Output

id         = 759
name       = ufs1
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v2459
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_4
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=90%,max_size=16769024M (reached),thin=yes   <<<
stor_devs  = APM00041700549-0018
disks      = d10
disk=d10 stor_dev=APM00041700549-0018 addr=c16t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c32t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c0t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c48t1l8 server=server_4

Create file system checkpoints with AVM

Use either AVM system-defined or user-defined storage pools to create file system checkpoints. Specify the storage system where the file system checkpoint should reside.

Use this procedure to create the checkpoint by specifying a storage pool and storage system:

Note: You can specify the storage pool for the checkpoint SavVol only when there are no existing checkpoints of the PFS.

1. Obtain the list of available storage systems by typing:

$ nas_storage -list

Note: To obtain more detailed information on the storage system and associated names, use the -info option instead.

2. Create the checkpoint by using this command syntax:

$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system for which you want to create a checkpoint.

<name> = name of the checkpoint.

<integer> = amount of space to allocate to the checkpoint. Type the size in TB, GB, or MB.

<pool> = name of the storage pool.

<system_name> = storage system on which the file system checkpoint resides.

Note: Thin provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of a SnapSure checkpoint file system.

Example:

To create the checkpoint ckpt1, type:

$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance storage=APM00023700165

Output:

id         = 1
name       = ckpt1
acl        = 0
in_use     = False
type       = uxfs
volume     = V126
pool       = clar_r5_performance
member_of  =
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM00023700165-0111
disks      = d7,d8

Chapter 4: Managing

The tasks to manage AVM storage pools are:
◆ List existing storage pools on page 104
◆ Display storage pool details on page 105
◆ Display storage pool size information on page 106
◆ Modify system-defined and user-defined storage pool attributes on page 109
◆ Extend a user-defined storage pool by volume on page 118
◆ Extend a user-defined storage pool by size on page 119
◆ Extend a system-defined storage pool on page 120
◆ Remove volumes from storage pools on page 122
◆ Delete user-defined storage pools on page 123

List existing storage pools

When the existing storage pools are listed, all system-defined storage pools and user-defined storage pools appear in the output, regardless of whether they are in use.

Action

To list all existing system-defined and user-defined storage pools, type:

$ nas_pool -list

Output

id  in_use  acl  name                 storage_system
3   n       0    clar_r5_performance  FCNTR074200038
40  y       0    TP1                  FCNTR074200038
1   y       0    FP1                  FCNTR074200038

Display storage pool details

Action

To display detailed information for a storage pool, use this command syntax:

$ nas_pool -info <name>

where:

<name> = name of the storage pool

Example:

To display detailed information for the storage pool FP1, type:

$ nas_pool -info FP1

Output

id                 = 40
name               = FP1
description        = Mapped Pool on FCNTR074200038
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = Mixed
tiering_policy     = Auto-tier
compressed         = False
mirrored           = False
disk_type          = Mixed
volume_profile     = FP1_vp
is_dynamic         = True
is_greedy          = N/A


Display storage pool size information

Information about the size of the storage pool appears in the output. If there is more than one storage pool, the output shows the size information for all the storage pools.

The size information includes:

◆ The total used space in the storage pool in megabytes (used_mb).
◆ The total unused space in the storage pool in megabytes (avail_mb).
◆ The total used and unused space in the storage pool in megabytes (total_mb).
◆ The total space available from all sources in megabytes that could be added to the storage pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk.

Note: If either non–MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different from the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.

In the Unisphere for File software, the potential megabytes in the output represent the total available storage, including the storage pool. In the VNX for file CLI, the output for potential_mb does not include the space in the storage pool.

Note: Use the -size -all option to display the size information for all storage pools.
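As a sketch of how the -size -all output might be consumed, the following filter flags any pool that has neither available nor potential space. The awk filter and the inlined sample output are illustrative only; on a Control Station you would pipe the live command output instead:

```shell
# Illustrative sketch: scan `nas_pool -size -all` style output and flag
# pools with no available or potential space. On a live system:
#   nas_pool -size -all | flag_full_pools
flag_full_pools() {
  awk '
    /^name/         { name  = $3 }
    /^avail_mb/     { avail = $3 }
    /^potential_mb/ { if (avail == 0 && $3 == 0)
                        printf "pool %s has no free or potential space\n", name }'
}

# Sample output inlined for illustration.
flag_full_pools <<'EOF'
id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985
id           = 6
name         = engineering
used_mb      = 4096
avail_mb     = 0
total_mb     = 4096
potential_mb = 0
EOF
```

With the sample data, only the engineering pool is reported, because clar_r5_performance still has potential storage it can draw on.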

Action

To display the size information for a specific storage pool, use this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size information for the clar_r5_performance storage pool, type:

$ nas_pool -size clar_r5_performance

Output

id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985


Action

To display the size information for a specific mapped storage pool, use this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size information for the Pool0 storage pool, type:

$ nas_pool -size Pool0

Output

id           = 43
name         = Pool0
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 3691
Physical storage usage in Pool Pool0 on APM00101902363
used_mb      = 16385
avail_mb     = 1632355
total_mb     = 1648740


Display size information for Symmetrix storage pools

Use the -size -all option to display the size information for all storage pools.

Action

To display the size information of Symmetrix storage pools, use this command syntax:

$ nas_pool -size <name> -slice y

where:

<name> = name of the storage pool

Example:

To request size information for the Symmetrix symm_std storage pool, type:

$ nas_pool -size symm_std -slice y

Output

id           = 5
name         = symm_std
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985

Note

◆ Use the -slice y option to include any space from sliced volumes in the available result. However, if the default_slice_flag value is set to no, then sliced volumes do not appear in the output.

◆ The size information for the system-defined storage pool named clar_r5_performance appears in the output. If you have more storage pools, the output shows the size information for all the storage pools.

◆ used_mb is the used space in the specified storage pool in megabytes.

◆ avail_mb is the amount of unused available space in the storage pool in megabytes.

◆ total_mb is the total of used and unused space in the storage pool in megabytes.

◆ potential_mb is the potential amount of storage, available from all sources in megabytes, that can be added to the storage pool. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available.

◆ If either non–megabyte-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different from the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.


Modify system-defined and user-defined storage pool attributes

System-defined and user-defined storage pools have attributes that control how they manage the volumes and file systems. Table 7 on page 36 lists the modifiable storage pool attributes, and their values and descriptions.

You can change the attribute default_slice_flag for system-defined and user-defined storage pools. The flag indicates whether member volumes can be sliced. If the storage pool has member volumes built on one or more slices, you cannot set this value to n.
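Before changing the flag, it can help to check its current value in the nas_pool -info output. The extraction below is a sketch; the sample output is inlined for illustration, and on a Control Station you would pipe the live command instead:

```shell
# Sketch: read the current default_slice_flag of a pool from
# `nas_pool -info` style output. On a live system:
#   nas_pool -info marketing | get_slice_flag
get_slice_flag() {
  awk -F'= *' '/^default_slice_flag/ { print $2 }'
}

# Sample output inlined for illustration.
get_slice_flag <<'EOF'
id                 = 5
name               = marketing
default_slice_flag = True
is_user_defined    = True
EOF
```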

Action

To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being sliced when space is dispensed, type:

$ nas_pool -modify marketing -default_slice_flag n

Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = False
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

◆ When the default_slice_flag is set to y, it appears as True in the output.

◆ If using automatic file system extension, the default_slice_flag should be set to n.


Modify system-defined storage pool attributes

The system-defined storage pool’s attributes that can be modified are:

◆ -is_dynamic: Indicates whether the system-defined storage pool is allowed to automatically add or remove member volumes.

◆ -is_greedy: If this is set to y (greedy), the system-defined storage pool attempts to create new member volumes before using space from existing member volumes. If this is set to n (not greedy), the system-defined storage pool consumes all the existing space in the storage pool before trying to add additional member volumes.

Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

The tasks to modify the attributes of a system-defined storage pool are:

◆ Modify the -is_greedy attribute of a system-defined storage pool on page 111
◆ Modify the -is_dynamic attribute of a system-defined storage pool on page 112


Modify the -is_greedy attribute of a system-defined storage pool

Action

To modify the -is_greedy attribute of a specific system-defined storage pool to allow the storage pool to use new volumes rather than existing volumes, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To change the -is_greedy attribute to false for the storage pool named clar_r5_performance, type:

$ nas_pool -modify clar_r5_performance -is_greedy n

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = False
volume_profile     = clar_r5_performance_vp
is_dynamic         = True
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768

Note

The n entered in the example appears as False for the is_greedy attribute in the output.


Modify the -is_dynamic attribute of a system-defined storage pool

Action

To modify the -is_dynamic attribute of a specific system-defined storage pool so that the storage pool cannot automatically add or remove member volumes, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To change the -is_dynamic attribute to false for the storage pool named clar_r5_performance, so that the storage pool cannot automatically add or remove member volumes, type:

$ nas_pool -modify clar_r5_performance -is_dynamic n

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = False
thin               = False
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768

Note

The n entered in the example appears as False for the is_dynamic attribute in the output.


Modify user-defined storage pool attributes

The user-defined storage pool’s attributes that can be modified are:

◆ -name: Changes the name of the specified user-defined storage pool to the new name.
◆ -acl: Designates an access control level for a user-defined storage pool. The default value is 0.
◆ -description: Changes the description comment for the user-defined storage pool.
◆ -is_greedy: Identifies which member volumes of a user-defined storage pool are used to provide space when creating or extending a file system.

The tasks to modify the attributes of a user-defined storage pool are:

◆ Modify the name of a user-defined storage pool on page 114
◆ Modify the access control of a user-defined storage pool on page 115
◆ Modify the description of a user-defined storage pool on page 116
◆ Modify the -is_greedy attribute of a user-defined storage pool on page 117


Modify the name of a user-defined storage pool

Action

To modify the name of a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify <name> -name <new_name>

where:

<name> = old name of the storage pool

<new_name> = new name of the storage pool

Example:

To change the name of the storage pool named marketing to purchasing, type:

$ nas_pool -modify marketing -name purchasing

Output

id                 = 5
name               = purchasing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

The name change to purchasing appears in the output. The description does not change unless the administrator changes it.


Modify the access control of a user-defined storage pool

Controlling Access to System Objects on VNX contains instructions to manage access control levels.

Note: The access control level change to 1000 appears in the output. The description does not change unless the administrator modifies it.

Action

To modify the access control level for a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -acl <acl>

where:

<name> = name of the storage pool.

<id> = ID of the storage pool.

<acl> = designates an access control level for the new storage pool. The default value is 0.

Example:

To change the access control level for the storage pool named purchasing, type:

$ nas_pool -modify purchasing -acl 1000

Output

id                 = 5
name               = purchasing
description        = storage pool for marketing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A


Modify the description of a user-defined storage pool

Action

To modify the description of a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -description <description>

where:

<name> = name of the storage pool.

<id> = ID of the storage pool.

<description> = descriptive comment about the pool or its purpose. Type the comment within quotes.

Example:

To change the descriptive comment for the storage pool named purchasing, type:

$ nas_pool -modify purchasing -description "storage pool for purchasing"

Output

id                 = 15
name               = purchasing
description        = storage pool for purchasing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A


Modify the -is_greedy attribute of a user-defined storage pool

Action

To modify the -is_greedy attribute of a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To change the -is_greedy attribute for the user-defined storage pool named user_pool, type:

$ nas_pool -modify user_pool -is_greedy y

Output

id                    = 58
name                  = user_pool
description           =
acl                   = 0
in_use                = False
clients               =
members               = d21,d22,d23,d24
default_slice_flag    = True
is_user_defined       = True
virtually_provisioned = False
disk_type             = CLSTD
server_visibility     = server_2
is_greedy             = True
template_pool         = N/A
num_stripe_members    = N/A
stripe_size           = N/A


Extend a user-defined storage pool by volume

You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a user-defined storage pool.

Action

To extend an existing user-defined storage pool by volumes, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} [-storage <system_name>] -volumes [<volume_name>,...]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple storage systems

<volume_name> = names of the volumes separated by commas

Example:

To extend the storage pool named engineering with the volumes d130, d131, d132, and d133, type:

$ nas_pool -xtend engineering -volumes d130,d131,d132,d133

Output

id                 = 6
name               = engineering
description        =
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes added in the example.


Extend a user-defined storage pool by size

Action

To extend the volumes for an existing user-defined storage pool by size, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G|T] [-storage <system_name>]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<system_name> = storage system on which one or more volumes will be created, to be added to the storage pool

Example:

To extend the storage pool named engineering by a size of 1 GB, type:

$ nas_pool -xtend engineering -size 1G

Output

id                 = 6
name               = engineering
description        =
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A


Extend a system-defined storage pool

You can specify a size by which AVM extends a system-defined pool and turn off the dynamic behavior of the system pool to prevent it from consuming additional disk volumes. Doing so:

◆ Uses the disk selection algorithms that AVM uses to create system-defined storage pool members.

◆ Prevents system-defined storage pools from rapidly consuming a large number of disk volumes.

You can specify the storage system from which to allocate space to the pool. The dynamic behavior of the system-defined storage pool must be turned off by using the nas_pool -modify command before extending the pool.

Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

On successful completion, the system-defined storage pool extends by at least the specified size. The storage pool might extend by more than the requested size. The behavior is the same as when the storage pool is extended during a file-system creation.

If a storage system is not specified and the pool has members from a single storage system, then the default is the existing storage system. If a storage system is not specified and the pool has members from multiple storage systems, the existing set of storage systems is used to extend the storage pool.

If a storage system is specified, space is allocated from that system:

◆ The specified pool must be a system-defined pool.
◆ The specified pool must have the is_dynamic attribute set to n, or false. Modify system-defined storage pool attributes on page 110 provides instructions to change the attribute.
◆ There must be enough disk volumes to satisfy the size requested.
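The required order of operations can be sketched as a two-command sequence. The pool and storage-system names come from the examples in this guide, and echo stands in for execution so the sketch is only an illustration, not a live procedure:

```shell
# Sketch of the order of operations when extending a system-defined
# pool: make the pool static first, then extend it. On a real Control
# Station, run the commands held in the variables directly.
pool="clar_r5_performance"
storage="APM00023700165-0011"   # example storage-system name from this guide

# Step 1: turn off dynamic membership so manual extension is allowed.
modify_cmd="nas_pool -modify $pool -is_dynamic n"
# Step 2: extend the pool from the named storage system.
extend_cmd="nas_pool -xtend $pool -size 128M -storage $storage"

echo "$modify_cmd"
echo "$extend_cmd"
```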


Extend a system-defined storage pool by size

Action

To extend a system-defined storage pool by size and specify a storage system from which to allocate space, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>

where:

<name> = name of the system-defined storage pool.

<id> = ID of the storage pool.

<integer> = size requested in MB or GB. The default size unit is MB.

<system_name> = name of the storage system from which to allocate the storage.

Example:

To extend the system-defined clar_r5_performance storage pool by size and designate the storage system from which to allocate space, type:

$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165-0011

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            = v216
default_slice_flag = False
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768


Remove volumes from storage pools

Action

To remove volumes from a system-defined or user-defined storage pool, use this command syntax:

$ nas_pool -shrink {<name>|id=<id>} [-storage <system_name>] -volumes [<volume_name>,...]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple storage systems

<volume_name> = names of the volumes separated by commas

Example:

To remove volumes d130 and d133 from the storage pool named marketing, type:

$ nas_pool -shrink marketing -volumes d130,d133

Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129,d131,d132
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A


Delete user-defined storage pools

You can delete only a user-defined storage pool that is not in use. You must remove all storage pool member volumes before deleting a user-defined storage pool. This delete action removes the volumes from the specified storage pool and deletes the storage pool, but it does not delete the volumes themselves. System-defined storage pools cannot be deleted.

Action

To delete a user-defined storage pool, use this command syntax:

$ nas_pool -delete <name>

where:

<name> = name of the storage pool

Example:

To delete the user-defined storage pool named sales, type:

$ nas_pool -delete sales

Output

id                 = 7
name               = sales
description        =
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
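When the pool still has member volumes, a shrink-then-delete sequence empties the pool first while leaving the volumes themselves intact. The sketch below illustrates the sequence; the member-volume names are hypothetical and echo stands in for execution on a Control Station:

```shell
# Sketch: empty a user-defined pool of its members, then delete the
# empty pool. The volumes are only removed from the pool, not deleted.
pool="sales"
members="d130,d131"   # hypothetical member volumes of the pool

shrink_cmd="nas_pool -shrink $pool -volumes $members"
delete_cmd="nas_pool -delete $pool"

echo "$shrink_cmd"
echo "$delete_cmd"
```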


Delete a user-defined storage pool and its volumes

The -deep option deletes the storage pool and also recursively deletes each member of the storage pool unless it is in use or is a disk volume.

Action

To delete a user-defined storage pool and the volumes in it, use this command syntax:

$ nas_pool -delete {<name>|id=<id>} [-deep]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To delete the storage pool named sales, type:

$ nas_pool -delete sales -deep

Output

id                 = 7
name               = sales
description        =
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = True
thin               = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A


Chapter 5: Troubleshooting

As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC Customer Support Representative.

Problem Resolution Roadmap for VNX contains additional information about using the EMC Online Support website and resolving problems.

Topics included are:

◆ AVM troubleshooting considerations on page 126
◆ EMC E-Lab Interoperability Navigator on page 126
◆ Known problems and limitations on page 126
◆ Error messages on page 127
◆ EMC Training and Professional Services on page 128


AVM troubleshooting considerations

Consider these steps when troubleshooting AVM:

◆ Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station before reporting problems, which helps to diagnose the problem faster. Additionally, save any files in /nas/tasks when problems are seen from the Unisphere for File software. The support material script collects information related to the Unisphere for File software and APL.

◆ Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information in /nas/log/nas_log.al.tran.
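The collection steps above can be sketched as a small script. The bundle path and archive name are assumptions, and the /nas directories exist only on a Control Station, so the sketch skips any that are missing:

```shell
# Sketch: gather the directories support asks for into one tar archive
# and enable the extra replication logging. Archive location is an
# assumption, not a product convention.
support_dir="/tmp/avm_support.$$"
mkdir -p "$support_dir"

# Copy whichever of the requested directories exist on this system.
for d in /nas/log /nas/volume /nas/tasks; do
    [ -d "$d" ] && cp -r "$d" "$support_dir"/
done

# Extra logging for subsequent commands run from this shell.
export NAS_REPLICATE_DEBUG=1

tar -cf "$support_dir.tar" -C /tmp "$(basename "$support_dir")"
echo "support bundle: $support_dir.tar"
```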

EMC E-Lab Interoperability Navigator

The EMC E-Lab™ Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available on the EMC Online Support website at http://Support.EMC.com. After logging in, locate the applicable Support by Product page, find Tools, and click E-Lab Interoperability Navigator.

Known problems and limitations

Table 9 on page 126 describes known problems that might occur when using AVM and automatic file system extension and presents workarounds.

Table 9. Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions recognize temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools or checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool. This protects them from being used by system-defined storage pools (and manual volume management).

Known problem: In an NFS environment, the write activity to the file system starts immediately when a file changes. When the file system reaches the HWM, it begins to automatically extend but might not finish before the Control Station issues a file system full error. This causes an automatic extension failure. In a CIFS environment, the CIFS/Windows Microsoft client does Persistent Block Reservation (PBR) to reserve the space before the writes begin. As a result, the file system full error occurs before the HWM is reached and before automatic extension is initiated.
Symptom: An error message indicating the failure of automatic extension start, and a full file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file system to ensure automatic extension can accommodate normal file system activity. Set the HWM to allow enough free space in the file system to accommodate write operations to the largest average file in that file system. For example, if you have a file system that is 100 GB, and the largest average file in that file system is 20 GB, set the HWM for automatic extension to 70%. Changes made to the 20 GB file might cause the file system to reach the HWM, or 70 GB. There is 30 GB of space left in the file system to handle the file changes, and to initiate and complete automatic extension without failure.

Error messages

All event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation.

To view message details, use any of these methods:

◆ Unisphere software:

• Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.

◆ CLI:

• Type nas_message -info <MessageID>, where <MessageID> is the message identification number.

◆ Celerra Error Messages Guide:

• Use this guide to locate information about messages that are in the earlier-release message format.

◆ EMC Online Support website:


• Use the text from the error message's brief description or the message's ID to search the Knowledgebase on the EMC Online Support website. After logging in to EMC Online Support, locate the applicable Support by Product page, and search for the error message.

EMC Training and Professional Services

EMC Customer Education courses help you learn how EMC storage products work together within your environment to maximize your entire infrastructure investment. EMC Customer Education features online and hands-on training in state-of-the-art labs conveniently located throughout the world. EMC customer training courses are developed and delivered by EMC experts. Go to the EMC Online Support website at http://Support.EMC.com for course and registration information.

EMC Professional Services can help you implement your system efficiently. Consultants evaluate your business, IT processes, and technology, and recommend ways that you can leverage your information for the most benefit. From business plan to implementation, you get the experience and expertise that you need without straining your IT staff or hiring and training new personnel. Contact your EMC Customer Support Representative for more information.


Glossary

A

automatic file system extension
Configurable file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached.

See also high water mark.

Automatic Volume Management (AVM)
Feature of VNX for file that creates and manages volumes automatically without manual volume management by an administrator. AVM organizes volumes into storage pools that can be allocated to file systems.

See also thin provisioning.

D

disk volume
On a VNX for file, a physical storage unit as exported from the storage system. All other volume types are created from disk volumes.

See also metavolume, slice volume, stripe volume, and volume.

F

File migration service
Feature for migrating file systems from NFS and CIFS source file servers to the VNX for file. The online migration is transparent to users once it starts.

file system
Method of cataloging and managing the files and directories on a system.

Fully Automated Storage Tiering (FAST)
Lets you assign different categories of data to different types of storage media within a tiered pool. Data categories may be based on performance requirements, frequency of use, cost, and other considerations. The FAST feature retains the most frequently accessed or important data on fast, high performance (more expensive) drives, and moves the less frequently accessed and less important data to less-expensive (lower-performance) drives.

H

high water mark (HWM)
Trigger point at which the VNX for file performs one or more actions, such as sending a warning message, extending a volume, or updating a file system, as directed by the related feature's software/parameter settings.

L

logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

M

mapped pool
A storage pool that is dynamically created during the normal storage discovery (diskmark) process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of LUNs that use any combination of data services (thin, thick, auto-tiering, mirrored, and VNX compression). However, for the best file system performance, mapped pools should contain only the same type of LUNs that use the same data services (all thick or all thin, all the same auto-tiering options, all mirrored or none mirrored, and all compressed or none compressed).

metavolume
On VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file system must be created on top of a unique metavolume.

See also disk volume, slice volume, stripe volume, and volume.
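Conceptually, a metavolume presents its member volumes end to end as one address space. A minimal sketch of that mapping follows; the function and its arguments are hypothetical, for illustration only.

```python
def locate_in_metavolume(offset, member_sizes):
    """Map a metavolume byte offset to (member_index, offset_in_member).

    Illustrative only: members are simply concatenated, so an offset
    falls in the first member whose cumulative size exceeds it.
    """
    for index, size in enumerate(member_sizes):
        if offset < size:
            return index, offset
        offset -= size
    raise ValueError("offset is beyond the end of the metavolume")
```

For example, with members of 100 and 50 bytes, offset 120 falls 20 bytes into the second member.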

S

slice volume
On VNX for file, a logical piece or specified area of a volume used to create smaller, more manageable units of storage.

See also disk volume, metavolume, stripe volume, and volume.

storage pool
Groups of available disk volumes organized by AVM that are used to allocate available storage to file systems. They can be created automatically by AVM or manually by the user.

See also Automatic Volume Management (AVM).


stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible.

See also disk volume, metavolume, and slice volume.
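The interlaced addressing can be sketched as a simple modulo mapping; the names and byte-level granularity here are illustrative assumptions, not VNX internals.

```python
def locate_in_stripe(offset, stripe_unit, n_members):
    """Map a stripe-volume byte offset to (member_index, offset_in_member).

    Illustrative only: consecutive stripe units rotate across members,
    which is what spreads sequential I/O over all of them.
    """
    unit, within = divmod(offset, stripe_unit)
    member = unit % n_members        # stripe units rotate across members
    row = unit // n_members          # completed full stripes on each member
    return member, row * stripe_unit + within
```

With a 32 KB stripe unit across 4 members, the first four 32 KB units land on members 0 through 3, and the fifth wraps back to member 0.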

system-defined storage pool
Predefined AVM storage pools that are set up to help you easily manage both storage volume structures and file system provisioning by using AVM.

T

thin provisioning
Configurable VNX for file feature that lets you allocate storage based on long-term projections, while you dedicate only the file system resources that you currently need. NFS or CIFS clients and applications see the virtual maximum size of the file system, of which only a portion is physically allocated.

See also Automatic Volume Management.

U

Universal Extended File System (UxFS)
High-performance, VNX for file default file system, based on traditional Berkeley UFS, enhanced with 64-bit support, metadata logging for high availability, and several performance enhancements.

user-defined storage pools
User-created storage pools containing volumes that are manually added. User-defined storage pools provide an appropriate option for users who want control over their storage volume structures while still using the automated file system provisioning functionality of AVM to provision file systems from those pools.

V

volume
On VNX for file, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives.

See also disk volume, metavolume, slice volume, and stripe volume.


Index

A
algorithm

    automatic file system extension 58
    Symmetrix 47
    system-defined storage pools 39
    VNX for block 42

attributes
    storage pool, modify 109, 110, 113
    storage pools 36
    system-defined storage pools 110
    user-defined storage pools 113

automatic file system extension
    algorithm 58
    and VNX Replicator interoperability considerations 59
    considerations 63
    enabling 70
    how it works 27
    maximum size option 81
    maximum size, set 95
    options 26
    restrictions 12
    thin provisioning 96

Automatic Volume Management (AVM)
    restrictions 11
    storage pool 27

C
cautions 14, 15

    spanning storage systems 14
character support, international 15
checkpoint, create for file system 100
clar_r1 storage pool 31
clar_r5_economy storage pool 31
clar_r5_performance storage pool 31
clar_r6 storage pool 31

clarata_archive storage pool 31
clarata_r10 storage pool 32
clarata_r3 storage pool 32
clarata_r6 storage pool 32
clarefd_r10 storage pool 32
clarefd_r5 storage pool 32
clarsas_archive storage pool 32
clarsas_r10 storage pool 32
clarsas_r6 storage pool 32
cm_r1 storage pool 32
cm_r5_economy storage pool 32
cm_r5_performance storage pool 32
cm_r6 storage pool 32
cmata_archive storage pool 32
cmata_r10 storage pool 33
cmata_r3 storage pool 33
cmata_r6 storage pool 33
cmefd_r10 storage pool 33
cmefd_r5 storage pool 33
cmsas_archive storage pool 33
cmsas_r10 storage pool 33
cmsas_r6 storage pool 33
considerations

    automatic file system extension 63
    interoperability 59

create a file system 70, 72, 74
    using system-defined pools 72
    using user-defined pools 74

D
data service policy

    removing from storage group 15
delete user-defined storage pools 123
details, display 105
display
    details 105
    size information 106

E
EMC E-Lab Navigator 126
error messages 127
extend file systems

    by size 85
    by volume 87
    with different storage pool 89

extend storage pools
    system-defined by size 121
    user-defined by size 119
    user-defined by volume 118

F
FAST capacity algorithm and striping 16
file system

    create checkpoint 100
    extend by size 85
    extend by volume 87
    quotas 15

file system considerations 63

I
international character support 15

K
known problems and limitations 126

L
legacy CLARiiON and deleting thin items 15

M
masking option and moving LUNs 16
messages, error 127
migrating LUNs 16
modify system-defined storage pools 110

P
planning considerations 59

profiles, volume and storage 39

Q
quotas for file system 15

R
RAID group combinations 34
related information 22
restrictions 11, 12, 13, 14, 15

    automatic file system extension 12
    AVM 11
    Symmetrix volumes 11
    thin provisioning 13
    TimeFinder/FS 15
    VNX for block 14

S
storage pools

    attributes 48
    clar_r1 31
    clar_r5_economy 31
    clar_r5_performance 31
    clar_r6 31
    clarata_archive 31
    clarata_r10 32
    clarata_r3 32
    clarata_r6 32
    clarefd_r10 32
    clarefd_r5 32
    clarsas_archive 32
    clarsas_r10 32
    clarsas_r6 32
    cm_r1 32
    cm_r5_economy 32
    cm_r5_performance 32
    cm_r6 32
    cmata_archive 32
    cmata_r10 33
    cmata_r3 33
    cmata_r6 33
    cmefd_r10 33
    cmefd_r5 33
    cmsas_archive 33
    cmsas_r10 33
    cmsas_r6 33
    delete user-defined 123
    display details 105
    display size information 106
    explanation 27
    extend system-defined by size 121
    extend user-defined by size 119
    extend user-defined by volume 118
    list 104
    modify attributes 109
    remove volumes from user-defined 122
    supported types 30
    symm_ata 31
    symm_ata_rdf_src 31
    symm_ata_rdf_tgt 31
    symm_efd 31
    symm_std 31
    symm_std_rdf_src 31
    symm_std_rdf_tgt 31
    system-defined algorithms 39
    system-defined Symmetrix 47
    system-defined VNX for block 40

symm_ata storage pool 31
symm_ata_rdf_src storage pool 31
symm_ata_rdf_tgt storage pool 31
symm_efd storage pool 31
symm_std storage pool 31
symm_std_rdf_src storage pool 31
symm_std_rdf_tgt storage pool 31
Symmetrix and deleting thin items 15
Symmetrix pool, insufficient space 17
system-defined storage pools 39, 72, 85, 87, 110
    algorithms 39
    create a file system with 72
    extend file systems by size 85
    extend file systems by volume 87

T
thin provisioning, out of space message 16
troubleshooting 125

U
Unicode characters 15
upgrade software 63
user-defined storage pools 74, 85, 87, 113, 122

    create a file system with 74
    extend file systems by size 85
    extend file systems by volume 87
    modify attributes 113
    remove volumes 122

V
VNX for block pool, insufficient space 17
VNX upgrade

    automatic file system extension issue 15
