GPFS Lab Manual


    General Parallel File System 3.2 Labs

    July 7, 2010


    CONTENTS

    Lab   Title

    1     Install and configure a GPFS cluster

    1a    Install and configure a GPFS cluster on AIX via Command Line

    1b    Guide to Installing GPFS*

    2     Storage Pools, Filesets and Policies

    3     Using Replication

    4     Snapshots

    5     Dynamically Adding a Disk to an Online File System

    6     Permanently Remove GPFS*

    *This was not produced by IBM.

    Some labs were modified to promote consistency, or correct errors. Other labs were adapted

    from other IBM documentation.


    GPFS Labs Exercise 1
    Install and configure a GPFS cluster

    Objectives:

    Use the GPFS web based administration tool to install a GPFS cluster.

    Requirements:

    Node names and IP addresses provided by instructor

    Node1: ________________________

    Node2: ________________________

    Account/Cluster name used for exercise

    Name: root

    Password: __________________

    ClusterName: _______________

    Step 1: Initialize the Lab Environment

    This lab assumes the RPM packages are already available on the servers. The packages for Linux are:

    gpfs.base-3.2.1-0.i386.rpm
    gpfs.docs-3.2.1-0.noarch.rpm
    gpfs.gui-3.2.1-0.i386.rpm
    gpfs.src-3.2.1-0.noarch.rpm
    gpfs.base-3.2.1-0.i386.update.rpm
    gpfs.gpl-3.2.1-0.noarch.rpm
    gpfs.msg.en_US-3.2.1-0.noarch.rpm

    You can install the packages manually by copying them to each node and running rpm, or the

    GPFS install wizard will distribute the packages for you. You need to manually install the

    packages on at least one node and start the GUI by running:

    /etc/init.d/gpfsgui start
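    If you take the manual route, one possible way to push the packages to the second node and
    install them there is sketched below (node2 and the package directory are placeholders; adjust
    them for your environment):

    # cd /path/to/gpfs/packages
    # scp gpfs*.rpm root@node2:/tmp/
    # ssh root@node2 "rpm -ivh /tmp/gpfs*.rpm"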

    Step 2: Create the GPFS cluster

    In this step you will create a GPFS cluster on two nodes using the GPFS web based

    administration interface.

    1 Open a web browser to the GPFS web-based administration interface:

    http://[node1 ip address]/ibm/console

    2 Login using the account and password provided by the instructor.


    Account: root

    Password: (root password)

    3 In the navigation pane select "GPFS Management" then "Install GPFS". This will start

    the GPFS configuration wizard.

    4 Select "Create a new Session". This will take you to the Define hosts page.

    5 Under "Defined hosts" click "Add"

    a. Enter the name or ip address of node1, and the root password.

    b. Select "Add this host and add the next host."


    6. Close the Task process dialogue when completed.

    7 Enter the IP address and root password for node2 and select "Add host." Close the

    Task process dialogue when completed.

    8 The hosts have now been added. Select "Next" to go to the Install and Verify Packages page. On this page select "Check existing package installation." Close the task dialog

    when the check is complete.

    9 GPFS ships an open source component called the GPL layer that allows the support of

    a wide variety of Linux kernels. The GPL layer installation page checks that the GPL layer is built and installed correctly. If it is not, the installer will complete the build and

    install. Select "Check existing GPL layer installation". Close the "Task" dialog when

    the check is complete.

    10 GPFS verifies the network configuration of all nodes in the cluster. Select "Check current settings" to verify the network configuration. Close the "Task" dialog when

    the check is complete.

    11 GPFS uses ssh (or another remote command tool) for some cluster operations. The

    installer will verify that ssh is configured properly for a GPFS cluster. Select "Check

    Current Settings" to verify the ssh config. Close the "Task" dialog when the check is

    complete.

    12 It is recommended, though not required, that all the servers synchronize the time using

    a protocol such as NTP. For this lab we will skip the NTP setup. Choose "Skip Setup"

    to continue.


    13 Next, you set the name of the GPFS cluster. Enter the cluster name and select "Next."

    14 The last step is to define the primary and secondary cluster configuration servers. Since

    this is a two node cluster we will leave it at the defaults. Select "Next" to continue.

    15 Select Next to complete the cluster configuration. Close the "Task" dialog when the

    configuration is complete.

    16 When you select "Finish" you will be directed to the cluster management page.

    17 The GPFS cluster is now installed and running.


    GPFS Labs Exercise 1a
    Install and configure a GPFS cluster on AIX via Command Line

    Objectives

    Verify the system environment

    Create a GPFS cluster

    Define NSD's

    Create a GPFS file system

    Requirements

    An AIX 5.3 System

    o Very similar to the Linux configuration, except that AIX uses binary installp images

    instead of RPMs and some admin commands differ

    At least 4 hdisks

    GPFS 3.3 Software with latest PTF

    Step 1: Verify Environment

    1 Verify nodes properly installed

    a Check that the oslevel is supported. On the system run oslevel (see the example after this checklist).

    Check the GPFS FAQ: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp

    b Is the installed OS level supported by GPFS? Yes No

    c Is there a specific GPFS patch level required for the installed OS? Yes No

    d If so what patch level is required? ___________
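    For reference, the AIX level can be checked as shown below; the value printed here is only
    illustrative, your system will report its own level:

    # oslevel -r
    5300-09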

    2 Verify nodes configured properly on the network(s)

    a Write the name of Node1: ____________

    b Write the name of Node2: ____________

    c From node 1 ping node 2

    d From node 2 ping node 1

    If the pings fail, resolve the issue before continuing.

    3 Verify node-to-node ssh communications (For this lab you will use ssh and scp for

    communications)


    a On each node create an ssh-key. To do this use the command ssh-keygen; if you don't

    specify a blank passphrase, -N, then you need to press enter each time you are prompted to create a key with no passphrase until you are returned to a prompt. The result should

    look something like this:

    # ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/.ssh'.
    Your identification has been saved in /.ssh/id_rsa.
    Your public key has been saved in /.ssh/id_rsa.pub.
    The key fingerprint is:
    7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61 sas@perf3-c2-aix

    b On node1 copy the /.ssh/id_rsa.pub file to /.ssh/authorized_keys

    # cp /.ssh/id_rsa.pub /.ssh/authorized_keys

    c From node1 copy the /.ssh/id_rsa.pub file from node2 to /tmp/id_rsa.pub

    # scp node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub

    d Add the public key from node2 to the authorized_keys file on node1

    # cat /tmp/id_rsa.pub >> /.ssh/authorized_keys

    e Copy the authorized key file from node1 to node2

    # scp /.ssh/authorized_keys node2:/.ssh/authorized_keys

    f To test your ssh configuration, ssh as root between node1 and node2 in every direction until

    you are no longer prompted for a password or for addition to the known_hosts file.

    node1# ssh node1 date
    node1# ssh node2 date
    node2# ssh node1 date
    node2# ssh node2 date

    g Suppress ssh banners by creating a .hushlogin file in the root home directory

    # touch /.hushlogin

    4 Verify the disks are available to the system

    For this lab you should have 4 disks available for use, hdisk_ through hdisk_ (numbers will vary per system).

    1. Use lspv to verify the disks exist (sample output follows)
    2. Ensure you see 4 disks besides hdisk0.
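    Typical lspv output looks something like the following; disk names, PVIDs and volume group
    assignments will differ on your system:

    # lspv
    hdisk0          00c4790caf9dec97          rootvg          active
    hdisk1          none                      None
    hdisk2          none                      None
    hdisk3          none                      None
    hdisk4          none                      None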


    Step 2: Install the GPFS software

    On node1:

    1 Locate the GPFS software in /yourdir/software/base/

    # cd /yourdir/software/base/

    2 Run the inutoc command to create the table of contents

    # inutoc .

    3 Install the base GPFS code using the installp command

    # installp -aXY -d /yourdir/software/base gpfs -f all

    4 Locate the latest GPFS patch level in /yourdir/software/PTF/

    # cd /yourdir/software/PTF/

    5 Run the inutoc command to create the table of contents

    # inutoc .

    6 Install the PTF GPFS code using the installp command

    # installp -aXY -d /yourdir/software/PTF gpfs -f all

    7 Repeat Steps 1-6 on node2

    8 On node1 and node2 confirm GPFS is installed using lslpp

    # lslpp -L gpfs.\*

    the output should look similar to this

    # lslpp -L gpfs.\*
      Fileset            Level    State  Type  Description (Uninstaller)
      ----------------------------------------------------------------------------
      gpfs.base          3.3.0.3    A     F    GPFS File Manager
      gpfs.docs.data     3.3.0.3    A     F    GPFS Server Manpages and Documentation
      gpfs.gui           3.3.0.3    C     F    GPFS GUI
      gpfs.msg.en_US     3.3.0.1    A     F    GPFS Server Messages U.S. English

    Note: Exact versions of GPFS may vary from this example; the important part is that all three packages are present.


    8 Confirm the GPFS binaries are in your path using the mmlscluster command

    # mmlscluster

    mmlscluster: 6027-1382 This node does not belong to a GPFS cluster.
    mmlscluster: 6027-1639 Command failed. Examine previous error messages to determine cause.

    Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin

    Step 3: Create the GPFS cluster

    For this exercise the cluster is initially created with a single node. When creating the cluster

    make node1 the primary configuration server and give node1 the designations quorum and manager. Use ssh and scp as the remote shell and remote file copy commands.

    Primary Configuration server (node1): ________

    Verify fully qualified path to ssh and scp: ssh path________

    scp path_____________

    1 Use the mmcrcluster command to create the cluster

    # mmcrcluster -N node01:manager-quorum -p node01 -r /usr/bin/ssh -R /usr/bin/scp

    2 Run the mmlscluster command again to see that the cluster was created

    # mmlscluster

    GPFS cluster information
    ========================
      GPFS cluster name:         node1.ibm.com
      GPFS cluster id:           13882390374179224464
      GPFS UID domain:           node1.ibm.com
      Remote shell command:      /usr/bin/ssh
      Remote file copy command:  /usr/bin/scp

    GPFS cluster configuration servers:
    -----------------------------------
      Primary server:    node1.ibm.com
      Secondary server:  (none)

     Node  Daemon node name          IP address  Admin node name  Designation
     --------------------------------------------------------------------------
       1   perf3-c2-aix.bvnssg.net   10.0.0.1    node1.ibm.com    quorum-manager

    3 Set the license mode for the node using the mmchlicense command. Use a server license for this node.

    # mmchlicense server --accept -N node01

    Step 4: Start GPFS and verify the status of all nodes

    1 Start GPFS on all the nodes in the GPFS cluster using the mmstartup command

    # mmstartup -a


    2 Check the status of the cluster using the mmgetstate command

    # mmgetstate -a

     Node number  Node name        GPFS state
    ------------------------------------------

    1 node1 active

    Step 5: Add the second node to the cluster

    1 On node1 use the mmaddnode command to add node2 to the cluster

    # mmaddnode -N node2

    2 Confirm the node was added to the cluster using the mmlscluster command

    # mmlscluster

    3 Use the mmchcluster command to set node2 as the secondary configuration server

    # mmchcluster -s node2

    4 Set the license mode for the node using the mmchlicense command. Use a server license for

    this node.

    # mmchlicense server --accept -N node02

    5 Start node2 using the mmstartup command

    # mmstartup -N node2

    6 Use the mmgetstate command to verify that both nodes are in the active state

    # mmgetstate -a

    Step 6: Collect information about the cluster

    Now we will take a moment to check a few things about the cluster. Examine the cluster

    configuration using the mmlscluster command

    1. What is the cluster name? ______________________
    2. What is the IP address of node2? _____________________
    3. What date was this version of GPFS "Built"? ________________

    Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
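    One quick way to pull that line out of the log (the exact wording of the log entry may differ
    between releases) is:

    # grep -i built /var/adm/ras/mmfs.log.latest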

    Step 7: Create NSDs


    You will use the 4 hdisks.

    Make sure they can all hold data and metadata

    Leave the storage pool column blank.

    Leave the Primary and Backup server fields blank

    Sample input files are in /yourdir/samples

    1 On node 1 create directory /yourdir/data

    2 Create a disk descriptor file /yourdir/data/diskdesc.txt using the format:

    #DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
    hdiskw:::dataAndMetadata::nsd1:
    hdiskx:::dataAndMetadata::nsd2:
    hdisky:::dataAndMetadata::nsd3:
    hdiskz:::dataAndMetadata::nsd4:

    Note: hdisk numbers will vary per system.

    3 Create a backup copy of the disk descriptor file /yourdir/data/diskdesc_bak.txt

    # cp /yourdir/data/diskdesc.txt /yourdir/data/diskdesc_bak.txt

    4 Create the NSD's using the mmcrnsd command

    # mmcrnsd -F /yourdir/data/diskdesc.txt

    Step 8: Collect information about the NSD's

    Now collect some information about the NSD's you have created.

    1. Examine the NSD configuration using the mmlsnsd command

    1. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?)

    associated with an NSD? _______

    Step 9: Create a file system

    Now that there is a GPFS cluster and some NSD's available you can create a file system. In this section we will create a file system.

    Set the file system blocksize to 64kb

    Mount the file system at /gpfs


    1 Create the file system using the mmcrfs command

    # mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k

    2 Verify the file system was created correctly using the mmlsfs command

    # mmlsfs fs1

    Is the file system automatically mounted when GPFS starts? _______________

    3 Mount the file system using the mmmount command

    # mmmount all -a

    4 Verify the file system is mounted using the df command

    # df -k

    Filesystem     1024-blocks       Free  %Used   Iused  %Iused  Mounted on
    /dev/hd4             65536       6508    91%    3375     64%  /
    /dev/hd2           1769472     465416    74%   35508     24%  /usr
    /dev/hd9var         131072      75660    43%     620      4%  /var
    /dev/hd3            196608     192864     2%      37      1%  /tmp
    /dev/hd1             65536      65144     1%      13      1%  /home
    /proc                    -          -      -       -      -   /proc
    /dev/hd10opt        327680      47572    86%    7766     41%  /opt
    /dev/fs1         398929107  398929000     1%       1      1%  /gpfs

    5 Use the mmdf command to get information on the file system.

    # mmdf fs1

    How many inodes are currently used in the file system? ______________


    GPFS Labs Exercise 1b
    Guide to Installing GPFS on Red Hat/CentOS 5.x

    This document describes installing GPFS 3.2 on Red Hat/CentOS 5 systems. There are two sets

    of files to install from. The first is the original installer. It is a self-extracting archive containing

    the RPMs for version 3.2.0-0. The second set contains the four RPMs from that same archive. You may choose to use either method.

    When you are finished with this guide, you will have a GPFS cluster up and running. You will not have storage attached to the cluster yet. That will be the next step.

    Following is an explanation of the files:

    gpfs.base-3.2.0-0.i386.rpm

    This is the main GPFS code, including daemons and utility binaries.

    gpfs.docs-3.2.0-0.noarch.rpm

    Obviously documentation. Installing this is optional.

    gpfs.gpl-3.2.0-0.noarch.rpm

    Portability modules for Linux systems.

    gpfs.msg.en_US-3.2.0-0.noarch.rpm

    These are the US English language files. Optional, but you will want to install this.

    There are also five updates. These include an update file at version 3.2.1-20 for each of the files

    listed above, as well as an update file for the GUI (web management interface). Update files

    cannot be installed without first installing the 3.2.0-0 versions, with the exception of the gpfs.gui file; it may be installed without a 3.2.0-0 version of the same.

    Objectives

    Install GPFS with updates to a Red Hat/CentOS 5 server

    Create a new GPFS cluster with a primary and secondary manager and one quorum node

    Add additional nodes into the GPFS cluster

    Requirements

    GPFS 3.2.0-0 software

    GPFS 3.2.1-20 updates (optional)

    Red Hat/CentOS 5 servers in the same subnet with Internet access (for performing yum updates)

    Step 1: Installing the RPMs

    Installing from the IBM Installer

    If you are using the self-extracting installer, perform the following to extract the RPMs:


    # gpfs_install-3.2.0-0_arch

    where arch is your system architecture (i386 or x64). The following 3.2.0-0 RPMs will end up in the /usr/lpp/mmfs/3.2 directory:

    gpfs.base-3.2.0-0.arch.rpm
    gpfs.docs-3.2.0-0.noarch.rpm

    gpfs.gpl-3.2.0-0.noarch.rpm

    gpfs.msg.en_US-3.2.0-0.noarch.rpm

    Note that you will need to have X Windows running, as well as IBM's JRE (Sun and Microsoft

    JREs are insufficient) to view and accept the license agreement. The --text-only switch is

    reported by IBM to remove the X Windows requirement; however, the JRE is still required.

    Using Extracted Files

    You may have files that were previously extracted and stored on a medium. If you are using

    these extracted files, simply copy the files to a directory from which you can install the RPMs.

    Now, copy the updated 3.2.1-20 RPMs into a work directory from which you can install them.

    Step 2: Installing GPFS

    This covers installing the files, which is not the same as installing the cluster. The files get

    everything in place, whereas installing the cluster (covered below) implies configuring and

    starting GPFS. Begin by installing the compat-libstdc++-33 package. This is a dependency for

    the gpfs.base package:

    # yum install compat-libstdc++-33

    To install the 3.2.0-0 RPMs, perform the following:

    # rpm -ivh gpfs.base-3.2.0-0.i386.rpm
    # rpm -ivh gpfs.docs-3.2.0-0.noarch.rpm
    # rpm -ivh gpfs.gpl-3.2.0-0.noarch.rpm
    # rpm -ivh gpfs.msg.en_US-3.2.0-0.noarch.rpm

    To install the 3.2.1-20 RPMs, perform the following:

    # rpm -Uvh gpfs.base-3.2.1-20.i386.update.rpm
    # rpm -Uvh gpfs.docs-3.2.1-20.noarch.rpm
    # rpm -Uvh gpfs.gpl-3.2.1-20.noarch.rpm
    # rpm -Uvh gpfs.msg.en_US-3.2.1-20.noarch.rpm
    # rpm -Uvh gpfs.gui-3.2.1-20.noarch.rpm

    Finally, start the gui application framework:


    # /etc/init.d/gpfsgui start

    This will install GPFS to /usr/lpp/mmfs. The steps above must be completed for every PC in the

    cluster with the exception of the gpfs.gui commands. These may be performed on only one host in the cluster that can act as a manager node (presumably, the first node which is identified as

    the primary below). Note that the gpfsgui is also a service, and will start automatically the next

    time the server starts.

    Step 3: Compiling and Installing the Portability Modules

    Linux installations require that some modules be compiled and installed. IBM refers to these as portability modules. The module source code is installed in /usr/lpp/mmfs/src. Alternatively, the

    updated modules may be downloaded from the DeveloperWorks site.

    Before compiling, you will require the applications necessary to compile software against your

    current kernel. For Red Hat/CentOS, the following will suffice:

    # yum install kernel kernel-headers kernel-devel gcc gcc-c++ imake

    Once complete, it's good practice to verify that the /boot/grub/grub.conf file is referencing the

    new versions of the kernel and initrd files as the default. Reboot the host so that the new kernel

    can take effect.
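    After the reboot you can confirm which kernel is actually running; the version shown here is
    only an example:

    # uname -r
    2.6.18-164.el5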

    For CentOS (or other distributions that are based on Red Hat, but are not Red Hat strictly), you

    will need to edit the /etc/redhat-release file. This file contains the name and version of the

    operating system. When configuring, the configure script looks for Red Hat Enterprise Linux in this file, and will fail if it is not found. Using your favorite editor, change:

    CentOS release 5.x

    to

    Red Hat Enterprise Linux release 5.x

    Next, you will need to set the SHARKCLONEROOT environment variable. The IBM

    documentation indicates that if not set, it will default to /usr/lpp/mmfs/src, however the configure

    script will terminate with an error if the environment variable is not set.

    # export SHARKCLONEROOT=/usr/lpp/mmfs/src

    Configure the build with:

    # cd /usr/lpp/mmfs/src/config
    # ./configure

    The configure script should end with no output to the terminal.

    Finally, you can begin building the modules. Return to the parent directory (/usr/lpp/mmfs/src) and issue the following to compile and install the modules:
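    On GPFS 3.2 the usual sequence is along these lines (a sketch of the commonly documented make
    targets, not necessarily the exact listing from this lab):

    # cd /usr/lpp/mmfs/src
    # make World
    # make InstallImages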


    Append the GPFS binary directory (;/usr/lpp/mmfs/bin) to the end of the PATH= line in
    ~/.bash_profile. (The ; is required to separate this path specification from the last one in
    the line.) Log out, then back in for the changes to take effect (or simply make the same
    change in a shell window).

    Now, transfer the public key to every other host in the cluster:

    # cat ~/.ssh/id_rsa.pub | ssh user@server "cat - >> ~/.ssh/authorized_keys"

    Replace user with root and replace server with the fqdn of the remote host, and issue the

    command a second time replacing server with the IP address of the remote host.
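    For example, for a hypothetical second host named node2.example.com with address 10.0.0.2,
    the two invocations would be:

    # cat ~/.ssh/id_rsa.pub | ssh root@node2.example.com "cat - >> ~/.ssh/authorized_keys"
    # cat ~/.ssh/id_rsa.pub | ssh root@10.0.0.2 "cat - >> ~/.ssh/authorized_keys"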

    Perform these steps on every host in the cluster, so root on each host can log into the root

    account on every other host without the requirement of a password. In addition, you must

    remove any banner text that may be displayed through motd, or the ssh Banner parameter during

    login.

    Start and Add a Node to the Cluster

    Verify that GPFS is properly installed by performing the following:

    # mmlscluster

    This will produce an error message, but will ensure that the cluster software is installed and

    available.

    Verify the location of the ssh and scp binaries. This will be used below.

    # which ssh
    # which scp

    Create the first cluster node:

    # mmcrcluster -N node1-fqdn:manager-quorum -p node1-fqdn -r /usr/bin/ssh \
        -R /usr/bin/scp

    1. The -N node1-fqdn indicates that you are creating a node on the host identified by
       node1-fqdn (presumably, the current host).
    2. The manager-quorum indicates that this is a manager node (it retains copies of the
       configuration files) and a quorum node.
    3. The -p node1-fqdn indicates that this node will be the Primary manager.
    4. The remainder specifies which binaries will be used for remote command execution and
       file transfers.

    Run the mmlscluster command again:

    # mmlscluster


    This should respond with a screen of information about the cluster, including some of the detail

    provided above.

    Prior to installing the remaining nodes, you need to start the cluster:

    # mmstartup -a
    # mmgetstate -a

     Node number  Node name        GPFS state
    -----------------------------------------

    1 node1 active

    The first command starts the cluster. The second displays the status of all (through -a) nodes in the

    cluster (of which there is currently only one).

    Add Remaining Nodes to the Cluster

    # mmaddnode -N node2-fqdn

    # mmlscluster
    # mmchcluster -s node2-fqdn
    # mmstartup -N node2-fqdn
    # mmgetstate -a

    The commands above do the following:

    1. Add a second node to the cluster
    2. List all nodes in the cluster

    3. Set node2-fqdn as the Secondary manager node

    4. Start GPFS on the second node
    5. Display the status of all nodes in the cluster

    For additional nodes, follow the same instructions, omitting the mmchcluster -s command.
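    For instance, a hypothetical third node could be joined and started like this:

    # mmaddnode -N node3-fqdn
    # mmstartup -N node3-fqdn
    # mmgetstate -a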

    At this point, the cluster should be up and running, and all of the GPFS nodes added. You can

    now begin adding storage to the cluster.

    Troubleshooting

    Verify Passwordless Login

    GPFS exchanges files and executes commands on all servers in the cluster. This requires that

    each server have access to other servers without the need to type a password. If you cannot log into another host with the root account, and without a password, GPFS will not be able to communicate properly with other nodes. For example, if you are on node1, and you want to test

    ssh to node2, perform the following:

    node1# ssh node2
    node2#


    Notice that there was no prompt for a password, and no motd/banner messages appear on the

    terminal before a prompt is issued.

    Use the GUI as a Troubleshooting Tool

    You can also use the GUI as a troubleshooting tool. If you have a host, node3-fqdn, that is having trouble, launch the GUI. Select GPFS Management|Install GPFS|Add host. Type node3-fqdn in

    the host name field, and the root password for node3-fqdn. Click the Add Host button. A window

    will appear with three stages of action to be performed.

    Connectivity Check fails: This indicates that the current host cannot communicate with node3-fqdn.

    That may be because the password is incorrect, sshd is not running on the remote host, or

    passwordless authentication (using ssh keys) is not set up correctly.

    Gathering system information fails: This indicates that the portability modules (kernel

    modules) either were not installed, or were not installed correctly on node3-fqdn.

    Cluster membership check fails: This indicates the node is already a member of another

    cluster.

    Nodes in an unknown State

    I shut all of my GPFS hosts down, and when I restart, only the Primary manager is active (when

    running the mmgetstate -a command). All of the others are reported as unknown.

    Problem: Unknown. This has been observed in CentOS 5 + VMWare + Open/iSCSI environments.

    Solution: You will need to drop all of the unknown nodes out of the cluster, then re-add them. This will have the effect of renumbering the nodes, however. This solution is a particular

    problem for node2 which (from the instructions above) was installed as the secondary manager.

    You cannot remove a primary or secondary manager from the cluster. The strategy is to shuffle around which node is the secondary manager during the drop and add process. The following

    will accomplish this for you. In this example, there are four nodes, node1 which is the primary

    manager (and is active), node2 which is the secondary manager, and node3/node4 which have no

    special designation.

    # mmshutdown -a
    # mmdeletenode -N node3
    # mmdeletenode -N node4
    # mmaddnode -N node3
    # mmaddnode -N node4
    # mmchcluster -s node3    (changes the sec. manager to node 3)
    # mmdeletenode -N node2
    # mmaddnode -N node2
    # mmchcluster -s node2
    # mmstartup -a


    GPFS Lab Exercise 2
    Storage Pools, Filesets and Policies

    Objectives:

    * Create storage pools
    * Create a file system
    * Create filesets
    * Implement a Placement Policy
    * Define and execute a file management policy

    Requirements:

    Complete Exercise 1: Installing the cluster

    List of available devices

    /dev/sd__

    /dev/sd__

    /dev/sd__

    /dev/sd__

    /dev/sd__

    Step 1: Create A File System With 2 Storage Pools

    Storage pools are defined when an NSD (Network Shared Disk) is created. You will use the 4

    disks (labelled sd_ through sd_ ) provided by your instructor to create 2 storage pools. Since the

    storage is all direct attached, no NSD servers are needed. Place the disks into two storage pools

    and make sure both pools can store file data.

    Hints:

    Create 2 disks per storage pool

    The first pool must be the system pool.

    The system pool should be able to store data and metadata

    Only the system pool can contain metadata

    Create a disk descriptor file /gpfs-course/data/pooldesc.txt using the format:

    #DiskName:serverlist::DiskUsage:FailureGroup:DesiredName:StoragePool
    /dev/sd__:::dataAndMetadata::nsd1:system
    /dev/sd__:::dataAndMetadata::nsd2:system
    /dev/sd__:::dataOnly::nsd3:pool1
    /dev/sd__:::dataOnly::nsd4:pool1

    The two storage pools will be system and pool1.

    1 Create a backup copy of the disk descriptor file


    /gpfs-course/data/pooldesc_bak.txt

    2 Create the NSD's using the mmcrnsd command.

    > mmcrnsd -F /gpfs-course/data/pooldesc.txt -v no

    mmcrnsd: Processing disk sdc
    mmcrnsd: Processing disk sdd
    mmcrnsd: Processing disk sde
    mmcrnsd: Processing disk sdf
    mmcrnsd: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.

    3 Create a file system based on these NSD's using the mmcrfs command

    * Set the file system blocksize to 64KB
    * Mount the file system at /gpfs

    Command: "mmcrfs /gpfs fs1 -F /gpfs-course/data/pooldesc.txt -B 64k -M 2 -R 2"

    Example:

    gpfs1:~ # mmcrfs /gpfs fs1 -F /gpfs-course/data/pooldesc.txt -B 64k -M2 -R2

    The following disks of fs1 will be formatted on node gpfs1:
        nsd1: size 20971520 KB
        nsd2: size 20971520 KB
        nsd3: size 20971520 KB
        nsd4: size 20971520 KB
    Formatting file system ...
    Disks up to size 53 GB can be added to storage pool 'system'.
    Disks up to size 53 GB can be added to storage pool 'pool1'.
    Creating Inode File
      45 % complete on Wed Sep 26 10:05:27 2007
      89 % complete on Wed Sep 26 10:05:32 2007
     100 % complete on Wed Sep 26 10:05:33 2007
    Creating Allocation Maps
    Clearing Inode Allocation Map
    Clearing Block Allocation Map
      42 % complete on Wed Sep 26 10:05:52 2007
      83 % complete on Wed Sep 26 10:05:57 2007
     100 % complete on Wed Sep 26 10:05:59 2007
      43 % complete on Wed Sep 26 10:06:04 2007
      85 % complete on Wed Sep 26 10:06:09 2007
     100 % complete on Wed Sep 26 10:06:10 2007
    Completed creation of file system /dev/fs1.
    mmcrfs: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.

    4 Verify the file system was created correctly using the mmlsfs command

    # mmlsfs fs1

    5 Mount the file system using the mmmount command

    # mmmount fs1 -a

    6 Verify the file system is mounted using the df command.

    # df


    Filesystem      1K-blocks        Used  Available  Use%  Mounted on
    /dev/sda3        32908108    10975916   21932192   34%  /
    tmpfs             2019884           4    2019880    1%  /dev/shm
    /dev/sda1           72248       45048      27200   63%  /boot
    /dev/fs1       3989291072   491992640 3497298432   13%  /gpfs

    7 Verify the storage pool configuration using the mmdf command

    # mmdf fs1
    disk              disk size  failure holds    holds          free KB             free KB
    name                  in KB    group metadata data       in full blocks       in fragments
    --------------- ----------- -------- -------- ----- -------------------- -------------------
    Disks in storage pool: system
    nsd1              102734400       -1 yes      yes      102565184 (100%)          90 ( 0%)
    nsd2              102734400       -1 yes      yes      102564608 (100%)          96 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)      205468800                            205129792 (100%)         186 ( 0%)

    Disks in storage pool: pool1
    nsd3              102734400       -1 no       yes      102732288 (100%)          62 ( 0%)
    nsd4              102734400       -1 no       yes      102732288 (100%)          62 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)      205468800                            205464576 (100%)         124 ( 0%)

                    ===========                        ==================== ===================
    (data)            410937600                            410594368 (100%)         310 ( 0%)
    (metadata)        205468800                            205129792 (100%)         186 ( 0%)
                    ===========                        ==================== ===================
    (total)           410937600                            410594368 (100%)         310 ( 0%)

    Inode Information
    -----------------
    Number of used inodes:            4038
    Number of free inodes:          397370
    Number of allocated inodes:     401408
    Maximum number of inodes:       401408

    Step 2: Create Filesets

    We are going to create 5 filesets to organize the data.

    1 Create 5 filesets fileset1-fileset5 using the mmcrfileset command

    # mmcrfileset fs1 fileset1
    # mmcrfileset fs1 fileset2
    # mmcrfileset fs1 fileset3
    # mmcrfileset fs1 fileset4
    # mmcrfileset fs1 fileset5

    2 Verify they were created using the mmlsfileset command

    # mmlsfileset fs1

    What is the status of fileset1-fileset5? _______________

    3 Link the filesets into the file system using the mmlinkfileset command

    # mmlinkfileset fs1 fileset1 -J /gpfs/fileset1
    # mmlinkfileset fs1 fileset2 -J /gpfs/fileset2
    # mmlinkfileset fs1 fileset3 -J /gpfs/fileset3
    # mmlinkfileset fs1 fileset4 -J /gpfs/fileset4
    # mmlinkfileset fs1 fileset5 -J /gpfs/fileset5


    Now what is the status of fileset1-fileset5? ___________________

    Step 3: Create a file placement policy

    Now that you have two storage pools and some filesets you need to define placement policies to instruct GPFS where you would like the file data placed. By default, if the system pool can

    accept data, all files will go to the system storage pool. You are going to change the default and

    create 3 placement rules. The rules will designate:

    Data in fileset1 to fileset4 go to the system storage pool

    Data in fileset5 go to the pool1 storage pool

    files that end in .dat go to pool1 storage pool

    1 Start by creating a policy file /gpfs-course/data/placementpolicy.txt

    /* The fileset does not matter, we want all .dat and .DAT files to go to pool1 */

    RULE 'datfiles' SET POOL 'pool1' WHERE UPPER(name) like '%.DAT'

    /* All non *.dat files placed in fileset5 will go to pool1 */
    RULE 'fs5' SET POOL 'pool1' FOR FILESET ('fileset5')

    /* Set a default rule that sends all files not meeting the other criteria to the system pool */
    RULE 'default' SET POOL 'system'

    2 Install the new policy file using the mmchpolicy command

    # mmchpolicy fs1 placementpolicy.txt
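    To confirm which rules are now in effect, you can list the installed policy (assuming the
    mmlspolicy command is available in your release):

    # mmlspolicy fs1 -L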

    Step 4: Testing the placement policies

    Now you will do some experiments to see how policies work. Use this chart to track the

    experiment results. You can get the amount of free space by using the mmdf command.

    Experiment        System Pool (Free KB)    Pool1 (Free KB)
    Before            ____________             ____________
    Bigfile1          ____________             ____________
    bigfile1.dat      ____________             ____________
    bigfile2          ____________             ____________
    Migrate/Delete    ____________             ____________

    1 Record the "Before" free space in the chart

    2 Create a file in fileset1 called bigfile1

    # dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k count=10000

    3 Record the free space in each pool using the mmdf command (Bigfile1)


    # mmdf fs1
    disk              disk size  failure holds    holds          free KB             free KB
    name                  in KB    group metadata data       in full blocks       in fragments
    --------------- ----------- -------- -------- ----- -------------------- -------------------
    Disks in storage pool: system
    nsd1               20971520       -1 yes      yes       20588288 ( 98%)         930 ( 0%)
    nsd2               20971520       -1 yes      yes       20588608 ( 98%)         806 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)       41943040                             41176896 ( 98%)        1736 ( 0%)

    Disks in storage pool: pool1
    nsd3               20971520       -1 no       yes       20969408 (100%)          62 ( 0%)
    nsd4               20971520       -1 no       yes       20969408 (100%)          62 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)       41943040                             41938816 (100%)         124 ( 0%)

                    ===========                        ==================== ===================
    (data)             83886080                             83115712 ( 99%)        1860 ( 0%)
    (metadata)         41943040                             41176896 ( 98%)        1736 ( 0%)
                    ===========                        ==================== ===================
    (total)            83886080                             83115712 ( 99%)        1860 ( 0%)

    Inode Information
    -----------------
    Number of used inodes:            4044
    Number of free inodes:           78132
    Number of allocated inodes:      82176
    Maximum number of inodes:        82176

    4 Create a file in fileset1 called bigfile1.dat

    # dd if=/dev/zero of=/gpfs/fileset1/bigfile1.dat bs=64k count=1000

    Record the free space (bigfile1.dat)

    5 Create a file in fileset5 called bigfile2

    # dd if=/dev/zero of=/gpfs/fileset5/bigfile2 bs=64k count=1000

    Record the free space (bigfile2)

    6 Questions:

    Where did the data go for each file?

    Bigfile1 ______________
    Bigfile1.dat ______________

    Bigfile2 ______________

    Why?

    7 Create a couple more files (These will be used in the next step)

    # dd if=/dev/zero of=/gpfs/fileset3/bigfile3 bs=64k count=10000
    # dd if=/dev/zero of=/gpfs/fileset4/bigfile4 bs=64k count=10000

    Step 5: File Management Policies

    Now that you have data in the file system you are going to manage the placement of the file data

    using file management policies. For this example your business rules say that all file names that


    start with the letters "big" need to be moved to pool1. In addition, all files that end in ".dat"

    should be deleted.

    1 To begin, create a policy file /gpfs-course/data/managementpolicy.txt that implements

    the business rules.

    RULE 'datfiles' DELETE WHERE UPPER(name) like '%.DAT'
    RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE UPPER(name) like 'BIG%'

    2 Test the rule set using the mmapplypolicy command

    # mmapplypolicy fs1 -P managementpolicy.txt -I test

    This command will show you what mmapplypolicy will do but will not actually perform the delete or migrate.

    3 Actually perform the migration and deletion using the mmapplypolicy command

    # mmapplypolicy fs1 -P managementpolicy.txt

    4 Review:

    Review the output of the mmapplypolicy command to answer these questions.

    How many files were deleted? ____________

    How many files were moved? ____________

    How many KB total were moved? ___________

    Step 6: Using External Pools

    In this step you will use the external pool interface to generate a report.

    1 Create two files: expool1.bash and listrule1.txt.

    File /tmp/expool1.bash:

    #!/bin/bash

    dt=`date +%h%d%y-%H_%M_%S`
    results=/tmp/FileReport_${dt}

    echo one $1
    if [[ $1 == 'MIGRATE' ]]; then
        echo Filelist
        echo There are `cat $2 | wc -l` files that match >> ${results}
        cat $2 >> ${results}
        echo ----
        echo - The file list report has been placed in ${results}
        echo ----
    fi

    File listrule1.txt

    RULE EXTERNAL POOL 'externalpoolA' EXEC '/tmp/expool1.bash'
    RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 2


    Note: You may need to modify the where clause to get a list of files on your file system.
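    For example, to report only files larger than about 1 MB you could change the second rule to
    something like the following (the threshold is arbitrary):

    RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 1048576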

    2 Make the external pool script executable

    # chmod +x /tmp/expool1.bash

    3 Execute the job

    # mmapplypolicy fs1 -P listrule1.txt

    It will print output to the screen. When it is done it will print the location of the results file. For example:

    The file list report has been placed in /tmp/FileReport_Jul3108-20_15_50

    4 What information do you see in the file?


    GPFS Exercise Lab 3
    Using Replication

    In this lab we will configure a file system for replication.

    Objectives:

    Enable data and metadata replication

    Verify and monitor a file's replication status

    Requirements:

    1. Complete Exercise 1: Installing the cluster

    2. List of available devices

    /dev/sd__

    /dev/sd__

    /dev/sd__

    /dev/sd__

    Step 1: Enabling Replication

    1 The max replication factor should have been set in Lab 1. Use the mmlsfs command to

    verify the file system is enabled for replication. A file system is enabled for replication when the maximum number of data and metadata replicas is set to 2.

    2 If these parameters are not set to 2 you will need to recreate the file system. To recreate the file system:

    a. Unmount the file system

    b. Delete the file system

    c. Create the file system and specify -M 2 and -R 2

    # mmcrfs /gpfs fs1 -F pooldesc.txt -B 64k -M 2 -R 2

    Where pooldesc.txt is the disk descriptor file from Lab 1

    Step 2: Change the failure group on the NSDs

    1 View the current disk usage using the mmdf command

    # mmdf fs1

    2 Check the status of replication with the mmlsdisk command:

    # mmlsdisk fs1


    The failure group should be set to a value of -1

    3 Change the failure group to 1 for nsd1 and nsd3 and to 2 for nsd2 and nsd4 using the

    mmchdisk command.

    # mmchdisk fs1 change -d "nsd1:::dataAndMetadata:1:::"
    # mmchdisk fs1 change -d "nsd2:::dataAndMetadata:2:::"
    # mmchdisk fs1 change -d "nsd3:::dataOnly:1:::"
    # mmchdisk fs1 change -d "nsd4:::dataOnly:2:::"

    4 Verify the changes using the mmdf and mmlsdisk commands:

    # mmlsdisk fs1
    # mmdf fs1

    Notice that data was not written because the default replication level is still set to 1. Now

    that there are two failure groups you can see how to change the replication status of a file.

    Step 3: Replicate a file

    Replication status can be set at the file level. In this step we will replicate the data and metadata

    of a single file in the file system.

    1 Create a file in the GPFS file system, /gpfs, called bigfile10

    # dd if=/dev/zero of=/gpfs/fileset1/bigfile10 bs=64k count=1000

    2 Use the mmlsattr command to check the replication status of the file bigfile10

    # mmlsattr /gpfs/fileset1/bigfile10
    replication factors
    metadata(max) data(max) file [flags]
    ------------- --------- ---------------
          1 ( 2)    1 ( 2)  /gpfs/fileset1/bigfile10

    3 Change the file replication status of bigfile10 so that it is replicated in two failure groups

    using the mmchattr command.

    # mmchattr -m 2 -r 2 /gpfs/fileset1/bigfile10

    Notice that this command takes a few moments to execute; when you change the replication
    status of a file, the data is copied before the command completes unless you use the
    "-I defer" option.
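    If you would rather have the command return immediately and copy the data later, a deferred
    variant might look like this (the restripe can then be run at a convenient time):

    # mmchattr -m 2 -r 2 -I defer /gpfs/fileset1/bigfile10
    # mmrestripefs fs1 -R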

    4 Again use the mmlsattr command to check the replication status of the file bigfile10

    # mmlsattr /gpfs/fileset1/bigfile10


    Did you see a change in the replication status of the file?

    Step 4: Replicate all data in the file system

    If desired you can replicate all of the data in the file system. In this step we will change the default replication status for the whole file system.

    1 Create a file in fileset1 called bigfile11:

    # dd if=/dev/zero of=/gpfs/fileset1/bigfile11 bs=64k count=1000

    2 Use the mmlsattr command to check the replication status of the file bigfile11:

    # mmlsattr /gpfs/fileset1/bigfile11

    3 Using the mmchfs command change the default replication status for fs1:

    # mmchfs fs1 -m 2 -r 2

    4 Use the mmlsattr command to check the replication status of the file bigfile11:

    # mmlsattr /gpfs/fileset1/bigfile11

    Has the replication status of bigfile11 changed? _________________

    5 The replication status of existing files does not change until mmrestripefs is run. New files

    created after the mmchfs command above will get replicated, however. To test this create a

    new file called bigfile12:

    # dd if=/dev/zero of=/gpfs/fileset1/bigfile12 bs=64k count=1000

    6 Use the mmlsattr command to check the replication status of the file bigfile12:

    # mmlsattr /gpfs/fileset1/bigfile12

    Is the file replicated?

    7 You can replicate the existing files in the file system using the mmrestripefs command:

    # mmrestripefs fs1 -R

    8 Recheck the file replication status of bigfile11:

    # mmlsattr /gpfs/fileset1/bigfile11

    Is the file replicated now?


    GPFS Exercise Lab 4
    Snapshots

    In this lab we will use the snapshot feature to create online copies of files.

    Objectives:

    Create a file system snapshot

    Restore a user deleted file from a snapshot image

    Manage multiple snapshot images

    Requirements:

    1. Complete Exercise 1: Installing the cluster

    2. A File System - Use Exercise 2 to create a file system if you do not already have one.

    Step 1: Use a snapshot to backup a file

    A snapshot is a point in time view of a file system. To see how snapshots operate you will create

    a file, take a snapshot, delete the file, then restore the file from the snapshot.

    1 Create a file for testing in the /gpfs/fileset1 directory:

    # echo "hello world:snap1" > /gpfs/fileset1/snapfile1

    2 Create a snapshot image using the mmcrsnapshot command:

    # mmcrsnapshot fs1 snap1

    Writing dirty data to disk
    Quiescing all file system operations
    Writing dirty data to disk again
    Creating snapshot.
    Resuming operations.

    3 Modify the file for testing in the /gpfs/fileset1 directory:

    # echo "hello world:snap2" >> /gpfs/fileset1/snapfile1

    4 Create a second snapshot image using the mmcrsnapshot command:

    # mmcrsnapshot fs1 snap2

    5 View the list of snapshots created using the mmlssnapshot command

    Example:


    # mmlssnapshot fs1

    gpfs1:~ # mmlssnapshot fs1
    Snapshots in file system fs1:
    Directory    SnapId    Status  Created
    snap1             2    Valid   Wed Sep 26 11:03:52 2007
    snap2             3    Valid   Wed Sep 26 11:04:52 2007

    6 Delete the file /gpfs/fileset1/snapfile1. Now that the file is deleted, let's see what is in the snapshots.

    7 Take a look at the snapshot images. To view the images, change directories to the .snapshots

    directory

    # cd /gpfs/.snapshots

    What directories do you see? _____________________

    8 Compare the snapfile1 stored in each snapshot:

    # cat snap1/fileset1/snapfile1
    [output omitted]

    # cat snap2/fileset1/snapfile1
    [output omitted]

    Are the file contents the same? _______________

    9 To restore the file from the snapshot copy the file back into the original location:

    # cp /gpfs/.snapshots/snap2/fileset1/snapfile1 \
        /gpfs/fileset1/snapfile1

    10 When you are done with a snapshot you can delete the snapshot. Delete both of these

    snapshots using the mmdelsnapshot command:

    # mmdelsnapshot fs1 snap1
    # mmdelsnapshot fs1 snap2

    11 Verify the snapshots were deleted using the mmlssnapshot command:

    # mmlssnapshot fs1


    GPFS Exercise Lab 5
    Dynamically Adding a Disk to an Online File System

    Objectives:

    Add a disk to a storage pool online

    Re-balance existing data in the file system

    Requirements:

    1. Complete Exercise 1: Installing the cluster
    2. A File System (Use Exercise 2 to create a file system if you do not already have one).

    3. Device to add

    /dev/sd___

    Step 1: Add a disk to the existing file system

    1 Verify that GPFS is running and the file system is mounted using the mmgetstate

    command and the df command

    a. The mmgetstate command will show the status of the nodes in the cluster.

    # mmgetstate -a

    b. The df command will display the mounted GPFS file system.

    # df

    2 Create a disk descriptor file /gpfs-course/data/adddisk.txt for the new disk using the format:

    #DiskName:serverlist::DiskUsage:FailureGroup:DesiredName:StoragePool
    /dev/sd_:::dataOnly::nsd5:pool1

    3 Use the mmcrnsd command to create the NSD:

    # mmcrnsd -F /gpfs-course/data/adddisk.txt

    4 Verify the disk has been created using the mmlsnsd command:

    # mmlsnsd

    The disk you just added should show as a (free disk)

    5 Add the new NSD to the fs1 file system using the mmadddisk command:


    # mmadddisk fs1 -F /gpfs-course/data/adddisk.txt

    6 Verify the NSD was added using the mmdf command

    # mmdf fs1
    disk              disk size  failure holds    holds          free KB             free KB
    name                  in KB    group metadata data       in full blocks       in fragments
    --------------- ----------- -------- -------- ----- -------------------- -------------------
    Disks in storage pool: system
    nsd1               20971520        1 yes      yes       20873984 (100%)         284 ( 0%)
    nsd2               20971520        2 yes      yes       20873984 (100%)         202 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)       41943040                             41747968 (100%)         486 ( 0%)

    Disks in storage pool: pool1
    nsd3               20971520        1 no       yes       20969408 (100%)          62 ( 0%)
    nsd4               20971520        2 no       yes       20969408 (100%)          62 ( 0%)
    nsd5               20971520       -1 no       yes       20969408 (100%)          62 ( 0%)
                    -----------                        -------------------- -------------------
    (pool total)       62914560                             62908224 (100%)         186 ( 0%)

                    ===========                        ==================== ===================
    (data)            104857600                            104656192 (100%)         672 ( 0%)
    (metadata)         41943040                             41747968 (100%)         486 ( 0%)
                    ===========                        ==================== ===================
    (total)           104857600                            104656192 (100%)         672 ( 0%)

    Inode Information
    -----------------
    Number of used inodes:            4045
    Number of free inodes:           78131
    Number of allocated inodes:      82176
    Maximum number of inodes:        82176

    Step 2: Re-balancing the data

    In some cases you may wish to have GPFS re-balance existing data over the new disks that were

    added to the file system. Often it is not necessary to manually re-balance the data across the new disks. New data that is added to the file system is correctly striped. Re-striping a large file

    system requires a large number of insert and delete operations and may affect system

    performance. Plan to perform this task when system demand is low.

    1 To re-balance the existing data in the file system use the mmrestripefs command:

    # mmrestripefs fs1 -b

    2 Use the mmdf command to view the utilization of each disk.


    GPFS Exercise Lab 6
    Permanently Remove GPFS

    WARNING!: Performing these steps will remove GPFS and destroy all data on the GPFS file

    system.

    Objectives:

    Remove all traces of GPFS from hosts in a cluster

    Requirements:

    1. A cluster with multiple nodes of GPFS

    Step 1: Shut down the necessary services

    1 Unmount all file systems

    # mmumount all -a

    2 Delete the file system

    # mmdelfs fs1

    (Replace fs1 with your file system name. Repeat for additional file systems.)

    3 Remove all NSDs:

    # mmdelnsd nsd1

    (Replace NSD1 with your NSD name. Repeat for additional NSDs.)

    4 Stop GPFS:

    # mmshutdown -a

    Step 2: Remove GPFS packages

    This section assumes GPFS is installed on Linux using the .rpm packages.

    1 Verify the packages installed on the current host:

    # rpm -qa | grep gpfs

    Write down each package name (less the version).
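    If you want the names without version numbers, one way (using rpm's query format) is:

    # rpm -qa --queryformat '%{NAME}\n' | grep gpfs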


    2 Remove each package. For each package listed in step 1, perform the following:

    # rpm -e <package-name>

    (where <package-name> is the name of the package without the version number listed in

    step 1.)
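    For example, if all of the lab's packages are present, they could be removed in one pass
    (adjust the list to match what step 1 actually reported):

    # rpm -e gpfs.gui gpfs.msg.en_US gpfs.docs gpfs.gpl gpfs.base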

    Step 3: Remove Residual Traces of GPFS

    This section assumes GPFS is installed on Linux using the .rpm packages.

    1 Remove gpfs-specific directories and files:

    # rm -rf /var/mmfs

    # rm -rf /usr/lpp/mmfs

    # rm -rf /var/adm/ras/mm*
    # rm -rf /tmp/mmfs

    2 Remove the /gpfs mount entry from /etc/fstab.
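    The entry GPFS adds for a file system typically looks something like the line below (device
    name and mount options will vary on your system); remove it with the editor of your choice:

    /dev/fs1    /gpfs    gpfs    rw,mtime,atime,dev=fs1,autostart    0 0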

    3 If you appended the GPFS binary directory (/usr/lpp/mmfs/bin) to root's PATH

    environment variable, remove it.