
    10G RAC Install RH AS / HP rx1620 / MSA1000-Version: 1.0

Contents

1 INTRODUCTION ............................................. 3
  1.1 References ........................................... 3
  1.2 Revision History ..................................... 3
2 INTRODUCTION ............................................. 4
3 HARDWARE OVERVIEW ........................................ 5
4 HARDWARE INVENTORY ....................................... 6
5 SOFTWARE INVENTORY ....................................... 6
6 CONSOLE ACCESS ........................................... 6
7 REDHAT INSTALL ........................................... 7
8 NETWORK CONFIGURATION .................................... 9
9 HBA SETUP ................................................ 10
10 SAN SWITCH SETUP ........................................ 12
11 MSA FLASH UPGRADE ....................................... 13
12 OCFS INSTALL ............................................ 13
13 ASM INSTALL ............................................. 15
14 ORACLE PRE INSTALL SETUP TASKS .......................... 16
  14.1 Oracle User ......................................... 16
  14.2 System Parameters ................................... 17
  14.3 Hangcheck-timer ..................................... 18
  14.4 Remote access setup ................................. 18
15 CRS STORAGE DISK SETUP .................................. 19
16 DATABASE DISK SETUP ..................................... 22
  16.1 LUN Setup ........................................... 22
  16.2 OCFS Database Disk Setup ............................ 24
  16.3 ASM Database Disk Setup ............................. 25
17 CRS INSTALL ............................................. 25
18 RAC SOFTWARE INSTALL .................................... 28
19 RAC DATABASE CREATION ................................... 29
  19.1 Storage Options ..................................... 29
    19.1.1 Cluster File System ............................. 29
    19.1.2 Automatic Storage Manager ASM ................... 30

    Issue 1.0 Oracle Corporation - Company Confidential

    Page 2 of 31


    1 INTRODUCTION

1.1 References

Reference documents:

1. Oracle Docs
2. Hunter RAC Install 10g
3. ASMLib / OCFS for IA64
4. rx1620 home page
5. MSA 1000 home page
6. 2/8V SAN Switch

1.2 Revision History

Revision  Author                       Date        Description
1.0       Darren Moore, John P Hansen  02/08/2005


Reference URLs (in the order of the list above):

http://www.oracle.com/technology/documentation/database10gr2.html
http://www.oracle.com/technology/pub/articles/hunter_rac10g.html
http://oss.oracle.com/
http://www.hp.com/products1/servers/integrity/entry_level/
http://h18006.www1.hp.com/products/storageworks/msa1000/
http://h18006.www1.hp.com/products/storageworks/sanswitch28v/

    2 INTRODUCTION

The purpose of this document is to detail a 10g RAC (10.1.0.3) setup on HP Itanium rx1620 servers running Red Hat Advanced Server, with an MSA1000 as shared storage. Oracle technologies such as CRS, OCFS and ASM are used in conjunction with the Oracle RAC install. This document is a reference that details the steps taken and the experience gained during the setup; it is not a formal install guide.

Included in the document are a number of references used during the setup, a hardware overview including a topology, and OS and associated software install instructions.

For the cluster software install, Cluster Ready Services (CRS) was used in conjunction with OCFS (Oracle Cluster File System) to manage the shared devices needed for CRS.

For the purposes of customer demos we created two RAC databases on the shared storage: the first taking advantage of OCFS and RAID 5, the second taking advantage of ASM to manage the shared database datafiles. Both databases were set up on a private LAN. This may seem confusing; however, the two approaches are separated out in the procedure, and you may ignore whichever approach does not suit your setup.


    4 HARDWARE INVENTORY

2 * rx1620 Integrity HP servers
1 * MSA 1000 storage array
1 * 2/8V fibre channel SAN switch
2 * Netgear 8-way hubs:
    1 for the private interconnect
    1 for the private LAN
1 * x86 PC, which acted as the primary DNS server for the private LAN
1 * HP ProLiant DL580

5 SOFTWARE INVENTORY

RH AS 3.0 U5 ia64 for both rx1620s
RH AS 3.0 U5 x86 for the PC acting as the DNS master
Oracle 10.1.0.3 for the RAC install
Oracle Cluster File System: ocfs-2.4.21 for IA64 RHAS 3.0 Linux
ASMLib, a library add-on for the Automatic Storage Manager: oracleasm-2.4.21-32.EL for IA64 RHAS 3.0 Linux
MSA 1000 firmware v4.48
A7538A hpqla2x00_2005_05_11 HBA driver with fibreutils_1.11-3

    6 CONSOLE ACCESS

Using a PC with Linux installed (RHAS 3.0 x86), we attached a serial cable from COM port 1 on the PC to the COM port on the back of the rx1620 (a console cable should be supplied with the rx1620).

Using minicom, we connected to the console on the rx1620 as follows:

> minicom -8 -o -c on -L -w

You can, however, connect to the console using whatever method suits.


    7 REDHAT INSTALL

We installed Red Hat Advanced Server Update 5 for IA64, kernel 2.4.21-32. The correct version of RHAS is important to satisfy the support matrix for ASMLib and OCFS.

At boot time select the Boot Maintenance Menu as shown below:

EFI Boot Manager ver 1.10 [14.62]  Firmware ver 2.11 [4445]
Please select a boot option

    Red Hat Enterprise Linux AS
    EFI Shell [Built-in]
    Boot Option Maintenance Menu
    System Configuration

    Use ^ and v to change option(s). Use Enter to select an option

EFI Boot Maintenance Manager ver 1.10 [14.62]

    Select Removable Media Boot [Internal Bootable DVD]

Boot From a File. Select a Volume

    NO VOLUME LABEL [Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master)/CD
    NO VOLUME LABEL [Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(P
    NO VOLUME LABEL [Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(P
    Removable Media Boot [Internal Bootable DVD]
    Load File [EFI Shell [Built-in]]
    Load File [Core LAN Gb A]
    Load File [Core LAN Gb B]
    Exit

At the ELILO boot prompt, type linux console=ttyS0; remember you have approximately 8 seconds to type this.

ELILO boot: linux console=ttyS0
Uncompressing Linux... done


    Loading initrd initrd.img...done

    Your system will now boot into the default anaconda installer in console mode.

Next, we installed Red Hat as follows:

Select Skip when prompted to test the CD
Select OK
Select English
Select Disk Druid

We created a disk layout as follows; the most important step here is the creation of a /boot/efi boot partition, which must be a FAT partition:

sda2   13    20    50M     vfat  /boot/efi
sda3   20    2569  20000M  ext3  /u02
sda4   2569  5119  20000M  ext3  /u01
sda5   5119  6394  10000M  ext3  /usr
sda6   6394  7669  10000M  ext3  /
sda7   7669  8178  4000M   swap
sda8   8178  8816  5000M   ext3  /opt
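As a quick sanity check of the layout above, the allocated sizes can be totalled with awk (the table is inlined here with sizes in MB, as listed):

```shell
# Sum the partition sizes from the layout above (column 4, sizes in MB)
total=$(awk '{ sum += $4 } END { print sum }' <<'EOF'
sda2 13 20 50 vfat /boot/efi
sda3 20 2569 20000 ext3 /u02
sda4 2569 5119 20000 ext3 /u01
sda5 5119 6394 10000 ext3 /usr
sda6 6394 7669 10000 ext3 /
sda7 7669 8178 4000 swap
sda8 8178 8816 5000 ext3 /opt
EOF
)
echo "${total} MB allocated across sda2-sda8"
```

This prints a total of 69050 MB, a useful cross-check against the capacity of the internal disk before committing the layout.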

Select OK
Select Yes to verify your selection

Next we assigned an IP address and network mask, and selected Activate on boot.

We did not configure any additional network devices (i.e. the interconnect) at this stage.

    Next we assigned the Primary DNS host.

In Hostname Configuration we manually assigned the fully qualified hostname, e.g. rac1.linux.bogus.

    We did not enable any firewall and selected No firewall.

We then selected the appropriate Language, Time Zone and Root Password.

The installer first formats the disks, then asks for the CDs in the following order:

Disk 1, Disk 2, Disk 3, Disk 4, Disk 1


8 NETWORK CONFIGURATION

At this stage all nodes in the cluster should be on the network with a static IP address. The /etc/hosts file on each node should contain:

    All hosts within the cluster, in our case rac1 and rac2
    All VIPs within the cluster
    All interconnect devices within the cluster

Ensure each entry in the /etc/hosts file has the following format, e.g.:

192.168.196.100  rac1.linux.bogus       rac1
192.168.196.101  rac1-vip.linux.bogus   rac1-vip
10.0.0.2         rac1-priv.linux.bogus  rac1-priv
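A quick way to sanity-check that format is a short awk pass over the entries; shown here against the sample data above (three whitespace-separated fields: address, fully qualified name, short name):

```shell
# Flag any /etc/hosts entry that does not have exactly three fields
# with a dotted-numeric address in the first column
awk 'NF != 3 || $1 !~ /^[0-9.]+$/ { bad++; print "bad entry: " $0 }
     END { print (bad ? bad : 0) " bad entries" }' <<'EOF'
192.168.196.100 rac1.linux.bogus rac1
192.168.196.101 rac1-vip.linux.bogus rac1-vip
10.0.0.2 rac1-priv.linux.bogus rac1-priv
EOF
```

On a live node you would feed it /etc/hosts instead of the here-document; malformed entries here are a common cause of VIP problems later in the install.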

    The installer will prompt for the VIP Addresses during the RAC S/W installation.

In our setup we configured two interconnects (you only need one; additional interconnects can be configured for high availability) on a private subnet using a separate hub. You can configure each network device using:

/usr/bin/redhat-config-network

In total, therefore, each node in our two-node cluster had two network devices assigned for the two private interconnects.

e.g. on the system rac1.linux.bogus we configured eth1 with the IP address 10.0.0.2. Note: ensure you select Activate device when computer starts before you activate the device.


We also configured /etc/nsswitch.conf so that each node interrogates the DNS server for hostname lookups before using the /etc/hosts file:

# hosts: db files nisplus nis dns
hosts: dns files

    9 HBA SETUP

We installed internal QLogic 2Gb fibre channel HBAs to connect to a fibre channel switch, which was in turn connected to an MSA1000 (more on this later).

    All details about the HBA Fibre channel adaptors can be found online at:

    A7538A - 2Gb PCI-X Fibre Channel HBA for Linux

    Click on Software & drivers to download the appropriate HBA driver.

We downloaded the driver kit for the A7538A and used the INSTALL script contained within hp_qla2x00-2005-05-11.tar.gz, which installed the A7538A driver and fibreutils. We also installed hp_sansurfer, a useful GUI tool for connecting to and configuring your QLogic card if needed. Both the driver and sansurfer (optional) were installed on both nodes.

    [root@rac2 hp_qla2x00]# ./INSTALL


http://h18006.www1.hp.com/products/storageworks/q2300/index.html

Installing hp_qla2x00src RPM...
Preparing... ########################################### [100%]
Logfile is /var/log/hp_qla2x00_install.log
Getting list of QLA FC HBAs
Getting list of SCSI adapters and Vendor IDs
Producing list of SCSI adapters and Vendor IDs that are FCP adapters
Checking Vendor IDs
All Storage is HP Storage. Proceeding with installation
1:hp_qla2x00src ########################################### [100%]
Loaded driver is in nonfailover mode
Writing new /etc/hp_qla2x00.conf...done
Copying /opt/hp/src/hp_qla2x00src/libqlsdm-ia64.so to /usr/lib/libqlsdm.so
Modifying /etc/hba.conf
Configuring kernel sources...
Using /usr/src/linux-2.4/configs/kernel-2.4.21-ia64.config as .config
Executing make mrproper
Executing make oldconfig
Executing make dep
Compiling QLA driver...
make clean
make HSG80=n OSVER=linux-2.4 SMP=1 all
rm -f qla2200.o qla2300.o qla2300_conf.o qla2200_conf.o qla_opts.o qla_opts
cc -D__KERNEL__ -DMODULE -Wall -O -g -DUDEBUG -DLINUX -Dlinux -DINTAPI
Copying qla2300.o to /lib/modules/2.4.21-32.EL/kernel/drivers/addon/qla2200
Copying qla2300_conf.o to /lib/modules/2.4.21-32.EL/kernel/drivers/scsi
Running depmod -a
adding line to /etc/modules.conf: alias scsi_hostadapter2 qla2300_conf
adding line to /etc/modules.conf: alias scsi_hostadapter3 qla2300
adding line to /etc/modules.conf: alias scsi_hostadapter4 sg


10 SAN SWITCH SETUP

We supplied the switch with an IP address using the ipaddrset command. Once set up, you can also gain access over HTTP, as the switch hosts a web server presenting you with access to your switch through a useful Java applet.

    http://h18006.www1.hp.com/products/storageworks/sanswitch28v/

    11 MSA FLASH UPGRADE

We also upgraded the MSA1000 firmware to v4.48, which is recommended by HP. The latest firmware can be downloaded from the HP site at:

    http://www.hp.com/go/msa1000

Attaching a fibre channel switch to a ProLiant DL580 box with Windows 2003 Server and a 2Gb HBA card allowed us to use the CLI interface through HyperTerminal via a console cable. This procedure can also be achieved from a Linux box.

Using msaflash32r34v448.tar from the above-mentioned site:

Instructions:
1. Unzip (tar -xf msaflash.tar)
2. Run msainst by typing "./msainst" to copy the library files to /usr/lib and the binary to /usr/bin
3. Type msaflash in any console window to run the program.

12 OCFS INSTALL

Download the OCFS rpms from http://oss.oracle.com and install them on each host as follows:

[root@rac2]# rpm -ivh ocfs-2.4.21-EL-1.0.14-1.ia64.rpm ocfs-support-1.0.10-1.ia64.rpm ocfs-tools-1.0.10-1.ia64.rpm

Preparing... ########################################### [100%]


1:ocfs-support   ########################################### [ 33%]
2:ocfs-2.4.21-EL ########################################### [ 67%]
Linking OCFS module into the module path [ OK ]
3:ocfs-tools     ########################################### [100%]

Note: check your system's kernel version for compatibility with the OCFS rpms on oss.oracle.com. In our example:

[root@rac2]# uname -r
2.4.21-32.EL

We used OCFS release 1.0.14-1 for kernel 2.4.21-27.EL+ (EL3 U4+) and also installed the support and tools packages.

Once the rpms are installed you must set up /etc/ocfs.conf. We used ocfstool to configure the /etc/ocfs.conf file on each server, which is required to use OCFS on each system.

Note: You can start vncserver on each system to gain access to an X server. We used GNOME by configuring the ~/.vnc/xstartup file to start a GNOME desktop, adding gnome-session to xstartup:

# vi ~/.vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session &

Once vncserver was configured we used vncviewer to log on to each host and create the /etc/ocfs.conf file using ocfstool (as the root user) as follows:

    # ocfstool
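For reference, the file ocfstool writes is small; a generated /etc/ocfs.conf looks roughly like the following (values here are illustrative for rac1: the node name, its interconnect IP address and port, and a guid that ocfstool generates uniquely per node):

```
# ocfs config
# Ensure this file exists in /etc
node_name = rac1.linux.bogus
ip_address = 10.0.0.2
ip_port = 7000
comm_voting = 1
guid = <generated by ocfstool>
```

Each node's file carries its own node_name, ip_address and guid; do not copy the file between nodes.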


13 ASM INSTALL

On each node you will need to configure the ASM library driver to start at boot time, owned by the oracle user. On each node perform the following:

[root@rac2 RH3]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
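Once the driver is configured, ASMLib disks are typically labelled from one node only with oracleasm createdisk and then picked up on the remaining nodes with scandisks. A sketch, with hypothetical volume and device names (your devices depend on your LUN layout):

```
# On one node only: label the shared partitions for ASMLib
# (VOL1-VOL3 and /dev/sde1-/dev/sdg1 are illustrative names)
/etc/init.d/oracleasm createdisk VOL1 /dev/sde1
/etc/init.d/oracleasm createdisk VOL2 /dev/sdf1
/etc/init.d/oracleasm createdisk VOL3 /dev/sdg1

# On each remaining node: discover and list the labelled disks
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
```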

14 ORACLE PRE INSTALL SETUP TASKS

Ref: Oracle Documentation, http://www.oracle.com/technology/documentation/

14.1 Oracle User

Create the oracle user and dba group on both systems as follows:

[root@rac2 root]# mkdir -p /usr/home/oracle
[root@rac2 root]# groupadd -g 500 dba
[root@rac2 root]# groupadd -g 501 oinstall
[root@rac2 root]# useradd -u 500 -g dba -G oinstall -m -s /bin/bash oracle

    Set up your oracle user password using the passwd command

Add the following to the end of /etc/profile:


if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

14.2 System Parameters

Add the following to /etc/security/limits.conf:

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Add the following to /etc/sysctl.conf:

# ADDED for 10g RAC install
# Default setting in bytes of the socket receive buffer
net.core.rmem_default = 262144
# Default setting in bytes of the socket send buffer
net.core.wmem_default = 262144
# Max socket receive buffer size which may be set using the SO_RCVBUF socket option
net.core.rmem_max = 262144
# Max socket send buffer size which may be set using the SO_SNDBUF socket option
net.core.wmem_max = 262144
# SHMMAX: maximum size (in bytes) of a System V shared memory segment
kernel.shmmax = 2147483648
# SEM: System V semaphores
kernel.sem = 250 32000 100 128
# File handles
fs.file-max = 65536
# Total shared memory limit (in pages)
kernel.shmall = 2097152
# Maximum number of shared memory segments
kernel.shmmni = 4096
# Sockets
net.ipv4.ip_local_port_range = 1024 65000

You can either update the system values with:

/sbin/sysctl -p

or reboot the system. At this stage, reboot the system before you start the Oracle install to ensure all changes are implemented at boot time.


Note: these values are tunable; for example, you can increase SHMMAX to the actual size of your available physical memory, which will allow you to increase the SGA at a later stage without rebuilding the kernel.
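Following that note, a SHMMAX equal to physical memory can be derived from the MemTotal line of /proc/meminfo (the value is in kB, so multiply by 1024). The sketch below pipes in a sample MemTotal line so it runs anywhere; on a live node you would point awk at /proc/meminfo directly:

```shell
# Derive a kernel.shmmax equal to physical RAM
# (sample MemTotal line; on a node use: awk '/^MemTotal:/ ...' /proc/meminfo)
shmmax=$(echo "MemTotal: 2059584 kB" | awk '/^MemTotal:/ { print $2 * 1024 }')
echo "kernel.shmmax = ${shmmax}"
```

The resulting line can be dropped straight into /etc/sysctl.conf in place of the fixed 2147483648 shown above.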

14.3 Hangcheck-timer

This step is optional if you have installed CRS; however, it is recommended and is used to monitor the health of the system.

The hangcheck-timer will reset a node if the system hangs or pauses. It is shipped with Linux kernel 2.4.9-e.12 or higher; you can verify that the hangcheck-timer is installed as follows:

    [root@rac1 etc]# find /lib/modules -name "hangcheck-timer.o"

Perform the following on each node to set up the hangcheck-timer to check the health of the system every 30 seconds, with a hang delay of 180 seconds before the hangcheck-timer reboots the system:

[root@rac1 etc]# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf

Oracle will load the hangcheck-timer when needed; however, to manually load the hangcheck-timer and check it is working:

[root@rac1 etc]# modprobe hangcheck-timer
[root@rac1 etc]# grep Hangcheck /var/log/messages | tail -2
Jul 28 13:02:48 rac1 kernel: Hangcheck: starting hangcheck timer 0.8.0 (tick is 30 seconds, margin is 180 seconds).
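The two module options combine simply: the timer fires every hangcheck_tick seconds, and the node is reset only when a tick is delayed by more than hangcheck_margin seconds, so the worst-case delay before a hung node resets is roughly their sum:

```shell
# hangcheck_tick: interval between health checks (seconds)
# hangcheck_margin: allowed scheduling delay before a reset is triggered
tick=30
margin=180
echo "worst-case reset delay: $((tick + margin)) seconds"
```

With the values used here, a hung node is therefore reset within about 210 seconds; the margin must stay comfortably above your longest expected I/O or scheduling stall to avoid spurious reboots.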

14.4 Remote access setup

Each node within the cluster needs remote access to every other node as the oracle user, both to run remote commands and to remotely copy files between servers. You can use either rsh or ssh; we used rsh, however ssh is recommended for a secure environment. By default rsh-server is not installed on RHAS; you will have to find the rpm on the CD and install rsh-server on all nodes within the cluster as the root user as follows:

    # rpm -ivh rsh-server-0.17-17.ia64.rpm

Ensure you have the rsh client and rsh server installed:

# rpm -q rsh rsh-server
rsh-0.17-17
rsh-server-0.17-17

To enable the rsh service, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded:

# chkconfig rsh on
# chkconfig rlogin on
# service xinetd reload


    Reloading configuration: [ OK ]

To allow the "oracle" user account to be trusted among the RAC nodes, create the /etc/hosts.equiv file on all nodes in the cluster as the root user:

# touch /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv
# chown root.root /etc/hosts.equiv

Now add all RAC nodes to /etc/hosts.equiv, including the VIP addresses, e.g.:

# cat /etc/hosts.equiv
+rac1 oracle
+rac2 oracle
+rac1-vip oracle
+rac2-vip oracle
+rac1-int1 oracle
+rac2-int1 oracle
+rac1-int2 oracle
+rac2-int2 oracle
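Since the file is just one "+<node> oracle" line per name, the entries above can equally be generated from a node list; a small sketch (the node names are ours, adjust for your cluster):

```shell
# Generate /etc/hosts.equiv entries trusting the oracle user on each node
nodes="rac1 rac2 rac1-vip rac2-vip rac1-int1 rac2-int1 rac1-int2 rac2-int2"
for n in $nodes; do
  echo "+${n} oracle"
done
```

Redirecting the output into /etc/hosts.equiv on each node keeps the files identical across the cluster, which is exactly what the RAC installer's remote-copy phase assumes.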

Rename the Kerberos version of rsh as the root user so that the standard rsh binary is used:

# which rsh
/usr/kerberos/bin/rsh

# mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.ORG

# which rsh
/usr/bin/rsh

15 CRS STORAGE DISK SETUP

Next we configured the external shared storage, an MSA1000, using the CLI (command line interface) to create a single CRS device, which would be used for both the OCR and the CSS voting disk during the CRS install.

The initial setup consisted of connecting the ProLiant DL580 server directly to the MSA 1000 storage box with a 2Gb fibre cable, C7525A (5065-5102). The MSA 1000 has one 2Gb SFP transceiver, which was connected directly to the back of the server using the 2Gb cable.

One LUN was created for the CRS disk using the CLI tool provided, accessed via HyperTerminal from COM1 on the ProLiant to the console port on the front of the MSA 1000, as follows.

LUN creation procedure:

CLI> add unit 0 data="disk101-disk102" raid_level=1 stripe_size=128 spare disk103


The above configuration will mirror disk101 and disk102, adding disk103 as a hot spare; however, any configuration you choose will work fine as long as the LUN is created.

Once the LUN is created you can reboot both servers. Use fdisk -l to view all attached devices.

The next operation involves partitioning the drive using fdisk; in our example we partitioned the drive as follows:

[root@rac2 root]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4427.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdc: 36.4 GB, 36413314560 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4427, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4427, default 4427):


Using default value 4427

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Ensure you can see the newly created partition from both systems using fdisk.

On both nodes, create the directory where you want to mount the OCFS, with the appropriate permissions:

[root@rac2 root]# mkdir -p /u02/oradata/orcl ; chown oracle:dba /u02/oradata/orcl

Next we must configure the newly created partition as an OCFS (Oracle Cluster File System) disk. As the super user, from one system and one system only, run the mkfs.ocfs command as follows:

[root@rac2 root]# mkfs.ocfs -F -b 128 -L /u02/oradata/orcl -m /u02/oradata/orcl -u '500' -g '501' -p 0755 /dev/sdc1
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data block
Wrote volume header

Once we created the OCFS from one node, we need to mount the file system from each node; add the following entry to /etc/fstab on each node:

/dev/sdc1 /u02/oradata/orcl ocfs _netdev 0 0

You can now mount the newly created filesystem from each node, one at a time (when you first mount the OCFS it must initialize), by typing:

# mount -a

or

# mount -t ocfs /dev/sdc1 /u02/oradata/orcl
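The _netdev option in the fstab entry above is the important part: it defers the mount until networking is up, which a cluster filesystem needs. A trivial check that an entry carries it (sample line as above):

```shell
# Verify the OCFS fstab entry carries the _netdev mount option
line='/dev/sdc1 /u02/oradata/orcl ocfs _netdev 0 0'
case "$line" in
  *_netdev*) echo "ok: _netdev present" ;;
  *)         echo "error: _netdev missing" ;;
esac
```

On a live node the same check can be run against the real file, e.g. grep ocfs /etc/fstab, before the first reboot.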


    16 DATABASE DISK SETUP

16.1 LUN Setup

As previously stated, we used an MSA 1000, which can be configured using the CLI interface over a console port on the front of the MSA 1000. The MSA itself is quite easy to configure and allowed us the flexibility to set up different disk configurations based on whatever database setup we needed. In the end we decided to create two databases on the RAC cluster for demo purposes:

- One database taking advantage of ASM technology
- One database taking advantage of OCFS technology

In summary, the disk configuration was as follows:

1 LUN for CRS, i.e. the OCR and CSS disk (2 disks, RAID 1)
3 LUNs for our ASM database (3 separate disks)
1 LUN for the OCFS database (5 disks with one hot spare)
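As a sanity check on the OCFS LUN, RAID 5 keeps one disk's worth of parity, so usable capacity is (n - 1) times the per-disk size; with the 17365 MB drives reported by the MSA CLI that works out as:

```shell
# RAID 5 usable capacity: (disks - 1) * per-disk size (one disk's worth of parity)
disks=5          # disk103-disk107
size_mb=17365    # per-disk size as reported by the MSA CLI
echo "RAID 5 usable capacity: $(( (disks - 1) * size_mb )) MB"
```

This matches the 69460 MB logical unit size the CLI prints when the OCFS LUN is created below.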

The CLI utility itself is quite easy to use; here is a quick 101 on our setup.


    CRS LUN

CLI> ADD UNIT 0 DATA = "disk101-disk102" raid_level=1
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 17359 MB
Total space occupied by new unit = 34718 MB
Free space left on this volume:  = 0 MB
Unit 0 is created successfully.

    OCFS LUN

CLI> ADD UNIT 1 DATA = "disk103-disk107" raid_level=5
First volume to be configured on these drives.
Logical Unit size                = 69460 MB
RAID overhead                    = 17365 MB
Total space occupied by new unit = 86825 MB
Free space left on this volume:  = 0 MB
Unit 1 is created successfully.

CLI> add spare unit=1 disk108
Spare drive(s) has been added. Use 'show unit 1' to confirm.

    ASM LUNS

CLI> ADD UNIT 2 DATA = "disk110" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 2 is created successfully.

CLI> ADD UNIT 3 DATA = "disk111" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 3 is created successfully.


CLI> ADD UNIT 4 DATA = "disk112" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 4 is created successfully.

16.2 OCFS Database Disk Setup

At this stage you can reboot both systems and, using:

[root@rac2 root]# fdisk -l

you can now see the new SCSI devices.

Using fdisk we created a new partition for the OCFS, which we used for our OCFS database setup, and three new partitions for the ASM disks, which we used for our ASM database setup.

    As with the CRS setup you need to create the OCFS partition for the OCFS databasesetup e.g.

    On both nodes:

    [root@rac2 root]# mkdir -p /u02/oradata/db ; chown oracle:dba /u02/oradata/db

    On one node only:

    [root@rac2 root]# mkfs.ocfs -F -b 128 -L /u02/oradata/db -m /u02/oradata/db -u '500' -g '501' -p 0755 /dev/sdd1
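The `-u '500' -g '501'` above are the oracle user's uid and the dba group's gid on our systems; if yours differ, the command line can be assembled from the real values rather than hard-coded. A sketch (`build_mkfs_ocfs_cmd` is a hypothetical helper of ours):

```shell
# Hypothetical helper: build the mkfs.ocfs command line using the
# actual uid/gid of the given user and group on this host.
build_mkfs_ocfs_cmd() {
  local user="$1" group="$2" mnt="$3" dev="$4" uid gid
  uid=$(id -u "$user") || return 1
  gid=$(getent group "$group" | cut -d: -f3)
  [ -n "$gid" ] || return 1
  echo "mkfs.ocfs -F -b 128 -L $mnt -m $mnt -u '$uid' -g '$gid' -p 0755 $dev"
}

# e.g. build_mkfs_ocfs_cmd oracle dba /u02/oradata/db /dev/sdd1
```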

    On both nodes place the appropriate entry into the /etc/fstab file, e.g.

    /dev/sdd1 /u02/oradata/db ocfs _netdev 0 0
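When scripting the setup across both nodes, the fstab entry can be added idempotently so re-runs do not duplicate it. A hedged sketch (the helper name is ours; the fstab path is a parameter so it can be tried on a scratch file first):

```shell
# Hypothetical helper: append an fstab line only if an identical line
# is not already present in the file.
add_fstab_entry() {
  local fstab="$1" entry="$2"
  grep -qxF -- "$entry" "$fstab" || echo "$entry" >> "$fstab"
}

# e.g. add_fstab_entry /etc/fstab '/dev/sdd1 /u02/oradata/db ocfs _netdev 0 0'
```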

    You can now mount the newly created filesystem from each node, one at a time (as
    when you first mount the OCFS it must initialize), by typing:

    # mount -a

    or

    # mount -t ocfs /dev/sdd1 /u02/oradata/db
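Because every node must end up with the OCFS volume mounted, a scripted check against /proc/mounts is handy when working across both systems. A hedged sketch (the helper name is ours):

```shell
# Hypothetical helper: succeed if something is mounted at the given
# mount point according to /proc/mounts (fields: dev mountpoint fstype ...).
is_mounted() {
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

# e.g. is_mounted /u02/oradata/db || mount -t ocfs /dev/sdd1 /u02/oradata/db
```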



    Specify the location of your ORACLE_HOME and Name and Click Next e.g.

    Name: crshome Location: /u02/oracle/crshome

    Select your default language and Click Next.

    Specify the Cluster Configuration and Click Next e.g.

    Cluster Name: crs
    Public Node Name: rac1    Private Node Name: 10.0.0.2
    Public Node Name: rac2    Private Node Name: 10.0.0.3

    Note: You can use IP addresses or hostnames for the Private Node Name.

    Specify the Network Interface Usage e.g.

    Interface Name: eth0    Subnet: 192.168.196.0    Interface Type: Public
    Interface Name: eth1    Subnet: 10.0.0.0    Interface Type: Private
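With the /24 masks used throughout this setup, it's easy to script a check that each address you give OUI actually sits on the intended subnet. A hedged sketch that simply compares the first three octets (helper name ours; only valid for /24 networks):

```shell
# Hypothetical helper: true when two dotted-quad IPv4 addresses share
# the same /24 network, i.e. their first three octets are equal.
same_subnet_24() {
  [ "${1%.*}" = "${2%.*}" ]
}

# e.g. same_subnet_24 192.168.196.27 192.168.196.0   (public, true)
#      same_subnet_24 10.0.0.2 10.0.0.0              (private, true)
```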

    Specify the location of the Oracle Cluster Registry e.g.

    /u02/oradata/orcl/CRSDisk

    (Note: if for some reason you do not see this screen it means the /etc/oracle/ocr.loc
    file exists and there is a previous install of CRS on your system; you will have to
    remove this file and follow the De-Install CRS procedure in 17.1.)

    Specify the location of the Voting Disk e.g.

    /u02/oradata/orcl/CSSDisk

    You will now be asked to run orainstRoot.sh on each node in the cluster. Run
    orainstRoot.sh on each node from a separate xterm.

    After the S/W installation is complete you will be asked to run root.sh, first from the
    node you are performing the install from and then from the remaining nodes in the
    cluster, in our case the second node. Be patient here: it is important you get a log
    similar to the one below from the first node you run root.sh on; if this is successful
    the second node should be fine.



    [root@rac1 crs]# ./root.sh
    Running Oracle10 root.sh script...

    The following environment variables are set as:
        ORACLE_OWNER= oracle
        ORACLE_HOME= /u02/oracle/crs

    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    Checking to see if Oracle CRS stack is already up...
    Setting the permissions on OCR backup directory
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u02/oracle' is not owned by root
    WARNING: directory '/u02' is not owned by root
    clscfg: EXISTING configuration version 2 detected.
    clscfg: version 2 is 10G Release 1.
    assigning default hostname rac2 for node 1.
    assigning default hostname rac1 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node 1: rac2 10.0.0.2 rac2
    node 2: rac1 10.0.0.3 rac1
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Adding daemons to inittab
    Preparing Oracle Cluster Ready Services (CRS):
    Expecting the CRS daemons to be up within 600 seconds.
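Since the guide stresses verifying this log, the check can be scripted when driving installs over a console. A hedged sketch (helper name ours) that greps a saved root.sh transcript for two success markers visible in the sample output above:

```shell
# Hypothetical helper: succeed if a saved root.sh transcript contains
# the markers that, per the sample log above, indicate CRS configuration
# reached the daemon-startup phase.
crs_root_looks_ok() {
  local log="$1"
  grep -q "Successfully accumulated necessary OCR keys" "$log" &&
    grep -q "Expecting the CRS daemons to be up" "$log"
}

# e.g. crs_root_looks_ok /tmp/root_sh_rac1.log || echo "check root.sh output"
```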

    Once this has completed go directly to the RAC Software Install section and install
    the RAC software.



    18 RAC SOFTWARE INSTALL

    At this stage we will install the RAC software without installing a database. From
    either a CD or an NFS share containing 2 directories called Disk1 & Disk2, which
    contain copies of the CDs, run the installer as the oracle user.

    [oracle@rac1 oracle]$ ./runInstaller

    At the Welcome Screen click Next.

    Specify the location of your ORACLE_HOME and Name and Click Next e.g.

    Name: orclhome Location: /u02/oracle/orclhome

    Select your default language and Click Next.

    At the next screen you should be presented with a list of nodes in your cluster; if CRS
    installed successfully you will see all the nodes. Select the Cluster Installation
    option and all nodes in the list and Click Next.

    Select the type of Installation; we selected Enterprise Edition.

    Select "Do not create a starter database". And Click Next.(Note: we will create or databases later with dbca)

    Click Next again to start S/W installation.

    At the end of the software installation you will be asked to run root.sh on all nodes
    in the cluster, again in our case both nodes. First run root.sh from the node you are
    running the installation from; once the script has finished the VIP Configuration
    Assistant (VIPCA) will appear. Fill in all details, i.e. Node Name, IP Alias Name (the
    VIP hostname), IP Address and Subnet Mask for each node in the cluster:

    Node Name: rac1.linux.bogus
    IP Alias Name: rac1-vip.linux.bogus
    IP Address: 192.168.196.100
    Subnet Mask: 255.255.255.0
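VIPCA needs each VIP alias to resolve, so it is worth checking name resolution beforehand. A hedged sketch using getent (helper name ours; the hostnames are the examples used above):

```shell
# Hypothetical helper: succeed if the given hostname resolves via the
# system resolver (/etc/hosts or DNS), as VIPCA expects.
vip_resolves() {
  getent hosts "$1" > /dev/null
}

# e.g. vip_resolves rac1-vip.linux.bogus || echo "add rac1-vip to /etc/hosts or DNS"
```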

    Once this has completed run root.sh on the remaining nodes and Click Next back atthe main screen.

    You can exit the installation at the end of the OUI session.



    19 RAC DATABASE CREATION

    At this stage we can create our databases with dbca; in our example we created 2
    databases, one using OCFS and one using ASM.

    As the oracle user launch dbca:

    [oracle@rac2 oracle]$ ./dbca

    Select Oracle Real Application Clusters database and Click Next.
    Select Create a Database and Click Next.
    Click the Select All button to select all servers e.g. rac1 & rac2 and Click Next.
    Select the type of database you want; in our case we chose General Purpose.
    In Database Identification enter the Global Database Name and SID Prefix:

    Global Database Name: orcl.linux.bogus
    SID Prefix: orcl
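With a SID prefix of orcl on a two-node cluster, dbca names the per-node instances by appending the instance number (orcl1 on the first node, orcl2 on the second). A small sketch of that naming convention (helper name ours):

```shell
# Hypothetical helper: list the per-node instance SIDs derived from the
# SID prefix by appending the instance number, one per line.
instance_sids() {
  local prefix="$1" nodes="$2" i
  for i in $(seq 1 "$nodes"); do
    echo "${prefix}${i}"
  done
}

instance_sids orcl 2   # prints orcl1 then orcl2
```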

    In Database Management Options, we used the default Configure the Database
    with Enterprise Manager and Click Next.

    In Database Credentials specify the passwords for SYS, SYSTEM, DBSNMP and
    SYSMAN. Click Next.

    19.1 Storage Options

    19.1.1 Cluster File System

    o At the Storage Options screen, if you wish to create a database using
    Cluster File System, click on Cluster File System.

    o From Database File Location we chose Use Oracle-Managed Files and
    specified the Database Area, and Click Next e.g.

    /u02/oradata/db

    o You can now Click Next to accept all default parameters or change any
    parameters you wish.



    Note: We did not choose any Recovery options for the database, and we
    added the Sample Schemas to the database by clicking on the Sample
    Schemas check box in Database Content.

    The database will now install.

    19.1.2 Automatic Storage Management (ASM)

    o At the Storage Options screen, if you wish to create a database using
    ASM, click on ASM.

    o You will now be asked to create an ASM Instance; enter and confirm the
    SYS password and Click Next. A dialog box will appear; at the prompt Click
    OK. Dbca will create and start an ASM instance on all nodes in the cluster.

    o Click Next and you will see the Create Disk Group window with the 3 ASM
    volumes you created:

    ORCL:VOL1, ORCL:VOL2, and ORCL:VOL3

    In the disk group name field enter a diskgroup name, e.g.

    ORCL_ASMDG1

    o In Select Member Disks, select all volumes and Click OK.

    o Once the ASM disk group creation process has completed, select the
    check box next to the new disk group and Click Next.

    o From Database File Location we chose Use Oracle-Managed Files,
    e.g.

    Database Area : +ORCL_ASMDG1

    o You can now Click Next to accept all default parameters or change any
    parameters you wish.

    Note: We did not choose any Recovery options for the database, and we
    added the Sample Schemas to the database by clicking on the Sample
    Schemas check box in Database Content.

    The database will now install.

