Build Oracle 10g RAC Database On Solaris / Linux
With VMWare
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.
About the Book

“Build Oracle 10g RAC Database on Solaris / Linux with VMWare” covers all the important aspects of implementing and managing Oracle 10g RAC, from installation to day-to-day maintenance. The book is a good starting point for anyone who wants to learn about Oracle RAC.
To my parents and my beloved country
Table of Contents

1. Introduction
2. Implement Oracle RAC on Solaris
   Task 1: Prepare System for Oracle RAC
      Section A: Create the First Virtual Machine
      Section B: Install Solaris 10 OS on the First Virtual Machine
      Section C: Create and Configure Shared Storage
      Section D: Create and Configure the Second Virtual Machine
      Section E: Add Ethernet Adapter for Private Network on Both Machines
      Section F: Prepare Disks for OCR, Voting and ASM Storage
   Task 2: Install Oracle Clusterware
      Section A: Check All Prerequisites for Clusterware Installation
      Section B: Create Oracle User and Group
      Section C: Configure SSH
      Section D: Specify Default Gateway
      Section E: Install Oracle Clusterware
   Task 3: Install Oracle 10gR2 Software/Binaries
   Task 4: Configure Oracle Listener
   Task 5: Create and Configure ASM Instance and Disk Groups (by DBCA or manually)
   Task 6: Create Database (by DBCA or manually)
   Task 7: Set Up and Test Transparent Failover
3. Implement 10gR2 Oracle RAC on Linux
   Section A: Hardware Requirements and Overview
   Section B: Configure the First Virtual Machine
   Section C: Install and Configure Enterprise Linux on the First Virtual Machine
   Section D: Create and Configure the Second Virtual Machine
   Section E: Configure Oracle Automatic Storage Management (ASM)
   Section F: Configure Oracle Cluster File System (OCFS2)
   Section G: Install Oracle Clusterware
   Section H: Install Oracle Database Binaries
   Section I: Configure Oracle Listener
   Section J: Configure ASM
   Section K: Create Database
   Section L: Set Up and Test Transparent Failover
4. Convert 10gR2 Stand-alone Database to Oracle RAC on Solaris (manually, by rconfig, or by DBCA)
5. Managing OCR and Voting Disk
6. Administering Cluster Ready Services (CRS)
7. Administering Services
8. Managing UNDO, Temporary and Redo Logs in a RAC Environment
9. De-installing Oracle Real Application Clusters Software and Database
10. De-installing RAC Components after a Failed Installation
11. Adding a Node to a 10gR2 RAC Cluster
12. Removing a 10gR2 RAC Cluster Node
13. RAC Load Balancing
14. RAC Failover Case Study
15. Oracle RAC Log Directory
Introduction
What is RAC?
RAC stands for Real Application Clusters. RAC provides a cluster solution that ensures high availability of instances and load balancing.
What is Cluster?
A cluster is a set of two or more machines (nodes) that share resources to perform the same task.
What is RAC Database?
A RAC database is two or more instances running on a set of clustered nodes, with all instances accessing a shared set of database files.
Difference between RAC and non-RAC databases
RAC stands for Real Application Clusters. It allows multiple nodes in a clustered system to mount and open a single database that resides on shared disk storage. Should a single system (node) fail, the database service will still be available on the remaining nodes.
A non-RAC database is only available on a single system. If that system fails, the database service will be down (single point of failure).
Why use RAC?
We can achieve benefits in the following ways:

High availability - if some nodes fail, the remaining nodes will still be available for processing requests.

Speed-up (improved transaction response time) - although RAC normally adds some overhead to each individual transaction, work can be spread across instances.

Scale-up (increased transaction volume) - RAC can be used to provide increased application scalability.
What are Clusterware processes?
When clusterware is started, the following processes will be running:

crsd – Cluster Ready Services Daemon
cssd – Cluster Synchronization Services Daemon
evmd – Event Manager Daemon
Major component in 10G RAC (Hardware Level and Software Level)
Here we discuss the major components of 10g RAC. We can categorize the 10g RAC components into hardware level and software level.
At the hardware level, the three major components are shared disk storage, the private network and the public network.
In a RAC architecture, we need shared disk storage because the database files, redo logs and control files (and also the OCR and voting disk) must be accessible by each node.
For the shared disk configuration, we can use SCSI (just a bunch of disks), a storage area network (SAN) or network-attached storage (NAS).
In a RAC architecture, each node must be connected to all other nodes via a private high-speed network.
Each node must be assigned a public IP address. We must also assign a virtual IP address (VIP) to maintain high availability: at the time of a node failure, the failed node's VIP can be reassigned to a surviving node, allowing applications to continue accessing the database through the same IP address.
At the software level, the three major components are likewise the OS, the clusterware software and the storage mechanism we would like to use for the database (OCFS/ASM/raw).
At the time of RAC implementation, ensure that all necessary software packages have been installed, the kernel parameters have been set, the network and disk devices have been configured and the directory structure has been created, and also ensure that Oracle Clusterware has been installed on each cluster node.
We can use the following file storage mechanisms for the Oracle Clusterware components and the database components:

For the Clusterware components:
OCFS (Release 1 or 2)
Raw devices
A third-party cluster filesystem such as GPFS or Veritas

For RAC database storage:
OCFS (Release 1 or 2)
ASM
Raw devices
A third-party cluster filesystem such as GPFS or Veritas
Hardware Requirements for Oracle Real Application Clusters

Each node in a cluster requires the following:

External shared disks (all nodes connected to the external shared storage) for storing the OCR, the voting disk file and the database files.

One network Ethernet card for the private connection. We assign an IP address on each node to serve as the private interconnect. Important:
The private interconnect must be separate from the public network.
The private interconnect must be accessible on the same network interface on each node.
The private interconnect must have a unique address on each node.

One network Ethernet card for the public connection. We assign an IP address for each node, to be used as the virtual IP address for client connections and for connection failover.
Storage Option for Oracle Real Application Clusters
We can use a file system (NFS) or raw devices (partitions) for the Oracle Clusterware files. We can use ASM or raw devices for database file storage, but we cannot use raw devices for recovery files.
NOTE:
Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you intend to use ASM with RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:
All nodes in the cluster have the release 2 (10.2) version of Oracle Clusterware installed.

Any existing ASM instance on any node in the cluster is shut down.

If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
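The "at least three" recommendation follows from majority voting: a node must see strictly more than half of the voting disks to stay in the cluster. A minimal sketch of the arithmetic (the quorum helper is our own illustration, not an Oracle tool):

```shell
# quorum N: number of voting disks a node must see, out of N, to survive.
# Strict majority: floor(N/2) + 1.
quorum() {
  echo $(( $1 / 2 + 1 ))
}

quorum 1   # prints 1: the single disk must always be visible
quorum 3   # prints 2: any one of three disks may be lost
quorum 5   # prints 3: any two of five disks may be lost
```

With one voting disk its loss evicts every node, while with three disks one failure is survivable, which is why three is the minimum when the storage itself provides no external redundancy.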
Memory structure and background process in Oracle RAC
Oracle RAC is composed of two or more database instances. Each instance has the same memory structures and background processes as a single-instance Oracle database, as well as additional processes and memory structures that are specific to Oracle RAC. Oracle RAC instances use two services, the GES (Global Enqueue Service) and the GCS (Global Cache Service), to enable Cache Fusion.

The Oracle RAC-specific processes are:
1)LMS—Global Cache Service Process
2)LMD—Global Enqueue Service Daemon
3)LMON—Global Enqueue Service Monitor
4)LCK0—Instance Enqueue Process
LMS: the Global Cache Service process. It manages data sharing and data exchange in a RAC environment; each cached block is recorded in the Global Resource Directory.

LMON: the Global Enqueue Service Monitor (lock monitor process), responsible for managing the Global Enqueue Service. It maintains consistency of GCS memory in case of process death.

LMD: the Global Enqueue Service Daemon, which manages enqueue service requests for the GCS.

LCK0: the Instance Enqueue Process, which manages instance resource requests and cross-instance call operations for shared resources.
DIAG: a lightweight daemon process that serves all the diagnostic needs of an instance in a RAC environment.
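On a running node these background processes show up in the process list as ora_<name><n>_<SID>. A small sketch of picking them out (the filter function is our own, and the pattern assumes Oracle's default process naming):

```shell
# filter_rac_procs: keep only RAC-specific background processes
# (LMS, LMD, LMON, LCK) from ps-style input on stdin.
filter_rac_procs() {
  grep -E 'ora_(lms|lmd|lmon|lck)[0-9]*_' || true
}

# Typical use on a RAC node; on a non-RAC host the output is simply empty:
ps -ef | filter_rac_procs
```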
How does Oracle RAC work?
Each instance has its own set of buffers but is able to request and receive data blocks currently held in another instance's cache.
In a single-instance environment, the buffer cache is accessed by only one set of processes.
In RAC, the buffer cache of one node may contain data that is requested by another node.
The management of data sharing and exchange in this environment is done by the Global Cache Service (GCS).
When a block is transferred out of a local cache to another cache, the GRD is updated.
The Global Resource Directory (GRD) is the internal repository that records and stores the current status of the data blocks. The GRD is managed by the Global Cache Service (GCS) and the Global Enqueue Service (GES).
The following information is available in the GRD:
Data block identifiers
Location of the most current version
Modes of the data block (Null/Shared/Exclusive)
When an instance departs the cluster, the GRD portion of that instance needs to be redistributed to the surviving nodes.
When a new instance enters the cluster, the GRD portions of the existing instances must be redistributed to create the GRD portion of the new instance.
What is Cache Coherency?

Cache coherency is an important part of RAC. It is the technique of keeping multiple copies of a buffer consistent between different Oracle instances on different nodes. Global cache management ensures that access to the master copy of a data block in one buffer cache is coordinated with the copies of that block in other buffer caches. This ensures that the most recent copy of a block in a buffer cache contains all changes made to that block by any instance in the system, regardless of whether those changes have been committed at the transaction level.
Oracle 10g RAC Architecture (Graphical View)
Implement 10gR2 Oracle RAC on Solaris 10 with VMware
Important Tips
Overview of the RAC database environment that will be set up in the steps below:
Host Name   Database Name   Instance Name   Database File Storage   OCR & CRS Voting Disk
RAC1        PROD            PROD1           ASM                     RAW
RAC2        PROD            PROD2           ASM                     RAW
Overview of Network Setup Details

Node RAC1:
Network adapter pcn0 for public (IP: 192.168.0.111)
Network adapter pcn1 for private (IP: 192.168.0.11)
IP for VIP: 192.168.0.50

Node RAC2:
Network adapter pcn0 for public (IP: 192.168.0.222)
Network adapter pcn1 for private (IP: 192.168.0.22)
IP for VIP: 192.168.0.60
Overview of Shared Storage Partition Details

Partition   Size     Use
c0d1s0      500 MB   OCR
c0d1s1      500 MB   Voting Disk
c0d1s3      3 GB     ASM Disk 1
c0d1s4      3 GB     ASM Disk 2
c0d1s5      2 GB     ASM Disk 3
Task 1 (Prepare System for Oracle RAC)
Section: A (Create first Virtual Machine)
1. Create the Windows folders to house the first virtual machine and the shared storage.

F:\>mkdir RAC1
F:\>mkdir SHARED-STORAGE
2. Open the VMware Server Console, click on New Virtual Machine, and click Next.
3. Select Virtual Machine Configuration: Custom.
4. Select Guest Operating System: Sun Solaris, version Solaris 10.
5. Type the virtual machine name (RAC1) and location (F:\RAC1).
6. Set access rights (select the default value).
7. Select startup/shutdown options (select the default value).
8. Processor configuration (select number of processors: one).
9. Set memory for the virtual machine (700 MB).
10. Select the network connection. There are four options (select Host-only networking):
    Use bridged networking
    Use network address translation (NAT)
    Use host-only networking
    Do not use a network
11. Select the I/O adapter type (LSI Logic).
12. Select a disk (create a new virtual disk).
13. Select a disk type (select IDE).
14. Specify the disk capacity (8 GB) and click on the Finish button.

Now your virtual machine is ready for installing the Solaris 10 OS.
Section: B (Install Solaris 10 OS on first Virtual Machine)
1. Double-click on CD-ROM Devices. Select "Use ISO image" when you plan to install the OS through an ISO image; otherwise select "Use physical drive".
2. Click on Start the Virtual Machine.
3. Select the Solaris Interactive (default) installation.
4. The Configure Keyboard Layout screen appears (press F2).
5. Select a language: English.
6. The Welcome screen appears (click Next).
7. Select network connectivity (click Next).
8. The DHCP screen appears; select No (click Next).
9. Type the host name (RAC1) and click Next.
10. Type the IP address (192.168.0.111) and click Next.
11. Type the netmask (select the default) and click Next.
12. Select No for Enable IPv6 for pcn0 and click Next.
13. Select None for Default Route and click Next.
14. Select No for Enable Kerberos Security and click Next.
15. Select None for Name Services and click Next.
16. For the NFSv4 Domain Name select the default and click Next.
17. Select Geographic Time Zone and click Next.
18. Select the continent and country (India) and click Next.
19. Accept the default date and time and click Next.
20. Enter the root password and click Next.
21. Select Yes for Enabling Remote Services and click Next.
22. Confirm the information and click Next.
23. Select the default install option and click Next.

After that the system is analyzed, a "Please wait" screen appears, and you select the type of installation. The installer now installs the OS.
Section: C (Create Shared Storage and Configure)
► Create virtual disks for storage, which will be shared by both machines.
1. Shut down the virtual machine (RAC1).
2. Go to the VMware Server Console and click on Edit virtual machine settings.

Virtual Machine Settings: click on Add.
Add Hardware Wizard: click on Next.
Hardware type: select Hard Disk.
Select a Disk: select Create a new virtual disk.
Select a Disk Type: select IDE (recommended).
Specify Disk Capacity: enter 10 GB and select Allocate all disk space now.
Specify Disk File: enter "F:\SHARED-STORAGE\DISK1.vmdk" and click on Advanced.
Add Hardware Wizard: Virtual device node: select IDE 0:1. Mode: select Independent, Persistent for all shared disks.
3. Click on Finish.
► Modify the virtual machine configuration file. Additional parameters are required to enable disk sharing between the two virtual RAC nodes. Open the configuration file, F:\SUNOS-1\Solaris 10.vmx, and add the parameters listed below.

priority.grabbed = "normal"
priority.ungrabbed = "normal"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
ide0:1.sharedBus = "virtual"
ide0:0.redo = ""
ethernet0.addressType = "generated"
ethernet0.connectionType = "hostonly"
ide0:1.present = "TRUE"
ide0:1.fileName = "E:\SHARED-DISK.vmdk"
ide0:1.redo = ""
checkpoint.vmState = ""
ide0:1.mode = "independent-persistent"
ide0:1.deviceType = "disk"
floppy0.present = "FALSE"
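Before powering the machines back on, it is worth confirming that the sharing-related settings actually landed in the .vmx file. A hedged sketch (check_vmx is our own helper, not a VMware tool; adjust the path to your layout):

```shell
# check_vmx FILE: report whether the disk-sharing parameters added
# above are present in the given .vmx configuration file.
check_vmx() {
  file=$1
  for key in 'disk.locking = "FALSE"' \
             'ide0:1.sharedBus = "virtual"' \
             'ide0:1.mode = "independent-persistent"'; do
    if grep -qF "$key" "$file"; then
      echo "found:   $key"
    else
      echo "MISSING: $key"
    fi
  done
}

# Example: check_vmx "F:/SUNOS-1/Solaris 10.vmx"
```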
Section: D (Create and Configure the Second Virtual Machine)
1. Create the Windows folder to house the second virtual machine.

E:\>mkdir RAC2
2. Shut down the first virtual machine.
3. Copy all the files from F:\RAC1 to E:\RAC2.
4. Open the VMware Server Console and press CTRL-O to open the second virtual machine, E:\RAC2\Solaris 10.vmx.
5. Rename the second virtual machine from RAC1 to RAC2. Click on Start this virtual machine to start RAC2, leaving RAC1 powered off. When prompted (RAC2 – Virtual Machine), select Create a new identifier.
6. Log in as the root user and modify the network configuration.
Follow the steps below to modify the host name and IP:

$ ifconfig <ethernet> <new IP>     (e.g. $ ifconfig pcn0 192.168.0.222)
$ ifconfig <ethernet> up           (e.g. $ ifconfig pcn0 up)

Edit the /etc/hosts file and change the IP and host (e.g. 192.168.0.222 rac2 rac2).
Edit the /etc/nodename file and change the host name (e.g. RAC2).
Edit /etc/hostname.<ethernet> (e.g. /etc/hostname.pcn0) and change the host name (e.g. RAC2).
7. Restart the second virtual machine.
8. Start the first virtual machine.
9. Verify all changes and enjoy.
Section: E (Add Ethernet Adapter for Private Network on Both Machines)
1. Put following entry in /etc/hosts file on Both Node.
192.168.0.111   rac1        rac1
192.168.0.222   rac2        rac2
192.168.0.11    rac1-priv   rac1-priv
192.168.0.22    rac2-priv   rac2-priv
192.168.0.50    rac1-vip    rac1-vip
192.168.0.60    rac2-vip    rac2-vip
2. Power off both machines.
(Note: complete steps 3 to 8 on both nodes, one node at a time.)
3. Click on Edit virtual machine settings.
4. Click the Add button and select Ethernet Adapter.
5. Select the Host-only network type.
6. Start both machines and check the network settings on both machines.
7. Go to the RAC1 node (plumb the Ethernet interface):

$ ifconfig pcn1 plumb 192.168.137.11 netmask 255.255.255.0
$ ifconfig pcn1 up
$ ifconfig -a       (this will show all Ethernet adapters)

Edit /etc/hostname.pcn1 and set the host name (e.g. rac1-priv).

8. Go to the RAC2 node (plumb the Ethernet interface):

$ ifconfig pcn1 plumb 192.168.137.222 netmask 255.255.255.0
$ ifconfig pcn1 up
$ ifconfig -a       (this will show all Ethernet adapters)

Edit /etc/hostname.pcn1 and set the host name (e.g. rac2-priv).
9. Power on both machines and verify your network settings.
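A quick way to confirm that every cluster name from step 1 is resolvable before moving on is to scan /etc/hosts for them on each node. A minimal sketch (check_rac_hosts is our own helper; the host list matches the entries above):

```shell
# check_rac_hosts FILE: verify that every public, private and VIP
# hostname used by this cluster has an entry in FILE.
check_rac_hosts() {
  file=$1
  ok=yes
  for h in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip; do
    grep -qw "$h" "$file" || { echo "missing: $h"; ok=no; }
  done
  if [ "$ok" = yes ]; then
    echo "all RAC host entries present"
  fi
}

# Run on each node:
check_rac_hosts /etc/hosts
```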
Section: F (Prepare Disk for OCR, Voting and ASM Storage)
1. Run the below-mentioned command on both nodes:
$devfsadm
2. Log on to the RAC1 node and complete the below-mentioned procedure.

Here we will create the following partitions:

c0d1s0 for the OCR disk
c0d1s1 for the voting disk
c0d1s3 for ASM disk 1
c0d1s4 for ASM disk 2
c0d1s5 for ASM disk 3
3. Complete the below-mentioned steps for creating the partitions:
$format
4. Set the ownership of the disks. (Note: this will be done after creating the oracle user and dba group.)

The default owner is root:sys and needs to be changed to oracle:dba (here oracle is the Oracle software owner and dba is its group).
Check the ownership of the disks. Execute the following command for every slice as the root user.
Example:
$ ls -lhL /dev/rdsk/c0d1s0
crw-r-----   1 root     sys      118, 64 Feb 16 02:10 /dev/rdsk/c0d1s0
$ chown oracle:dba /dev/rdsk/c0d1s0
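Rather than typing the chown for every slice, the loop below prints the command for all five slices from the partition plan; this is a dry-run sketch (the chown_cmds function is our own), and its output can be piped to sh as root to apply it.

```shell
# chown_cmds: emit one chown command per slice used for OCR,
# voting disk and ASM, per the partition plan above.
chown_cmds() {
  for slice in c0d1s0 c0d1s1 c0d1s3 c0d1s4 c0d1s5; do
    echo "chown oracle:dba /dev/rdsk/$slice"
  done
}

chown_cmds            # review the commands first
# chown_cmds | sh     # then apply them as root
```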
Task 2 (Install Oracle Clusterware)
Section: A (Check All Prerequisites for Clusterware Installation)
Step 1: Check Hardware Requirement
RAM should be at least 1 GB.
How to check? Use the following command:
$ /usr/sbin/prtconf | grep "Memory size"
Swap size: if RAM is more than 2 GB, swap should be equal to the RAM; if RAM is between 1 and 2 GB, swap should be 1.5 times the RAM.
How to check? Log in as the root user and use one of the following commands:
# /usr/sbin/swap -s
# swap -l

How to add swap space?
Method A
1. Create a slice of the desired size (e.g. c0d2s0).
2. Add the below-mentioned line to the /etc/vfstab file:

/dev/dsk/c1t0d0s3  -  -  swap  -  no  -
Method B
1. Create a slice of the desired size (e.g. c0d2s0).
2. Use the swap -a command to add the additional swap area:

# swap -a /dev/dsk/c1t0d0s3
Method C
Swap files can be used when you need to add swap space and do not have a free partition to use. To add a swap file, complete the following steps:
1. Create a 1 GB swap file named swapfile in a partition that has enough free space, for example the /export/data directory.
# mkfile 1000m /export/data/swapfile
2. Add the swap file to the system’s swap space.
# swap -a /export/data/swapfile
3. List the details of the modified system swap space with swap -l
4. List a summary of the modified system swap space with swap -s
5. To use a swap file when the system is subsequently rebooted, add an entry for the swap file in the /etc/vfstab file.
/export/data/swapfile  -  -  swap  -  no  -
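The swap sizing rule above (swap equal to RAM above 2 GB; 1.5 times RAM between 1 and 2 GB) can be sketched as a small helper; the function is our own, purely illustrative:

```shell
# swap_needed_mb RAM_MB: recommended swap in MB per the rule above.
swap_needed_mb() {
  ram=$1
  if [ "$ram" -gt 2048 ]; then
    echo "$ram"                 # RAM above 2 GB: swap equal to RAM
  else
    echo $(( ram * 3 / 2 ))     # 1 to 2 GB: swap = 1.5 x RAM
  fi
}

swap_needed_mb 1024   # prints 1536
swap_needed_mb 4096   # prints 4096
```

Comparing the result against swap -l output then tells you how much space any of the three methods must add.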
Temp space: a minimum of 400 MB of disk space is required in the /tmp directory.
How to check? Use the following command:
# df -k /tmp
Check the system architecture.
How to check? Use the following command:
# /bin/isainfo -kv

The result should be one of:
64-bit SPARC installation: 64-bit sparcv9 kernel modules
32-bit x86 installation: 32-bit i386 kernel modules
64-bit x86 installation: 64-bit amd64 kernel modules
Step 2: Check Required Packages
How to check? Use the following command:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWsprox SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt
If a package that is required for your system architecture is not installed, then install it.
Step 3: Check Patches
Refer to your operating system or software documentation for information about installing the required patches.
Section B (Create Oracle User and Group)
Step 1 Create Oracle Users and Group (in all Nodes)
Log in as root and execute:
# groupadd oinstall
# groupadd dba
# mkdir -p /export/home/oracle
# useradd -d /export/home/oracle -g oinstall -G dba oracle
# passwd oracle
New Password:
Re-enter new Password:
passwd: password successfully changed for oracle
Section C (Configure SSH)
Step 1: Create .ssh and create RSA keys on each node.

Log in as the oracle/CRS user and execute the steps below:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key file (press Enter).
Enter and confirm a passphrase unique for this installation user.
Step 2 Add All Keys to a Common authorized_keys File
On the primary node (RAC1), change directories to the .ssh directory. Then, add the RSA key to the authorized_keys file.
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ ls
In the .ssh directory, you should see the id_rsa.pub keys that you have created, and the file authorized_keys.
Step 3: Copy the authorized_keys file to the other cluster node (RAC2).
$ scp authorized_keys RAC2:/export/home/oracle/.ssh/
The authenticity of host 'rac2 (192.168.0.222)' can't be established.
RSA key fingerprint is c0:1f:20:34:54:b2:cd:9f:42:f2:d6:25:36:2f:3e:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.0.222' (RSA) to the list of known hosts.
Password:
authorized_keys      100% |*****************************|   221       00:00
Step 4: Log in on the second node (RAC2) and add the RSA key of the second node to the authorized_keys file:
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
Step 5: Copy the combined authorized_keys file back to the first node (RAC1):
$ scp authorized_keys RAC1:/export/home/oracle/.ssh/
Step 6: Enable SSH user equivalency on the cluster member nodes. On the system where you want to run OUI, log in as the oracle/CRS user and execute:
$ ssh RAC1 date
$ ssh RAC2 date
$ ssh RAC1-PRIV date
$ ssh RAC2-PRIV date
Section D (Specify default gateway)
Specify the default gateway as a dummy IP address (on the same subnet) in both virtual machines:

$ vi /etc/defaultrouter     (add the IP address of your router)
EXAMPLE:
# vi /etc/defaultrouter     (add 192.168.0.100)

Reboot to test, or run:

# netstat -r     (this will show the current routing table)

Another example, using the route command (route add <destination> <gateway>):

# route add default 192.168.0.100
# netstat -r     (this will now show your new default gateway)
Section E (Install Oracle Clusterware)
Now start the Oracle Clusterware installation.

Step 1: Run the installer.

# ./runInstaller

If you see any error (PRKC-1044) regarding SSH, use one of the solutions below.

Solution 1: run the installer with an explicit remote shell:

# ./runInstaller -remoteshell /usr/bin/ssh
Solution 2 (permanent solution): complete the steps below.

Go to /usr, make the directory local, and inside it create the directory bin. Then create symbolic links:

# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp
Step 2: Specify the inventory directory and credentials.
Step 3: Specify the CRS home and path details.
Step 4: The installer checks the prerequisites. (This will show two warnings; you can ignore them.)
Step 5: Specify the cluster configuration. Click on the Add button and specify the public node name, private node name and virtual node name details for all RAC nodes.
Step 6 Specify network interface usage. At least one interface (pcn0) should be public.
Step 7 Specify OCR locations
Step 8: Specify the voting disk location (choose External Redundancy, /dev/rdsk/c0d1s1).
Step 9 Click on Install button.
Step 10: Execute the scripts shown on the screen as the root user.
Note: When you execute the root.sh script on the second node (RAC2), it will run VIPCA in silent mode and configure the VIP. We can also run the following command on the second node as the root user to configure the VIP:

# $CRS_HOME/bin/vipca
Step 11: Verify the status of the CRS services. Execute the command below:

$ /export/home/oracle/oracle/product/10.2.0/crs/bin/crs_stat -t
OR
$ srvctl status nodeapps -n rac1
$ srvctl status nodeapps -n rac2
Task 3 (Install Oracle 10gR2 Software/Binaries)
2.5 GB of disk space is required for the Oracle software.
1.3 GB of disk space is required for a General Purpose database.

Here we have created two mount points, /oracle and /database, for the Oracle software and the database.
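Those space figures can be checked up front. The helper below reports the free megabytes on a mount point by parsing df -k output (free_mb is our own sketch; the mount point names come from the text above):

```shell
# free_mb PATH: free space on PATH's filesystem, in whole MB,
# parsed from the available-space column of df -k.
free_mb() {
  df -k "$1" | awk 'NR==2 { print int($4 / 1024) }'
}

# Example pre-flight checks before running the installer:
free_mb /tmp        # needs at least 400 MB free
```

Comparing the result against the 2.5 GB and 1.3 GB figures for /oracle and /database is then simple shell arithmetic.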
Note: No specific operating system patches are required with the Solaris 10 OS.
Prerequisite Steps:

Make sure that the following software packages have been installed:

SUNWarc    SUNWbtool   SUNWhea    SUNWlibm
SUNWlibms  SUNWsprot   SUNWtoo    SUNWi1of
SUNWxwfnt  SUNWi1cs    SUNWsprox  SUNWi15cs

We can verify whether the packages are installed by using the following command:
$ pkginfo -i <package name>
Check that the following executable files are present in /usr/ccs/bin:

make, ar, ld, nm
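That presence check can be scripted; a sketch (check_ccs_tools is our own name; the directory defaults to the Solaris location above):

```shell
# check_ccs_tools [DIR]: report whether the build tools the installer
# relies on are executable in DIR (default /usr/ccs/bin).
check_ccs_tools() {
  dir=${1:-/usr/ccs/bin}
  for tool in make ar ld nm; do
    if [ -x "$dir/$tool" ]; then
      echo "$tool: ok"
    else
      echo "$tool: MISSING"
    fi
  done
}

check_ccs_tools
```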
Check swap space

Swap space should be 512 MB or twice the size of RAM. Use the following commands to find the physical memory and swap space:

$ /usr/sbin/prtconf | grep "Memory size"
$ /usr/sbin/swap -l
At least 400 MB of free space is needed in the /tmp directory.
Check kernel parameter
Set the following kernel parameters in the /etc/system file and reboot the server:

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmni=100
Set up the X Window environment.

Log in as root with a CDE (Common Desktop Environment) session:

$ DISPLAY=:0.0
$ export DISPLAY
$ xhost +
$ su - oracle
$ DISPLAY=:0.0
$ export DISPLAY
$ /usr/openwin/bin/xclock
Execute runInstaller
Step 1: Log in as the oracle user and execute the installer.
$./runInstaller
If you see any error (PRKC-1044) regarding SSH, use one of the solutions below.

Solution 1: run the installer with an explicit remote shell:
# ./runInstaller -remoteshell /usr/bin/ssh

Solution 2: complete the steps below.
Go to /usr, make the directory local, and inside it create the directory bin. Then create the symbolic link:
# ln -s /usr/bin/ssh /usr/local/bin/ssh
Click on Next Button
Step 2: Select the installation type. Select Enterprise Edition and click on the Next button.
Step 3 Select Oracle home and Installation Path.
Step 4: Specify the Hardware Cluster Installation Mode. Select Cluster Installation and select both nodes (RAC1 and RAC2).
Step 5: Select the configuration option. Select Install software only.
Task 4 (Configure Oracle Listener)
The Network Configuration Assistant (NETCA) should be launched and configured on one node only. At the end of the configuration process, NETCA starts the Oracle listener on both nodes.
Step 1: Log in as the oracle user on the RAC1 node, set the following environment, and invoke NETCA:

$ export ORACLE_HOME=/export/oracle/oracle/product/10.2.0/db
$ export PATH=$ORACLE_HOME/bin:$PATH
$ netca
Step 2 Oracle Net Configuration Assistant: Real Application Clusters, Configuration:
Select "Cluster configuration"
Step 3 Oracle Net Configuration Assistant: TOPSNodes:
Click "Select all nodes"
Step 4 Oracle Net Configuration Assistant: Welcome:
Select "Listener configuration"
Step 5 Oracle Net Configuration Assistant: Listener Configuration, Listener:
Select "Add"
Step 6 Oracle Net Configuration Assistant: Listener Configuration, Listener Name:
Listener Name: LISTENER
Step 7 Oracle Net Configuration Assistant: Listener Configuration, Select Protocols
Selected Protocols: TCP
Step 8 Oracle Net Configuration Assistant: Listener Configuration, TCP/IP Protocol:
Select "Use the standard port number of 1521"
Step 9 Oracle Net Configuration Assistant: Listener Configuration, More Listeners?
Select "No"
Step 10 Oracle Net Configuration Assistant: Listener Configuration Done:
Click on "Next"
Step 11 Oracle Net Configuration Assistant: Welcome
Select "Naming Methods configuration"
Click on "Next"
Step 12 Oracle Net Configuration Assistant: Naming Methods Configuration, Select Naming:
Select "Local Naming"
Step 13 Oracle Net Configuration Assistant: Naming Methods Configuration Done:
Click on "Next"
Step 14 Oracle Net Configuration Assistant: Welcome
Click on "Finish"
Step 15: Verify the status of the services.

$ /export/home/oracle/oracle/product/10.2.0/crs/bin/crs_stat -t
Task 5 (Create and Configure ASM Instance and ASM Disk Groups)
We can use two methods, DBCA or manual, for creating the ASM instance and disk groups. Here I will describe both methods; you can choose either of them.
By “DBCA”
Step 1: Log in as the oracle user, set the following environment, and invoke DBCA:

$ export ORACLE_HOME=/export/oracle/oracle/product/10.2.0/db
$ export PATH=$ORACLE_HOME/bin:$PATH
$ dbca
Step 2 Choose oracle RAC database configurations.
Step 3 Choose “Configure Automatic Storage Management”
Step 4: Choose both nodes.
Step 5: Choose as per the screenshot.
Step 6: Click on Create New.
Step 7 Create Disk Group as per screen shot.
Step 8: Create the disk group as per the screenshot.
Step 9: Oracle automatically creates the ASM instance and ASM disk groups, and mounts the instance and disk groups on both nodes.
Step 10: Verify the status of the services.

$ /export/home/oracle/oracle/product/10.2.0/crs/bin/crs_stat -t
By “MANUAL”
Step 1: Create the parameter file (init+ASM1.ora) for the ASM instance on the first node.

Create the init+ASM1.ora file for the ASM instance and put it in the default location ($ORACLE_HOME/dbs):

cluster_database=true
asm_diskgroups='DATA','RECOVERY'
#asm_diskstring='/dev/rdsk/c0d1s*'
background_dump_dest=/export/home/oracle/oracle/admin/+ASM/bdump
core_dump_dest=/export/home/oracle/oracle/admin/+ASM/cdump
user_dump_dest=/export/home/oracle/oracle/admin/+ASM/udump
instance_type=asm
large_pool_size=16M
remote_login_passwordfile=exclusive
+ASM1.instance_number=1
+ASM2.instance_number=2
Step 2: Create the password file for the ASM instance on the first node. Using the orapwd utility, create an orapw+ASM1 file in $ORACLE_HOME/dbs on the first node:

$ orapwd file=orapw+ASM1 password=sys entries=5
Step 3 Start the first ASM instance (+ASM1).
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Oct 26 18:43:14 2009
Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                   769268 bytes
Variable Size             125059852 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
ORA-15110: no diskgroups mounted
SQL>
Step 4 create Disk Group
SQL> create diskgroup DATA normal redundancy
     failgroup one disk '/dev/rdsk/c0d1s3'
     failgroup two disk '/dev/rdsk/c0d1s4';
Diskgroup created.
SQL> create diskgroup RECOVERY external redundancy disk '/dev/rdsk/c0d1s5';

Diskgroup created.
Step 5: Create the parameter file (init+ASM2.ora) for the ASM instance on the second node.

Create the init+ASM2.ora file for the ASM instance and put it in the default location. Its contents are the same as init+ASM1.ora above:

cluster_database=true
asm_diskgroups='DATA','RECOVERY'
#asm_diskstring='/dev/rdsk/c0d1s*'
background_dump_dest=/export/home/oracle/oracle/admin/+ASM/bdump
core_dump_dest=/export/home/oracle/oracle/admin/+ASM/cdump
user_dump_dest=/export/home/oracle/oracle/admin/+ASM/udump
instance_type=asm
large_pool_size=16M
remote_login_passwordfile=exclusive
+ASM1.instance_number=1
+ASM2.instance_number=2
Step 6: Create the password file for the ASM instance on the second node. Using the orapwd utility, create an orapw+ASM2 file in $ORACLE_HOME/dbs on the second node:

$ orapwd file=orapw+ASM2 password=sys entries=5
Step 7 Start the second ASM instance (+ASM2).
$ export ORACLE_SID=+ASM2
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Oct 26 18:43:14 2009

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.
SQL> startup nomount
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                   769268 bytes
Variable Size             125059852 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
ORA-15110: no diskgroups mounted
SQL>
Step 8 Register the ASM instances with CRS.
For higher availability, register the ASM instances with CRS. Once registered, CRS detects failed instances and automatically attempts to restart them; it also starts the instances automatically when the servers are rebooted. On node RAC1:
$ srvctl add asm -n rac1 -i +ASM1 -o /export/home/oracle/oracle/product/10.2.0/db
$ srvctl start asm -n rac1
$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.

On node RAC2:
$ srvctl add asm -n rac2 -i +ASM2 -o /export/home/oracle/oracle/product/10.2.0/db
$ srvctl start asm -n rac2
$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
Task 6 (Create Database)

We can create the RAC-enabled database with either of two methods, "DBCA" or "Manual". Both methods are described here; you can choose either one.
By “DBCA”
Step 1 Log in as the oracle user, set the following environment variables, and invoke DBCA:
$ export ORACLE_HOME=/export/home/oracle/oracle/product/10.2.0/db
$ export PATH=$ORACLE_HOME/bin:$PATH
$dbca
Step 2 Follow Screen Shot
Step 3 Follow Screen Shot
Step 4 Follow Screen Shot
Step 5 Follow Screen Shot
Step 6 Follow Screen Shot
Step 7 Follow Screen Shot
Step 8 Follow Screen Shot
Step 9 Follow Screen Shot
Step 10 Follow Screen Shot
Step 11 Follow Screen Shot
Step 12 Follow Screen Shot
Step 13 Follow Screen Shot
Step 14 Follow Screen Shot
Step 15 Follow Screen Shot
Step 16 Follow Screen Shot
Step 17 After the database is created, verify the services:
$ /export/home/oracle/oracle/product/10.2.0/crs/bin/crs_stat -t
By “MANUAL”
Step 1 Log in on the first node (RAC1) as the oracle user and create the parameter file (initPROD1.ora) in the default location with the following parameters:
*.audit_file_dest='/export/home/oracle/oracle/product/10.2.0/db/admin/PROD/adump'
*.background_dump_dest='/export/home/oracle/oracle/product/10.2.0/db/admin/PROD/bdump'
#*.cluster_database=false
*.compatible='10.2.0.2.0'
*.core_dump_dest='/export/home/oracle/oracle/product/10.2.0/db/admin/PROD/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='PROD'
*.sga_target=250M
*.job_queue_processes=10
*.log_checkpoints_to_alert=TRUE
*.pga_aggregate_target=100M
*.processes=500
*.remote_listener='LISTENERS_PROD'
*.remote_login_passwordfile='exclusive'
*.sessions=200
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/export/home/oracle/oracle/product/10.2.0/db/admin/PROD/udump'
#PROD1.instance_name=PROD1
#*.control_files='+DATA'
Step 2 Create a password file for the PROD1 instance on the first node. Using the orapwd utility, create an orapwPROD1 file in $ORACLE_HOME/dbs on the first node.

$ orapwd file=orapwPROD1 password=sys entries=5
Step 3 Now go to $ORACLE_HOME/network/admin and edit tnsnames.ora to add entries for your instances, the database, and the listeners. The LISTENERS_PROD entry should look like this:
LISTENERS_PROD =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
  )
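The instance and database entries themselves are not reproduced in the original listing; a minimal sketch of what they could look like is below. The host names and service name are assumptions carried over from this setup, and the exact attributes (load balancing, dedicated server) are one reasonable choice, not the only one - adjust to your environment:

```
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )

PROD1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
      (INSTANCE_NAME = PROD1)
    )
  )

PROD2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
      (INSTANCE_NAME = PROD2)
    )
  )
```

The PROD alias load-balances across both VIPs, while PROD1 and PROD2 pin a connection to a specific instance.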
Step 4 Start the instance and create the database on first node (RAC1)
$ export ORACLE_HOME=/export/home/oracle/oracle/product/10.2.0/db
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=PROD1
SQL>
CREATE DATABASE PROD
MAXINSTANCES 8
MAXLOGHISTORY 100
MAXLOGFILES 64
MAXLOGMEMBERS 3
MAXDATAFILES 150
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE 1024M
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 200M AUTOEXTEND ON NEXT 10240K MAXSIZE 800M
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 200M AUTOEXTEND ON NEXT 10M MAXSIZE 1000M
EXTENT MANAGEMENT LOCAL
UNDO TABLESPACE UNDOTBS1 DATAFILE SIZE 200M AUTOEXTEND ON NEXT 10M MAXSIZE 1000M
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16
LOGFILE
GROUP 1 SIZE 50M,
GROUP 2 SIZE 50M,
GROUP 3 SIZE 50M;
Step 5 Run the following scripts:
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
SQL> @$ORACLE_HOME/rdbms/admin/catclust.sql
Note: At this point the database is created; next we will convert it into a RAC database.
Step 6 Edit initPROD1.ora and add the cluster parameters:
*.cluster_database_instances=2
*.cluster_database=true
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
PROD1.undo_tablespace='UNDOTBS1'
PROD2.undo_tablespace='UNDOTBS2'
PROD1.instance_name=PROD1
PROD2.instance_name=PROD2
Step 7 Copy initPROD1.ora and tnsnames.ora file to node2 (RAC2) and rename initPROD1.ora to initPROD2.ora.
Step 8 Now create the second undo tablespace:
SQL> create undo tablespace UNDOTBS2 datafile size 200M;

Tablespace created.
Step 9 Create the second instance's redo log thread:
SQL> alter database add logfile thread 2
  group 4 size 50M,
  group 5 size 50M,
  group 6 size 50M;

Database altered.
Step 10 Issue a "shutdown immediate" on RAC1, then start the instance. It should come up cleanly.
Step 11 Now activate the 2nd redo log thread:
SQL> alter database enable public thread 2;
Step 12 Finally start your second instance. You have a proper RAC database up and running now. Check v$active_instances to make sure both appear in that view.
Step 13 Shut down both instances next and register the database and its instances with Clusterware as oracle:
$ srvctl add database -d PROD -o /export/home/oracle/oracle/product/10.2.0/db
$ srvctl add instance -d PROD -i PROD1 -n rac1
$ srvctl add instance -d PROD -i PROD2 -n rac2
Step 14 Check crs_stat to see that the registration worked, and start the database:
$ srvctl start database -d PROD
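As a final sanity check (not part of the original steps), gv$instance aggregates instance status across the cluster, so one query from either node confirms both instances are open:

```sql
SQL> select inst_id, instance_name, status from gv$instance;
```

You should see one row each for PROD1 and PROD2, both with STATUS = OPEN.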
Task 7 (Set Up and Test Transparent Application Failover)
Oracle TAF enables failed database connections to reconnect automatically to another node within the cluster. The failover is transparent to the user: Oracle re-executes the query on the surviving instance and continues to return the remaining results to the user.
Step 1 Create a new database service.
Let’s begin by creating a new service called CRM. Database services can be created using either DBCA or the srvctl utility. Here you will use DBCA to create the CRM service on PROD1.
Service Name: CRM
Database Name: PROD
Preferred Instance: PROD1
Available Instance: PROD2
TAF Policy: BASIC
Log in as the oracle user on node RAC1 and execute:
rac1-> dbca
Welcome: Select Oracle Real Application Clusters database.
Operations: Select Services Management.
List of cluster databases: Click on Next.
Database Services: Click on Add.
Add a Service: Enter “CRM.”
Select prod1 as the Preferred instance.
Select prod2 as the Available instance.
TAF Policy: Select Basic.
Click on Finish.
Database Configuration Assistant: Click on No to exit.
Note: The Database Configuration Assistant creates the CRM service name entry in tnsnames.ora file.
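The generated entry is not reproduced in the original text. A sketch of what a TAF-enabled CRM entry typically looks like is below; the host names are carried over from this setup, and the RETRIES and DELAY values are illustrative assumptions, not values DBCA is guaranteed to write:

```
CRM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CRM)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
```

If FAILOVER_TYPE later comes back as NONE in v$session (Step 3), the FAILOVER_MODE clause is the part to check.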
Step 2 Check Service name String
SQL> connect system/oracle@prod1
Connected.

SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod, CRM
SQL> connect system/oracle@prod2
Connected.

SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod
Step 3 Connect the first session using the CRM service.
If the returned output of failover_type and failover_mode is 'NONE', verify that the CRM service is configured correctly in tnsnames.ora.
SQL> connect system/oracle@crm
Connected.
SQL> select instance_number instance#, instance_name, host_name, status
     from v$instance;

 INSTANCE# INSTANCE_NAME  HOST_NAME  STATUS
---------- -------------- ---------- ------
         1 prod1          rac1       OPEN
SQL> select failover_type, failover_method, failed_over
     from v$session where username='SYSTEM';

FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------------- -----------
SELECT        BASIC           NO
Step 4 Shut down the instance from another session.
Connect as the sys user on CRM instance and shut down the instance.
rac1-> export ORACLE_SID=prod1
rac1-> sqlplus / as sysdba
SQL> select instance_number instance#, instance_name, host_name, status
     from v$instance;

 INSTANCE# INSTANCE_NAME  HOST_NAME  STATUS
---------- -------------- ---------- ------
         1 prod1          rac1       OPEN
SQL> shutdown abort;
ORACLE instance shut down.

Verify that the session has failed over. From the same CRM session you opened previously, execute the queries below to verify that the session has failed over to the other instance.
SQL> select instance_number instance#, instance_name, host_name, status
     from v$instance;

 INSTANCE# INSTANCE_NAME  HOST_NAME  STATUS
---------- -------------- ---------- ------
         2 prod2          rac2       OPEN
SQL> select failover_type, failover_method, failed_over
     from v$session where username='SYSTEM';

FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------------- -----------
SELECT        BASIC           YES
Step 5 Relocate the CRM service back to the preferred instance.
After PROD1 is brought back up, the CRM service does not automatically relocate back to the preferred instance. You have to manually relocate the service to prod1.
rac1-> export ORACLE_SID=prod1
rac1-> sqlplus / as sysdba
SQL> startup
ORACLE instance started.

Total System Global Area  209715200 bytes
Fixed Size                  1218556 bytes
Variable Size             104859652 bytes
Database Buffers          100663296 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.

SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod
rac2-> export ORACLE_SID=prod2
rac2-> sqlplus / as sysdba
SQL> show parameter service
NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod, CRM
rac1-> srvctl relocate service -d prod -s crm -i prod2 -t prod1
SQL> connect system/oracle@prod1
Connected.
SQL> show parameter service
NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod, CRM

SQL> connect system/oracle@prod2
Connected.

SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ----------
service_names   string   prod
Implement 10gR2 Oracle RAC on Linux with VMware
Here you will learn how to install and configure two nodes running Oracle RAC 10g Release 2 on Enterprise Linux and VMware Server. Note that this guide is intended for educational/evaluation purposes only; neither Oracle nor any other vendor will support this configuration.
The guide is structured into the following sections:
1. Hardware Requirements and Overview 2. Configure the First Virtual Machine 3. Configure Enterprise Linux on the First Virtual Machine 4. Create and Configure the Second Virtual Machine 5. Configure Oracle Automatic Storage Management (ASM) 6. Configure Oracle Cluster File System (OCFS2) 7. Install Oracle Clusterware 8. Install Oracle Database 10g Release 2 9. Explore the RAC Database Environment
10. Test Transparent Application Failover (TAF) 11. Database Backup and Recovery 12. Explore Oracle Enterprise Manager (OEM) Database Console 13. Common Issues
Section: A Hardware Requirements and Overview
Allocate a minimum of 700MB of memory to each virtual machine; reserve a minimum of 30GB of disk space for all the virtual machines.
Host Name  OS                                               Processor        Memory  Disk
---------  -----------------------------------------------  ---------------  ------  ------
indiandba  Windows XP Professional Service Pack 2 (32-bit)  Intel Pentium 4  2 GB    250 GB
An overview of the guest operating system environment:

Host Name  OS                           Processors  Memory
---------  ---------------------------  ----------  ------
RAC1       Enterprise Linux 4 (32-bit)  1           700 MB
RAC2       Enterprise Linux 4 (32-bit)  1           700 MB
An overview of the virtual disk layout:
Virtual Disk on Host                    Virtual Disk on Guest            Virtual Device Node  Size     Description
--------------------------------------  -------------------------------  -------------------  -------  --------------------------------------------
d:\vm\rac\localdisk.vmdk                /dev/sda1, /dev/sda2, /dev/sda3  SCSI 0:0             20 GB    "/" mount point, swap space, Oracle binaries
d:\vm\rac\sharedstorage\ocfs2disk.vmdk  /dev/sdb                         SCSI 1:0             512 MB   OCFS2 disk
d:\vm\rac\sharedstorage\asmdisk1.vmdk   /dev/sdc                         SCSI 1:1             3072 MB  ASM disk group 1
d:\vm\rac\sharedstorage\asmdisk2.vmdk   /dev/sdd                         SCSI 1:2             3072 MB  ASM disk group 1
d:\vm\rac\sharedstorage\asmdisk3.vmdk   /dev/sde                         SCSI 1:3             2048 MB  ASM flash recovery area
An overview of the RAC database environment:
Host Name  ASM Instance Name  RAC Instance Name  Database Name  Database File Storage  OCR & Voting Disk
---------  -----------------  -----------------  -------------  ---------------------  -----------------
RAC1       +ASM1              devdb1             devdb          ASM                    OCFS2
RAC2       +ASM2              devdb2             devdb          ASM                    OCFS2
Section: B Configure the First Virtual Machine
To create and configure the first virtual machine, you will add virtual hardware devices such as disks and processors. Before proceeding with the install, create the windows folders to house the virtual machines and the shared storage.
D:\>mkdir vm\rac\rac1D:\>mkdir vm\rac\rac2D:\>mkdir vm\rac\sharedstorage
Double-click on the VMware Server icon on your desktop to bring up the application:
1. Press CTRL-N to create a new virtual machine.
2. New Virtual Machine Wizard: Click on Next.
3. Select the Appropriate Configuration:
a. Virtual machine configuration: Select Custom.
4. Select a Guest Operating System:
a. Guest operating system: Select Linux.
b. Version: Select Red Hat Enterprise Linux 4.
5. Name the Virtual Machine:
a. Virtual machine name: Enter “rac1.”
b. Location: Enter “d:\vm\rac\rac1.”
6. Set Access Rights:
a. Access rights: Select Make this virtual machine private.
7. Startup / Shutdown Options:
a. Virtual machine account: Select User that powers on the virtual machine.
8. Processor Configuration:
a. Processors: Select One.
9. Memory for the Virtual Machine:
a. Memory: Select 700MB.
10. Network Type:
a. Network connection: Select Use bridged networking.
11. Select I/O Adapter Types:
a. I/O adapter types: Select LSI Logic.
12. Select a Disk:
a. Disk: Select Create a new virtual disk.
13. Select a Disk Type:
a. Virtual Disk Type: Select SCSI (Recommended).
14. Specify Disk Capacity:
a. Disk capacity: Enter “20GB.”
b. Deselect Allocate all disk space now. To save space, you do not have to allocate all the disk space now.
15. Specify Disk File:
a. Disk file: Enter “localdisk.vmdk.”
b. Click on Finish.
16. Repeat steps 17 to 24 to create the four shared virtual SCSI hard disks: ocfs2disk.vmdk (512MB), asmdisk1.vmdk (3GB), asmdisk2.vmdk (3GB), and asmdisk3.vmdk (2GB).
17. VMware Server Console: Click on Edit virtual machine settings.
18. Virtual Machine Settings: Click on Add.
19. Add Hardware Wizard: Click on Next.
20. Hardware Type:
a. Hardware types: Select Hard Disk.
21. Select a Disk:
a. Disk: Select Create a new virtual disk.
22. Select a Disk Type:
a. Virtual Disk Type: Select SCSI (Recommended).
23. Specify Disk Capacity:
a. Disk capacity: Enter “0.5GB.”
b. Select Allocate all disk space now. For performance reasons, you will pre-allocate all the disk space for each of the virtual shared disks. If a shared disk were to grow rapidly, especially during Oracle database creation or when the database is under heavy DML activity, the virtual machines may hang intermittently for a brief period or, on rare occasions, crash.
24. Specify Disk File:
a. Disk file: Enter “d:\vm\rac\sharedstorage\ocfs2disk.vmdk.”
b. Click on Advanced.
c. Virtual device node: Select SCSI 1:0.
d. Mode: Select Independent, Persistent for all shared disks.
e. Click on Finish.
Finally, add an additional virtual network card for the private interconnects and remove the floppy drive, if any.
25. VMware Server Console: Click on Edit virtual machine settings.
26. Virtual Machine Settings: Click on Add.
27. Add Hardware Wizard: Click on Next.
28. Hardware Type:
a. Hardware types: Select Ethernet Adapter.
29. Network Type:
a. Host-only: A private network shared with the host.
b. Click on Finish.
30. Virtual Machine Settings:
a. Select Floppy and click on Remove.
31. Virtual Machine Settings: Click on OK.
Modify the virtual machine configuration file. Additional parameters are required to enable disk sharing between the two virtual RAC nodes. Open the configuration file, d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx, and add the parameters listed below from disk.locking = "FALSE" onward (the earlier lines should already be present).
config.version = "8"
virtualHW.version = "4"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
memsize = "700"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "localdisk.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "auto detect"
ide1:0.deviceType = "cdrom-raw"
floppy0.fileName = "A:"
Ethernet0.present = "TRUE"
displayName = "rac1"
guestOS = "rhel4"
priority.grabbed = "normal"
priority.ungrabbed = "normal"

disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"

scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\vm\rac\sharedstorage\ocfs2disk.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"

scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\vm\rac\sharedstorage\asmdisk1.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"

scsi1:2.present = "TRUE"
scsi1:2.fileName = "D:\vm\rac\sharedstorage\asmdisk2.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"

scsi1:3.present = "TRUE"
scsi1:3.fileName = "D:\vm\rac\sharedstorage\asmdisk3.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"

scsi1.virtualDev = "lsilogic"
ide1:0.autodetect = "TRUE"
floppy0.present = "FALSE"
Ethernet1.present = "TRUE"
Ethernet1.connectionType = "hostonly"
Section: C Install and Configure Enterprise Linux on the First Virtual Machine
1. On your VMware Server Console, double-click on the CD-ROM device on the right panel and select the ISO image for disk 1, Enterprise-R4-U4-i386-disc1.iso.
2. VMware Server console: Click on Start this virtual machine.
3. Hit Enter to install in graphical mode.
4. Skip the media test and start the installation.
5. Welcome to Enterprise Linux: Click on Next.
6. Language Selection: <select your language preference>.
7. Keyboard Configuration: <select your keyboard preference>.
8. Installation Type: Custom.
9. Disk Partitioning Setup: Manually partition with Disk Druid.
Warning: Click on Yes to initialize each of the devices: sda, sdb, sdc, sdd, and sde.
10. Disk Setup: Allocate disk space on the sda drive by double-clicking on /dev/sda free space for the mount points (/ and /u01) and swap space. You will configure the rest of the drives for OCFS2 and ASM later.

Add Partition:
Mount Point: /
File System Type: ext3
Start Cylinder: 1
End Cylinder: 910

File System Type: swap
Start Cylinder: 911
End Cylinder: 1170

Mount Point: /u01
File System Type: ext3
Start Cylinder: 1171
End Cylinder: 2610
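With the geometry fdisk later reports for this 20GB disk (16065 sectors of 512 bytes per cylinder), the cylinder ranges above translate into partition sizes as sketched below. This is just a check on the arithmetic, not anything the installer requires:

```python
SECTORS_PER_CYL = 16065  # 255 heads * 63 sectors/track, from the fdisk -l output
BYTES_PER_SECTOR = 512

def partition_size_gb(start_cyl, end_cyl):
    """Approximate size in GB of a partition spanning the given cylinder range."""
    cylinders = end_cyl - start_cyl + 1
    return cylinders * SECTORS_PER_CYL * BYTES_PER_SECTOR / 1024**3

print(round(partition_size_gb(1, 910), 1))     # "/" mount point: 7.0 GB
print(round(partition_size_gb(911, 1170), 1))  # swap: 2.0 GB
print(round(partition_size_gb(1171, 2610), 1)) # /u01: 11.0 GB
```

So the 20GB local disk splits roughly into 7GB for root, 2GB of swap, and 11GB under /u01 for the Oracle binaries.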
11. Boot Loader Configuration: Select only the default /dev/sda1 and leave the rest unchecked.
12. Network Configuration:

A. Network Devices
Select and edit eth0.
1. Deselect Configure Using DHCP.
2. Select Activate on boot.
3. IP Address: Enter “192.168.2.131.”
4. Netmask: Enter “255.255.255.0.”
Select and edit eth1.
1. Deselect Configure Using DHCP.
2. Select Activate on boot.
3. IP Address: Enter “10.10.10.31.”
4. Netmask: Enter “255.255.255.0.”

B. Hostname
Select manually and enter “rac1.mycorpdomain.com.”

C. Miscellaneous Settings
Gateway: Enter “192.168.2.1.”
Primary DNS: <optional>
Secondary DNS: <optional>
13. Firewall Configuration:
a. Select No Firewall. If the firewall is enabled, you may encounter the error “mount.ocfs2: Transport endpoint is not connected while mounting” when you attempt to mount the OCFS2 file system later in the setup.
b. Enable SELinux?: Active.
14. Warning - No Firewall: Click on Proceed.
15. Additional Language Support: <select the desired language>.
16. Time Zone Selection: <select your time zone>.
17. Set Root Password: <enter your root password>.
18. Package Group Selection:
a. Select X Window System.
b. Select GNOME Desktop Environment.
c. Select Editors; click on Details and select your preferred text editor.
d. Select Graphical Internet.
e. Select Text-based Internet.
f. Select Office/Productivity.
g. Select Sound and Video.
h. Select Graphics.
i. Select Server Configuration Tools.
j. Select FTP Server.
k. Select Legacy Network Server; click on Details.
1. Select rsh-server.
2. Select telnet-server.
l. Select Development Tools.
m. Select Legacy Software Development.
n. Select Administration Tools.
o. Select System Tools; click on Details and select the following packages in addition to the default selected packages.
1. Select ocfs2-2.6.9-42.0.0.0.1.EL (driver for UP kernel), or select ocfs2-2.6.9-42.0.0.0.1.ELsmp (driver for SMP kernel).
2. Select ocfs2-tools.
3. Select ocfs2console.
4. Select oracleasm-2.6.9-42.0.0.0.1.EL (driver for UP kernel), or select oracleasm-2.6.9-42.0.0.0.1.ELsmp (driver for SMP kernel).
5. Select sysstat.
p. Select Printing Support.
19. About to Install: Click on Next.
20. Required Install Media: Click on Continue.
21. Change CD-ROM: On your VMware Server Console, press CTRL-D to bring up the Virtual Machine Settings. Click on the CD-ROM device and select the ISO image for disk 2, Enterprise-R4-U4-i386-disc2.iso, followed by the ISO image for disk 3, Enterprise-R4-U4-i386-disc3.iso.
22. At the end of the installation:
a. On your VMware Server Console, press CTRL-D to bring up the Virtual Machine Settings. Click on the CD-ROM device and select Use physical drive.
b. Click on Reboot.
23. Welcome: Click on Next.
24. License Agreement: Select Yes, I agree to the License Agreement.
25. Date and Time: Set the date and time.
26. Display: <select your desired resolution>.
27. System User: Leave the entries blank and click on Next.
28. Additional CDs: Click on Next.
29. Finish Setup: Click on Next.
Congratulations, you have just installed Enterprise Linux on VMware Server!
Install VMware Tools. VMware Tools is required to synchronize the time between the host and guest machines.
On the VMware Server Console, log in as the root user, then:

1. Click on VM and then select Install VMware Tools.
2. rac1 - Virtual Machine: Click on Install.
3. Double-click on the VMware Tools icon on your desktop.
4. cdrom: Double-click on VMwareTools-1.0.1-29996.i386.rpm.
5. Completed System Preparation: Click on Continue.
6. Open up a terminal and execute vmware-config-tools.pl. Enter the desired display size.
Synchronize Guest OS time with Host OS. When installing the Oracle Clusterware and Oracle Database software, the Oracle installer will initially install the software on the local node and then remotely copies the software to the remote node. If the date and time of both RAC nodes are not synchronized, you will likely receive errors similar to the one below.
"/bin/tar: ./inventory/Components21/oracle.ordim.server/10.2.0.1.0: timestamp 2006-11-04 06:24:04 is 25 s in the future"
To ensure a successful Oracle RAC installation, the time on the virtual machines has to synchronize with the host machine. Perform the steps below to synchronize the time as the root user.
1. Execute “vmware-toolbox” to bring up the VMware Tools Properties window. Under the Options tab, select Time synchronization between the virtual machine and the host operating system. You should find the tools.syncTime = "TRUE" parameter appended to the virtual machine configuration file, d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx.
2. Edit /boot/grub/grub.conf and add the options "clock=pit nosmp noapic nolapic" to the kernel /boot/ lines. The listing below adds the options to both kernels; you are only required to make the change to the kernel you boot.
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Enterprise (2.6.9-42.0.0.0.1.ELsmp)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.ELsmp ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-42.0.0.0.1.ELsmp.img
title Enterprise-up (2.6.9-42.0.0.0.1.EL)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-42.0.0.0.1.EL.img
3. Reboot rac1.
# reboot
Create the oracle user. As the root user, execute:
# groupadd oinstall
# groupadd dba
# mkdir -p /export/home/oracle /ocfs
# useradd -d /export/home/oracle -g oinstall -G dba -s /bin/ksh oracle
# chown oracle:dba /export/home/oracle /u01
# passwd oracle
New Password:
Re-enter new Password:
passwd: password successfully changed for oracle
Create the oracle user environment file.

/export/home/oracle/.profile
export PS1="`/bin/hostname -s`-> "
export EDITOR=vi
export ORACLE_SID=devdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
Create the filesystem directory structure. As the oracle user, execute:
rac1-> mkdir -p $ORACLE_BASE/admin
rac1-> mkdir -p $ORACLE_HOME
rac1-> mkdir -p $ORA_CRS_HOME
rac1-> mkdir -p /u01/oradata/devdb
Increase the shell limits for the Oracle user. Use a text editor and add the lines listed below to /etc/security/limits.conf, /etc/pam.d/login, and /etc/profile. Additional information can be obtained from the documentation.
/etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
/etc/pam.d/login
session required /lib/security/pam_limits.so
/etc/profile
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
Install Enterprise Linux software packages. The following additional packages are required for Oracle software installation. If you have installed the 64-bit version of Enterprise Linux, the installer should have already installed these packages.
libaio-0.3.105-2.i386.rpm openmotif21-2.1.30-11.RHEL4.6.i386.rpm
Extract the packages from the ISO CDs and execute the commands below as the root user.
# ls
libaio-0.3.105-2.i386.rpm  openmotif21-2.1.30-11.RHEL4.6.i386.rpm
# rpm -Uvh *.rpm
warning: libaio-0.3.105-2.i386.rpm: V3 DSA signature: NOKEY, key ID b38a8516
Preparing...       ########################################### [100%]
   1:openmotif21   ########################################### [ 50%]
   2:libaio        ########################################### [100%]
Configure the kernel parameters. Use a text editor and add the lines listed below to /etc/sysctl.conf. To make the changes effective immediately, execute /sbin/sysctl -p.

# more /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
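A quick sanity check on the two shared-memory values, since they use different units: kernel.shmall is expressed in 4KB pages while kernel.shmmax is in bytes, and shmall should be large enough to cover at least one maximum-size segment. A sketch of the arithmetic:

```python
PAGE_SIZE = 4096           # bytes per page on x86
shmall_pages = 2097152     # kernel.shmall, in pages
shmmax_bytes = 2147483648  # kernel.shmmax, in bytes (2 GB)

# Total shared memory permitted system-wide, converted to bytes
shmall_bytes = shmall_pages * PAGE_SIZE
print(shmall_bytes)                  # 8589934592 (8 GB)
print(shmall_bytes >= shmmax_bytes)  # True: a max-size segment fits
```

So these settings allow up to 8GB of shared memory overall, with any single segment capped at 2GB.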
Modify the /etc/hosts file.
# more /etc/hosts
127.0.0.1      localhost
192.168.2.131  rac1.mycorpdomain.com       rac1
192.168.2.31   rac1-vip.mycorpdomain.com   rac1-vip
10.10.10.31    rac1-priv.mycorpdomain.com  rac1-priv
192.168.2.132  rac2.mycorpdomain.com       rac2
192.168.2.32   rac2-vip.mycorpdomain.com   rac2-vip
10.10.10.32    rac2-priv.mycorpdomain.com  rac2-priv
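Each node follows the same addressing pattern: public address 192.168.2.(130+N), VIP 192.168.2.(30+N), and private interconnect 10.10.10.(30+N). If you ever extend the cluster, a throwaway script like the one below can generate consistent entries. The names and subnets are this guide's conventions, not anything Oracle requires:

```python
def host_entries(node_num, domain="mycorpdomain.com"):
    """Generate /etc/hosts lines for node racN using this guide's addressing scheme."""
    name = f"rac{node_num}"
    lines = [
        f"192.168.2.{130 + node_num}\t{name}.{domain}\t{name}",
        f"192.168.2.{30 + node_num}\t{name}-vip.{domain}\t{name}-vip",
        f"10.10.10.{30 + node_num}\t{name}-priv.{domain}\t{name}-priv",
    ]
    return "\n".join(lines)

print(host_entries(1))
print(host_entries(2))
```

Running it for nodes 1 and 2 reproduces the six non-localhost lines shown above.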
Configure the hangcheck timer kernel module. The hangcheck timer kernel module monitors the system's health and restarts a failing RAC node. It uses two parameters, hangcheck_tick (which defines how often the system health check runs) and hangcheck_margin (which defines the maximum hang delay tolerated before a RAC node is reset), to determine whether a node is failing.
Add the following line in /etc/modprobe.conf to set the hangcheck kernel module parameters.
/etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

To load the module immediately, execute "modprobe -v hangcheck-timer".
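The two parameters together bound how long a hang can go unnoticed. The exact reset behavior belongs to the kernel module; the sketch below is just the upper bound implied by the parameters, where a hang starting right after a check can wait up to one full tick before being measured against the margin:

```python
hangcheck_tick = 30     # seconds between health checks
hangcheck_margin = 180  # maximum tolerated hang delay, in seconds

# Worst case: one full tick elapses before the hang is even noticed,
# then the hang must exceed the margin before the node is reset.
worst_case_reset = hangcheck_tick + hangcheck_margin
print(worst_case_reset)  # 210 seconds
```

With these values a genuinely hung node should be reset within about three and a half minutes.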
Create disk partitions for OCFS2 and Oracle ASM. Prepare a set of raw disks for OCFS2 (/dev/sdb) and for Oracle ASM (/dev/sdc, /dev/sdd, /dev/sde).
On rac1, as the root user, execute
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-512, default 512):
Using default value 512
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk /dev/sdc
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):
Using default value 391
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk /dev/sdd
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):
Using default value 391
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         910     7309543+  83  Linux
/dev/sda2             911        1170     2088450   82  Linux swap
/dev/sda3            1171        2610    11566800   83  Linux

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         512      524272   83  Linux

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         391     3140676   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         261     2096451   83  Linux
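The Blocks column can be cross-checked from the geometry. A reasonable model that reproduces the numbers above: a partition starting at cylinder 1 actually begins one track in (the first track holds the partition table), and fdisk reports blocks as 1KB units, i.e. two 512-byte sectors:

```python
def partition_blocks(cylinders, sectors_per_cyl, sectors_per_track):
    """1KB blocks for a whole-disk partition starting at cylinder 1.

    The first track (sectors_per_track sectors) is reserved, and each
    1KB block is two 512-byte sectors.
    """
    return (cylinders * sectors_per_cyl - sectors_per_track) // 2

print(partition_blocks(512, 2048, 32))   # /dev/sdb1: 524272
print(partition_blocks(391, 16065, 63))  # /dev/sdc1 and /dev/sdd1: 3140676
print(partition_blocks(261, 16065, 63))  # /dev/sde1: 2096451
```

All three results match the fdisk -l output, which is a handy way to confirm a partition really spans the whole virtual disk.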
Install the oracleasmlib package. Download the ASM library from OTN and install the ASM RPM as the root user.
# rpm -Uvh oracleasmlib-2.0.2-1.i386.rpm
Preparing...        ########################################### [100%]
   1:oracleasmlib   ########################################### [100%]
At this stage, you should already have the following ASM packages installed.
[root@rac1 swdl]# rpm -qa | grep oracleasm
oracleasm-support-2.0.3-2
oracleasm-2.6.9-42.0.0.0.1.ELsmp-2.0.3-2
oracleasmlib-2.0.2-1
Map raw devices for ASM disks. A raw device mapping is required only if you plan to create ASM disks using standard Linux I/O. The alternative is the ASM library driver provided by Oracle; you will configure the ASM disks with the ASM library driver later.

Perform the following tasks to map the raw devices to the shared partitions created earlier. The raw devices have to be bound to the block devices each time a cluster node boots.
Add the following lines in /etc/sysconfig/rawdevices.
/etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1
To make the mapping effective immediately, execute the following commands as the root user (the restart command itself is missing from the original listing; the standard service restart shown below produces this output):

# service rawdevices restart
Assigning devices:
/dev/raw/raw1 --> /dev/sdc1
/dev/raw/raw1: bound to major 8, minor 33
/dev/raw/raw2 --> /dev/sdd1
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3 --> /dev/sde1
/dev/raw/raw3: bound to major 8, minor 65
done
# chown oracle:dba /dev/raw/raw[1-3]
# chmod 660 /dev/raw/raw[1-3]
# ls -lat /dev/raw/raw*
crw-rw---- 1 oracle dba 162, 3 Nov 4 07:04 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 2 Nov 4 07:04 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 1 Nov 4 07:04 /dev/raw/raw1
As the oracle user, execute
rac1-> ln -sf /dev/raw/raw1 /u01/oradata/devdb/asmdisk1
rac1-> ln -sf /dev/raw/raw2 /u01/oradata/devdb/asmdisk2
rac1-> ln -sf /dev/raw/raw3 /u01/oradata/devdb/asmdisk3
Modify /etc/udev/permissions.d/50-udev.permissions. Raw devices are remapped on boot, and their ownership will change to the root user by default. ASM will have problems accessing the shared partitions if the ownership is not the oracle user. Comment out the original line, "raw/*:root:disk:0660", in /etc/udev/permissions.d/50-udev.permissions and add a new line, "raw/*:oracle:dba:0660".
/etc/udev/permissions.d/50-udev.permissions
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
Section D Create and Configure the Second Virtual Machine
To create the second virtual machine, simply shut down the first virtual machine, copy all the files in d:\vm\rac\rac1 to d:\vm\rac\rac2, and perform a few configuration changes.
75
Modify network configuration.
1. As the root user on rac1,# shutdown –h now
2. On your host system, copy all the files in rac1 folder to rac2.D:\>copy d:\vm\rac\rac1 d:\vm\rac\rac2
3. On your VMware Server Console, press CTRL-O to open the second virtual machine,d:\rac\rac2\Red Hat Enterprise Linux 4.vmx.
4. VMware Server console: Rename the virtual machine from rac1 to rac2. Right-click on the new rac1 tab you have just opened and select Settings. Select the Options tab.

   Virtual machine name: Enter "rac2."

   Click on Start this virtual machine to start rac2, leaving rac1 powered off.

   rac2 – Virtual Machine: Select Create a new identifier.
5. Log in as the root user and execute system-config-network to modify the network configuration.
76
IP Address: Double-click on each of the Ethernet devices and use the table below to make the necessary changes.
Device    IP Address      Subnet mask      Default gateway address
eth0      192.168.2.132   255.255.255.0    192.168.2.1
eth1      10.10.10.32     255.255.255.0    <leave empty>
MAC Address: Navigate to the Hardware Device tab and probe for a new MAC address for each of the Ethernet devices.
Hostname and DNS: Use the table below to make the necessary changes to the entries in the DNS tab and press CTRL-S to save.

Hostname: rac2.mycorpdomain.com
Primary DNS: Enter your DNS IP address or leave it empty.
Secondary DNS: Enter your DNS IP address or leave it empty.
DNS search path: Accept the default or leave it empty.
Finally, activate each of the Ethernet devices.
Modify /etc/hosts. Make sure the loopback entry in /etc/hosts maps only to localhost and does not include the host name:

127.0.0.1    localhost

Otherwise, VIPCA will attempt to use the loopback address later during the Oracle Clusterware software installation.
Modify /export/home/oracle/.profile. Replace the value of ORACLE_SID with devdb2.
Establish user equivalence with SSH. During the Cluster Ready Services (CRS) and RAC installation, the Oracle Universal Installer (OUI) has to be able to copy the software as oracle to all RAC nodes without being prompted for a
77
password. In Oracle 10g, this can be accomplished using ssh instead of rsh.
To establish user equivalence, generate the user's public and private keys as the oracle user on both nodes. Power on rac1 and perform the following tasks on both nodes.
On rac1,
rac1-> mkdir ~/.ssh
rac1-> chmod 700 ~/.ssh
rac1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_rsa.
Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
87:54:4f:92:ba:ed:7b:51:5d:1d:59:5b:f9:44:da:b6 oracle@rac1.mycorpdomain.com
rac1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_dsa.
Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
31:76:96:e6:fc:b7:25:04:fd:70:42:04:1f:fc:9a:26 oracle@rac1.mycorpdomain.com
On rac2,

rac2-> mkdir ~/.ssh
rac2-> chmod 700 ~/.ssh
rac2-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_rsa.
Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
29:5a:35:ac:0a:03:2c:38:22:3c:95:5d:68:aa:56:66 oracle@rac2.mycorpdomain.com
rac2-> ssh-keygen -t dsa
78
Generating public/private dsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_dsa.
Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
4c:b2:5a:8d:56:0f:dc:7b:bc:e0:cd:3b:8e:b9:5c:7c oracle@rac2.mycorpdomain.com
On rac1,

rac1-> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
rac1-> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
rac1-> ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.2.132)' can't be established.
RSA key fingerprint is 63:d3:52:d4:4d:e2:cb:ac:8d:4a:66:9f:f1:ab:28:1f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.2.132' (RSA) to the list of known hosts.
oracle@rac2's password:
rac1-> ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
rac1-> scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
oracle@rac2's password:
authorized_keys      100% 1716     1.7KB/s   00:00
Test the connection on each node. Verify that you are not prompted for a password when you run the following commands the second time.
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac1.mycorpdomain.com date
ssh rac2.mycorpdomain.com date
ssh rac1-priv.mycorpdomain.com date
ssh rac2-priv.mycorpdomain.com date
Section E Configure Oracle Automatic Storage Management (ASM)
Oracle ASM is tightly integrated with Oracle Database and works with Oracle’s suite of data management tools. It simplifies database storage management and provides the performance of raw disk I/O.
Configure ASMLib. Configure the ASMLib as the root user on both nodes.
79
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
Create ASM disks. Create the ASM disks on any one node as the root user.

# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk: [ OK ]

Verify that the ASM disks are visible from every node.

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
Section F Configure Oracle Cluster File System (OCFS2)
OCFS2 is a general-purpose cluster file system developed by Oracle and integrated with the Enterprise Linux kernel. It enables all nodes to share files concurrently on the cluster file system and thus eliminates the need to manage raw devices. Here you will house the OCR and Voting Disk in the OCFS2 file system. Additional information on OCFS2 can be obtained from the OCFS2 User's Guide.
You should already have the OCFS2 RPMs installed during the Enterprise Linux installation. Verify that the RPMs have been installed on both nodes.
rac1-> rpm -qa | grep ocfs
ocfs2-tools-1.2.2-2
80
ocfs2console-1.2.2-2
ocfs2-2.6.9-42.0.0.0.1.ELsmp-1.2.3-2
Create the OCFS2 configuration file. As the root user on rac1, execute
# ocfs2console
1. OCFS2 Console: Select Cluster, Configure Nodes.
2. "The cluster stack has been started": Click on Close.
3. Node Configuration: Click on Add.
4. Add Node: Add the following nodes and then click on Apply.
   Name: rac1
   IP Address: 192.168.2.131
   IP Port: 7777

   Name: rac2
   IP Address: 192.168.2.132
   IP Port: 7777
5. Verify the generated configuration file.

# more /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.2.131
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.132
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
6. Propagate the configuration file to rac2. You can rerun the steps above on rac2 to generate the configuration file, or select Cluster, Propagate Configuration on the OCFS2 Console on rac1 to propagate the configuration file to rac2.
Configure the O2CB driver. O2CB is a set of clustering services that manages the communication between the nodes and the cluster file system. Below is a description of the individual services:
NM: Node Manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
81
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners, and status
CONFIGFS: User space driven configuration file system mounted at /config
DLMFS: User space interface to the kernel space DLM
Perform the procedure below on both nodes to configure O2CB to start on boot.
When prompted for a value for the heartbeat dead threshold, you have to specify a value higher than 7 to prevent the nodes from crashing due to the slow IDE disk drive. The heartbeat dead threshold is a variable used to calculate the fence time.
Fence time (seconds) = (heartbeat dead threshold -1) * 2
A fence time of 120 seconds works well in our environment. The value of heartbeat dead threshold should be the same on both nodes.
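The formula above can be sanity-checked with a line of shell arithmetic. This is purely an illustration of the calculation, using the threshold value of 61 that is entered in the o2cb configure dialog below:

```shell
# Fence time (seconds) = (heartbeat dead threshold - 1) * 2
threshold=61
fence_time=$(( (threshold - 1) * 2 ))
echo "A heartbeat dead threshold of $threshold gives a fence time of $fence_time seconds."
```

A threshold of 61 therefore yields the 120-second fence time used in this environment.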
As the root user, execute
# /etc/init.d/o2cb unload
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]: 61
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
82
Format the file system. Before proceeding with formatting and mounting the file system, verify that O2CB is online on both nodes; O2CB heartbeat is currently inactive because the file system is not mounted.
# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Checking O2CB heartbeat: Not active
You are only required to format the file system on one node. As the root user on rac1, execute
# ocfs2console

1. OCFS2 Console: Select Tasks, Format.
2. Format:
   Available devices: /dev/sdb1
   Volume label: oracle
   Cluster size: Auto
   Number of node slots: 4
   Block size: Auto
6. OCFS2 Console: CTRL-Q to quit.
Mount the file system. To mount the file system, execute the command below on both nodes.
# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
To mount the file system on boot, add the following line in /etc/fstab on both nodes.
/etc/fstab
/dev/sdb1    /ocfs    ocfs2    _netdev,datavolume,nointr    0 0
Create Oracle Clusterware directory. Create the directory in the OCFS2 file system where the OCR and Voting Disk will reside.

On rac1,
# mkdir /ocfs/clusterware
# chown -R oracle:dba /ocfs
83
You have completed the set up of OCFS2. Verify that you can read and write files on the shared cluster file system from both nodes.
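One way to run that check is with a small helper that writes a marker file on the shared mount, reads it back, and removes it. The function name check_rw is illustrative only, not part of OCFS2; run it on each node against the /ocfs mount point:

```shell
# check_rw: write a marker file under the given directory, read it back,
# and clean up. Returns non-zero if the write or read fails.
check_rw() {
  f="$1/ocfs_rw_test.$$"
  echo "hello from $(hostname)" > "$f" || return 1   # write on this node
  cat "$f" || return 1                               # read it back
  rm -f "$f"
}

# On rac1 and again on rac2:
# check_rw /ocfs
```

If a file written on one node is also readable from the other node, the shared cluster file system is working.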
Section G Install Oracle Clusterware
Go to page 20 and check all prerequisites for Clusterware installation.
Go to page 24 and install Oracle Clusterware.
Section H Install Oracle database binary
Go to page 28 and Install Oracle Database Binary.
Section I Configure Oracle Listener
Go to page 34 and Configure Oracle Listener
Section J Create and Configure ASM Instance and ASM DISK Groups
Go to page 36 and create/configure Oracle ASM.
Section K Create Database
Go to page 42 and create the database.
Section L Setup Test Transport Failover
Go to page 53 and set up Test Transport Failover.
84
Convert 10gR2 Stand-alone database to Oracle RAC
There are multiple ways in Oracle 10gR2 to convert a single-instance database into a clustered database.
We can use the following methods to do this:
- Manually
- DBCA
- Oracle Enterprise Manager
- rconfig utility (new with 10gR2; located in $ORACLE_HOME/bin)
Here we will describe all methods for converting a stand-alone database to RAC.
Method: “Manually”
Here's an overview of our single-instance database environment before converting to RAC:
Host Name    Instance Name    Database Name    Database File Storage
RAC1         PROD1            PROD             OS file system
And an overview of the RAC database environment:
Host Name    Instance Name    Database Name    Database File Storage    OCR/Voting Disk
RAC1         PROD1            PROD             ASM                      RAW
RAC2         PROD2            PROD             ASM                      RAW
Task 1 (Create Shared Storage and Configure) Go to Page 15
Task 2 (Create and Configure the Second Virtual Machine) Go to page 16
85
Task 3 (Add Ethernet Adapter for Private Network in Both Machines) Go to page 16
Task 4 (Prepare Disk for OCR, Voting and ASM Storage) Go to page 17
Task 5 (Install Oracle Clusterware) Go to page 24
Task 6 (Install Oracle 10gR2 Software/binary) Go to page 28
Task 7 (Configure Oracle Listener) Go to page 34
Task 8 (Create and Configure ASM Instance and ASM DISK Groups) Go to page 36
Task 9 (Mount and open database with new Oracle Home)
Step 1 Copy the init parameter file (initPROD1.ora) from <OLD_ORACLE_HOME>/dbs to <NEW_ORACLE_HOME>/dbs on node 1.

Step 2 Set the environment path and start the database.

$ export ORACLE_HOME=<SET_PATH_NEW_ORACLE_HOME>
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=PROD1
$ sqlplus /nolog
SQL> conn / as sysdba
SQL> startup
Task 10 (Migrate Database to RAC)
We must use RMAN to migrate the data files to ASM disk groups. All data files will be migrated to the newly created disk group, DATA. The redo logs and control files are created in DATA.
Step 1 Migrate datafiles and control file to ASM.
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter system set db_create_file_dest='+DATA';
System altered.
SQL> alter system set control_files='+DATA/cf1.dbf' scope=spfile;
System altered.
SQL> shutdown immediate;
86
$ rman target /
RMAN> startup nomount;

Oracle instance started

Total System Global Area  419430400 bytes
Fixed Size                   779416 bytes
Variable Size             128981864 bytes
Database Buffers          289406976 bytes
Redo Buffers                 262144 bytes

RMAN> restore controlfile from '/oracle/oradata/prod1/control01.ctl';
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> backup as copy database format '+DATA';
RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/prod1/datafile/system.257.1"
datafile 2 switched to datafile copy "+DATA/prod1/datafile/undotbs1.259.1"
datafile 3 switched to datafile copy "+DATA/prod1/datafile/sysaux.258.1"
datafile 4 switched to datafile copy "+DATA/prod1/datafile/users.260.1"
RMAN> alter database open;

database opened
RMAN> exit
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> select tablespace_name, file_name from dba_data_files;

TABLESPACE    FILE_NAME
------------  -----------------------------------------
USERS         +DATA/prod1/datafile/users.260.1
SYSAUX        +DATA/prod1/datafile/sysaux.258.1
UNDOTBS1      +DATA/prod1/datafile/undotbs1.259.1
SYSTEM        +DATA/prod1/datafile/system.257.1
87
Step 2 Migrate temp tablespace to ASM.
SQL> alter tablespace temp add tempfile size 100M;

Tablespace altered.
SQL> select file_name from dba_temp_files;

FILE_NAME
-------------------------------------
+DATA/prod1/tempfile/temp.264.3
Step 3 Migrate redo logs to ASM.
Drop the existing redo logs and recreate them in the ASM disk group DATA.
SQL> alter system set db_create_online_log_dest_1='+DATA';

System altered.

SQL> alter system set db_create_online_log_dest_2='+DATA';

System altered.
SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------
     1  /oracle/oradata/prod1/redo01.log
     2  /oracle/oradata/prod1/redo02.log
SQL> alter database add logfile group 3 size 10M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database add logfile group 1 size 100M;
Database altered.
88
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database add logfile group 2 size 100M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 3;
Database altered.
SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     1  +DATA/prod1/onlinelog/group_1.265.3
     1  +DATA/prod1/onlinelog/group_1.257.1
     2  +DATA/prod1/onlinelog/group_2.266.3
     2  +DATA/prod1/onlinelog/group_2.258.1
Step 4 Add additional control file. If an additional control file is required for redundancy, you can create it in ASM as you would on any other filesystem.
SQL> connect sys/sys@prod1 as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.

Total System Global Area  419430400 bytes
Fixed Size                   779416 bytes
Variable Size             128981864 bytes
Database Buffers          289406976 bytes
Redo Buffers                 262144 bytes
Database mounted.
SQL> alter database backup controlfile to '+RECOVERY/cf2.dbf';
89
Database altered.

SQL> alter system set control_files='+DATA/cf1.dbf','+RECOVERY/cf2.dbf' scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  419430400 bytes
Fixed Size                   779416 bytes
Variable Size             128981864 bytes
Database Buffers          289406976 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.

SQL> select name from v$controlfile;

NAME
---------------------------------------
+DATA/cf1.dbf
+RECOVERY/cf2.dbf

After successfully migrating all the data files over to ASM, the old data files are no longer needed and can be removed. Your single-instance database is now running on ASM!
Task 11 (Register the ASM instances with CRS)
For higher availability, register the ASM instances under the CRS framework. When registered, the CRS should detect any failed instances and automatically attempt to start up the instances. The CRS should also automatically start up the instances when the servers are rebooted.
On node1 (RAC1):
$ srvctl add asm -n rac1 -i +ASM1 -o <ORACLE_HOME>
$ srvctl start asm -n rac1
$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
90
On node 2 (RAC2):

$ srvctl add asm -n rac2 -i +ASM2 -o <ORACLE_HOME>
$ srvctl start asm -n rac2
$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
Task 12 (Add RAC-specific parameters in pfile on node 1 (RAC1))
Modify the initPROD1.ora file on node 1 (RAC1) and copy this file to the default location on node 2 (RAC2). Add and modify the following parameters:
*.cluster_database_instances=2
*.cluster_database=true
*.remote_listener='LISTENERS_PROD'
prod1.thread=1
prod1.instance_number=1
prod1.undo_tablespace='UNDOTBS1'
prod2.thread=2
prod2.instance_number=2
prod2.undo_tablespace='UNDOTBS2'
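The remote_listener parameter references a LISTENERS_PROD alias that must resolve in tnsnames.ora on both nodes. A minimal entry of the kind this setup needs might look like the following; the VIP host names rac1-vip and rac2-vip and port 1521 are assumptions here, so substitute the values from your own listener configuration:

```
LISTENERS_PROD =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
  )
```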
Task 12 (Create RAC Data Dictionary Views)
Create the RAC data dictionary views on the first RAC instance.
SQL> !echo $ORACLE_SID
prod1
SQL> spool /tmp/catclust.log
SQL> @$ORACLE_HOME/rdbms/admin/catclust
...
...
SQL> spool off
SQL> shutdown immediate;
Task 13 (Register the RAC instances with CRS)
On node 1 (RAC1):
$ srvctl add database -d prod -o $ORACLE_HOME
$ srvctl add instance -d prod -i prod1 -n rac1
91
$ srvctl add instance -d prod -i prod2 -n rac2
$ srvctl start instance -d prod -i prod1
Task 14 (Create redo logs for the second RAC instance)
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter database add logfile thread 2 group 3 size 100M;
SQL> alter database add logfile thread 2 group 4 size 100M;
SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     1  +DATA/prod/onlinelog/group_1.265.3
     1  +DATA/prod/onlinelog/group_1.257.1
     2  +DATA/prod/onlinelog/group_2.266.3
     2  +DATA/prod/onlinelog/group_2.258.1
     3  +DATA/prod/onlinelog/group_3.268.1
     3  +DATA/prod/onlinelog/group_3.259.1
     4  +DATA/prod/onlinelog/group_4.269.1
     4  +DATA/prod/onlinelog/group_4.260.1
8 rows selected.
SQL> alter database enable thread 2;
Database altered.
Task 15 (Create undo tablespace for the second RAC instance)
SQL> create undo tablespace UNDOTBS2 datafile size 200M;
SQL> select tablespace_name, file_name from dba_data_files where tablespace_name='UNDOTBS2';

TABLESPACE    FILE_NAME
------------  --------------------------------------
UNDOTBS2      +DATA/prod/datafile/undotbs2.270.1
Task 16 (Start up the second RAC instance)
$ srvctl start instance -d prod -i prod2
$ crs_stat -t
$ srvctl status database -d prod
Instance prod1 is running on node rac1
92
Instance prod2 is running on node rac2
$ srvctl stop database -d prod
$ srvctl start database -d prod
$ sqlplus system/system@prod1
SQL> select instance_number instance#, instance_name, host_name, status from gv$instance;

INSTANCE#  INSTANCE_NAME  HOST_NAME  STATUS
---------  -------------  ---------  ------
        1  prod1          rac1       OPEN
        2  prod2          rac2       OPEN
Task 17 (Setup Test Transport Failover) Go to page 53
Congratulations, you have converted your single-instance database to RAC.
Method: “rconfig”
rconfig is a new utility in 10gR2, located in $ORACLE_HOME/bin.
It uses a file called $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml which you need to edit in order to adjust some parameters as needed and then save it under a new name.
This project is structured into the following steps:

1. Install Oracle Clusterware.
2. Install the Oracle Database software.
3. Copy the sample ConvertToRAC.xml file from the $ORACLE_HOME/assistants/rconfig/sampleXMLs directory to a temp directory.
4. Modify the sample XML file and verify that it is correct.
5. Execute the modified XML file using the rconfig utility.
Here's an overview of our single-instance database environment before converting to RAC:
Host Name    Instance Name    Database Name    Database File Storage
RAC1         PROD1            PROD             OS file system
And an overview of the RAC database environment:
93
Host Name    Instance Name    Database Name    Database File Storage    OCR/Voting Disk
RAC1         PROD1            PROD             ASM                      RAW
RAC2         PROD2            PROD             ASM                      RAW
Task 1 (Create Shared Storage and Configure) Go to Page 15
Task 2 (Create and Configure the Second Virtual Machine) Go to page 16
Task 3 (Add Ethernet Adapter for Private Network in Both Machines) Go to page 16
Task 4 (Prepare Disk for OCR, Voting and ASM Storage) Go to page 17
Task 5 (Install Oracle Clusterware) Go to page 24
Task 6 (Install Oracle 10gR2 Software/binary) Go to page 28
Task 8 (Create and Configure ASM Instance and ASM DISK Groups) Go to page 36
Task 10 (Migrate Database to RAC)
Step 1 Copy the sample XML file to a temp location.

$ cp /export/home/oracle/oracle/10.2.0/db/assistants/rconfig/sampleXMLs/ConvertToRAC.xml /temp
Step 2 Modify the copied sample file.

Modify the below-mentioned parameters:
1. Set Convert verify="ONLY" before carrying out the actual conversion. This performs a test run that validates the parameters and flags any incorrect settings or issues that must be resolved before the final conversion takes place. The verify attribute accepts three values:
Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC conversion have been met before it starts conversion
94
Convert verify="NO": rconfig does not perform prerequisite checks, and starts the conversion.

Convert verify="ONLY": rconfig only performs prerequisite checks; it does not start the conversion after completing the prerequisite checks.
2. Specify the 'SourceDBHome' variable in ConvertToRAC.xml as the non-RAC Oracle home (e.g., the $OLD_ORACLE_HOME path). If you are using the same RAC Oracle home for the non-RAC instance, specify the RAC $ORACLE_HOME instead.
3. Specify the 'TargetDBHome' variable in ConvertToRAC.xml as the RAC Oracle home (e.g., the $ORACLE_HOME path).
4. Specify the Database SID [PROD]
5. Specify the SYS password for Database instance
6. Specify the SYS password for ASM instance
7. Remove any additional nodes and specify the correct node names.
8. Specify the Instance Prefix you want to use like PROD in our case
9. Specify any change in Network Port.
10. Specify the ASM DiskGroup that will be used
11. Execute rconfig using the following syntax:
$ rconfig <path to rconfig xml file >
This rconfig run will:
• Migrate the database to ASM storage (only if ASM is specified as the storage option in the configuration XML file above)
• Create database instances on all nodes in the cluster
• Configure listener and NetService entries
• Configure and register CRS resources
• Start the instances on all nodes in the cluster
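For orientation, the edits in the steps above land in an XML structure of roughly the following shape. This is a sketch from memory of the 10gR2 sample file, not a verbatim copy: element names may differ slightly between versions, and the paths, passwords, node names, and disk groups shown are placeholders for this guide's environment. Always start from the shipped ConvertToRAC.xml rather than this sketch:

```
<n:RConfig xmlns:n="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <n:Convert verify="ONLY">
      <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
      <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
      <n:SourceDBInfo SID="PROD">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>sys</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <n:NodeList>
        <n:Node name="rac1"/>
        <n:Node name="rac2"/>
      </n:NodeList>
      <n:InstancePrefix>PROD</n:InstancePrefix>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+RECOVERY</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>
```

Once a run with verify="ONLY" completes cleanly, change the attribute to verify="YES" and rerun rconfig to perform the actual conversion.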
Task 17 (Setup Test Transport Failover) Go to page 53
Congratulations, you have converted your single-instance database to RAC.
Method: “DBCA”
95
We can use DBCA to convert from single-instance Oracle databases to RAC. DBCA automates the configuration of the control file attributes, creates the undo tablespaces and the redo logs, and makes the initialization parameter file entries for cluster-enabled environments. It also configures Oracle Net Services, Oracle Clusterware resources, and the configuration for RAC database management for use by Oracle Enterprise Manager or the SRVCTL utility.
Task 1 (Create Shared Storage and Configure) Go to Page 15
Task 2 (Create and Configure the Second Virtual Machine) Go to page 16
Task 3 (Add Ethernet Adapter for Private Network in Both Machines) Go to page 16
Task 4 (Prepare Disk for OCR, Voting and ASM Storage) Go to page 17
Task 5 (Install Oracle Clusterware) Go to page 24
Task 6 (Install Oracle 10gR2 Software/binary) Go to page 28
Task 8 (Create and Configure ASM Instance and ASM DISK Groups) Go to page 36
Step 1 Create a template from the current database.
Go to the standalone database server and start DBCA. At the Welcome page, click Next. On the Operations page, select Manage Templates, and click Next. On the Template Management page, select Create a database template and From an existing database (structure as well as data), and click Next.
On the Source Database page, select the database name in the Database instance field, and click Next.
On the Template Properties page, enter a name for your template in the Name field.
By default, the template files are generated in the directory $ORACLE_HOME/assistants/dbca/templates. Click Next.
On the Location of Database Related Files page, choose Convert the file location to use OMF structure, and click Finish.
96
DBCA generates three files: a database structure file (template_name.dbc), a database preconfigured image file (template_name.dfb), and a database control file (template_name.ctl).
Step 2 Copy the template files to the target system.

Copy template_name.dbc, template_name.ctl, and template_name.dfb to the target system's default template location.
Step 3 Run DBCA on the Oracle RAC node (the target system).

On the DBCA Template Selection page, use the template that you copied to a temporary location.
Managing OCR and Voting Disk
OCR: Oracle Cluster Registry (OCR) stores the cluster configuration information and database configuration information, such as the cluster node list, the cluster database instance-to-node mapping, and CRS application resource profiles.
The OCR location is specified during CRS installation. The ocr.loc file indicates the OCR device location.
97
The ocr.loc file is located in /etc/oracle on Linux systems and in /var/opt/oracle on Solaris systems.

The OCR is created on shared disk storage that must be accessible to all cluster nodes.

The CRSD daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.
How do you check the health of the OCR? Use the "ocrcheck" utility.
How do you take a backup of the OCR? There are two methods: the first uses automatically generated backups, and the second uses manually created logical OCR export files.
Automatically:

- Oracle automatically takes a backup of the OCR to a default location every four hours, and always retains the last three backup copies of the OCR. The default location is $CRS_HOME/cdata/cluster_name, where cluster_name is the name of your cluster.
- We can change the default OCR backup location by using the ocrconfig command (example: $ ocrconfig -backuploc <location>).
- The CRSD process also creates and retains an OCR backup for each full day and at the end of each week.
- We cannot customize the backup frequencies or the number of files that Oracle retains.
Manually:

We can take an export backup of the OCR after making changes by using the ocrconfig command (example: ocrconfig -export <location>).
How do you recover the OCR? In the event of a failure, before you attempt to restore the OCR, ensure that the OCR is unavailable.
Run the following command to check the status of the OCR: ocrcheck
If this command does not display the message 'Device/File integrity check succeeded' for at least one copy of the OCR, then both the primary OCR and the OCR mirror have failed. You must restore the OCR from a backup.
Restoring the OCR from Automatically Generated OCR Backups:
Step 1 Identify the available OCR backups using the ocrconfig command:
98
# ocrconfig -showbackup
Step 2 Review the contents of the backup using the following ocrdump command, where file_name is the name of the OCR backup file:
$ ocrdump -backupfile file_name
Step 3 As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC cluster by executing the following command:
# crsctl stop crs
Step 4 Repeat this command on each node in your Oracle RAC cluster.
Step 5 As the root user, restore the OCR by applying an OCR backup file.
# ocrconfig -restore file_name
Step 6 As the root user, restart Oracle Clusterware on all the nodes in your cluster by restarting each node, or by running the following command:
# crsctl start crs
Repeat this command on each node in your Oracle RAC cluster.
Step 7 Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Run the following command, where the -n all argument retrieves a list of all the cluster nodes that are configured as part of your cluster:
$ cluvfy comp ocr -n all [-verbose]
Recovering the OCR from an OCR Export File:
We use the ocrconfig -import command to restore the OCR.
Step 1 Log in as the root user and stop Oracle Clusterware on all nodes.
Step 2 Restore the OCR data by importing the contents of the OCR export file using the following command, where file_location is the location of the export file:
ocrconfig -import <file_location>
Step 3 Start oracle Clusterware on all nodes.
crsctl start crs
99
Step 4 Use the CVU to verify the OCR integrity.
cluvfy comp ocr -n all [-verbose]
How to Add an OCR Location
You can add an OCR location after an upgrade or after completing the Oracle RAC installation. If you already mirror the OCR, then you do not need to add an OCR location; Oracle Clusterware automatically manages two OCRs when you configure normal redundancy for the OCR. Oracle RAC environments do not support more than two OCRs, a primary OCR and a secondary OCR.
Run the following command to add an OCR location:
ocrconfig -replace ocr <disk location>
Run the following command to add an OCR mirror location:
ocrconfig -replace ocrmirror <disk_location>
How to Replace an OCR
If you need to change the location of an existing OCR, or change the location of a failed OCR to the location of a working one, you can use the following procedure as long as one OCR file remains online.
Step 1 Use the OCRCHECK utility to verify that a copy of the OCR other than the one you are going to replace is online using the following command:
ocrcheck
Step 2 Verify that Oracle Clusterware is running on the node on which you are going to perform the replace operation using the following command:
crsctl check crs
Step 3 Run the following command to replace the OCR:
ocrconfig -replace ocr <destination_location>
Run the following command to replace an OCR mirror location:
ocrconfig -replace ocrmirror <destination_location>
How to Repair an Oracle Cluster Registry Configuration on a Local Node
You may need to repair an OCR configuration on a particular node if your OCR configuration changes while that node is stopped. For example, you may need to repair the OCR on a node that was shut down while you were adding, replacing, or removing an OCR. To repair an OCR configuration, run the following command on the node on which you have stopped the Oracle Clusterware daemon:
ocrconfig -repair ocrmirror <device_name>
How to Remove an Oracle Cluster Registry
To remove an OCR location, at least one OCR must be online. You can remove an OCR location to reduce OCR-related overhead, or to stop mirroring your OCR because you moved the OCR to a redundant storage system, such as a redundant array of independent disks (RAID).
To remove an OCR location from your Oracle RAC environment:
Step 1 Use the OCRCHECK utility to ensure that at least one OCR other than the OCR that you are removing is online.
ocrcheck
Step 2 Run the following command on any node in the cluster to remove one copy of the OCR:
ocrconfig -replace ocr
This command updates the OCR configuration on all the nodes on which Oracle Clusterware is running.
Voting Disks
The voting disk records node membership information. A node must be able to access more than half of the voting disks at any time.
Backing up Voting Disks
The node membership information does not usually change; you do not need to back up the voting disk every day. However, back up the voting disks at the following times:
After installation
After adding nodes to or deleting nodes from the cluster
After performing voting disk add or delete operations
How to take a backup?
dd if=/dev/rdsk/c0d1s1 of=/tmp/voting.dmp
When you use the dd command for making backups of the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.
Recovering Voting Disks
If a voting disk is damaged, and no longer usable by Oracle Clusterware, you can recover the voting disk if you have a backup file. Run the following command to recover a voting disk where backup_file_name is the name of the voting disk backup file and voting_disk_name is the name of the active voting disk:
dd if=backup_file_name of=voting_disk_name
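The dd backup/restore round trip above can be rehearsed safely on an ordinary file standing in for the raw voting-disk device. The file names below (votedisk.img, voting.dmp) are hypothetical stand-ins, not real device paths:

```shell
# Stand-in file plays the role of the raw voting-disk device
printf 'node membership data' > votedisk.img

# Back up with dd, exactly as for the real raw device
dd if=votedisk.img of=voting.dmp 2>/dev/null

# Simulate damage to the voting disk, then recover from the backup file
printf 'garbage' > votedisk.img
dd if=voting.dmp of=votedisk.img 2>/dev/null

# Show the restored contents
cat votedisk.img
```

On the real cluster, if= and of= would point at the raw device and the backup file from the commands above; the mechanics are identical.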
Adding and Removing Voting Disks
You can dynamically add and remove voting disks after installing Oracle RAC. Do this using the following commands where path is the fully qualified path for the additional voting disk. Run the following command as the root user to add a voting disk:
crsctl add css votedisk path
Run the following command as the root user to remove a voting disk:
crsctl delete css votedisk path
To list the voting disks currently in use:
crsctl query css votedisk
Administering Cluster Ready Services (CRS)
We use the Cluster Control Utility (CRSCTL) to perform administrative operations on Oracle Clusterware. It is located in $CRS_HOME/bin and must be executed by the root user.
1. To check the current state of all Oracle Clusterware daemons:
$ ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
2. You can also check the state of each Oracle Clusterware daemon individually:
$ ./crsctl check cssd
CSS appears healthy
$ ./crsctl check crsd
CRS appears healthy
$ ./crsctl check evmd
EVM appears healthy
3. To start Oracle Clusterware:
$ ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
4. To stop Oracle Clusterware:
$ ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
5. To disable Oracle Clusterware:
# ./crsctl disable crs
6. To enable Oracle Clusterware:
# ./crsctl enable crs
7. To list the modules available for debugging in CSS:
$ ./crsctl lsmodules css
The following are the CSS modules :
CSSD
COMMCRS
COMMNS
8. CRS_STAT: Reports the current state of the resources configured in the OCR.
$ ./crs_stat -t
Name           Type         Target   State    Host
----------------------------------------------------
ora....C1.inst application  ONLINE   ONLINE   rac1
ora....C2.inst application  ONLINE   ONLINE   rac2
ora....AC1.srv application  ONLINE   ONLINE   rac1
ora.RAC.abc.cs application  ONLINE   ONLINE   rac1
ora.RAC.db     application  ONLINE   ONLINE   rac2
ora....AC1.srv application  ONLINE   ONLINE   rac1
ora....ice2.cs application  ONLINE   ONLINE   rac1
ora....AC1.srv application  ONLINE   ONLINE   rac1
9. CRS_STOP: This command is used to stop a resource or cluster member.
$ ./crs_stop ora.rac1.ons
Attempting to stop `ora.rac1.ons` on member `rac1`
Stop of `ora.rac1.ons` on member `rac1` succeeded.
10. CRS_START: This command is used to start a resource or cluster member.
$ ./crs_start ora.rac1.ons
Attempting to start `ora.rac1.ons` on member `rac1`
Start of `ora.rac1.ons` on member `rac1` succeeded.
11. OCRCHECK: Verifies the integrity of the OCR.
$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          2
Total space (kbytes)     :    5237072
Used space (kbytes)      :       9360
Available space (kbytes) :    5227712
ID                       :  794527192
Device/File Name         : /apps/oracle/oradata/ocr
Device/File integrity check succeeded
Cluster registry integrity check succeeded
12. OLSNODES: Check that all nodes have joined the cluster.
$ ./olsnodes
13. OIFCFG: The Oracle Interface Configuration utility.
$ ./oifcfg getif
This command should return values for global public and global cluster_interconnect.
If the command does not return a value for global cluster_interconnect, enter the following commands to delete and set the desired interfaces:
$ ./oifcfg delif -global
$ ./oifcfg setif -global <interface name>/<subnet>:public
$ ./oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
Cluster Name Check Utility
This utility prints the cluster name information:
$ ./cemutlo -n -w
Administering Services
The following tools are available for administering services:
DBCA
OEM
DBMS_SERVICE
Server Control Utility (SRVCTL)
Here we will discuss only the Server Control Utility (SRVCTL).
We can use SRVCTL to add, start, stop, enable, disable, and remove instances and services.
Command Syntax:
srvctl add
The SRVCTL add command adds configuration information to the OCR.
Add database:  srvctl add database -d <db name> -o <oracle home>
Add instance:  srvctl add instance -d <db name> -i <instance name> -n <node name>
Add nodeapps:  srvctl add nodeapps -n <node name> -o <oracle home> -A <VIP>/255.255.255.0
Add asm:       srvctl add asm -n <node name> -i <ASM instance> -o <oracle home>
srvctl config
The SRVCTL config command displays the configuration stored in the OCR.
Config database:  srvctl config database -d <db name>
Config nodeapps:  srvctl config nodeapps -n <node name>
Config asm:       srvctl config asm -n <node name>
Config listener:  srvctl config listener -n <node name>
srvctl start
Start database:  srvctl start database -d <db name> -o open
Start instance:  srvctl start instance -d <db name> -i <instance name>
Start nodeapps:  srvctl start nodeapps -n <node name>
Start asm:       srvctl start asm -n <node name> -i <ASM instance>
Start listener:  srvctl start listener -n <node name>

srvctl stop
Stop database:  srvctl stop database -d <db name>
Stop instance:  srvctl stop instance -d <db name> -i <instance name>
Stop nodeapps:  srvctl stop nodeapps -n <node name>
Stop asm:       srvctl stop asm -n <node name> -i <ASM instance>
Stop listener:  srvctl stop listener -n <node name>
srvctl status
Status database:  srvctl status database -d <db name> -v
Status instance:  srvctl status instance -d <db name> -i <instance name> -v
Status nodeapps:  srvctl status nodeapps -n <node name>
Status asm:       srvctl status asm -n <node name>
srvctl remove
Remove database:  srvctl remove database -d <db name>
Remove instance:  srvctl remove instance -d <db name> -i <instance name>
Remove nodeapps:  srvctl remove nodeapps -n <node name>
Remove asm:       srvctl remove asm -n <node name> -i <ASM instance>
Remove listener:  srvctl remove listener -n node1 -l lsnr01
Alternate method to remove a listener by using crs_unregister:
1. $ crs_stat | grep NAME\= | grep lsnr
NAME=ora.rac1.LISTENER_RAC1.lsnr
NAME=ora.rac2.LISTENER_RAC2.lsnr
2. $ crs_unregister ora.rac1.LISTENER_RAC1.lsnr
   $ crs_unregister ora.rac2.LISTENER_RAC2.lsnr
srvctl enable
Enable database:  srvctl enable database -d <db name>
Enable instance:  srvctl enable instance -d <db name> -i <instance name>
Enable asm:       srvctl enable asm -n <node name> -i <ASM instance>

srvctl disable
Disable database:  srvctl disable database -d <db name>
Disable instance:  srvctl disable instance -d <db name> -i <instance name>
Disable asm:       srvctl disable asm -n <node name> -i <ASM instance>
Managing UNDO, Temporary and Redo Logs in the RAC Environment
Managing UNDO in the RAC Environment
In an Oracle RAC environment, each instance stores its transaction undo data in a dedicated undo tablespace. We set the undo tablespace for each instance with the instance-specific undo_tablespace parameter; undo_management must be the same across all instances.
Example:
<Node 1 instance name>.undo_tablespace=undo_tbs1
<Node 2 instance name>.undo_tablespace=undo_tbs2
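Generating these per-instance parameter lines can be scripted. A minimal sketch, assuming hypothetical instance names prod1 and prod2 and an output file name undo_params.ora of my choosing:

```shell
# Hypothetical instance names; real ones come from your cluster configuration
instances="prod1 prod2"

n=1
for inst in $instances; do
  # one instance-specific undo_tablespace line per instance
  echo "${inst}.undo_tablespace=undo_tbs${n}"
  n=$((n + 1))
done > undo_params.ora

cat undo_params.ora
```

The generated lines can then be appended to the PFILE, or applied with ALTER SYSTEM ... SID='<instance>' when using an SPFILE.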
Managing Temporary Tablespace
In a RAC environment, a user always uses the same assigned temporary tablespace irrespective of the instance being used. Each instance creates a temporary segment in the temporary tablespace it is using. If an instance runs a big sort operation that requires a large temporary tablespace, it can reclaim the space used by other instances' temporary segments in that tablespace.
Main points:
All instances share the same temporary tablespace.
Its size should be at least equal to the maximum concurrent requirement of all instances.
Administering Online Redo Logs
Each instance has exclusive write access to its own online redo log files. An instance can read another instance's current online redo log files to perform instance recovery if that instance has terminated abnormally. Online redo log files need to be located on a shared storage device and cannot be on a local node.
How to Enable Archiving in the RAC Environment
Step 1 Log in to node 1.
Step 2 Set cluster_database=false in the parameter file.
Step 3 Shut down all the instances.
$ srvctl stop database -d <db name>
Step 4 Mount the database.
SQL> startup mount
Step 5 Enable archiving.
SQL> alter database archivelog;
Step 6 Change cluster_database=true in the parameter file.
Step 7 Shut down the instance.
SQL> shutdown immediate
Step 8 Start all the instances.
$ srvctl start database -d <db_name>
How to Enable Flashback in the RAC Environment
Step 1 Log in to node 1.
Step 2 Verify that the database is running in archive log mode.
Step 3 Set the parameter cluster_database=false.
SQL> alter system set cluster_database=false scope=spfile sid='prod1';
Step 4 Set the parameters DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST.
SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=200M scope=spfile;
SQL> alter system set DB_RECOVERY_FILE_DEST='/dev/rdsk/c0d3s1' scope=spfile;
Step 5 Shut down all instances.
# srvctl stop database -d <db_name>
Step 6 Mount the database.
SQL> startup mount
Step 7 Enable flashback.
SQL> alter database flashback on;
Step 8 Set the parameter cluster_database=true.
SQL> alter system set cluster_database=true scope=spfile sid='prod1';
Step 9 Shut down the instance.
SQL> shutdown
Step 10 Start all instances.
$ srvctl start database -d <db_name>
De-Installing Oracle Real Application Clusters Software and Database
Step 1 Examine the oratab file (/var/opt/oracle/oratab on Solaris) to identify the instance dependencies on this Oracle home.
Step 2 Start DBCA, select Oracle Real Application Clusters Database, select Delete a database, and select the database that you want to delete. Repeat this step to delete all databases.
Step 3 Connect to the ASM instance and run the following commands:
a) Check disk group which are using by ASM instance:
SQL> select * from V$ASM_DISKGROUP;
b) Drop the disk group.
SQL> drop diskgroup <diskgroup_name> including contents;
c) Shut down ASM on all RAC nodes, and verify that all ASM instances are stopped.
d) To remove the ASM entry from the OCR, run the following command for all nodes on which this Oracle home exists:
srvctl remove asm -n nodename
Step 4 Remove the Oracle home (on all nodes)
Go to $ORACLE_HOME/oui/bin and execute ./runInstaller to start OUI, then remove any existing Oracle Database 10g with RAC software by selecting Deinstall Products and selecting the Oracle home that you want to remove.
Step 5 Remove oratab entries for the deleted Oracle home databases.
Step 6 De-Install Oracle Clusterware
a) Go to the $CRS_HOME/install directory and execute the rootdelete.sh script (on all nodes):
# /export/home/oracle/product/10.2.0/crs/install/rootdelete.sh
This will disable the Oracle Clusterware applications that are running on the cluster node.
b) Run the script CRS_home/install/rootdeinstall.sh on a local node to remove the OCR.
c) Go to $CRS_HOME/oui/bin and execute ./runInstaller to start OUI. In the Welcome page, click Deinstall Products to display the list of installed products, then select the Oracle Clusterware home to de-install (execute on all nodes).
De-Installing RAC Components after a Failed Installation
Step 1
Go to the $CRS_HOME/install directory and execute the rootdelete.sh script (on all nodes):
# /export/home/oracle/product/10.2.0/crs/install/rootdelete.sh
This will disable the Oracle Clusterware applications that are running on the cluster node.
Step 2
Go to the $CRS_HOME/install directory and execute rootdeinstall.sh on a local node to remove the OCR (on one node only).
Step 3
Start OUI and Select the Oracle Clusterware home to de-install.
Adding a Node to a 10gR2 RAC Cluster
Step 1 Preparing Access to the New Node
Add the public and private node names for the new node to the /etc/hosts file on the existing nodes RAC1 and RAC2, and also on the new node RAC3.
Verify that the new node can be accessed (using the ping command) from the existing nodes.
Configure SSH on the new node.
Run the following command on either RAC1 or RAC2 to verify that the new node has been properly configured:
cluvfy stage -pre crsinst -n rac3
Step 2 Adding an Oracle Clusterware Home to a New Node (RAC3) Using OUI
Log in as the oracle user on node 1 (RAC1).
Go to $ORACLE_CRS_HOME/oui/bin, execute ./addNode.sh, and click Next. The Oracle Universal Installer (OUI) displays the Node Selection page.
Enter the node that you want to add, verify the entries that OUI displays on the Summary page, and click Next.
Monitor the progress of copying the CRS home to the new node and verify the total size of the CRS directory.
Verify that CRS is started on the new node and that the nodeapps are started except for the listener, and then exit OUI.
Obtain the remote port identifier, which you need to know for the next step, by running the following command on an existing node from the $ORACLE_CRS_HOME/opmn/conf directory:
$ cat $ORA_CRS_HOME/opmn/conf/ons.config
localport=6113
remoteport=6201
loglevel=3
useocr=on
From the $ORACLE_CRS_HOME/bin directory on an existing node (RAC1), run the Oracle Notification Service (RACGONS) utility as in the following example, where remote_port is the port number from the previous step and rac3 is the name of the node that you are adding:
./racgons add_config <new_node>:<remote_port>
Example:
$ ./racgons add_config rac3:6201
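Reading the remote port out of ons.config can be automated rather than copied by hand. A small sketch, recreating a sample ons.config with the values shown above and extracting the port with awk (the file name in the current directory is a stand-in for $ORA_CRS_HOME/opmn/conf/ons.config):

```shell
# Recreate a sample ons.config with the values shown above
cat > ons.config <<'EOF'
localport=6113
remoteport=6201
loglevel=3
useocr=on
EOF

# Extract the remote port, then build the racgons command line
remote_port=$(awk -F= '$1 == "remoteport" {print $2}' ons.config)
echo "./racgons add_config rac3:${remote_port}"
```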
Step 3 Adding an Oracle Home to a New Node Using OUI
1. Log in as the oracle user on node 1 (RAC1).
2. Go to $ORACLE_HOME/oui/bin, execute ./addNode.sh, and click Next.
3. When OUI displays the Node Selection page, select the node to be added and click Next.
4. Verify the entries that OUI displays on the Summary page and click Next.
Step 4 Configure the listener on the new node
Log in as the oracle user on node 1 (RAC1).
Start the Oracle Net Configuration Assistant by entering netca at the system prompt from the $ORACLE_HOME/bin directory.
Select Listener configuration, and click Next.
Select Add to create a new listener, then click Next.
Accept the default value of LISTENER for the listener name by clicking Next.
Choose TCP and move it to the Selected Protocols area, then click Next.
Choose Use the standard port number of 1521, then click Next.
Select Cluster configuration for the type of configuration to perform, then click Next.
Select the name of the node you are adding, for example RAC3, then click Next.
NETCA creates a listener using the configuration information provided. You can now exit NETCA.
Step 5 Add an ASM instance on the new node
1. Add an ASM entry in /var/opt/oracle/oratab (on Solaris) or /etc/oratab (on Linux) on the new node:
+ASM3:/export/home/oracle/oracle/product/10.2.0/db_1:N
2. Copy init+ASM1 to init+ASM3.
3. Create the admin directories for the ASM instance.
4. Add the new ASM instance parameter on all RAC nodes:
+ASM3.instance_number=3
5. Add ASM to the cluster:
srvctl add asm -n rac3 -i +ASM3 -o /export/home/oracle/oracle/product/10.2.0/db_1
6. Start the ASM instance:
srvctl start asm -n rac3
Step 6 Creating a Database Instance
Log in as the oracle user on the new node (RAC3).
Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.
Select Oracle Real Application Clusters database, and then click Next.
Select Instance Management, and then click Next.
Select Add an Instance, then click Next.
In the List of Cluster Databases window, select the active Oracle RAC database to which you want to add an instance, for example PROD. Enter the user name and password for the database user that has SYSDBA privileges. Click Next.
You will see a list of the existing instances. Click Next, and on the following screen enter PROD3 as the instance name and choose RAC3 as the node name.
This will create the database instance PROD3 (on RAC3); click Next in the Database Storage screen. During creation, you will be asked whether the ASM instance should be extended to RAC3; choose Yes.
You should now have a new cluster database instance and ASM instance running on the new node. After you terminate your DBCA session, run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges:
CRS_home/bin/cluvfy comp admprv -o db_config -d oracle_home -n rac3 -verbose
Removing a Node from a 10gR2 RAC Cluster
Step 1 Deleting the Database Instance
Log in as the oracle user on the node to be deleted (RAC3).
Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.
Select Oracle Real Application Clusters database, and then click Next.
Select Instance Management, and then click Next.
Select Delete an Instance, then click Next.
Select the database name, then choose the instance to delete and confirm the deletion.
Step 2 Clean up the ASM
Log in on node 1 (RAC1).
Stop the ASM services for node 3 (RAC3):
$ srvctl stop asm -n rac3
Remove the ASM service:
$ srvctl remove asm -n rac3
Remove the ASM directory structure of the node to be deleted.
Edit /var/opt/oracle/oratab (on Solaris) or /etc/oratab (on Linux) and remove the ASM instance references.
Step 3 Remove the listener from the node to be deleted

Log in as the oracle user on the node to be deleted (RAC3).
Start NETCA by entering netca at the system prompt from the $ORACLE_HOME/bin directory.
Choose Cluster Management.
Choose Listener.
Choose Remove.
Confirm deletion of LISTENER.

Step 4 Remove the node from the database

1. Log in as the oracle user on the node to be deleted (RAC3). Go to $ORACLE_HOME/oui/bin and execute:
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=(rac3)" -local
$ ./runInstaller
Choose Deinstall Products and select the database home. This removes the database home software and leaves behind only some files and directories.
2. Log in as the oracle user on node 1 (RAC1). Go to $ORACLE_HOME/oui/bin and execute:
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=(rac1,rac2)"

Step 5 Remove RAC3 from the Clusterware

1. Log in on the RAC1 node. Obtain the remote port identifier by running the following command from the $ORACLE_CRS_HOME/opmn/conf directory:
$ cat $ORA_CRS_HOME/opmn/conf/ons.config
localport=6113
remoteport=6201
loglevel=3
useocr=on
2. Run the following command:
$ORACLE_CRS_HOME/bin/racgons remove_config rac3:6201
3. Log in as the root user on RAC3 and execute:
$ cd $ORACLE_CRS_HOME/install
# ./rootdelete.sh
4. Run the following from the RAC1 node:
$ORACLE_CRS_HOME/bin/olsnodes -n
5. Run the following from the RAC3 node:
$ cd $ORACLE_CRS_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_CRS_HOME "CLUSTER_NODES=(rac3)" CRS=TRUE -local
$ ./runInstaller
Choose Deinstall Software and remove the CRS_HOME.
6. Run the following from node 1 (RAC1):
$ cd $ORACLE_CRS_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_CRS_HOME "CLUSTER_NODES=(rac1,rac2)" CRS=TRUE
RAC Load Balancing
In my opinion, there are 2 types of load balancing in a 10g RAC.
Client Side Load Balancing
Server Side Load Balancing
Client Side Load Balancing
In the client-side load balancing method, when a user attempts to connect to the database, the connections are distributed across several listeners: the Oracle client randomly selects an address from the address list and connects to that node's listener.
Client-side load balancing is configured by adding LOAD_BALANCE=ON in the tnsnames.ora file.
How to Configure?
Suppose you have a 2-node 10g RAC cluster, and for load balancing you have the following entry in your tnsnames.ora file.
PROD =
  (description =
    (load_balance = on)
    (address = (protocol = tcp)(host = rac1-vip)(port = 1521))
    (address = (protocol = tcp)(host = rac2-vip)(port = 1521))
    (connect_data =
      (service_name = PROD)
    )
  )
Testing Client Side Load Balancing Behavior
It is quite simple to set up Oracle Net tracing on the client to test and show whether client-side load balancing is working properly. This type of load balancing has nothing to do with balancing server-side load; it balances load across the listeners.
If you want to trace the client-side load balancing behavior from the client side, enable Oracle Net tracing by adding the following lines to sqlnet.ora; you should then find trace files for each physical connection made under $ORACLE_HOME/network/trace.
TRACE_LEVEL_CLIENT = USER
TRACE_FILE_CLIENT = SQLTRC
Step 1 Log on to any client machine (here we use a UNIX-flavor client).
Step 2 Open the $ORACLE_HOME/network/admin/sqlnet.ora file and add the following lines.
TRACE_LEVEL_CLIENT = USER
TRACE_FILE_CLIENT = SQLTRC
Step 3 Execute the bash shell script below.
for a in {1..1000}
do
  echo $a
  sqlplus -s system/manager@PROD <<EOF
EOF
done
exit 0
Step 4 Examine the trace files. We can use a combination of grep and wc to see the result of load balancing. Each connection produced one trace file, so we have 1000 trace files for 1000 connections.
$ ls -l *.trc | wc -l
1000
$ grep nsc2addr *.trc | grep load_balance | grep rac1-vip | wc -l
498
$ grep nsc2addr *.trc | grep load_balance | grep rac2-vip | wc -l
502
We can see that of the 1000 connections, 498 were made to the rac1-vip listener, whilst 502 were made to the rac2-vip listener. It is a fairly even distribution.
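The counting pipeline above can be rehearsed without a database by fabricating a few trace files. A minimal sketch: the file names (sqltrc_*.trc) and line contents are hypothetical stand-ins for real Oracle Net client traces, here six connections to rac1-vip and four to rac2-vip:

```shell
# Fabricate ten fake client trace files: six go to rac1-vip, four to rac2-vip
i=0
while [ $i -lt 10 ]; do
  if [ $i -lt 6 ]; then vip=rac1-vip; else vip=rac2-vip; fi
  echo "nsc2addr: (description=(load_balance=on)(address=(host=$vip)))" \
    > "sqltrc_$i.trc"
  i=$((i + 1))
done

# Same counting pipeline as above (tr strips wc's padding on some systems)
c1=$(grep nsc2addr sqltrc_*.trc | grep load_balance | grep rac1-vip | wc -l | tr -d ' ')
c2=$(grep nsc2addr sqltrc_*.trc | grep load_balance | grep rac2-vip | wc -l | tr -d ' ')
echo "rac1-vip: $c1  rac2-vip: $c2"
```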
Server Side Load Balancing
In the server-side load balancing method, when users make a connection request, the listener directs the connection request to the best instance, based on load information provided by each database instance's PMON process.
To implement server-side load balancing, listeners must be configured on all nodes, and the REMOTE_LISTENER initialization parameter must be added to the database's PFILE or SPFILE so that the database knows to look up the value provided in that parameter in the database server's TNSNAMES.ORA.
If you want server side load balancing, REMOTE_LISTENER should point to all listeners on all nodes, otherwise don't set REMOTE_LISTENER.
The purpose of REMOTE_LISTENER is to connect all instances with all listeners so that the instances can propagate their load-balancing advisories to all listeners. If you connect to a listener, that listener uses the advisories to decide who should service your connection. If the listener decides its local instance(s) are least loaded and should service your connection, it passes your connection to the local instance. If the node you connected to is overloaded, the listener can use a TNS redirect to send your connection to a less loaded instance.
How to Setup Server Side Load Balancing
Requirements:
New entries in every client's TNSNAMES.ORA file for the new alias
New entries in the TNSNAMES.ORA file of every node in the cluster to include the REMOTE_LISTENER setting
The addition of the *.REMOTE_LISTENER parameter on all nodes in the cluster to force each node's listener to register with the others
Step 1 Add the entries below to each server's TNSNAMES.ORA file to enable server-side load balancing:
LISTENERS_PROD =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
  )
Step 2 Run this command to add the REMOTE_LISTENER initialization parameter to the common SPFILE for all nodes in the RAC clustered database:
ALTER SYSTEM SET REMOTE_LISTENER = LISTENERS_PROD SID='*' SCOPE=BOTH;
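When the node list grows, the LISTENERS_PROD address list can be generated rather than typed by hand. A small sketch; the VIP names and the output file name (listeners_prod.tns) are hypothetical, so substitute your own:

```shell
# Hypothetical VIP names; substitute your own node VIPs
vips="rac1-vip rac2-vip"

{
  echo "LISTENERS_PROD ="
  echo "  (ADDRESS_LIST ="
  for vip in $vips; do
    echo "    (ADDRESS = (PROTOCOL = TCP)(HOST = $vip)(PORT = 1521))"
  done
  echo "  )"
} > listeners_prod.tns

cat listeners_prod.tns
```

The generated fragment is then pasted into each server's TNSNAMES.ORA before running the ALTER SYSTEM command above.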
RAC Failover Case Study
Case: Test the failover of instance (PROD1) for a connected Oracle user in the event of an insert.
Action:
1. Connect to any Oracle instance.
C:\> sqlplus /nolog
SQL> conn scott/tiger@PROD

2. Check the connected instance.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
PROD1

3. Create table TEST.
SQL> create table test (col1 number);

4. Insert over 100,000 rows of data into the table.
SQL> declare
       x number;
     begin
       for x in 1..100000 loop
         insert into test values (x);
       end loop;
     end;
     /

5. Telnet into the node to which the SCOTT user is connected.
C:\> telnet 10.1.1.170

6. Terminate the PROD1 instance.
bash-3.00$ ps -ef | grep smon
oraprod 25829 16727 0 16:25:18 pts/1 0:00 grep smon
oraprod  1645     1 0 15:53:10 ?     0:01 ora_smon_PROD1
oraprod  6631     1 0 20:39:52 ?     0:04 asm_smon_+ASM1

Kill the PROD1 SMON process:
$ kill -9 1645

7. Check the user session; it should hang.

8. Now start the instance (PROD1).
$ srvctl start instance -d prod -i prod1
9. When the instance is up, the user terminal shows an error:
ERROR at line 1: ORA-25402: transaction must roll back

10. Roll back the transaction and continue work.

Conclusion:
The insert transaction is affected and does not fail over to the other node.
Case: Test the failover of instance (PROD1) for a connected Oracle user in the event of a delete or update.
The delete/update transaction is affected and does not fail over to the other node.
Case: Test the failover of instance (PROD1) for a connected Oracle user in the event of a select.
The select statement is not affected and fails over to the other node without any disruption.
Case: To test the failover of the private interconnect
Action:
1. Pull out one of the interconnect wires and ping the other node via the private address.
2. The ping should be successful, with return messages.
Case: To test the failover of the network card of the public IP address
Action:
1. Pull out one of the cables connecting the node to the public network.
2. Ping the node from the observation client.
3. The ping from the client should be able to go through.
Oracle RAC Log Directory
Each component in the CRS stack has its respective directories created under the CRS home:
The Cluster Ready Services Daemon (crsd) log files:
$CRS_HOME/log/<hostname>/crsd
Oracle Cluster Registry (OCR) log files:
$CRS_HOME/log/<hostname>/client
Cluster Synchronization Services (CSS) log files:
$CRS_HOME/log/<hostname>/cssd
Event Manager (EVM) log files:
$CRS_HOME/log/<hostname>/evmd
RACG log files:
$ORACLE_HOME/log/<hostname>/racg
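When chasing a problem across several daemons, it helps to print all of these directories at once. A minimal sketch; the CRS_HOME, ORACLE_HOME, and host values below are assumed examples, so adjust them to your installation:

```shell
# Assumed install locations and hostname; adjust to your environment
CRS_HOME=/u01/app/crs
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
HOST=rac1

# Build the per-component log directory paths listed above
for comp in crsd client cssd evmd; do
  echo "$CRS_HOME/log/$HOST/$comp"
done
echo "$ORACLE_HOME/log/$HOST/racg"
```

Piping the output through `xargs ls -lt` (on a real cluster) would then show which component logged most recently.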