Automatic Storage Management (ASM)
Automatic Storage Management (ASM) is Oracle's logical volume manager. It uses Oracle Managed
Files (OMF) to name and locate database files, and it can use raw disks, filesystems, or files that can
be made to look like disks, as long as the device is raw. ASM uses its own database instance to
manage the disks; this instance has its own processes and pfile or spfile, and it uses ASM disk groups
to manage disks as one logical unit.
Provides automatic load balancing over all the available disks, thus reducing hot spots in the file
system
Prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O
performance
Adding disks is straightforward: ASM automatically performs online disk reorganization
when you add or remove storage
Uses redundancy features available in intelligent storage arrays
The storage system can store all types of database files
Using disk groups makes configuration easier, as files are placed into disk groups
ASM provides striping and mirroring (fine and coarse grain, see below)
ASM and non-ASM Oracle files can coexist
ASM is free
The three components of ASM are:
ASM Instance
A special instance that does not have any data files; there is only one ASM instance per server, which manages all ASM files for each database. The instance looks after the disk groups and allows access to the ASM files. Databases access the files directly but use the ASM instance to locate them. If the ASM instance is shut down, the databases will either be shut down automatically or crash.
ASM Disk Groups Disks are grouped together via disk groups; these are very much like logical volumes.
ASM Files Files are stored in the disk groups and benefit from the disk group features, i.e. striping and mirroring.
ASM Summary
A database is allowed to have multiple disk groups
You can store all of your database files as ASM files
A disk group comprises a set of disk drives
ASM disk groups are permitted to contain files from more than one database
Files are always spread over every disk in an ASM disk group and belong to one disk group only
ASM allocates disk space in allocation units of 1MB
ASM Processes
There are a number of new processes that are started when using ASM; both the ASM instance and
the database instance start new processes.
ASM Instance
RBAL (rebalance master)
coordinates the rebalancing when a new disk is added or removed
ARB[1-9] (rebalance)
actually does the work requested by the RBAL process (up to nine of these)
Database Instance
RBAL opens and closes the ASM disks
ASMB connects to the ASM instance via a session and handles the communication between ASM and the RDBMS; requests include file creation, deletion and resizing, as well as various statistics and status messages.
ASM registers its name and disks with the RDBMS via the Cluster Synchronization Service (CSS).
This is why the Oracle cluster services must be running, even if the node and instance are not clustered.
The ASM instance must be in mount mode for an RDBMS to use it, and only the instance type is
required in its parameter file.
ASM Disk Groups
An ASM disk group is a logical volume that is created from the underlying physical disks. If storage
grows you simply add disks to the disk groups; the number of groups can remain the same.
ASM file management has a number of benefits over normal third-party LVMs:
performance
redundancy
ease of management
security
ASM Striping
ASM stripes files across all the disks within the disk group, thus increasing performance; each stripe
is called an 'allocation unit'. ASM offers two types of striping, depending on the type of
database file:
Coarse striping is used for datafiles and archive logs (1MB stripes)
Fine striping is used for online redo logs, controlfiles and flashback files (128KB stripes)
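The two stripe sizes above can be illustrated with a small round-robin sketch. This is an illustration of the concept only, not Oracle's internal algorithm; the function and disk names are made up:

```python
# Illustrative sketch (not Oracle internals): round-robin striping of a file
# across the disks of a disk group, using the stripe sizes quoted above.

STRIPE_SIZES = {"coarse": 1024 * 1024, "fine": 128 * 1024}  # bytes

def stripe_map(file_bytes, disks, kind="coarse"):
    """Return a list of (disk, offset_within_file, length) stripes."""
    size = STRIPE_SIZES[kind]
    stripes = []
    offset = 0
    i = 0
    while offset < file_bytes:
        length = min(size, file_bytes - offset)
        stripes.append((disks[i % len(disks)], offset, length))
        offset += length
        i += 1
    return stripes

# A 4 MB file coarse-striped over three disks lands on disk1, disk2, disk3,
# then wraps back to disk1 for the fourth allocation unit.
layout = stripe_map(4 * 1024 * 1024, ["disk1", "disk2", "disk3"])
```

With fine striping the same file would be cut into 128 KB pieces, so every disk in the group receives many more, smaller stripes.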
ASM Mirroring
Disk mirroring provides data redundancy, this means that if a disk were to fail Oracle will use the
other mirrored disk and would continue as normal. Oracle mirrors at the extent level, so you have a
primary extent and a mirrored extent. When a disk fails, ASM rebuilds the failed disk using mirrored
extents from the other disks within the group, this may have a slight impact on performance as the
rebuild takes place.
All disks that share a common controller are in what is called a failure group. You can ensure
redundancy by mirroring disks on separate failure groups, which in turn are on different controllers;
ASM will ensure that the primary extent and the mirrored extent are not in the same failure group.
If you do not define failure groups when mirroring, each disk forms its own failure group.
There are three forms of Mirroring
External redundancy - doesn't have failure groups and is effectively a no-mirroring
strategy
Normal redundancy - provides two-way mirroring of all extents in a disk group, which
requires at least two failure groups
High redundancy - provides three-way mirroring of all extents in a disk group, which
requires at least three failure groups
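The practical effect of the three redundancy levels on usable capacity is simple arithmetic: with n-way mirroring of every extent, usable space is roughly the raw space divided by n. A minimal sketch (the function name and example sizes are illustrative, not an Oracle formula; real overhead is slightly higher because of metadata):

```python
# Rough capacity arithmetic for the three redundancy levels: each extent is
# written n times, so usable space is approximately raw space / n.

MIRROR_COPIES = {"external": 1, "normal": 2, "high": 3}

def usable_mb(raw_mb, redundancy):
    """Approximate usable megabytes for a disk group of raw_mb raw capacity."""
    return raw_mb // MIRROR_COPIES[redundancy]

# Example: a 300 GB disk group under each redundancy level.
for level in ("external", "normal", "high"):
    print(level, usable_mb(300 * 1024, level), "MB usable")
```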
ASM Files
The data files you create under ASM are not like normal database files: when you create a file you
only need to specify the disk group the file is to be created in, and Oracle will then create a
striped file across all the disks within the disk group, carrying out any redundancy required; ASM
files are OMF files. ASM naming depends on the type of file being created. Here are the different
file-naming conventions:
fully qualified ASM filenames - are used when referencing existing ASM files
(+dgroupA/dbs/controlfile/CF.123.456789)
numeric ASM filenames - are also only used when referencing existing ASM files
(+dgroupA.123.456789)
alias ASM filenames - employ a user friendly name and are used when creating new files and
when you refer to existing files
alias filenames with templates - are strictly for creating new ASM files
incomplete ASM filenames - consist of a disk group only and are used for creation only.
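As a concrete illustration, the fully qualified form in the example above breaks down into a disk group, a database, a file type, and a tag.file#.incarnation# triple. A hypothetical parser (the function and field names are mine, inferred from the example string, for illustration only):

```python
# Hypothetical parser for the fully qualified ASM name shown above:
#   +<diskgroup>/<database>/<file type>/<tag>.<file#>.<incarnation#>

def parse_asm_name(name):
    """Split a fully qualified ASM filename into its components."""
    if not name.startswith("+"):
        raise ValueError("ASM names start with '+'")
    group, db, ftype, fname = name[1:].split("/")
    tag, file_no, incarnation = fname.split(".")
    return {"diskgroup": group, "database": db, "type": ftype,
            "tag": tag, "file": int(file_no), "incarnation": int(incarnation)}

parsed = parse_asm_name("+dgroupA/dbs/controlfile/CF.123.456789")
```

The numeric form (+dgroupA.123.456789) keeps only the disk group plus the same file and incarnation numbers.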
Creating ASM Instance
Creating an ASM instance is like creating a normal instance, but the parameter file will be smaller;
ASM does not mount any data files, it only maintains ASM metadata. ASM normally needs only
about 100MB of disk space and consumes about 25MB of memory for the SGA. ASM does not
have a data dictionary like a normal database, so you must connect to the instance using either O/S
authentication as SYSDBA or SYSOPER, or using a password file.
The main parameters in the instance parameter file are
instance_type - either RDBMS or ASM
instance_name - the name of the ASM instance
asm_power_limit - maximum speed of rebalancing disks; the default is 1 and the range is 1 to 11
(11 being the fastest)
asm_diskstring - the location where Oracle will look during disk discovery
asm_diskgroups - the disk groups that will be mounted automatically when the ASM instance is
started.
You can start an ASM instance in nomount or mount mode, but not open. Shutting down an ASM
instance passes the shutdown command (normal, immediate, etc.) on to the dependent RDBMS instances.
ASM Configuration
Parameter file (init+asm.ora)
instance_type='asm'
instance_name='+asm'
asm_power_limit=2
asm_diskstring='\\.\f:','\\.\g:','\\.\h:'
asm_diskgroups=dgroupA, dgroupB
Note: file should be created in $ORACLE_HOME/database
Create the service (Windows only)
c:\> oradim -new -asmsid +ASM -startmode manual
Set the ORACLE_SID environment variable
c:\> set ORACLE_SID=+ASM (Windows)
$ export ORACLE_SID=+ASM (Unix)
Log in to the ASM instance and start it
c:\> sqlplus /nolog
sql> connect / as sysdba
sql> startup pfile=init+asm.ora
Note: sometimes you get an ORA-15110, which means that the disk groups
have not been created yet.
ASM Operations
Instance name: select instance_name from v$instance;
Create disk group
create diskgroup diskgrpA high redundancy
failgroup failgrpA disk '\\.\f:' name disk1
failgroup failgrpB disk '\\.\g:' name disk2 force
failgroup failgrpC disk '\\.\h:' name disk3;
create diskgroup diskgrpB external redundancy disk '\\.\f:' name disk1;
Note: force is used if the disk has been in a previous disk group; external
redundancy relies on third-party mirroring, i.e. a SAN
Add disks to a group
alter diskgroup diskgrpA add disk
'\\.\i:' name disk4,
'\\.\j:' name disk5;
Remove disks from a group
alter diskgroup diskgrpA drop disk disk6;
Remove disk group drop diskgroup diskgrpA including contents;
Resize a disk in a disk group alter diskgroup diskgrpA resize disk 'disk3' size 500M;
Undo a disk drop alter diskgroup diskgrpA undrop disks;
Display diskgroup info
select group_number, name, type, state, total_mb, free_mb from
v$asm_diskgroup;
select group_number, disk_number, name, failgroup, create_date, path,
total_mb from v$asm_disk;
select group_number, operation, state, power, actual, sofar, est_work,
est_rate, est_minutes from v$asm_operation;
Rebalance a disk group (after a failed disk has been replaced)
alter diskgroup diskgrpA rebalance power 8;
Note: to speed up rebalancing, increase the power level up to 11; remember
that this will also impact database performance. You can also use the wait
keyword, which holds the command line until the rebalance is finished
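V$ASM_OPERATION (queried above) reports SOFAR and EST_WORK in allocation units and EST_RATE in units per minute; EST_MINUTES is essentially the remaining work divided by the rate. A sketch of that arithmetic on made-up sample values:

```python
# Sketch of how V$ASM_OPERATION's EST_MINUTES relates to its other columns:
# remaining allocation units divided by the rebalance rate (units/minute).

def rebalance_eta(sofar, est_work, est_rate):
    """Estimated minutes remaining, as EST_MINUTES approximates it."""
    remaining = max(est_work - sofar, 0)
    return 0 if est_rate == 0 else remaining / est_rate

# Made-up row: 4000 of 10000 units moved at 1200 units/minute.
eta = rebalance_eta(sofar=4000, est_work=10000, est_rate=1200)  # 5.0 minutes
```

Raising the power limit raises EST_RATE, which is why the estimate shrinks at higher power levels.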
Dismount or mount a diskgroup
alter diskgroup diskgrpA dismount;
alter diskgroup diskgrpA mount;
Check a diskgroups integrity
alter diskgroup diskgrpA check all;
Diskgroup directory
alter diskgroup diskgrpA add directory '+diskgrpA/dir1';
Note: this is required if you use aliases when creating database files, i.e.
'+diskgrpA/dir/control_file1'
Add and drop aliases
alter diskgroup diskgrpA add alias '+diskgrpA/dir/second.dbf' for
'+diskgrpB/datafile/table.763.1';
alter diskgroup diskgrpA drop alias '+diskgrpA/dir/second.dbf';
Drop files from a diskgroup
alter diskgroup diskgrpA drop file '+diskgrpA/payroll/payroll.dbf';
Using ASM Disks
Examples of using ASM disks
create tablespace test datafile '+diskgrpA' size 100m;
alter tablespace test add datafile '+diskgrpA' size 100m;
alter database add logfile group 4 '+dg_log1','+dg_log2' size 100m;
alter system set log_archive_dest_1='location=+dg_arch1';
alter system set db_recovery_file_dest='+dg_flash';
Display performance
select path, reads, writes, read_time, write_time,
read_time/decode(reads,0,1,reads) "AVGRDTIME",
write_time/decode(writes,0,1,writes) "AVGWRTIME"
from v$asm_disk_stat;
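The DECODE(reads,0,1,reads) in the query above simply guards the average-latency calculation against division by zero on idle disks. The same calculation in Python (a sketch, not part of the document's scripts):

```python
# Average I/O time per operation, with the same divide-by-zero guard as the
# DECODE(ops, 0, 1, ops) expression in the V$ASM_DISK_STAT query above.

def avg_time(total_time, ops):
    """Return total_time / ops, substituting 1 when ops is zero."""
    return total_time / (ops if ops else 1)

assert avg_time(500, 250) == 2.0   # 2 time units per read on average
assert avg_time(0, 0) == 0.0       # idle disk, no divide-by-zero error
```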
RMAN is the only way to back up ASM files.
Backup backup as copy database format '+dgroup1';
Oracle 10g ASM Installation Steps in RHEL
ASM
ASM (Automatic Storage Management) simplifies administration of Oracle-related files by
allowing the administrator to reference disk groups rather than individual disks and files,
which ASM manages internally. On Linux, ASM is capable of referencing disks as raw devices
or by using the ASMLib software.
========================================================================
In this article, we assume that you have configured a RHEL system along with Oracle Database
Software on it. To install Oracle 10g Database Software, you can follow this link. Do not create
any database now, only install the Oracle Software.
We have configured the system with below details,
Hostname - asm10g
IP address eth0 - 192.168.0.4
Gateway eth0 - 192.168.0.1
Also, we have 3 raw devices available for ASM configuration (sdb, sdc, sdd). We will cover the
configuration of both ASMLib and raw devices below.
========================================================================
1) Partition the Disks:
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Always create the raw devices as primary partition and allocate the whole disk. Don’t make multiple
primary partitions on a single disk for ASM installation. It might not work properly.
Do the above steps for '/dev/sdc' and '/dev/sdd' as well.
To make the kernel re-read the partition tables, run the below command,
# partprobe
Now check whether you can see the newly created partitions.
# fdisk -l
2) ASMLib Configuration:
- Determine your kernel version and accordingly download the ASMLib software from OTN.
# uname -r
- The below packages were downloaded which were suitable for my kernel.
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.7-1.el5.i386.rpm
- Install the packages.
# rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.7-1.el5.i386.rpm
- Now configure the ASM kernel module.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module “oracleasm”: [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
- Once the ASM kernel module is configured, now create the disks.
# /etc/init.d/oracleasm createdisk DATAGRP /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk ARCHGRP /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk LOGGRP /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
- To list the disks configured.
# /etc/init.d/oracleasm listdisks
ARCHGRP
DATAGRP
LOGGRP
The ASM disks are now ready for use.
3) RAW Device Setup:
- Edit the file '/etc/sysconfig/rawdevices', and add the below lines:
# vi /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
- Now restart the service.
# service rawdevices restart
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2 --> /dev/sdc1
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3 --> /dev/sdd1
/dev/raw/raw3: bound to major 8, minor 49
done
- Change the ownership and permissions of raw devices
# chown oracle.oinstall /dev/raw/raw1
# chown oracle.oinstall /dev/raw/raw2
# chown oracle.oinstall /dev/raw/raw3
# chmod 600 /dev/raw/raw1
# chmod 600 /dev/raw/raw2
# chmod 600 /dev/raw/raw3
The ASM raw disks are configured. You can start your database creation now.
4) Create ASM Instance:
- Creation of the ASM instance is the same whether you use ASMLib or raw devices.
When using ASMLib, the candidate disks are listed using the stamps associated with them, while
raw devices are listed using their device names.
- Login as oracle user and start Database Configuration Assistant.
$ dbca
- WELCOME Screen
Click on Next to continue.
- OPERATIONS Screen
Select the 'Configure Automatic Storage Management' option and click on Next to continue.
A warning message will be displayed saying that 'Oracle Cluster Synchronization Service (CSS)' is
not currently running.
Open a new terminal and login as root.
Execute the command shown in the warning window.
# /u01/app/oracle/product/10.2.0/db/bin/localconfig add
When the execution is complete, click on OK button and again click Next to continue.
- CREATE ASM INSTANCE Screen
Enter password that will be used for ASM instance.
Click on Next.
A confirmation window will open for creating the ASM instance.
Click on OK and the ASM instance will be created as shown in the pic below.
- ASM DISK GROUPS Screen
Initially the window will be blank as shown above.
Click on "Create New" and the CREATE DISK GROUP screen will open.
Enter Disk Group Name: DATAGRP
Select Redundancy: External
- When using ASMLib, the Disk Path column will contain values as given below
ORCL:DATAGRP
ORCL:ARCHGRP
ORCL:LOGGRP
- When using raw devices, the Disk Path column will contain candidate disks
/dev/raw/raw1
/dev/raw/raw2
/dev/raw/raw3
Since we are using raw devices, select 1st raw device i.e. /dev/raw/raw1
Click on OK and the ASM Disk Creation will start.
- Similarly create ARCHGRP and LOGGRP disk groups as shown in above steps.
- Once all disk groups are configured, you should see 3 disk groups similar to the below pic.
Now on ASM DISK GROUPS Screen, click on Finish. You will get a popup window. Click on No to
continue.
- Now the ASM instance has been configured. You can check the ASM instance running as below.
$ ps -ef | grep pmon
========================================================================
5) Listener Configuration:
- Before starting with the database creation, configure the listener and register ASM instance with it.
$ netca
- If you don't see the ASM instance registered with the listener service, then do the below steps
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL> alter system register;
SQL> exit
$ lsnrctl status
Now you will see that the ASM instance has been registered with the listener service.
========================================================================
6) Create Database:
- Start DBCA.
$ dbca
- WELCOME Screen
Click on Next to continue.
- OPERATIONS Screen
Select “Create a Database” and click on Next to continue.
- DATABASE TEMPLATES Screen
Click on Next to continue.
- DATABASE IDENTIFICATION Screen
Enter the DB name and click on Next to continue.
- MANAGEMENT OPTIONS Screen
If you want to configure your database with Enterprise Manager, then check the option “Configure
the database with Enterprise Manager” else uncheck it.
Click on Next to continue.
- DATABASE CREDENTIALS Screen
Enter a common password for all accounts and click on Next to continue.
- STORAGE OPTIONS Screen
Select “Automatic Storage Management (ASM)”
Click on Next to continue. It will prompt you for sys password of ASM.
Enter the password that you configured while creating ASM instance.
- ASM DISK GROUPS Screen
Select all the disk groups and click on Next.
- DATABASE FILE LOCATIONS Screen
Select “Use Oracle-Managed Files” and enter path as +DATAGRP
Click on “Multiplex Redo Logs and Control Files…” and enter path as shown below,
- RECOVERY CONFIGURATION Screen
If you want to enable the Flash Recovery Area, check "Specify Flash Recovery Area" and enter the
details as shown.
If you want to enable archiving mode, select “Enable Archiving” and click “Edit Archive Mode
Parameters”. Enter the path as shown below.
Click on Next to continue.
- DATABASE CONTENT Screen
Click on Next to continue.
- INITIALIZATION PARAMETERS Screen
Click on Character Sets tab,
Select Character Set as Unicode (AL32UTF8).
Select Default Date Format as India.
Click on Next to continue.
- DATABASE STORAGE Screen
You can see the OMF file format for datafiles, controlfiles and redolog files. Click on Next to
continue.
- CREATION OPTIONS Screen
By default, the “Create Database” option is selected.
If you want to create scripts, select the “Generate Database Creation Scripts” option.
Click on Finish.
- CONFIRMATION Screen
Click on OK to start the installation.
- GENERATION OF SCRIPTS Screen
Since we selected the option to generate the database creation scripts, the scripts are generated
before the database creation starts.
Click on OK.
- DATABASE CREATION PROGRESS Screen
You can observe the Database creation.
- END OF DATABASE CREATION Screen
- The database has been created using ASM as the storage option. You can verify the locations of the
database files as below,
$ ps -ef | grep pmon
$ ps -ef | grep lsn
$ export ORACLE_SID=asmdb
$ sqlplus / as sysdba
SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;
SQL> show parameter log_archive_file_dest;
SQL> show parameter db_recover;
7) Switching from Raw Devices to ASMLib:
If you prefer to use ASMLib rather than raw devices, follow the below steps. This is just an extra
activity which you might try.
- Shutdown any databases using the ASM instance, but leave the ASM instance itself running.
- Now connect to the ASM instance.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
- Alter the ASM disk string to exclude the raw devices used earlier, then shutdown the ASM instance.
SQL> ALTER SYSTEM SET asm_diskstring = 'ORCL:DISK*' SCOPE=SPFILE;
System altered.
SQL> SHUTDOWN IMMEDIATE;
ASM diskgroups dismounted
ASM instance shutdown
SQL>
- At this point the disks will not be used by ASM because they are not stamped. Now issue the
renamedisk command as the root user for each disk.
# /etc/init.d/oracleasm renamedisk /dev/sdb1 DISK1
Renaming disk "/dev/sdb1" to "DISK1": [ OK ]
# /etc/init.d/oracleasm renamedisk /dev/sdc1 DISK2
Renaming disk "/dev/sdc1" to "DISK2": [ OK ]
# /etc/init.d/oracleasm renamedisk /dev/sdd1 DISK3
Renaming disk "/dev/sdd1" to "DISK3": [ OK ]
Notice that the stamps match the discovery string set earlier. The ASM instance can now be started.
SQL> STARTUP
ASM instance started
Total System Global Area 83886080 bytes
Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL>
The ASM instance is now using ASMLib, rather than raw devices. All dependent databases can now
be started.
8) Switching from ASMLib to Raw Devices:
- Shutdown any databases using the ASM instance, but leave the ASM instance itself running.
- Now connect to the ASM instance.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
- Alter the ASM disk string to match the raw devices that you want to use, then shutdown the ASM
instance.
SQL> ALTER SYSTEM SET asm_diskstring = '/dev/raw/raw*' SCOPE=SPFILE;
System altered.
SQL> SHUTDOWN IMMEDIATE;
ASM diskgroups dismounted
ASM instance shutdown
SQL>
- Perform all the steps listed in the Raw Device Setup, then start the ASM instance.
SQL> STARTUP
ASM instance started
Total System Global Area 83886080 bytes
Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL>
The ASM instance is now using the disks as raw devices, rather than as ASMLib disks. All dependent
databases can now be started.
Installing 11gR2 Standalone Server with ASM and Role Separation in RHEL 6
There's an earlier post with the steps specific to installing standalone 11gR2 with role separation and
using ASM for data files. This post highlights the steps specific to installing 11gR2 (11.2.0.3) on
RHEL 6. Instead of vncserver, RHEL 6 has a new remote desktop service; if vncserver is preferred,
use yum to install tigervnc-server from the public yum repository.
The RHEL 6 kernel version is
uname -r
2.6.32-220.el6.x86_64
1. Create the users and groups for a role-separated installation
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
groupadd asmdba
groupadd asmoper
useradd -g oinstall -G dba,oper,asmdba oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid
The grid user should be part of the dba group as per metalink note 1084186.1 (see the earlier post's step 15)
2. Getting the asmlib libraries for RHEL 6 requires an Unbreakable Linux account (more on
1089399.1). In this installation, block devices will be used for ASM. Use udev rules to add the
necessary permissions to the block devices.
# ASM DATA
KERNEL=="sde[1]", OWNER="grid", GROUP="asmadmin", MODE="660"
# ASM FLASH
KERNEL=="sdf[1]", OWNER="grid", GROUP="asmadmin", MODE="660"
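Rules of this shape are easy to generate when several disks need the same permissions; a small hypothetical helper (my own, not from the post; the defaults mirror the owner, group and mode used above):

```python
# Hypothetical generator for udev rules like those above: match a kernel
# device name and hand ownership of the block device to the grid user.

def udev_rule(kernel, owner="grid", group="asmadmin", mode="660"):
    """Render one udev rule line for an ASM block device."""
    return (f'KERNEL=="{kernel}", OWNER="{owner}", '
            f'GROUP="{group}", MODE="{mode}"')

# Reproduce the DATA and FLASH rules shown above, each with its comment.
rules = [f"# ASM {label}\n" + udev_rule(dev)
         for label, dev in [("DATA", "sde[1]"), ("FLASH", "sdf[1]")]]
```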
The group has been set to asmadmin; setting it to asmdba (as was the case in the earlier post using
asmlib) could result in the following warning (which was observed with the cluster pre-req check,
not the standalone pre-req check).
3. Set CV_ASSUME_DISTID=OEL6 in the cvu_config file as explained in installing 11gR2 on RHEL
6. Other pre-req steps are omitted here, but it's expected that these are carried out before continuing
with the rest of the steps.
4. Run cluster verify tool for high availability service option
./runcluvfy.sh stage -pre hacfg
Performing pre-checks for Oracle Restart configuration
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rhel6m2:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes:
rhel6m2
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Pre-check for Oracle Restart configuration was unsuccessful.
Even though CV_ASSUME_DISTID=OEL6 was set before running the above command, the pdksh
check fails (more on 1454982.1). But this happens only with cluvfy; the pre-req check done through
the OUI ignores the pdksh check and there won't be any failed pre-reqs.
5. Carry out the installation of GI for standalone server.
Using block devices for ASM
Separate OS groups for ASM administration
The GI for standalone home is created under ORACLE_BASE (if not, a warning is given; see step 7
of the earlier post)
The oraInventory directory must have oinstall as its group.
Summary
Run root scripts when prompted
# /opt/app/oracle/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/oracle/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/opt/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel6m2 successfully pinned.
Adding Clusterware entries to upstart
rhel6m2 2012/05/30 12:24:37
/opt/app/oracle/product/11.2.0/grid/cdata/rhel6m2/backup_20120530_122437.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Once the root.sh script finishes, the other configuration tools (netca, asmca) will run, and this will
conclude the installation of GI for standalone.
6. Installation of the database software and creation of the database is no different from a RHEL 5
installation, with one notable difference: prior to executing runInstaller, set
CV_ASSUME_DISTID=OEL6 in the database software's cvu_config file as well, to ignore the pdksh
check.
Oracle Database 11gR2: Installing Grid Infrastructure
Synopsis. Oracle Database 11g Release 2 makes it much simpler to configure and incorporate, for a
single-instance Oracle database, many of the grid computing features that were only available in a
Real Application Clusters (RAC) clustered database environment in previous releases. This
article, the first in this series, will demonstrate how to install and configure a new Oracle 11g
Release 2 (11gR2) Grid Infrastructure home as the basis for the majority of these grid computing
features.
It’s been a few months since I summarized the incredible array of new features that Oracle has
introduced as part of Oracle Database Release 11gR2, and in that span of time, I’ve been
experimenting with those features as I’ve built a new infrastructure for experimentation. Among the
most intriguing new features is the consolidation of Automatic Storage Management (ASM) with
Oracle Clusterware (OC) into a pragmatic and sensible arrangement called the Oracle Grid
Infrastructure (GI). As I’ll demonstrate in this article, the venerable Oracle Universal Installer (OUI)
utility gets a welcome update in this release, but first I’ll need to perform quite a bit of system
administration work before we can invoke it and explore its new features.
First ... A Word About The (Computing) Environment. I’ve made some long-desired changes to
my home office’s personal computing infrastructure so that I can manage my workload effectively
and efficiently with my favorite virtualization environment, VMWare:
I’ve upgraded to Oracle Enterprise Linux (OEL) 5 Update 2 (kernel 2.6.18-92.el5) for my base computing platform, a home-grown gaming server with 4GB of memory running an AMD Opteron dual-core processor.
I’ve also finally moved up to VMWare Workstation Version 7.0.0 for all my VMWare endeavors, and though I still occasionally long for the freedom of VMWare Server 2.0a (as in free!), I’ve found that Workstation is just as stable and that it works extremely well with OEL as both its host and guest OS.
Setting Up For Oracle 11gR2 Grid Infrastructure
I’m going to implement my 11gR2 Grid Infrastructure via a series of Oracle best practices that I’ve
encountered over the years and have gleaned through a thorough reading of Oracle’s technical
documentation. I’ll be using raw disk partitions for configuring all of the ASM disks that will
eventually comprise the various ASM disk groups needed for my demonstrations.
Related Articles
Oracle RAC Administration - Part 5: Administering the Clusterware and ASM Storage
Super-Sizing A Database: Oracle 10g Tablespace Enhancements
Creating the Required Raw Partitions. The Oracle 11gR2 Grid Infrastructure leverages ASM to
store multiple copies of the Oracle Cluster Registry (OCR) file, multiple voting disks, and of
course the disks for the ASM disk groups themselves. Since the maximum number of logical partitions
that can be created within any one extended partition is 12, I've created two VMWare virtual disks
sized at 18.5 GB and 11.0 GB, respectively. Here's the output from the terminal session during which
I used the Linux fdisk command to create the remaining logical partitions:
[root@11gR2Base ~]# fdisk -l /dev/sde
Disk /dev/sde: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 2349 18868311 5 Extended
/dev/sde5 1 281 2257069+ 83 Linux
/dev/sde6 282 562 2257101 83 Linux
/dev/sde7 563 843 2257101 83 Linux
/dev/sde8 844 1124 2257101 83 Linux
/dev/sde9 1125 1405 2257101 83 Linux
/dev/sde10 1406 1686 2257101 83 Linux
/dev/sde11 1687 1967 2257101 83 Linux
/dev/sde12 1968 2248 2257101 83 Linux
[root@11gR2Base ~]# fdisk -l /dev/sdf
Disk /dev/sdf: 12.0 GB, 12079595520 bytes
255 heads, 63 sectors/track, 1468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 1468 11791678+ 5 Extended
/dev/sdf5 1 281 2257069+ 83 Linux
/dev/sdf6 282 562 2257101 83 Linux
/dev/sdf7 563 843 2257101 83 Linux
/dev/sdf8 844 1124 2257101 83 Linux
/dev/sdf9 1125 1405 2257101 83 Linux
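As a quick sanity check on the fdisk listings above: each logical partition spans 281 cylinders, and fdisk reports one cylinder as 16065 sectors of 512 bytes, so each candidate ASM partition comes out to roughly 2.2 GB. The arithmetic can be verified directly in the shell:

```shell
# one fdisk "cylinder" = 16065 sectors * 512 bytes = 8225280 bytes
cyl_bytes=$((16065 * 512))

# each logical partition above spans 281 cylinders
part_bytes=$((281 * cyl_bytes))

echo "cylinder:  $cyl_bytes bytes"
echo "partition: $part_bytes bytes (about $((part_bytes / 1024 / 1024)) MiB)"
```

This matches the block counts fdisk reports (2257101 1K blocks, give or take start-of-partition alignment).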
Assigning Raw Partitions to Block Device Endpoints. Oracle has recommended for some time that
block devices are a much better choice than raw devices for ASM storage, especially since support
for traditional raw devices allocated through the /etc/sysconfig/rawdevices configuration file may
be reduced or removed entirely in a future release.
For this and all future Oracle 11gR2 features demonstrations, I’ve configured a special service,
losetup, that will construct, configure and allocate virtual block devices during server startup. For the
losetup script to work properly, however, note that I also needed to increase the default number of
loopback devices from eight to 16; I did this by adding the following line to the /etc/modprobe.conf
system configuration file, and then rebooting the server to make sure it took effect:
options loop max_loop=16
Listing 1.1 shows the losetup script I used to complete the assignment of raw partitions to virtual
block devices. After I copied the script to file /etc/init.d/losetup, I then registered the new service (as
the root user) via chkconfig:
#> chmod 775 /etc/init.d/losetup
#> chkconfig --add losetup
#> chkconfig losetup on
#> chkconfig --list losetup
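Since Listing 1.1 isn’t reproduced here, the following is a minimal sketch of what such an init script might look like — the partition list, loop indexes, and /dev/xvd* alias names are assumptions for illustration, not the actual listing:

```shell
#!/bin/bash
# /etc/init.d/losetup -- sketch only; partition list and alias names are assumptions
# chkconfig: 345 99 01
# description: binds raw partitions to loop devices and links them as /dev/xvd*

# logical partitions carved out earlier, in the order they map to loop1..loop13
PARTITIONS="/dev/sde5 /dev/sde6 /dev/sde7 /dev/sde8 /dev/sde9 /dev/sde10 \
/dev/sde11 /dev/sde12 /dev/sdf5 /dev/sdf6 /dev/sdf7 /dev/sdf8 /dev/sdf9"

# map loop index N to its /dev/xvd<letter> alias: 1 -> /dev/xvdb, 13 -> /dev/xvdn
loop_to_xvd() {
    printf '/dev/xvd%b' "\\0$(printf '%o' $((97 + $1)))"
}

case "$1" in
  start)
    i=0
    for part in $PARTITIONS; do
      i=$((i+1))
      losetup /dev/loop$i "$part"            # bind partition to a loop device
      ln -sf /dev/loop$i "$(loop_to_xvd $i)" # expose it under a stable alias
    done
    ;;
  stop)
    i=0
    for part in $PARTITIONS; do
      i=$((i+1))
      rm -f "$(loop_to_xvd $i)"
      losetup -d /dev/loop$i
    done
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```

The symlink aliases are what make the later ASMLIB createdisk commands readable; the exact mapping scheme is a design choice, not an Oracle requirement.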
After rebooting the server, here’s the result of implementing the losetup script – the successful
allocation of block devices as shown below:
[root@11gR2Base ~]# ls -la /dev/xv*
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdb -> /dev/loop1
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdc -> /dev/loop2
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdd -> /dev/loop3
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvde -> /dev/loop4
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdf -> /dev/loop5
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdg -> /dev/loop6
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdh -> /dev/loop7
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdi -> /dev/loop8
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdj -> /dev/loop9
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdk -> /dev/loop10
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdl -> /dev/loop11
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdm -> /dev/loop12
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdn -> /dev/loop13
Configuring and Implementing ASMLIB
To keep my ASM configuration simple to manage, I’ll also use the Oracle ASM disk management
drivers that ASMLIB provides to “stamp” each target block device before actually creating ASM
disks and disk groups. First, I’ll confirm that the oracleasm drivers appropriate to my OS kernel
version have indeed been installed:
[root@11gR2Base ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-92.el5xen-2.0.4-1.el5
oracleasm-2.6.18-92.el5-2.0.4-1.el5
oracleasm-2.6.18-92.el5debug-2.0.4-1.el5
oracleasm-support-2.0.4-1.el5
Excellent! My system administrator took care of this when she installed Oracle Enterprise Linux 5
Update 2; otherwise, I’d have had to ask her to download the appropriate oracleasm drivers
and install them on my server. However, I still needed to make sure that the oracleasm kernel
module was linked for the running kernel, and that took a little extra manipulation as shown in the
output below:
[root@11gR2Base ~]# /usr/lib/oracleasm/oracleasm_debug_link 2.6.18-92.el5 $(uname
-r)
oracleasm_debug_link: Target exists
[root@11gR2Base ~]# ls -l /lib/modules/$(uname -r)/kernel/drivers/addon/oracleasm
total 576
-rw-r--r-- 1 root root 579514 May 23 2008 oracleasm.ko
[root@11gR2Base ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
[root@11gR2Base ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: [ OK ]
Checking if /dev/oracleasm is mounted: [ OK ]
“Stamping” Candidate Disks With ASMLIB. Now that ASMLIB is configured properly, it’s time
to apply ASMLIB “stamps” to each virtual device via the createdisk command as shown below. This
makes it much simpler to configure and manage ASM disks without having to rely on complex device
naming conventions:
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/xvdb
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK2 /dev/xvdc
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK3 /dev/xvdd
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK4 /dev/xvde
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK5 /dev/xvdf
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK6 /dev/xvdg
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK7 /dev/xvdh
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK8 /dev/xvdi
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK1 /dev/xvdj
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK2 /dev/xvdk
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK3 /dev/xvdl
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK4 /dev/xvdm
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK5 /dev/xvdn
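Typing thirteen near-identical createdisk commands invites typos; a small loop can generate them instead. This sketch simply emits the same commands shown above so they can be reviewed before being piped to a shell:

```shell
# emit the createdisk commands rather than typing each one by hand (sketch)
gen_createdisk_cmds() {
    i=0
    for d in b c d e f g h i; do           # eight ASM data disks
        i=$((i+1))
        echo "/etc/init.d/oracleasm createdisk ASMDISK$i /dev/xvd$d"
    done
    i=0
    for d in j k l m n; do                 # five ACFS candidate disks
        i=$((i+1))
        echo "/etc/init.d/oracleasm createdisk ACFDISK$i /dev/xvd$d"
    done
}
gen_createdisk_cmds     # inspect the output first, then: gen_createdisk_cmds | sh
```

Printing the commands first, rather than executing them directly, is a cheap safety net when the target devices are easy to mix up.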
Finally, I’ll invoke ASMLIB’s listdisks command to confirm that all disks have been correctly
“stamped” and are now ready for use in concert with my upcoming Grid Infrastructure installation:
[root@11gR2Base ~]# /etc/init.d/oracleasm listdisks
ACFDISK1
ACFDISK2
ACFDISK3
ACFDISK4
ACFDISK5
ASMDISK1
ASMDISK2
ASMDISK3
ASMDISK4
ASMDISK5
ASMDISK6
ASMDISK7
ASMDISK8