Oracle Database 11g Release 2 RAC On RHEL 5.4 Using NFS:-
NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems that
allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on
top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered
NFS clients. An individual computer can be an NFS server, an NFS client, or both. We can use NFS to provide shared
storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for
testing it can just as easily be another server, or even one of the RAC nodes itself. To cut costs, this article uses
one of the RAC nodes as the source of the shared storage. Obviously, this means that if that node goes down the whole database
is lost, so it is not a sensible idea to do this if you are testing high availability. If you have access to a NAS or a
third server you can easily use that for the shared storage, making the whole solution much more resilient.
Whichever route you take, the fundamentals of the installation are the same.
The Single Client Access Name (SCAN) should really be defined in DNS or GNS and resolve, via round-robin, to one of three addresses
on the same subnet as the public and virtual IPs. In this article I've defined it as a single IP address in
the "/etc/hosts" file, which is wrong and will cause the cluster verification to fail, but it allows me to complete
the install without the presence of a DNS.
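As a quick aside (not in the original flow): you can see what the cluster will resolve for the SCAN with getent, which consults /etc/hosts as well as DNS; a properly DNS-defined SCAN would return three addresses in rotation, where the workaround below returns just one.
# getent hosts rac-scan.soumya.com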
Server Hardware Requirements:-
Each node must meet the following minimum hardware requirements:
We have two nodes configured as virtual machines on VMware Workstation.
1.Virtual machine names:- RAC1 and RAC2
2.At least 2 GB of physical RAM
3.Swap space proportional to the available RAM: 3 GB
4.At least 1 GB of free space in /tmp
5.Up to 4 GB of free space for the Oracle software
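A minimal sketch for verifying these minimums on each node (standard Linux commands; exact figures depend on your VM settings):
# grep MemTotal /proc/meminfo     # physical RAM, expect >= 2 GB
# grep SwapTotal /proc/meminfo    # swap, expect around 3 GB here
# df -h /tmp                      # expect >= 1 GB free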
Download the following software:-
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html
a) Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86-64
b) Oracle Grid Infrastructure 11g Release 2 (11.2.0.1.0) for Linux x86-64
c) Redhat Linux 5.4
On RAC1:-
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1.soumya.com
:wq
# hostname rac1.soumya.com
IP Address eth0: 192.168.2.110 (public address)
IP Address eth1: 192.168.3.110 (private address)
Default Gateway eth0: 192.168.2.1 (public address)
Default Gateway eth1: none
Virtual IP: 192.168.2.150
on RAC2:-
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2.soumya.com
:wq
# hostname rac2.soumya.com
IP Address eth0: 192.168.2.111 (public address)
IP Address eth1: 192.168.3.111 (private address)
Default Gateway eth0: 192.168.2.1 (public address)
Default Gateway eth1: none
Virtual IP: 192.168.2.151
SCAN IP: 192.168.2.192
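For reference, a minimal sketch of the static IP setup behind these addresses, assuming the usual RHEL 5 ifcfg files and a /24 netmask (values shown for eth0 on RAC1; adjust the device, address and gateway per node and interface):
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.110
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
:wq
# service network restart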
IP concepts in RAC:-
Please keep in mind that the public IP, virtual IP and SCAN IP should be in the same subnet, while the
private IP is used for the interconnect.
Public IP: The public IP address is for the server. This is the same as any server IP address,
a unique address which exists in /etc/hosts.
Private IP: Oracle RAC requires "private IP" addresses to manage the CRS, the clusterware heartbeat process and the cache fusion layer.
Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address.
The VIP is used for RAC failover (TAF).
SCAN IP: Single Client Access Name (SCAN) is an Oracle Real Application Clusters (Oracle RAC)
feature that provides a single name for clients to access Oracle Databases running in a cluster.
To add an additional Ethernet Card in VM:-
Open VMware Workstation.
VM -> Settings -> Select Network Adapter -> Click the "Add" option below -> Network Adapter -> Bridged -> Finish
On both nodes (Rac1 and Rac2):-
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#Public Network (eth0)
192.168.2.110 rac1.soumya.com rac1
192.168.2.111 rac2.soumya.com rac2
#NFS Storage
192.168.2.102 racstorage.soumya.com racstorage
#Private interconnect (eth1)
192.168.3.110 rac1-pvt.soumya.com rac1-pvt
192.168.3.111 rac2-pvt.soumya.com rac2-pvt
#Public Virtual IP (VIP) addresses (eth0)
192.168.2.150 rac1-vip.soumya.com rac1-vip
192.168.2.151 rac2-vip.soumya.com rac2-vip
#SCAN IP
192.168.2.192 rac-scan.soumya.com rac-scan
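With the file in place on both nodes, a quick connectivity check (the VIP and SCAN addresses will not respond until Grid Infrastructure is up, so only the fixed addresses are pinged here):
# ping -c 2 rac1
# ping -c 2 rac2
# ping -c 2 rac1-pvt
# ping -c 2 rac2-pvt
# ping -c 2 racstorage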
Install the following rpms:-
binutils-2.17.50.0.6-2.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125-3.el5
elfutils-libelf-devel-0.125
gcc-4.1.1-52
gcc-c++-4.1.1-52
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.5-12
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
make-3.81-1.1
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
libXp-1.0.0-8
oracleasmlib-2.0.4-1 (download from http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html)
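A minimal check that the packages are present (rpm reports "is not installed" for any that are missing; note that oracleasmlib is listed above but is not actually exercised by the NFS-based storage used in this article):
# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
  gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers \
  libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
  sysstat unixODBC unixODBC-devel libXp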
On both nodes perform the following activities:-
Add or amend the following lines to the "/etc/sysctl.conf" file.
# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
:wq
Run the following command to change the current kernel parameters.
# /sbin/sysctl -p
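To spot-check that the new values are active (sysctl accepts a list of parameter names):
# /sbin/sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.sem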
Add the following lines to the "/etc/security/limits.conf" file. on both nodes:-
#vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
:wq
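These limits apply at login, so they can be verified with a fresh login shell once the oracle user exists (created in a later step); -Hu and -Hn show the hard nproc and nofile limits:
# su - oracle -c 'ulimit -Hu'    # expect 16384
# su - oracle -c 'ulimit -Hn'    # expect 65536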
Add the following lines to the "/etc/pam.d/login" file, if it does not already exist.
# vi /etc/pam.d/login
session required pam_limits.so
:wq
Create the new groups and users on both nodes:-
# groupadd -g 1000 oinstall
# groupadd -g 1200 dba
# useradd -u 1100 -g oinstall -G dba oracle
# passwd oracle    (provide the oracle user's password when prompted)
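A quick check that the user and group IDs match on both nodes (consistent IDs matter later, when the same NFS shares are mounted on both):
# id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba)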
Create the directories in which the Oracle software will be installed on both nodes:-
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -Rf oracle:oinstall /u01
# chmod -Rf 775 /u01/
On both nodes (Rac1 & Rac2):-
Install the following package from the Oracle grid media after you've defined the groups.
I have transferred linux.x64_11gR2_grid_2.zip in /u01 location.
# cd /u01
# unzip linux.x64_11gR2_grid_2.zip
# cd grid/rpm
# rpm -Uvh cvuqdisk*
Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows on both nodes:-
SELINUX=permissive
# sestatus
# service iptables stop
# chkconfig iptables off
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. If you want to deconfigure NTP, do the following.
# service ntpd stop
Shutting down ntpd: [ OK ]
# chkconfig ntpd off
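For ctssd to run in active mode the NTP configuration file must be out of the way as well; stopping the service alone is not enough, as Oracle's checks look for the file itself:
# mv /etc/ntp.conf /etc/ntp.conf.orig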
On node1:-
Log in as the oracle user and add the following lines at the end of the "/home/oracle/.bash_profile" file.
[oracle@rac1]$ vi /home/oracle/.bash_profile
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=rac1.soumya.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
:wq
Re-source the bash profile so the settings take effect:-
$ . /home/oracle/.bash_profile
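This article shows the profile for node1 only; if you mirror it on rac2 (an assumption on my part, as the original shows only node1), the node-specific values would change as follows:
ORACLE_HOSTNAME=rac2.soumya.com; export ORACLE_HOSTNAME
ORACLE_SID=rac2; export ORACLE_SID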
Create a file called "/home/oracle/grid_env" with the following contents in node1:-
[oracle@rac1 ]$ vi /home/oracle/grid_env
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
:wq
Create a file called "/home/oracle/db_env" with the following contents in node1:-
[oracle@rac1 ]$ vi /home/oracle/db_env
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
:wq
[oracle@rac1 ]$ chmod 775 /home/oracle/db_env
[oracle@rac1 ]$ chmod 775 /home/oracle/grid_env
Once the "/home/oracle/grid_env" has been run, you will be able to switch between environments as follows.
$ grid_env
$ echo $ORACLE_HOME
/u01/app/11.2.0/grid
$ db_env
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1
We've made a lot of changes, so it's worth doing a reboot of the servers at this point to make sure all the changes have taken effect.
# reboot
On Storage Node(Racstorage):-
Hostname : racstorage.soumya.com
IP Address eth0: 192.168.2.102 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
# service iptables stop
# chkconfig iptables off
# sestatus
Make sure SELinux is disabled or set to permissive on this node too.
# vi /etc/hosts
#Public IP
192.168.2.110 rac1.soumya.com rac1
192.168.2.111 rac2.soumya.com rac2
192.168.2.102 racstorage.soumya.com racstorage
:wq
Create Shared Disks:-
First we need to set up some NFS shares. In this case we will do this on a separate server (racstorage.soumya.com).
# mkdir /shared_config
# mkdir /shared_grid
# mkdir /shared_home
# mkdir /shared_data
Add the following lines to the "/etc/exports" file. on racstorage node.
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
# chkconfig nfs on
# service nfs restart
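To confirm the shares are being exported, check on the storage node and then from either RAC node (showmount ships with the standard NFS utilities):
# exportfs -v
# showmount -e racstorage.soumya.com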
On both the RAC1 and RAC2 nodes, create the directories in which the Oracle software will be installed:-
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# mkdir -p /u01/oradata
# mkdir -p /u01/shared_config
# chown -Rf oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
# chmod -Rf 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
Add the following lines to the "/etc/fstab" file of node1 and node2:-#vi /etc/fstab
racstorage.soumya.com:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
racstorage.soumya.com:/shared_grid /u01/app/11.2.0/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
racstorage.soumya.com:/shared_home /u01/app/oracle/product/11.2.0/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
racstorage.soumya.com:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
:wq
Mount the NFS shares on both servers (node1 and node2):-
# mount /u01/shared_config
# mount /u01/app/11.2.0/grid
# mount /u01/app/oracle/product/11.2.0/db_1
# mount /u01/oradata
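A quick check that all four shares are mounted with the intended options:
# mount | grep racstorage
# df -h /u01/shared_config /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/db_1 /u01/oradata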
Make sure the permissions on the shared directories are correct (on node1 and node2):-
# chown -R oracle:oinstall /u01/shared_config
# chown -R oracle:oinstall /u01/app/11.2.0/grid
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/oradata
Start both RAC nodes, log in to RAC1 as the oracle user and start the Oracle installer.
$ cd /u01/grid
$ sh runInstaller
Steps of grid installation:-
1.Select the "Install and Configure Grid Infrastructure for a Cluster" option, then click the "Next" button.
2.Select the "Advanced Installation" option, then click the "Next" button.
3.Select the required language support, then click the "Next" button.
4.Enter cluster information and uncheck the "Configure GNS" option, then click the "Next" button.
cluster-name:-rac-cluster
scan name:- rac-scan.soumya.com [this is the hostname mapped to the SCAN IP]
scan port:1521
5.On the "Specify Node Information" screen, click the "Add" button.
6.Enter the details of the second node in the cluster, then click the "OK" button.
Hostname:- rac2.soumya.com
Virtual IP Name:- rac2-vip.soumya.com
7.Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to to configure SSH connectivity, and the "Test" button to test it once it is complete. Click the "Next" button.
8.Check the public and private networks are specified correctly, then click the "Next" button.
9.On the network interface usage screen, click "Next".
10.On the storage option screen, select the "Shared File System" option, then click the "Next" button.
11.On the OCR storage screen, select "External Redundancy" and provide the OCR file location.
path location: /u01/shared_config/ocr_configuration
12.On the voting disk screen, choose "External Redundancy".
path location: /u01/shared_config/voting_disk
13.On the failure isolation screen, select the option "Do not use Intelligent Platform Management Interface (IPMI)".
14.On the operating system groups screen, select "dba" for the three given groups. On pressing "Next" it may prompt a warning
such as "Possible invalid choice for OSASM, OSDBA, OSOPER etc. group". Select "Yes" to continue.
15.On the installation location screen, select the path for the Oracle base: /u01/app/oracle
and the path for the software location: /u01/app/11.2.0/grid
16.On the create inventory screen, the path for the inventory directory is: /u01/app/oraInventory
17.Wait while the prerequisite checks complete. If you have any issues, either fix them or check the "Ignore All" checkbox and click the "Next" button. If there are no issues, you will move directly to the summary screen. If you are happy with the summary information, click the "Finish" button.
18.Wait while the setup takes place.
When prompted, run the configuration scripts on each node, one node at a time (allow each script to finish on the first node before starting it on the second):-
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh
Wait for the configuration assistants to complete.
[INS-20802] Oracle Cluster Verification Utility failed.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.
Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button.
Click the "Close" button to exit the installer.
Install Binaries and create the Database:-
On node1:-
Start all the RAC nodes, log in to RAC1 as the oracle user and start the Oracle installer.
[oracle@rac1 database]$ cd /u01/linux.x64_11gR2_database_1of2_2/database
[oracle@rac1 database]$ ./runInstaller
Uncheck the security updates checkbox and click the "Next" button.
Accept the "Create and configure a database" option by clicking the "Next" button.
Accept the "Server Class" option by clicking the "Next" button.
Make sure both nodes are selected, then click the "Next" button.
Accept the "Typical install" option by clicking the "Next" button.
oracle base :/u01/app/oracle
software location: /u01/app/oracle/product/11.2.0/db_1
storage type: file system
database file location : /u01/oradata
Database edition : enterprise edition
osdba group: dba
global database name : rac.soumya.com
enter administrative password: ******
Wait for the prerequisite check to complete. If there are any problems either fix them, or check the "Ignore All" checkbox and click the "Next" button.
That's it. We have successfully set up a two-node RAC cluster.
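As a final verification sketch (assuming the database name "rac" chosen in the typical install above):
$ db_env
$ srvctl status database -d rac
$ sqlplus / as sysdba
SQL> SELECT inst_id, instance_name, status FROM gv$instance;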