Installing Oracle RAC 11g R2 on RHEL 6
Oracle Grid Infrastructure 11g Release 2 (11.2)
With Oracle Grid Infrastructure 11g Release 2 (11.2), Automatic Storage
Management (ASM) and Oracle Clusterware are packaged together in a single
binary distribution and installed into a single home directory, referred to as
the Grid Infrastructure home. You must install the Grid Infrastructure in
order to use Oracle RAC 11g Release 2. Configuration assistants that start
after the installer interview process are responsible for configuring ASM and
Oracle Clusterware. While the installation of the combined products is called
Oracle Grid Infrastructure, Oracle Clusterware and Automatic Storage
Management remain separate products.
Step 1: Cabling the Server and Installing RHEL 6 ("Enable IPv6 support" must be set to OFF)
Step 2: Download the Oracle Grid Infrastructure and Oracle Database 11g R2
software from My Oracle Support. Installation types and associated zip files:

| Installation Type | Zip File |
| --- | --- |
| Oracle Database (includes Oracle Database and Oracle RAC). Note: you must download both zip files to install Oracle Database. | p10098816_112020_<platform>_1of7.zip, p10098816_112020_<platform>_2of7.zip |
| Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart) | p10098816_112020_<platform>_3of7.zip |
| Oracle Database Client | p10098816_112020_<platform>_4of7.zip |
| Oracle Gateways | p10098816_112020_<platform>_5of7.zip |
| Oracle Examples | p10098816_112020_<platform>_6of7.zip |
| Deinstall | p10098816_112020_<platform>_7of7.zip |
Step 3: Checking SCAN for the cluster:
For high availability and scalability, Oracle recommends that you configure
the SCAN name for round-robin resolution to three IP addresses. At a minimum,
the SCAN must resolve to at least one address. The SCAN virtual IP name is
similar to the names used for a node's virtual IP address, such as racnode1-vip. However, unlike a virtual
IP, the SCAN is associated with the entire cluster, rather than an individual
node, and can be associated with multiple IP addresses, not just one address.
During installation of the Oracle grid infrastructure, a listener is created
for each of the SCAN addresses. Clients that access the Oracle RAC database
should use the SCAN or SCAN address, not the VIP name or address. If an
application uses a SCAN to connect to the cluster database, the network
configuration files on the client computer do not need to be modified when
nodes are added to or removed from the cluster. Note that SCAN addresses,
virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN should be configured so that it is resolvable either by using Grid
Naming Service (GNS) within the cluster or by using the traditional method of
assigning static IP addresses using Domain Name Service (DNS) resolution.
Network Hardware Requirements
The following is a list of hardware requirements for network configuration:
- Each Oracle RAC node must have at least two
network adapters or network interface cards (NICs) — one for the public network
interface and one for the private network interface (the interconnect). To use
multiple NICs for the public network or for the private network, Oracle
recommends that you use NIC bonding. Use separate bonding for the public and
private networks (i.e. bond0 for
the public network and bond1 for
the private network), because during installation each interface is defined as
a public or private interface. NIC bonding is not covered in this article.
- The public interface names associated with the
network adapters for each network must be the same on all nodes, and likewise
the private interface names associated with the network adapters must be the
same on all nodes.
- For example, with our two-node cluster, you cannot
configure network adapters on racnode1
with eth0 as the public
interface, but on racnode2 have eth1 as the public interface. Public
interface names must be the same, so you must configure eth0 as public on both nodes. You should
configure the private interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must be the private interface for racnode2.
- For the public network, each network adapter
must support TCP/IP.
- For the private network, the interconnect must
support the user datagram protocol (UDP) using high-speed network adapters and
switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).
- UDP is the default interconnect protocol for Oracle
RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use
a switch for the interconnect. Oracle recommends that you use a dedicated
switch.
- Oracle does not support token-rings or crossover
cables for the interconnect.
- For the private network, the endpoints of all
designated interconnect interfaces must be completely reachable on the
network: every node must be able to reach every private network interface.
You can test whether an interconnect interface is reachable using ping.
- During installation of Oracle grid
infrastructure, you are asked to identify the planned use for each network
interface that OUI detects on your cluster node. You must identify each
interface as a public interface, a private interface, or not
used and you must use the same private interfaces for both Oracle
Clusterware and Oracle RAC.
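The reachability test above can be scripted. Below is a minimal sketch (the check_reachable helper is my own name, not part of any Oracle tool); run it from each node against every private interconnect name:

```shell
#!/bin/sh
# Ping each host twice; report and stop at the first unreachable one.
check_reachable() {
    for host in "$@"; do
        if ping -c 2 -W 2 "$host" > /dev/null 2>&1; then
            echo "OK:   $host"
        else
            echo "FAIL: $host"
            return 1
        fi
    done
}

# Private interconnect names used later in this guide:
check_reachable va-stg-orac01-priv va-stg-orac02-priv || \
    echo "check cabling and /etc/hosts entries before continuing"
```

A FAIL here usually points at cabling, switch configuration, or a missing hosts/DNS entry rather than an Oracle problem.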
You can bond separate interfaces to a common
interface to provide redundancy, in case of a NIC failure, but Oracle
recommends that you do not create separate interfaces for Oracle Clusterware
and Oracle RAC. If you use more than one NIC for the private interconnect, then
Oracle recommends that you use NIC bonding. Note that multiple private
interfaces provide load balancing but not failover, unless bonded.
Starting with Oracle Clusterware 11g release
2, you no longer need to provide a private name or IP address for the
interconnect. IP addresses on the subnet you identify as private are assigned
as private IP addresses for cluster member nodes. You do not need to configure
these addresses manually in a hosts file. If you want name resolution for the
interconnect, you can configure private IP names in the hosts file or in DNS.
However, Oracle Clusterware assigns interconnect addresses on the interface
defined during installation as the private interface (eth1, for example) and
on the subnet used for the private network.
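For reference, NIC bonding on RHEL 6 is configured through ifcfg files under /etc/sysconfig/network-scripts. The fragment below is a sketch of an active-backup bond for this guide's private interconnect (bond1 with eth1 as one slave); the mode and miimon values are illustrative assumptions, not requirements stated in this guide:

```text
# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
IPADDR=172.16.20.61
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
```

Repeat the slave stanza for each NIC in the bond, then restart networking on that node.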
In practice, and for the purpose of this guide, I
will continue to include a private name and IP address on each node for the RAC
interconnect. It provides self-documentation and a set of end-points on the
private network I can use for troubleshooting purposes:
192.168.2.151 racnode1-priv
192.168.2.152 racnode2-priv
In a production environment that uses iSCSI for
network storage, it is highly recommended to configure a redundant third
network interface (eth2, for
example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. For
the sake of brevity, this article will configure the iSCSI network storage
traffic on the same network as the RAC private interconnect (eth1). Combining the iSCSI storage traffic
and cache fusion traffic for Oracle RAC on the same network interface works
great for an inexpensive test system (like the one described in this article)
but should never be considered for production.
The basic idea of a TOE is to offload the
processing of TCP/IP protocols from the host processor to the hardware on the
adapter or in the system. A TOE is often embedded in a network interface card
(NIC) or a host bus adapter (HBA) and used to reduce the amount of TCP/IP
processing handled by the CPU and server I/O subsystem and improve overall
performance.
[grid@va-stg-orac01 patch]$ nslookup va-stg-orac
Server: 10.0.50.20
Address: 10.0.50.20#53
Name: va-stg-orac.thexchange.com
Address: 10.0.40.60
Name: va-stg-orac.thexchange.com
Address: 10.0.40.59
Name: va-stg-orac.thexchange.com
Address: 10.0.40.58
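With the SCAN resolving to three addresses as shown, client configuration stays simple. The tnsnames.ora fragment below is a hedged example; MYDB and mydb_service are placeholders, not names from this environment (the equivalent EZConnect string would be va-stg-orac:1521/mydb_service):

```text
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = va-stg-orac.thexchange.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb_service)
    )
  )
```

Because the entry names only the SCAN, it does not need to change when nodes are added to or removed from the cluster.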
Step 4: Checking /etc/hosts for network configuration settings
On Node 1:
[grid@va-stg-orac01 patch]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 - Disables IPv6
# Public Network
10.0.40.61 va-stg-orac01.thexchange.com va-stg-orac01
10.0.40.62 va-stg-orac02.thexchange.com va-stg-orac02
# Private Interconnect
172.16.20.61 va-stg-orac01-priv.thexchange.com va-stg-orac01-priv
172.16.20.62 va-stg-orac02-priv.thexchange.com va-stg-orac02-priv
172.16.21.61 va-stg-orac01-priv.thexchange.com va-stg-orac01-priv
172.16.21.62 va-stg-orac02-priv.thexchange.com va-stg-orac02-priv
# Public Virtual IP (VIP) addresses
10.0.40.56 va-stg-orac01-vip.thexchange.com va-stg-orac01-vip
10.0.40.57 va-stg-orac02-vip.thexchange.com va-stg-orac02-vip
# For Netbackup
10.0.60.35 va-tm01
[grid@va-stg-orac01 patch]$
On Node 2:
[oracle@va-stg-orac02 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 - Disables IPv6
# Public Network
10.0.40.61 va-stg-orac01.thexchange.com va-stg-orac01
10.0.40.62 va-stg-orac02.thexchange.com va-stg-orac02
# Private Interconnect
172.16.20.61 va-stg-orac01-priv.thexchange.com va-stg-orac01-priv
172.16.20.62 va-stg-orac02-priv.thexchange.com va-stg-orac02-priv
172.16.21.61 va-stg-orac01-priv.thexchange.com va-stg-orac01-priv
172.16.21.62 va-stg-orac02-priv.thexchange.com va-stg-orac02-priv
# Public Virtual IP (VIP) addresses
10.0.40.56 va-stg-orac01-vip.thexchange.com va-stg-orac01-vip
10.0.40.57 va-stg-orac02-vip.thexchange.com va-stg-orac02-vip
On Node 1
[grid@va-stg-orac01 patch]$ sudo ifconfig -a
bond0 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
inet addr:10.0.40.61 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:573017338 errors:0
dropped:0 overruns:0 frame:0
TX packets:379767606 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:450029685976 (419.1
GiB) TX bytes:189494905108 (176.4 GiB)
bond0:1 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
inet addr:10.0.40.59 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
bond0:2 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
inet addr:10.0.40.58 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
bond0:3 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
inet addr:10.0.40.56 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
bond1 Link encap:Ethernet HWaddr 60:EB:69:8F:96:05
inet addr:172.16.20.61 Bcast:172.16.20.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:4080634083 errors:0
dropped:16142 overruns:10597 frame:0
TX packets:3968361302 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2633289948475 (2.3 TiB) TX bytes:2596341223405 (2.3 TiB)
bond2 Link encap:Ethernet HWaddr 60:EB:69:8F:96:06
inet addr:172.16.21.61 Bcast:172.16.21.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:256569 errors:0 dropped:0
overruns:0 frame:0
TX packets:233232 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:32161831 (30.6 MiB) TX bytes:29227586 (27.8 MiB)
eth0 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:387673765 errors:0
dropped:0 overruns:0 frame:0
TX packets:363026756 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:265527876151 (247.2
GiB) TX bytes:182661018019 (170.1 GiB)
Memory:91e20000-91e40000
eth1 Link encap:Ethernet HWaddr 60:EB:69:8F:96:05
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:147490 errors:0 dropped:0
overruns:0 frame:0
TX packets:46594235 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20538784 (19.5 MiB) TX bytes:10207616665 (9.5 GiB)
Memory:91e00000-91e20000
eth2 Link encap:Ethernet HWaddr 60:EB:69:8F:96:06
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:127251 errors:0 dropped:0
overruns:0 frame:0
TX packets:115486 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15779060 (15.0 MiB) TX bytes:14307174 (13.6 MiB)
Memory:91d20000-91d40000
eth3 Link encap:Ethernet HWaddr 60:EB:69:8F:96:07
BROADCAST MULTICAST MTU:1500
Metric:1
RX packets:0 errors:0 dropped:0
overruns:0 frame:0
TX packets:0 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:91d00000-91d20000
eth4 Link encap:Ethernet HWaddr 60:EB:69:8F:96:04
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:185343573 errors:0
dropped:0 overruns:0 frame:0
TX packets:16740850 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:184501809825 (171.8
GiB) TX bytes:6833887089 (6.3 GiB)
Memory:91a80000-91b00000
eth5 Link encap:Ethernet HWaddr 60:EB:69:8F:96:06
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:129318 errors:0 dropped:0
overruns:0 frame:0
TX packets:117746 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16382771 (15.6 MiB) TX bytes:14920412 (14.2 MiB)
Memory:91a00000-91a80000
eth6 Link encap:Ethernet HWaddr 60:EB:69:8F:96:05
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:4080486593 errors:0
dropped:16142 overruns:10597 frame:0
TX packets:3921767067 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2633269409691 (2.3 TiB) TX bytes:2586133606740 (2.3 TiB)
Memory:91980000-91a00000
eth7 Link encap:Ethernet HWaddr 00:19:99:99:19:09
BROADCAST MULTICAST MTU:1500
Metric:1
RX packets:0 errors:0 dropped:0
overruns:0 frame:0
TX packets:0 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:91900000-91980000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436
Metric:1
RX packets:775528359 errors:0
dropped:0 overruns:0 frame:0
TX packets:775528359 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:171572713378 (159.7
GiB) TX bytes:171572713378 (159.7 GiB)
On Node 2
[oracle@va-stg-orac02 ~]$ sudo ifconfig -a
bond0 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C2
inet addr:10.0.40.62 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:967549 errors:0 dropped:0
overruns:0 frame:0
TX packets:533767 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:136646353 (130.3 MiB) TX bytes:136679249 (130.3 MiB)
bond0:1 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C2
inet addr:10.0.40.57 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
bond0:2 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C2
inet addr:10.0.40.60 Bcast:10.0.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
bond1 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C4
inet addr:172.16.20.62 Bcast:172.16.20.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:31165217 errors:0 dropped:0
overruns:0 frame:0
TX packets:37201455 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:23755743953 (22.1 GiB) TX bytes:33818407212 (31.4 GiB)
bond2 Link
encap:Ethernet HWaddr 60:EB:69:A5:FB:C3
inet addr:172.16.21.62 Bcast:172.16.21.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER
MULTICAST MTU:1500 Metric:1
RX packets:19863 errors:0 dropped:0
overruns:0 frame:0
TX packets:18095 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2483868 (2.3 MiB) TX bytes:2270687 (2.1 MiB)
eth0 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C2
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:652555 errors:0 dropped:0
overruns:0 frame:0
TX packets:452429 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:108975911 (103.9 MiB) TX bytes:119926546 (114.3 MiB)
Memory:91e20000-91e40000
eth1 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C3
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:10024 errors:0 dropped:0
overruns:0 frame:0
TX packets:9124 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1263687 (1.2 MiB) TX bytes:1159267 (1.1 MiB)
Memory:91e00000-91e20000
eth2 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C4
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:10202 errors:0 dropped:0
overruns:0 frame:0
TX packets:8975 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1351078 (1.2 MiB) TX bytes:1111526 (1.0 MiB)
Memory:91d20000-91d40000
eth3 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C5
BROADCAST MULTICAST MTU:1500
Metric:1
RX packets:0 errors:0 dropped:0
overruns:0 frame:0
TX packets:0 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:91d00000-91d20000
eth4 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C2
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:314994 errors:0 dropped:0
overruns:0 frame:0
TX packets:81338 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:27670442 (26.3 MiB) TX bytes:16752703 (15.9 MiB)
Memory:91a80000-91b00000
eth5 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C4
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:31155015 errors:0
dropped:0 overruns:0 frame:0
TX packets:37192480 errors:0
dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:23754392875 (22.1 GiB) TX bytes:33817295686 (31.4 GiB)
Memory:91a00000-91a80000
eth6 Link encap:Ethernet HWaddr 60:EB:69:A5:FB:C3
UP BROADCAST RUNNING SLAVE
MULTICAST MTU:1500 Metric:1
RX packets:9839 errors:0 dropped:0
overruns:0 frame:0
TX packets:8971 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1220181 (1.1 MiB) TX bytes:1111420 (1.0 MiB)
Memory:91980000-91a00000
eth7 Link encap:Ethernet HWaddr 00:19:99:99:17:31
BROADCAST MULTICAST MTU:1500
Metric:1
RX packets:0 errors:0 dropped:0
overruns:0 frame:0
TX packets:0 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:91900000-91980000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436
Metric:1
RX packets:4396315 errors:0 dropped:0
overruns:0 frame:0
TX packets:4396315 errors:0 dropped:0
overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2014440067 (1.8 GiB) TX bytes:2014440067 (1.8 GiB)
[oracle@va-stg-orac02 ~]$
Network Configuration

| Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Node 1 Public | Node 1 | racnode1 | va-stg-orac01 | Public | 10.0.40.61 | Fixed | DNS, hosts file |
| Node 1 VIP | Node 1 | racnode1 | va-stg-orac01-vip | VIP | 10.0.40.56 | Fixed | DNS, hosts file |
| Node 1 Private | Node 1 | Selected by Oracle Clusterware | va-stg-orac01-priv | Private | 172.16.20.61 | Fixed | DNS, hosts file, or none |
| Node 1 Private | Node 1 | Selected by Oracle Clusterware | va-stg-orac01-priv | Private | 172.16.21.61 | Fixed | DNS, hosts file, or none |
| Node 2 Public | Node 2 | racnode2 | va-stg-orac02 | Public | 10.0.40.62 | Fixed | DNS, hosts file |
| Node 2 VIP | Node 2 | racnode2 | va-stg-orac02-vip | VIP | 10.0.40.57 | Fixed | DNS, hosts file |
| Node 2 Private | Node 2 | Selected by Oracle Clusterware | va-stg-orac02-priv | Private | 172.16.20.62 | Fixed | DNS, hosts file, or none |
| Node 2 Private | Node 2 | Selected by Oracle Clusterware | va-stg-orac02-priv | Private | 172.16.21.62 | Fixed | DNS, hosts file, or none |
| SCAN VIP 1 | none | Selected by Oracle Clusterware | va-stg-orac | virtual | 10.0.40.58 | Fixed | DNS |
| SCAN VIP 2 | none | Selected by Oracle Clusterware | va-stg-orac | virtual | 10.0.40.59 | Fixed | DNS |
| SCAN VIP 3 | none | Selected by Oracle Clusterware | va-stg-orac | virtual | 10.0.40.60 | Fixed | DNS |
[grid@va-stg-orac01 etc]$ cat sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.shmmax = 262144000000
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
#
#
net.ipv6.conf.all.disable_ipv6=1
fs.file-max = 6815744
kernel.sem = 250 32000 100
[grid@va-stg-orac01 etc]$
[oracle@va-stg-orac02 ~]$ cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
#kernel.shmmax = 536870912
kernel.shmmax = 262144000000
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
#
#
net.ipv6.conf.all.disable_ipv6=1
fs.file-max = 6815744
kernel.sem = 250 32000 100
[oracle@va-stg-orac02 ~]$
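Each sysctl parameter maps onto a file under /proc/sys (dots become slashes), so the values actually in effect can be verified without a reboot; load the file first with sysctl -p. A small sketch (the get_param helper is my own name):

```shell
#!/bin/sh
# Print the live value of a sysctl parameter by reading /proc/sys.
get_param() {
    cat "/proc/sys/$(echo "$1" | tr '.' '/')"
}

# Spot-check the Oracle-related settings from /etc/sysctl.conf:
for p in fs.file-max fs.aio-max-nr kernel.shmmax net.ipv4.ip_local_port_range; do
    echo "$p = $(get_param "$p")"
done
```

Compare the printed values on both nodes against the file; a mismatch usually means `sysctl -p` has not been run since the last edit.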
Confirm the RAC Node Name is Not Listed in the Loopback Address
Ensure that the node name (racnode1 or racnode2) is not included in the
loopback address entry of the /etc/hosts file. If the machine name is listed
in the loopback address entry, as below:
127.0.0.1 va-stg-orac01 localhost.localdomain localhost
it will need to be removed, as shown below:
127.0.0.1 localhost.localdomain localhost
If the RAC node name is listed for the loopback address, you will receive one
of the following errors during the RAC installation:
ORA-00603: ORACLE server session terminated by fatal error
or
ORA-29702: error occurred in Cluster Group Service operation
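This check is easy to script. A minimal sketch (check_loopback is my helper name, not an Oracle utility):

```shell
#!/bin/sh
# Return 1 (and print the offending line) if the node name appears
# on the 127.0.0.1 entry of the given hosts file.
check_loopback() {
    hosts_file="$1"
    node_name="$2"
    bad=$(grep '^127\.0\.0\.1' "$hosts_file" | grep -w "$node_name")
    if [ -n "$bad" ]; then
        echo "Remove $node_name from loopback entry: $bad"
        return 1
    fi
    echo "Loopback entry OK"
}

check_loopback /etc/hosts va-stg-orac01
```

Run it once per node with that node's own name before starting the installer.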
Cluster Time Synchronization Service
[grid@va-stg-orac01 etc]$ cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
[grid@va-stg-orac01 etc]$
[oracle@va-stg-orac02 ~]$ cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
[oracle@va-stg-orac02 ~]$
If you change this file, restart the NTP service (on RHEL 6 the service name
is ntpd):
# /sbin/service ntpd restart
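The -x (slewing) option shown in OPTIONS is what Oracle's Cluster Time Synchronization checks expect. A hedged sketch that verifies the flag is present (check_slew is my helper name):

```shell
#!/bin/sh
# Return 0 if the ntpd OPTIONS line contains the -x slewing flag.
check_slew() {
    grep '^OPTIONS=' "$1" 2>/dev/null | grep -q -- '-x'
}

if check_slew /etc/sysconfig/ntpd; then
    echo "ntpd slewing enabled"
else
    echo "add -x to OPTIONS in /etc/sysconfig/ntpd"
fi
```

Run it on both nodes; if ntpd is not configured with -x, the installer falls back to its own Cluster Time Synchronization Service.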
Check passwordless SSH connectivity between the nodes.
[grid@va-stg-orac01 etc]$ ssh grid@va-stg-orac02
Last login: Fri Jul 8 14:04:46 2011 from 10.0.20.105
[grid@va-stg-orac02 ~]$
Create Job Role Separation Operating System Privileges Groups, Users, and
Directories:
Perform the following
user, group, directory configuration, and setting shell limit tasks for the grid and oracle
users on both Oracle RAC nodes in the cluster.
cat /etc/passwd
oracle:x:505:503::/home/oracle:/bin/bash
grid:x:1100:503:Grid Infrastructure Owner:/home/grid:/bin/bash
[grid@va-stg-orac01 etc]$ cat /etc/group
dba:x:502:rdouglas,grid,oracle
oinstall:x:503:grid,oracle
mchandrashekar:x:504:
asmadmin:x:1200:grid,oracle
asmdba:x:1201:grid,oracle
asmoper:x:1202:grid,oracle
oper:x:1301:grid,oracle
This section provides instructions on how to create the operating system
users and groups required to install all Oracle software using a Job Role
Separation configuration. The commands in this section should be performed on
both Oracle RAC nodes as root to create these groups, users, and directories.
Note that the group and user IDs must be identical on both Oracle RAC nodes
in the cluster. Check that the group and user IDs you want to use are
available on each cluster member node, and confirm that the primary group for
each Grid Infrastructure installation owner has the same name and group ID,
which for the purpose of this guide is oinstall (GID 1000).
A Job Role Separation privileges configuration is one in which operating
system groups and users divide administrative access to the Oracle Grid
Infrastructure installation from the administrative users and groups
associated with other Oracle installations (e.g. the Oracle Database
software). Administrative access is granted by membership in separate
operating system groups, and installation privileges are granted by using a
different installation owner for each Oracle installation.
One OS user will be created to own each Oracle software product — "grid" for the Oracle grid
infrastructure owner and "oracle"
for the Oracle RAC software. Throughout this article, a user created to own the
Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware
and Oracle Automatic Storage Management binaries. The user created to own the
Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners
must have the Oracle Inventory group (oinstall)
as their primary group, so that each Oracle software installation owner can
write to the central inventory (oraInventory), and so that OCR and Oracle
Clusterware resource permissions are set correctly. The Oracle RAC software
owner must also have the OSDBA group and the optional OSOPER group as secondary
groups.
This type of configuration is optional but highly recommended by Oracle for
organizations that need to restrict user access to Oracle software by
responsibility areas for different administrator users. For example, a small
organization could simply allocate operating system user privileges so that
one administrative user and one group are used for operating system
authentication for all system privileges on the storage and database tiers.
With this type of configuration, you can designate the oracle user to be the sole installation
owner for all Oracle software (Grid infrastructure and the Oracle database
software), and designate oinstall
to be the single group whose members are granted all system privileges for
Oracle Clusterware, Automatic Storage Management, and all Oracle Databases on
the servers, and all privileges as installation owners. Other organizations,
however, have specialized system roles that are responsible for installing
the Oracle software, such as system administrators, network administrators,
or storage administrators. These different administrative users can configure a
system in preparation for an Oracle grid infrastructure for a cluster
installation, and complete all configuration tasks that require operating
system root privileges. When
grid infrastructure installation and configuration is completed successfully, a
system administrator should only need to provide configuration information and
to grant access to the database administrator to run scripts as root during an Oracle RAC installation.
The following O/S groups will be created to support job role separation:

| Description | OS Group Name | OS Users Assigned to this Group | Oracle Privilege | Oracle Group Name |
| --- | --- | --- | --- | --- |
| Oracle Inventory and Software Owner | oinstall | grid, oracle | | |
| Oracle Automatic Storage Management Group | asmadmin | grid | SYSASM | OSASM |
| ASM Database Administrator Group | asmdba | grid, oracle | SYSDBA for ASM | OSDBA for ASM |
| ASM Operator Group | asmoper | grid | SYSOPER for ASM | OSOPER for ASM |
| Database Administrator | dba | oracle | SYSDBA | OSDBA |
| Database Operator | oper | oracle | SYSOPER | OSOPER |
- Oracle Inventory Group (typically oinstall)
Members of the OINSTALL
group are considered the "owners" of the Oracle software and are
granted privileges to write to the Oracle central inventory (oraInventory).
When you install Oracle software on a Linux system for the first time, OUI
creates the /etc/oraInst.loc
file. This file identifies the name of the Oracle Inventory group (by default, oinstall), and the path of the Oracle
Central Inventory directory.
By default, if an oraInventory group does not
exist, then the installer lists the primary group of the installation owner for
the grid infrastructure for a cluster as the oraInventory group. Ensure that
this group is available as a primary group for all planned Oracle software
installation owners. For the purpose of this guide, the grid and oracle
installation owners must be configured with oinstall
as their primary group.
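For reference, after the first installation OUI records the inventory group and location in /etc/oraInst.loc. A typical file looks like the fragment below; the /u01 inventory path is a conventional OFA location assumed here, not something this guide has configured yet:

```text
# /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```

Checking this file on an existing server is a quick way to confirm which group was used as the Oracle Inventory group.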
- The Oracle Automatic Storage Management Group (typically asmadmin)
This is a required group. Create this group as a
separate group if you want to have separate administration privilege groups for
Oracle ASM and Oracle Database administrators. In Oracle documentation, the
operating system group whose members are granted privileges is called the OSASM group, and in code examples, where
there is a group specifically created to grant this privilege, it is referred
to as asmadmin.
Members of the OSASM
group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system
authentication. The SYSASM
privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now
fully separated from the SYSDBA
privilege in Oracle ASM 11g release 2 (11.2). SYSASM privileges no longer provide access
privileges on an RDBMS instance. Providing system privileges for the storage
tier using the SYSASM privilege
instead of the SYSDBA privilege
provides a clearer division of responsibility between ASM administration and
database administration, and helps to prevent different databases using the
same storage from accidentally overwriting each other's files. The SYSASM privileges permit mounting and
dismounting disk groups, and other storage administration tasks.
- The ASM Database Administrator Group (OSDBA for ASM, typically asmdba)
Membership in the ASM Database Administrator group (OSDBA for ASM) grants a
subset of the SYSASM privileges: read and write access to files managed by
Oracle ASM. The Grid Infrastructure installation owner (grid) and all Oracle
Database software owners (oracle) must be members of this group, and all
users with OSDBA membership on databases that have access to the files
managed by Oracle ASM must be members of the OSDBA group for ASM.
- ASM Operator Group (OSOPER for ASM, typically asmoper)
This is an optional group. Create this group if you
want a separate group of operating system users to have a limited set of Oracle
ASM instance administrative privileges (the SYSOPER for ASM privilege),
including starting up and stopping the Oracle ASM instance. By default, members
of the OSASM group also have all
privileges granted by the SYSOPER
for ASM privilege.
To use the ASM Operator group to create an ASM administrator group with
fewer privileges than the default asmadmin group, you must choose the
Advanced installation type to install the Grid Infrastructure software. In
this case, OUI prompts you to specify the name of this group. In this guide,
this group is asmoper.
If you want to have an OSOPER for ASM group, then
the grid infrastructure for a cluster software owner (grid) must be a member of this group.
·
Database Administrator (OSDBA, typically dba)
Members of the OSDBA
group can use SQL to connect to an Oracle instance as SYSDBA using operating system
authentication. Members of this group can perform critical database
administration tasks, such as creating the database and instance startup and
shutdown. The default name for this group is dba.
The SYSDBA system privilege
allows access to a database instance even when the database is not open.
Control of this privilege is totally outside of the database itself.
The SYSDBA
system privilege should not be confused with the database role DBA. The DBA
role does not include the SYSDBA
or SYSOPER system privileges.
·
Database Operator (OSOPER, typically oper)
Members of the OSOPER
group can use SQL to connect to an Oracle instance as SYSOPER using operating system
authentication. Members of this optional group have a limited set of database
administrative privileges such as managing and running backups. The default
name for this group is oper. The
SYSOPER system privilege allows
access to a database instance even when the database is not open. Control of
this privilege is totally outside of the database itself. To use this group,
choose the Advanced installation type to install the Oracle database software.
Create Groups and User for Grid Infrastructure
Let's start this section by creating the recommended OS groups and user for
Grid Infrastructure on both Oracle RAC nodes:
[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
-------------------------------------------------
[root@racnode2 ~]# groupadd -g 1000 oinstall
[root@racnode2 ~]# groupadd -g 1200 asmadmin
[root@racnode2 ~]# groupadd -g 1201 asmdba
[root@racnode2 ~]# groupadd -g 1202 asmoper
[root@racnode2 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
[root@racnode2 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
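Before continuing, it can be worth confirming on each node that the grid user really landed in every required group. The helper below is a hypothetical convenience (not part of any Oracle tooling); it simply greps an `id` output line for a group name:

```shell
#!/bin/bash
# Hypothetical helper: confirm that an `id` output line lists a required group.
check_member() {
    local id_line="$1" group="$2"
    # id prints supplementary groups as "...,1200(asmadmin),..." -- look for "(group)"
    if printf '%s' "$id_line" | grep -q "($group)"; then
        echo "ok: $group"
    else
        echo "MISSING: $group"
    fi
}

# Example against the expected output for the grid user:
line='uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)'
for g in oinstall asmadmin asmdba asmoper; do
    check_member "$line" "$g"
done
```

On a real node you would feed it `"$(id grid)"` instead of the sample line.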
Set the password for the grid
account on both Oracle RAC nodes:
[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
Log in
to both Oracle RAC nodes as the grid user account and create the following
login script (.bash_profile):
Note: When setting the Oracle environment variables for each Oracle RAC node in the
login script, make certain to assign each RAC node a unique Oracle SID for ASM:
- racnode1: ORACLE_SID=+ASM1
- racnode2: ORACLE_SID=+ASM2
[grid@va-stg-orac01 etc]$ su - grid
Password:
[grid@va-stg-orac01 ~]$ cat .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
alias ls="ls -FA"
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
CRS_HOME=/u01/app/11.2.0/grid; export CRS_HOME
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=/u01/app/11.2.0/grid/bin
PATH=${PATH}:/u01/app/11.2.0/grid/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
[grid@va-stg-orac01 ~]$
[oracle@va-stg-orac02 ~]$ su - grid
Password:
[grid@va-stg-orac02 ~]$ cat .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
alias ls="ls -FA"
ORACLE_SID=+ASM2; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
CRS_HOME=/u01/app/11.2.0/grid; export CRS_HOME
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=/u01/app/11.2.0/grid/bin
PATH=${PATH}:/u01/app/11.2.0/grid/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
[grid@va-stg-orac02 ~]$
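A wrong or missing ORACLE_SID on either node will cause the ASM instance to fail to start, so a quick sanity check after sourcing the profile can save time. The snippet below is an illustrative sketch (the helper name and the variable list are my own, mirroring the profile above; adapt as needed):

```shell
#!/bin/bash
# Illustrative sketch: report any required Oracle environment variables
# that are unset or empty in the current shell (hypothetical helper).
check_oracle_env() {
    local missing=0 v
    for v in ORACLE_SID ORACLE_BASE ORACLE_HOME GRID_HOME; do
        if [ -z "${!v:-}" ]; then
            echo "unset: $v"
            missing=1
        fi
    done
    return $missing
}

# Example: after `su - grid` on racnode1, run check_oracle_env and also
# confirm the node-specific SID, e.g. [ "$ORACLE_SID" = "+ASM1" ]
```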
Create Groups and User for Oracle Database Software
Next, create the recommended OS groups and user for the Oracle database
software on both Oracle RAC nodes:
[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
-------------------------------------------------
[root@racnode2 ~]# groupadd -g 1300 dba
[root@racnode2 ~]# groupadd -g 1301 oper
[root@racnode2 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
[root@racnode2 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
Set the password for the oracle
account:
[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
Log in
to both Oracle RAC nodes as the oracle user account and create the
following login script (.bash_profile):
Note: When setting the Oracle environment variables for each Oracle RAC node in the
login script, make certain to assign each RAC node a unique Oracle SID:
- va-stg-orac01: ORACLE_SID=STGTDB1
- va-stg-orac02: ORACLE_SID=STGTDB2
[grid@va-stg-orac01 ~]$ su - oracle
Password:
[oracle@va-stg-orac01 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
alias ls="ls -FA"
ORACLE_SID=STGTDB1; export ORACLE_SID
ORACLE_UNQNAME=STGTDB; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=$ORACLE_HOME/bin
PATH=${PATH}:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
[oracle@va-stg-orac01 ~]$
[grid@va-stg-orac02 ~]$ su - oracle
Password:
[oracle@va-stg-orac02 ~]$ cat .bash_profile
alias ls="ls -FA"
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
ORACLE_SID=STGTDB2; export ORACLE_SID
ORACLE_UNQNAME=STGTDB; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
AGENT_HOME=$ORACLE_BASE/product/agent11g; export AGENT_HOME
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=$ORACLE_HOME/bin
PATH=${PATH}:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
PATH=${PATH}:/u01/app/oracle/product/agent11g/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
[oracle@va-stg-orac02 ~]$
Verify That the User nobody Exists
[oracle@va-stg-orac01 ~]$ id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[oracle@va-stg-orac01 ~]$

[oracle@va-stg-orac02 ~]$ id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[oracle@va-stg-orac02 ~]$
Create the Oracle Base Directory Path
The final step is to configure an Oracle base path compliant with an Optimal
Flexible Architecture (OFA) structure and correct permissions. This will need
to be performed on both Oracle RAC nodes in the cluster as root.
This guide assumes that the /u01
directory is being created in the root file system. Please note that this is
being done for the sake of brevity and is not recommended as a general
practice. Normally, the /u01
directory would be provisioned as a separate file system with either hardware
or software mirroring configured.
[root@va-stg-orac01 ~]# mkdir -p /u01/app/grid
[root@va-stg-orac01 ~]# mkdir -p /u01/app/11.2.0/grid
[root@va-stg-orac01 ~]# chown -R grid:oinstall /u01
[root@va-stg-orac01 ~]# mkdir -p /u01/app/oracle
[root@va-stg-orac01 ~]# chown oracle:oinstall /u01/app/oracle
[root@va-stg-orac01 ~]# chmod -R 775 /u01
-------------------------------------------------------------
[root@va-stg-orac02 ~]# mkdir -p /u01/app/grid
[root@va-stg-orac02 ~]# mkdir -p /u01/app/11.2.0/grid
[root@va-stg-orac02 ~]# chown -R grid:oinstall /u01
[root@va-stg-orac02 ~]# mkdir -p /u01/app/oracle
[root@va-stg-orac02 ~]# chown oracle:oinstall /u01/app/oracle
[root@va-stg-orac02 ~]# chmod -R 775 /u01
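The resulting modes can be confirmed with stat before moving on. This is a hedged sketch: it assumes GNU coreutils stat (as shipped with RHEL 6) and demonstrates the check against a scratch directory rather than the real /u01 tree:

```shell
#!/bin/bash
# Sketch: create a directory the same way as above and confirm its mode.
# On the real cluster you would instead run:
#   stat -c '%U:%G %a' /u01 /u01/app/oracle
scratch=$(mktemp -d)
mkdir -p "$scratch/u01/app/oracle"
chmod -R 775 "$scratch/u01"
stat -c '%a' "$scratch/u01"            # expect 775
stat -c '%a' "$scratch/u01/app/oracle" # expect 775
rm -rf "$scratch"
```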
At the end of this section, you should have the following on both Oracle RAC
nodes:
·
An Oracle central inventory group, or
oraInventory group (oinstall),
whose members have the central inventory group as their primary group and are
granted permissions to write to the oraInventory directory.
·
A separate OSASM group (asmadmin), whose members are granted the
SYSASM privilege to administer Oracle Clusterware and Oracle ASM.
·
A separate OSDBA for ASM group (asmdba), whose members include grid and oracle,
and who are granted access to Oracle ASM.
·
A separate OSOPER for ASM group (asmoper), whose members include grid, and who are granted limited Oracle
ASM administrator privileges, including the permissions to start and stop the
Oracle ASM instance.
·
An Oracle grid installation for a cluster
owner (grid), with the
oraInventory group as its primary group, and with the OSASM (asmadmin), OSDBA for ASM (asmdba) and OSOPER for ASM (asmoper) groups as secondary groups.
·
A separate OSDBA group (dba), whose members are granted the SYSDBA
privilege to administer the Oracle Database.
·
A separate OSOPER group (oper), whose members include oracle, and who are granted limited Oracle
database administrator privileges.
·
An Oracle Database software owner (oracle), with the oraInventory group as
its primary group, and with the OSDBA (dba),
OSOPER (oper), and the OSDBA for
ASM group (asmdba) as its secondary groups.
·
An OFA-compliant mount point /u01 owned by grid:oinstall before installation.
·
An Oracle base for the grid /u01/app/grid owned by grid:oinstall with 775 permissions, and
changed during the installation process to 755 permissions. The grid
installation owner Oracle base directory is the location where Oracle ASM
diagnostic and administrative log files are placed.
·
A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x)
permissions. These permissions are required for installation, and are changed
during the installation process to root:oinstall
with 755 permissions (drwxr-xr-x).
·
During installation, OUI creates the Oracle Inventory
directory in the path /u01/app/oraInventory.
This path remains owned by grid:oinstall,
to enable other Oracle software owners to write to the central inventory.
·
An Oracle base /u01/app/oracle owned by oracle:oinstall with 775 permissions.
Set Resource Limits for the Oracle Software Installation Users
To improve the performance of the software on Linux systems, you must
increase the following resource limits for the Oracle software owner users (grid, oracle):
Shell Limit | Item in limits.conf | Hard Limit
Maximum number of open file descriptors | nofile | 65536
Maximum number of processes available to a single user | nproc | 16384
Maximum size of the stack segment of the process | stack | 10240
To make these changes, run the following as root:
1. On
each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the
following example shows the software account owners oracle and grid):
grid    soft    nproc   2047
grid    hard    nproc   16384
grid    soft    nofile  1024
grid    hard    nofile  65536
oracle  soft    nproc   2047
oracle  hard    nproc   16384
oracle  soft    nofile  1024
oracle  hard    nofile  65536

The same entries are appended to the end of /etc/security/limits.conf on both
racnode1 and racnode2; the stock comment header that ships with the file is
left unchanged.
2. On
each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not
already exist:
[root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF
session    required     pam_limits.so
EOF

[root@racnode2 ~]# cat >> /etc/pam.d/login <<EOF
session    required     pam_limits.so
EOF
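To confirm the pam_limits line was appended exactly once per node (running the append twice leaves a duplicate entry), a simple grep count can be used. This is a sketch shown against a scratch file rather than the real /etc/pam.d/login:

```shell
#!/bin/bash
# Sketch: count pam_limits entries. On the real node you would run
#   grep -c 'pam_limits\.so' /etc/pam.d/login
# and expect the output to be 1.
f=$(mktemp)
printf 'session    required     pam_limits.so\n' >> "$f"
count=$(grep -c 'pam_limits\.so' "$f")
echo "pam_limits entries: $count"
rm -f "$f"
```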
3. Depending
on your shell environment, make the following changes to the default shell
startup file in order to change ulimit settings for all Oracle installation
owners (note that these examples show the users oracle and grid):
For the Bourne, Bash, or Korn shell, add the
following lines to the /etc/profile
file by running the following:
[root@racnode1 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

[root@racnode2 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
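After logging back in as grid or oracle, the effective limits can be read with the shell's ulimit builtin; a fresh login shell should report nofile 65536 and nproc 16384 if the profile and limits.conf changes took effect. A minimal check:

```shell
#!/bin/bash
# Read the current soft limits for open files and user processes.
# On a correctly configured node, as grid or oracle, a new login
# shell should show nofile=65536 and nproc=16384.
nofile=$(ulimit -n)
nproc=$(ulimit -u)
echo "nofile=$nofile nproc=$nproc"
```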
Logging In to a Remote System Using X Terminal
If you intend to install the Oracle grid infrastructure and Oracle RAC
software from a Windows workstation with an X11 display server installed, then
perform the following actions:
1.
Start the X11 display server software on the client
workstation.
2. Configure
the security settings of the X server software to permit remote hosts to
display X applications on the local system.
3. From
the client workstation, log in to the server where you want to install the
software as the Oracle grid infrastructure for a cluster software owner (grid) or the Oracle RAC software owner (oracle).
4. As
the software owner (grid, oracle), set the DISPLAY environment variable:
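For example (the hostname below is illustrative; substitute the address of your X11 workstation):

```shell
# Bourne/bash shell -- 'workstation' is a placeholder for your X11 client host:
DISPLAY=workstation:0.0
export DISPLAY
echo "DISPLAY is $DISPLAY"
```

If an X client such as xterm is installed on the server, launching it is a quick way to confirm the display forwarding works.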
Configure the Linux Servers for Oracle
Network Configuration