Build Your Own Oracle Infrastructure: Part 7 – Build Oracle RAC Servers.

Now it’s time to start the process of building our first Oracle RAC system. 
To do that, we will need two Oracle Database 12c server nodes running Oracle Linux.
This installment shows how to build that rather efficiently using a cool capability of Oracle VM.

Task #1: Install Oracle Linux on RACNODE1_VM.

In Oracle VM Manager, click on the Servers and VMs tab, highlight RACNODE1_VM, then click on the green arrow head Start icon to start the VM:

This will start the Oracle Linux 6 installation because this VM has the V52218-01.iso file loaded in its virtual CD/DVD drive and that device is listed first in the Boot Order. With the Install or upgrade an existing system option highlighted, press Enter:

Skip the media test:

Click Next:

Choose your language, then click Next:

Choose your keyboard layout, then click Next:

Choose Basic Storage Devices, then click Next:

Click the ‘Yes, discard any data’ button, then click Next:

Enter the hostname “racnode1.mynet.com”, then click Configure Network:

If you’re following this series, then you should be getting good at configuring the network settings. So just use these values to configure the 3 network interfaces:

Field                     eth0 (Public)    eth1 (Storage)   eth2 (Private)
IP Address                200.200.10.11    200.200.20.11    200.200.30.11
Netmask (auto-populates)  24               24               24
Gateway                   200.200.10.1     200.200.20.1     N/A
Primary DNS               200.200.10.1     N/A              N/A
Secondary DNS             8.8.8.8          N/A              N/A
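For reference, the installer persists these settings to files under /etc/sysconfig/network-scripts. A sketch of roughly what ifcfg-eth0 should contain afterwards (key names are the standard Oracle Linux 6 ones; the installer also writes UUID and HWADDR lines that are unique to your VM):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- public interface (sketch)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=200.200.10.11
PREFIX=24
GATEWAY=200.200.10.1
DNS1=200.200.10.1
DNS2=8.8.8.8
```

The eth1 and eth2 files follow the same pattern, minus the GATEWAY and DNS lines where the table above says N/A.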

With the network configured, choose your timezone then click Next:

Choose a root password, then click Next:

Choose Use All Space, check Review and modify partitioning layout, then click Next:

Using the right-pointing arrow button, move the Xen Virtual Block Device (40960 MB) over to the Install Target Devices pane, then click Next:

Edit the following screen using these values, then click Next:

Logical Volume   Size (MB)
lv_root          34312
lv_swap          6144

Click Format:

Click Write changes to disk:

Check Install boot loader on /dev/xvdb. Select Oracle Linux Server 6 (Label) and /dev/mapper/vg_racnode1-lv_root (Device). Click Next:

Select Database Server and Customize now, then click Next:

Use the following to complete the next screen, then click Next:

Category           Option
Base System        Leave the default options
Servers            Leave the default options
Web Services       Leave the default options
Databases          Uncheck everything
System Management  Leave the default options
Virtualization     Leave the default options
Desktops           Check everything
Applications       Check Internet Browser
Development        Leave the default options
UEK3 kernel repo   Leave the default options
Languages          Leave the default options

The Oracle Linux installation will start installing 1249 packages:

Eventually the installation will complete and you’ll see a rather splendid congratulations screen. Do NOT click Reboot yet:

Go back to Oracle VM Manager. Click on the Servers and VMs tab, highlight RACNODE1_VM, then click on the pencil icon to edit RACNODE1_VM. Click on the Disks tab. Click on the Eject a CDROM icon:

This will remove the V52218-01.iso from the CD/DVD drive, so when you reboot the Oracle Linux installation won’t start over. Click OK. Then return to the ‘congratulations’ screen and click Reboot. You may need to close the console session and open a new one. When the system comes back up, you’ll see this ‘welcome’ screen. Click Forward:

Accept the License Agreement and click Forward:

Choose the option to register at a later time and click Forward:

Click the ‘No thanks’ button, then click Forward:

Rather annoyingly, the harassment continues. Ignore the scaremongering and click Forward:

Don’t create a user here. We’ll be doing that later. Click Forward:

Set the system date and time if necessary. Uncheck the option to synchronize date and time over the network. We’ll be using Oracle’s Cluster Time Synchronization Service later on. If the time is wrong, then it’ll be Oracle’s fault not NTP’s fault. See what I did there? 🙂 Click Forward:

Uncheck Enable kdump, then click Finish:

Click Yes to agree to a reboot:

Click OK to start the system reboot:

When the system comes back up, you’ll see the familiar login screen:

That’s Oracle Linux 6.6 installed. Time to configure it for Oracle Database 12c RAC duties.

Task #2: Configure Oracle Linux on racnode1.

For reference, the official Oracle Database 12c Release 1 RAC installation guide can be found here.
Our installation has been simplified into the following 12 steps. That’s right, our very own 12-step programme. 🙂

Task #2a: Update Oracle Linux.

Let’s grab the latest and greatest packages and update the Linux kernel by running a yum update. Before we can do that, we need to create a yum repository. With the yum repository in place, run the update:
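If you haven’t created the repository yet, Oracle published per-release repo files on its public yum server at the time. A sketch of the relevant stanza of /etc/yum.repos.d/public-yum-ol6.repo (stanza name and URL are from Oracle’s public yum service as it existed then; check the current download page before relying on them):

```
# /etc/yum.repos.d/public-yum-ol6.repo -- ol6_latest stanza (sketch)
[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
```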

[root@racnode1 ~]# yum update

This will take a while. It will ask you to confirm progress in a couple of places, so don’t go to lunch or it’ll be waiting for you when you get back.

Note, this update will upgrade the version of Oracle Linux from 6.6 to 6.7.

Task #2b: Install the Oracle 12c Release 1 Pre-Installation Package.

This package makes a number of changes to Linux which help set things up the way you want them. It’s actually on the Oracle Linux 6 DVD in /server/Packages and can be installed using this command:

[root@racnode1 Packages]# rpm -i oracle-rdbms-server-12cR1-preinstall-1.0-12.el6.x86_64.rpm

Alternatively, you could install it using yum with this command:

[root@racnode1 ~]# yum install oracle-rdbms-server-12cR1-preinstall.x86_64

Either way, check it’s installed using this (your exact version may differ):

[root@racnode1 ~]# rpm -qa | grep oracle-rdbms-server
oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64

A number of changes are made to the /etc/passwd and /etc/group files. The UIDs and GIDs it creates may not be what you want. After you have installed the package, you can use this script to correct the setup and add the additional users and groups you’ll need later.
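The linked script isn’t reproduced here, but a hypothetical sketch of what such a script does follows. All group names, numeric IDs and group memberships below are illustrative assumptions, not the values from the linked script; adjust them to your own standards:

```shell
#!/bin/bash
# Hypothetical sketch: create the role-separated groups and the oracle/grid
# users a 12c RAC install typically expects. IDs and names are assumptions.
setup_users() {
    groupadd -g 54321 oinstall      # software inventory group
    groupadd -g 54322 dba           # database admin group
    groupadd -g 54323 asmdba        # ASM client group
    groupadd -g 54324 asmoper       # ASM operator group
    groupadd -g 54325 asmadmin      # ASM admin group
    useradd -u 54321 -g oinstall -G dba,asmdba oracle
    useradd -u 54322 -g oinstall -G asmdba,asmoper,asmadmin grid
}

# Only attempt the changes when running as root.
if [ "$(id -u)" -eq 0 ]; then
    setup_users
fi
```

Run it as root on a freshly installed node; groupadd and useradd will refuse to touch anything that already exists.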

As a final post package installation step, add these entries to the /etc/security/limits.conf file:

####################################
# for oracle user
####################################
oracle   soft   nofile    8192
oracle   hard   nofile    65536
oracle   soft   nproc     2048
oracle   hard   nproc     16384
oracle   soft   stack     10240
oracle   hard   stack     32768
oracle   soft   core      unlimited
oracle   hard   core      unlimited
oracle   hard   memlock   5500631
####################################
# for grid user
####################################
grid    soft    nofile    8192
grid    hard    nofile    65536
grid    soft    nproc     2048
grid    hard    nproc     16384
grid    soft    stack     10240
grid    hard    stack     32768
grid    soft    core      unlimited
grid    hard    core      unlimited

Now that the oracle and grid users exist, change their passwords.

Task #2c: Edit User Profiles.

Add this line to the .bash_profile for both oracle and grid:

umask 022
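In case the effect of that line isn’t obvious: umask 022 masks the group and other write bits off everything the user creates. A quick demonstration you can run in any shell:

```shell
# Demonstrate umask 022: new files come up 644 (rw-r--r--) and new
# directories 755 (rwxr-xr-x), so others can read the Oracle software
# tree but never write to it.
umask 022
demo=$(mktemp -d)
touch "$demo/file"
mkdir "$demo/dir"
stat -c '%a' "$demo/file"    # 644
stat -c '%a' "$demo/dir"     # 755
rm -rf "$demo"
```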

This next change is a personal preference thing, but I always unalias the ls and vi commands. I don’t need vim in my life and I find the different colors for different types of file completely unnecessary. Sanity can be returned to your default bash shell by adding the following lines to the .bash_profile for both oracle and grid:

unalias ls
unalias vi

Task #2d: Disable SELinux.

SELinux can cause havoc when trying to instantiate ASM disks and it needs to be disabled. Instructions for doing so can be found here.
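For the record, disabling SELinux on Oracle Linux 6 boils down to one edit in /etc/selinux/config followed by a reboot:

```shell
# /etc/selinux/config
SELINUX=disabled          # was: enforcing
SELINUXTYPE=targeted
```

Running setenforce 0 as root drops SELinux to permissive mode immediately, but only the config file change survives a reboot.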

Task #2e: Disable the Linux Firewall.

Generally speaking, firewalls are a good thing.

In a production environment, you’d want to work with your Network Administrator and have them open up the ports necessary for everything to work.

However, in our environment we can trust that we won’t hack ourselves and we’ll go ahead and disable the Linux firewall.

Task #2f: Disable Network Manager.

Network Manager has an annoying habit of jumping all over your /etc/resolv.conf file, destroying your neatly crafted configuration. You need to disable Network Manager to prevent this from happening.

Task #2g: Install the iSCSI Initiator.

This can be selected as an installation option or you can add it now using this yum command:

[root@racnode1 ~]# yum install iscsi-initiator-utils

[root@racnode1 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.873-14.0.1.el6.x86_64

Task #2h: Install ASM Support.

Installing ASM support is done in two stages. First, install the ASMLib package referenced here. Second, use yum to install ASM Support:

[root@racnode1 ~]# rpm -i oracleasmlib-2.0.12-1.el6.x86_64.rpm
[root@racnode1 ~]# yum install oracleasm-support

[root@racnode1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.12-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64

Task #2i: Install cvuqdisk.

Installing cvuqdisk will enable the Cluster Verification Utility to detect shared storage. Allegedly. It’s included in the 12c Grid Infrastructure downloads referenced here.

You’ll need to copy the two Grid Infrastructure zip files to racnode1. FileZilla does the job. I located these files within the home directory of the grid user:

[root@racnode1 ~]# cd ~grid/media/gi_12102

[root@racnode1 gi_12102]# ls -l
drwxr-xr-x 7 grid oinstall       4096 Dec 16 22:10 grid
-rw-r--r-- 1 grid oinstall 1747043545 Dec 12 15:33 linuxamd64_12102_grid_1of2.zip
-rw-r--r-- 1 grid oinstall  646972897 Dec 12 15:32 linuxamd64_12102_grid_2of2.zip

Unzipping these files created the grid directory. Within the grid directory is an rpm directory and that’s where you’ll find the cvuqdisk rpm:

[root@racnode1 gi_12102]# cd grid/rpm

[root@racnode1 rpm]# ls -l
-rwxr-xr-x 1 grid oinstall 8976 Jun 30  2014 cvuqdisk-1.0.9-1.rpm

Install the rpm using these commands:

[root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

[root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.9-1

Task #2j: Build the /u01 file system.

RACNODE1_VM had 2 virtual disks allocated to it. The /dev/xvdb disk was used for the installation of Oracle Linux. The /dev/xvdc disk will be used to build the /u01 file system. That’s where all the Oracle software will be installed. The following series of commands will partition the /dev/xvdc disk, build and mount the /u01 file system:

[root@racnode1 ~]# fdisk /dev/xvdc

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 1
First cylinder (1-3916, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916):
Using default value 3916

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[root@racnode1 ~]# mkfs -t ext4 -m 0 /dev/xvdc1

[root@racnode1 ~]# cd /
[root@racnode1 /]# mkdir /u01
[root@racnode1 /]# mount /dev/xvdc1 /u01

[root@racnode1 /]# df -h
Filesystem            Size  Used  Avail  Use% Mounted on
/dev/mapper/vg_racnode1-lv_root
                       35G   11G    23G   31% /
tmpfs                 2.0G   72K   2.0G    1% /dev/shm
/dev/xvdb1            477M  132M   316M   30% /boot
/dev/xvdc1             30G   44M    30G    1% /u01

Finally, edit the /etc/fstab file so the /u01 file system will be mounted each time the system restarts:

[root@racnode1 ~]# vi /etc/fstab

Add this line:

/dev/xvdc1              /u01                    ext4    defaults        0 0

Task #2k: Add Oracle software directories.

Now the /u01 file system is mounted, we can create the directories we’ll use to install the 12c Grid Infrastructure and Oracle Database 12c software:

[root@racnode1 ~]# mkdir -p /u01/app/12.1.0/grid
[root@racnode1 ~]# mkdir -p /u01/app/grid
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01/

Task #2l: Edit the /etc/hosts file.

Although we will be using DNS to resolve most things within our various networks, it’s very helpful to have all the addressing documented in each RAC node’s /etc/hosts file. All the entries required for every server’s /etc/hosts file can be found here.

Task #3: Clone RACNODE1_VM.

Now that we have the racnode1 server fully configured, we need a second server configured in a similar way. The hard way would be to create another VM then repeat all the steps up to this point to configure a second server. The easy way would be to clone RACNODE1_VM. Let’s go that route!

In Oracle VM Manager, click on the Servers and VMs tab, highlight RACNODE1_VM, then click the Stop icon (the red square). This will shutdown racnode1 and stop the VM.

Highlight RACNODE1_VM, right click then select the Clone or Move option:

Select Create a clone of this VM, then click Next:

Use the following values to complete the next screen, then click OK:

Field               Value
Clone to            Virtual Machine
Clone Count         4
Name Index          2
Clone Name          RACNODE_VM
Target Server Pool  ServerPool1

This has the effect of creating 4 more VMs starting with RACNODE_VM.2 and ending with RACNODE_VM.5, all identical to RACNODE1_VM:

Note, the screen shows a 4096 MB memory allocation. This was increased to 6144 MB on a subsequent re-build. Click here for further details.

Highlight RACNODE_VM.2 and click the pencil icon to edit the VM. Change the Name to RACNODE2_VM:

Click the Disks tab and change the names of the 2 virtual disks to RACNODE2_OS and RACNODE2_u01, then click OK:

Click the arrow head to the left of RACNODE2_VM to display its properties:

Task #4: Modify racnode2.

At this point, it’s time to start the VMs. In Oracle VM Manager, click on the Servers and VMs tab, highlight RACNODE1_VM, then click on the green arrow head Start icon to start the VM. Once RACNODE1_VM is up and running, repeat this procedure to start RACNODE2_VM. Connect to the console of RACNODE2_VM by clicking on the Launch Console icon.

Note, you may run into problems starting VMs. Click here for potential solutions/workarounds if you see an error similar to this:

Error: Device 1 (vif) could not be connected. Hotplug scripts not working

After RACNODE2_VM has started up, the first thing you’ll notice is the login screen announces you’re looking at racnode1.mynet.com. RACNODE2_VM is an identical clone of RACNODE1_VM, so it’s hardly surprising the server running inside RACNODE2_VM thinks it’s racnode1. Fortunately this is easy to fix by following the next couple of steps.

Task #4a: Fix the networking.

When cloning RACNODE1_VM, Oracle VM Manager was smart enough to allocate 3 new MAC addresses to the 3 vNICs allocated to each clone. However, the NIC configuration files in Linux don’t know that and will have to be edited manually.

Login to the ‘fake’ racnode1 and locate the NIC configuration files:

[root@racnode1 ~]# cd /etc/sysconfig/network-scripts

[root@racnode1 network-scripts]# ls -l ifcfg-eth*
-rw-r--r--. 3 root root 320 Dec 11 17:36 ifcfg-eth0
-rw-r--r--  3 root root 269 Dec 13 16:13 ifcfg-eth1
-rw-r--r--  3 root root 248 Dec 13 16:13 ifcfg-eth2
  • ifcfg-eth0 is the configuration file for the public network interface.
  • ifcfg-eth1 is the configuration file for the storage network interface.
  • ifcfg-eth2 is the configuration file for the private interconnect network interface.

Each file contains the exact same entries as the equivalent file on the real racnode1. Some of these parameters are consistent across all the servers (e.g. gateways and DNS) and some are specific to an individual server. We need to change the ones which are specific to an individual server. Those parameters are UUID, IPADDR and HWADDR.

The UUID is a Universally Unique IDentifier. It’s a number that Linux gives to each NIC. The entry in the ifcfg-eth* files looks like this:

UUID=eaba99ea-c88e-4cf2-b990-bee55e752e91

To get a new UUID, use this command:

[root@racnode1 ~]# uuidgen

Run the command once for each of eth0, eth1 and eth2. Once you have 3 new UUIDs, one for each NIC, update the UUID entry in the ifcfg-eth* files.
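Updating three files by hand invites typos, so here’s a small sketch that scripts the UUID refresh (the ifcfg directory is the standard one, and the sed edit is my shorthand for “update the UUID entry”; back the files up before editing them in anger):

```shell
#!/bin/bash
# Sketch: generate a fresh UUID per NIC and splice it into the matching
# ifcfg file.
CFG_DIR=/etc/sysconfig/network-scripts

refresh_uuid() {
    local cfg="$1" new_uuid
    [ -f "$cfg" ] || return 0            # skip quietly if the file is absent
    new_uuid=$(uuidgen)                  # one fresh UUID per call
    sed -i "s/^UUID=.*/UUID=${new_uuid}/" "$cfg"
}

for nic in eth0 eth1 eth2; do
    refresh_uuid "${CFG_DIR}/ifcfg-${nic}"
done
```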

The IPADDR is the IP address assigned to a given NIC. The entry in the ifcfg-eth* files looks like this:

IPADDR=200.200.10.11

You can get the correct IP addresses for racnode2’s eth0, eth1 and eth2 interfaces from the /etc/hosts file. If clicking the link is too challenging, here’s the cheat sheet:

Server NIC IP Address
racnode2 eth0 200.200.10.12
racnode2 eth1 200.200.20.12
racnode2 eth2 200.200.30.12

Using these IP addresses, update the IPADDR entry in the ifcfg-eth* files.

The HWADDR is basically the MAC address which Oracle VM Manager allocates to the VM’s vNICs. You can see the MAC addresses allocated to RACNODE2_VM in a few different ways.

The vm.cfg contains the MAC addresses for each vNIC in eth0, eth1 and eth2 order. The location of the vm.cfg file is referenced here.

Alternatively, go back to this screen in Oracle VM Manager and click on the Networks tab:

Once again, the Ethernet Network name maps to vNICs like this:

Ethernet Network Linux NIC MAC Address
Management_Public eth0 00:21:F6:D2:45:A0
Shared_Storage_Public eth1 00:21:F6:E6:F6:71
GI_Interconnect_Private eth2 00:21:F6:2F:DE:C5

Using the MAC addresses you obtain from your environment, update the HWADDR entry in the ifcfg-eth* files.

To bring up the network interfaces using the updated and correct values for UUID, IPADDR and HWADDR, each interface must be stopped and re-started. Using eth0 as an example, stopping and re-starting the interface is done with these commands:

[root@racnode1 ~]# ifdown eth0
[root@racnode1 ~]# ifup eth0

Once all 3 interfaces have been re-started, you can view their status using this command:

[root@racnode1 ~]# ifconfig -a

Task #4b: Change the hostname.

The method for changing an Oracle Linux hostname is documented here.
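In short, on Oracle Linux 6 the persistent hostname lives in /etc/sysconfig/network, so on the second node the file should end up looking like this:

```shell
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=racnode2.mynet.com
```

Running hostname racnode2.mynet.com as root changes it for the current session; the file entry makes it stick across reboots.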

Once the networking and hostname issues have been fixed, I strongly recommend a reboot. If the system comes back up with the correct name (racnode2.mynet.com) and the correct network configuration, you’re ready to move onto the dark arts discussed in Part 8 – SSH, DNS and CVU. See you there!

If you have any comments or questions about this post, please use the Contact form here.