Feb 04

Build Your Own Oracle Infrastructure: Part 10 – Install Oracle GI & Database Software


After building and configuring the hardware infrastructure, we’re finally ready to install some Oracle software.

If everything in the prior posts has been completed successfully, the software installations should be a breeze.

In truth, installing this software isn’t difficult. The hard part is configuring the underlying infrastructure correctly. You made it to Part 10, so all power to you!

 

In Part 10, we’ll install Oracle 12c Grid Infrastructure and Database software. There are 3 main tasks to complete:

 

  1. Install the OEM Cloud Control Management Agent.
  2. Run the Grid Infrastructure Installer.
  3. Run the Database Installer.

 

Task #1: Install the OEM Cloud Control Management Agent.

To avoid the hassle of adding targets manually to Cloud Control later, we’ll install the Management Agent now so it’ll just pick up targets as they materialize.

Step #1: Create a Management Agent Software Directory.

We need to create a base directory for the Management Agent software. It’s required that the base directory and all its parent directories are owned by the installing user (oracle). Therefore, we can’t install the Management Agent into /u01/app/oracle because /u01/app is owned by the grid user. So, on both racnode1 and racnode2, create a directory for the Management Agent software installation using these commands:

[root@racnode1 ~]# mkdir -p /u01/oracle/agent
[root@racnode1 ~]# chown -R oracle:oinstall /u01/oracle
[root@racnode1 ~]# chmod 755 /u01/oracle/agent

Step #2: Manually Create an Oracle Inventory.

Since the Management Agent software will be the first Oracle software installed on the RAC server nodes, there won’t be a pre-existing Oracle Inventory it can write to. That causes a problem when Cloud Control tries to deploy the Management Agent. To get around that, do this:

[root@racnode1 ~]# touch /etc/oraInst.loc
[root@racnode1 ~]# chmod 644 /etc/oraInst.loc
[root@racnode1 ~]# mkdir /u01/app/oraInventory
[root@racnode1 ~]# chown grid:oinstall /u01/app/oraInventory
[root@racnode1 ~]# chmod 775 /u01/app/oraInventory

Then edit /etc/oraInst.loc to include these lines:

inventory_loc=/u01/app/oraInventory
inst_group=oinstall
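
If you’d rather not edit the file by hand, the same pointer file can be created in one go with a heredoc. This is just a sketch of the same content shown above, run as root on both nodes:

[root@racnode1 ~]# cat > /etc/oraInst.loc <<EOF
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF
[root@racnode1 ~]# chmod 644 /etc/oraInst.loc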

Step #3: Edit /etc/hosts on the OEM Cloud Control Server.

If you haven’t already, add these lines to the /etc/hosts file on the OEM Cloud Control server, oraemcc.mynet.com:

200.200.10.11    racnode1.mynet.com            racnode1
200.200.10.12    racnode2.mynet.com            racnode2

 

A fully documented /etc/hosts file can be viewed here.
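
To confirm the new entries resolve correctly from the OEM Cloud Control server, a quick test like this should do (run on oraemcc; the prompt user is just illustrative):

[oracle@oraemcc ~]$ ping -c 1 racnode1.mynet.com
[oracle@oraemcc ~]$ ping -c 1 racnode2.mynet.com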

Step #4: Use Cloud Control.

For this walkthrough we’ll use racnode2 as the example. You could deploy Management Agents to both racnode servers at the same time or install them one at a time. Your call.

 

Log into Cloud Control, then click Setup ⇒ Add Target ⇒ Add Targets Manually:

P10-S1

Click Add Host:

P10-S2

Click the Add menu, choose Manually, then enter the hostname (racnode2.mynet.com) and platform (Linux x86-64):

P10-S3

Click Next, then use these values to populate the screen:

 

Field                        Value
Installation Base Directory  /u01/oracle/agent
Instance Directory           /u01/oracle/agent/agent_inst

 

In addition, create a new Named Credential for the oracle OS user then click OK:

P10-S4

With all the fields populated, the screen should look similar to this:

P10-S5

Click Next:

P10-S6

Click Deploy Agent:

P10-S7

The deployment proceeds until it starts whining about running a root script. This can be ignored for now. Click the Continue menu, then choose Continue, All hosts:

P10-S8

Now you can run the root.sh script as specified on the screen (/u01/oracle/agent/core/12.1.0.5.0/root.sh):

 

[root@racnode2 12.1.0.5.0]# ./root.sh
Finished product-specific root actions.
/etc exist
Creating /etc/oragchomelist file...

 

Then click the Done button and you’re done. Management Agent Deployed!

You can check all is well with the Management Agent by running these commands as the oracle user:

 

[oracle@racnode2 ~]$ cd /u01/oracle/agent/agent_inst/bin
[oracle@racnode2 bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/oracle/agent/agent_inst
Agent Log Directory    : /u01/oracle/agent/agent_inst/sysman/log
Agent Binaries         : /u01/oracle/agent/core/12.1.0.5.0
Agent Process ID       : 4519
Parent Process ID      : 4258
Agent URL              : https://racnode2.mynet.com:3872/emd/main/
Local Agent URL in NAT : https://racnode2.mynet.com:3872/emd/main/
Repository URL         : https://oraemcc.mynet.com:4903/empbs/upload
Started at             : 2016-01-26 13:55:19
Started by user        : oracle
Operating System       : Linux version 3.8.13-118.2.2.el6uek.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2016-02-02 21:50:25
Last attempted upload                        : 2016-02-02 21:50:25
Total Megabytes of XML files uploaded so far : 8.58
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 32.51%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2016-02-02 21:52:44
Last successful heartbeat to OMS             : 2016-02-02 21:52:44
Next scheduled heartbeat to OMS              : 2016-02-02 21:53:44
---------------------------------------------------------------
Agent is Running and Ready
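
If the agent shows pending uploads, or you just want to confirm it can talk to the OMS, these standard emctl sub-commands (run from the same bin directory) are a handy follow-up:

[oracle@racnode2 bin]$ ./emctl pingOMS
[oracle@racnode2 bin]$ ./emctl upload agent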

 

Task #2: Run the Grid Infrastructure Installer.

Note, the Grid Infrastructure software is installed by the grid user.

With the Grid Infrastructure software copied to racnode1 and unzipped, ensure your DISPLAY environment variable is set to the IP address or hostname of your workstation (mine is called HPWKSTN):

[grid@racnode1 grid]$ export DISPLAY=HPWKSTN:0.0

 

Test that this is working by running an X Windows program and making sure the display opens on your workstation. Most people tend to use xclock, as I do from time to time, but I prefer xeyes because it’s funnier:

 

[grid@racnode1 grid]$ xeyes

P10-S9

With the display set up correctly, invoke the GI installer:

[grid@racnode1 grid]$ ./runInstaller

 

Select Install and Configure Oracle Grid Infrastructure for a Cluster, then click Next:

P10-S10

Select Configure a Standard cluster, then click Next:

P10-S11

Select Advanced Installation, then click Next:

P10-S12

Use these values to populate the next screen, then click Next:

 

Field          Value
Cluster Name   cluster1
SCAN Name      cluster1-scan.mynet.com
SCAN Port      1521
Configure GNS  Uncheck this box

 

P10-S13

Use these values to populate the next screen, then click SSH connectivity:

 

Public Hostname     Virtual Hostname
racnode1.mynet.com  racnode1-vip.mynet.com
racnode2.mynet.com  racnode2-vip.mynet.com

 

P10-S14

After clicking SSH connectivity, enter the OS password of the grid user, then click Setup. Let the installer re-do the SSH setup. It uses RSA keys. Again, let Oracle take care of this. If the installation goes wrong due to a user equivalency problem, it won’t be your fault! Once SSH is re-configured, click Next.
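
If you want to verify the user equivalency yourself before continuing, a simple manual check is to make sure SSH between the nodes works in both directions without a password prompt (run as the grid user; hostnames per this series):

[grid@racnode1 ~]$ ssh racnode2 date
[grid@racnode2 ~]$ ssh racnode1 date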

 

Use these values to populate the next screen, then click Next:

 

Interface Name  Subnet        Use for
eth0            200.200.10.0  Public
eth1            200.200.20.0  Do Not Use
eth2            200.200.30.0  Private

 

Note, although we are using eth1 to access shared storage, Oracle Grid Infrastructure has no business on this network, so it must be flagged as “Do Not Use”.
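
If you want to double-check which subnet each interface actually sits on before assigning these roles, something like this on each node will list the IPv4 addresses (standard iproute2 command):

[grid@racnode1 ~]$ ip -o -4 addr show | grep eth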

 

P10-S15

Select Use Standard ASM for storage, then click Next:

P10-S16

Click the Change Discovery Path button:

P10-S17

Change the Disk Discovery Path to /dev/oracleasm/disks/* then click OK:

P10-S18

Use these values to populate the next screen, then click Next:

 

Field            Value
Disk group name  CRS
Redundancy       External
Disk Path        Check /dev/oracleasm/disks/ASMDISK01
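
If ASMDISK01 doesn’t appear on this screen, it’s worth confirming the ASMLib disks are actually visible under the discovery path on both nodes. A quick check (assuming ASMLib was configured as in the earlier posts) looks like this:

[root@racnode1 ~]# oracleasm listdisks
[root@racnode1 ~]# ls -l /dev/oracleasm/disks/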

 

P10-S19

Click Use same password for these accounts, enter a password for SYSASM twice, then click Next:

P10-S20

Select Do not use Intelligent Platform Management Interface (IPMI):

P10-S21

Click the check box to Register with Enterprise Manager (EM) Cloud Control. Use these values to populate the screen, then click Next:

 

Field               Value
OMS host            oraemcc.mynet.com
OMS port            4903
EM Admin User Name  SYSMAN
EM Admin Password   (referenced here)

 

P10-S22

The 3 OS groups should be pre-populated with these values. Click Next:

 

OS Group                                               OS Group Name
Oracle ASM Administrator (OSASM) Group                 asmadmin
Oracle ASM DBA (OSDBA for ASM) Group                   asmdba
Oracle ASM Operator (OSOPER for ASM) Group (Optional)  asmoper

 

P10-S23

Use these values to populate the next screen, then click Next:

 

Field              Value
Oracle base        /u01/app/grid
Software location  /u01/app/12.1.0/grid

 

P10-S24

Make sure the Automatically run configuration scripts check box is unchecked. We’ll be running root scripts as the root user from the command line later. Click Next:

P10-S25

This next screen shows the results of the prerequisite checks. As you can see, the swap space issue came up again, which we already know we can ignore. The screen also shows a Device Checks for ASM failure, which boils down to the installer being unable to verify the shared storage on both RAC nodes.

P10-S27

Here is the full text of the failure:

Device Checks for ASM - This is a prerequisite check to verify that the specified 
devices meet the requirements for ASM.

Check Failed on Nodes: [racnode2,  racnode1]
Verification result of failed node: racnode2
Expected Value : cvuqdisk-1.0.9-1
Actual Value   : cvuqdisk-1.0.9-1
 Details:
 - 
PRVF-9802 : Attempt to get 'udev' information from node "racnode2" failed No UDEV rule
found for device(s) specified  - Cause:  An attempt to read the ''udev'' permission or
rule file failed, or the permission or rule file did not contain any rules for the 
specified device or devices.  
- Action:  Make sure that the ''udev'' permission or rule directory is created, the 
''udev'' permission or rule file is available and accessible by the user running the 
check and that the permission or rule file contains the correct ''udev'' rule for the 
device or devices being checked.
Back to Top

Verification result of failed node: racnode1
Expected Value : cvuqdisk-1.0.9-1
Actual Value   : cvuqdisk-1.0.9-1
 Details:
 - 
PRVF-9802 : Attempt to get 'udev' information from node "racnode1" failed No UDEV rule
found for device(s) specified  - Cause:  An attempt to read the ''udev'' permission or
rule file failed, or the permission or rule file did not contain any rules for the 
specified device or devices.  
- Action:  Make sure that the ''udev'' permission or rule directory is created, the 
''udev'' permission or rule file is available and accessible by the user running the 
check and that the permission or rule file contains the correct ''udev'' rule for the 
device or devices being checked.

 

What this essentially means is a check was made for the presence of the CVUQDISK package on both racnode1 and racnode2 (remember we installed that here). The installer is looking for version 1.0.9-1 and it found version 1.0.9-1, but apparently that wasn’t good enough. Further checks were made for udev rules regarding the shared disk selected earlier (ASMDISK01) and the installer couldn’t find those either. I actually tried this installation both with and without udev rules and it didn’t make any difference to the reporting of this ‘failure’. I even tried re-installing the CVUQDISK package on both nodes and that also made no difference. My thought was if this was such a significant problem we’d find out soon enough because the GI installation would fail. Hint – it doesn’t fail so ignore this ‘failure’ if you see it.
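
If you’d like to satisfy yourself that the CVUQDISK package really is in place before dismissing the warning, a quick check on each node looks something like this (a standard rpm query; the second command simply shows whether any udev rule files exist):

[root@racnode1 ~]# rpm -q cvuqdisk
[root@racnode1 ~]# ls /etc/udev/rules.d/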

 

Click Yes:

P10-S26

On the Summary screen, click Install:

P10-S28

The installation thunders along:

P10-S29

Until a window pops up inviting you to run a root.sh script on both nodes.

 

Note, it is important that you run this script to completion in the order of the nodes listed, i.e. racnode1 first and racnode2 second.
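
For reference, run exactly the scripts the dialog lists, as root, in the order shown. The Grid Infrastructure root.sh lives under the Software location chosen earlier, so the calls look something like this:

[root@racnode1 ~]# /u01/app/12.1.0/grid/root.sh
[root@racnode2 ~]# /u01/app/12.1.0/grid/root.sh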

 

P10-S30

The root.sh scripts take quite a while to run and their output is quite verbose. You can see what it looks like here. Once the root.sh script completes on racnode2, click OK. The installation will continue. Again, this part of the installation takes some time:

P10-S31

When the installation completes, you’ll see this screen:

P10-S32

That’s Grid Infrastructure installed. Next, install the database software.
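
Before doing that, a quick sanity check that the clusterware stack is up on both nodes doesn’t hurt. A sketch, run as the grid user from the GI home installed above:

[grid@racnode1 ~]$ /u01/app/12.1.0/grid/bin/crsctl check cluster -all
[grid@racnode1 ~]$ /u01/app/12.1.0/grid/bin/crsctl stat res -t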

 

Task #3: Run the Database Installer.

Using FileZilla or your favorite file copy utility, copy the 2 Oracle Database software zip files to racnode1 and unzip them.

Note, the database software is installed by the oracle user.

[oracle@racnode1 ~]$ mkdir -p ./media/db_12.1.0.2
[oracle@racnode1 ~]$ cd ./media/db_12.1.0.2
[oracle@racnode1 db_12.1.0.2]$ unzip linuxamd64_12102_database_1of2.zip
[oracle@racnode1 db_12.1.0.2]$ unzip linuxamd64_12102_database_2of2.zip
[oracle@racnode1 db_12.1.0.2]$ cd database

Ensure your DISPLAY environment variable is set to the IP address or hostname of your workstation (mine is called HPWKSTN):

[oracle@racnode1 database]$ export DISPLAY=HPWKSTN:0.0

 

With the display set up correctly, invoke the database installer:

[oracle@racnode1 database]$ ./runInstaller

 

Uncheck the I wish to receive security updates via My Oracle Support option, then click Next:

P10-S33

Choose to remain uninformed by clicking Yes:

P10-S34

Select the Install database software only option, then click Next:

P10-S35

Select the Oracle Real Application Clusters database installation option, then click Next:

P10-S36

Make sure both nodes are selected. If you want the installer to re-do the SSH (user equivalency) setup, click the SSH connectivity button. When complete, click Next:

P10-S37

Select your preferred language:

P10-S38

Select Enterprise Edition, then click Next:

P10-S40

Use the following values to populate the next screen, then click Next:

 

Field              Value
Oracle base        /u01/app/oracle
Software location  /u01/app/oracle/product/12.1.0/dbhome_1

 

P10-S41

The next screen should pre-populate with the correct OS groups:

 

OS Group                                                   OS Group Name
Database Administrator (OSDBA) group                       dba
Database Operator (OSOPER) group (Optional)                oper
Database Backup and Recovery (OSBACKUPDBA) group           backupdba
Data Guard administrative (OSDGDBA) group                  dgdba
Encryption Key Management administrative (OSKMDBA) group   kmdba

 

Click Next:

P10-S42

As with the Grid Infrastructure installation, the installer complains about swap space. We can safely ignore this prerequisite check failure by checking Ignore All and clicking Next:

P10-S43

Click Yes:

P10-S44

On the Summary screen, click Install:

P10-S45

The familiar progress bar screen appears:

P10-S46

After a short while, this window appears inviting you to run the root.sh script on racnode1 and racnode2. Always run the root.sh script to completion in the order the nodes are listed. When complete, click OK:

P10-S47

The root.sh script output looks like this:


[root@racnode1 ~]# cd /u01/app/oracle/product/12.1.0/dbhome_1
[root@racnode1 dbhome_1]# ./root.sh
Performing root user operation.
The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@racnode2 ~]# cd /u01/app/oracle/product/12.1.0/dbhome_1
[root@racnode2 dbhome_1]# ./root.sh
Performing root user operation.
The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Finally, the successful completion screen is displayed:

P10-S48

We now have everything we need in place to create an Oracle RAC database. Obviously it’s important to test and verify that what you have installed is actually up and running correctly. We’ll run a series of tests after we create a RAC database in Part 11. See you there!
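
As a very quick confirmation that the database software landed where expected, you could check the binary version on each node; it should report release 12.1.0.2.0:

[oracle@racnode1 ~]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus -V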

 

If you have any comments or questions about this post, please use the Contact form here.