Oracle 7 Installation & Configuration

Introduction.

Getting ready to install, configure and use Oracle Database 18c and 19c meant rebuilding my two standalone database servers with Oracle Linux 7. In the time-honored tradition of changing things for no good reason, setting up Oracle Linux 7 is slightly different to version 6 in a number of important ways. Not annoying at all, then. So here we’ll run through installing and configuring Oracle Linux 7 Update 6 (OL7.6).

The two servers we’ll be rebuilding are called orasvr01 and orasvr02. The original build of orasvr01 can be found here. The rebuild will be similar, but this time around we’ll use Openfiler for the database storage. The orasvr01 server will use regular file system storage and orasvr02 will use ASM. The root and /u01 file systems will be allocated from the VM_Filesystems_Repo storage repository, just as they were before.

For the most part, installing and configuring OL7.6 will be the same for both orasvr01 and orasvr02, so we’ll mainly focus on orasvr01. The differences will come when setting up the Openfiler disks for file systems (orasvr01) and ASM (orasvr02). I’ll explain those differences in the relevant sections below. Let’s get started!

Quick links to all the tasks:

Task #1: Create the VM.

In OVM Manager, click the Create Virtual Machine icon. Ensure the “Create a new VM (Click ‘Next’ to continue)” radio button is selected, then click Next. Use these values to populate the next screen:

Click Next. Add the Management_Public and Shared_Storage_Public networks so your screen looks like this:

Click Next. Add two virtual disks for the operating system and /u01 file system. Then add a CD/DVD containing the ISO for Oracle Linux 7 Update 6. Your screen should look like this:

Click Next. Now change the boot order so when you start the VM for the first time, it will boot from the Linux ISO in the virtual CD/DVD drive. Your screen should look like this:

Click Finish and you’re done.

Task #2: Install Oracle Linux.

In OVM Manager start ORASVR01_VM, wait for a few seconds then connect to the console. You’ll see this opening screen:

Once the installation kicks off, the CD/DVD drive will be mounted and the media checked:

After a few moments, the Welcome screen will appear where you will choose your language:

I had the separated double mouse pointer issue again, but was able to ‘fix’ it by moving the mouse pointer to a corner of the screen and getting the two pointers to superimpose. Once they are superimposed, don’t move the pointer too quickly or they’ll separate again.

Choose your language then click Continue. The Installation Summary screen appears next. Use the values below or choose your own:

Category: LOCALIZATION
  DATE & TIME        -> Americas/Chicago timezone (your choice)
  LANGUAGE SUPPORT   -> English (United States) (your choice)
  KEYBOARD           -> English (US) (your choice)

Category: SOFTWARE
  INSTALLATION SOURCE -> Local Media (the OL 7.6 ISO in the CD/DVD drive)
  SOFTWARE SELECTION  -> Infrastructure Server, plus Add-Ons: System Administration Tools

Category: SYSTEM
  INSTALLATION DESTINATION -> Automatic partitioning selected (use the 40 GiB xvda to install the OS)
  NETWORK & HOSTNAME       -> Wired (eth0) connected (only configure eth0, the public NIC, and set the hostname)
  KDUMP                    -> Kdump disabled (uncheck Enable kdump)
  SECURITY POLICY          -> No profile selected (set Apply security policy to OFF)

When you have configured each option, your screen should look like this:

Click Begin Installation. This will display the CONFIGURATION USER SETTINGS screen:

Click the ROOT PASSWORD icon. This will display a new screen allowing you to enter a password for the root user:

Enter your new root password, then click Done. This will return you to the CONFIGURATION USER SETTINGS screen where you can monitor the progress of the Oracle Linux package installation:

Once all the packages have installed you’ll see this screen:

Before you click the Reboot button, we need to eject the Oracle Linux ISO from the virtual CD/DVD drive. Otherwise rebooting will start the installation process again. In OVM Manager, edit the ORASVR01_VM virtual machine and click the Disks tab:

Click the Eject icon to remove the ISO file from the CD/DVD drive. Your screen will look like this:

Return to the VM console and click Reboot. This is where life may get a little interesting. I tried this process multiple times. Sometimes clicking the Reboot button worked. Sometimes, but not often. Other times the reboot just hung, so I had to stop and restart the VM using OVM Manager. Sometimes even that didn’t work and I had to resort to killing the VM in OVM Manager, then re-starting it. Another time I had to ‘destroy’ the VM and re-create it using xm commands on the OVM server. It sounds worse than it is. It’s a quick and simple procedure documented here.

Eventually, the reboot happens and you’ll see a Linux login prompt:

Don’t get too excited. There are a few configuration changes we need to make before OL7.6 is ready for prime time. Some of these could have been configured via the INSTALLATION SUMMARY screen, but I wanted to explicitly cover how some Linux administration has changed in version 7. Clicking around in the installation GUI won’t show you that. So without further ado, fire up PuTTY and log in as root.

Task #3: Run an Update.

Oracle claim the yum repository is all ready to go in Oracle Linux 7. Well, sort of. Run a yum update:

[root@orasvr01 ~]# yum update 

There will probably be plenty of things to update, which is fine. Look away, though, and you might miss this message:

IMPORTANT: A legacy Oracle Linux yum server repo file was found. Oracle Linux yum server repository 
configurations have changed which means public-yum-ol7.repo will no longer be updated. New repository 
configuration files have been installed but are disabled. To complete the transition, run this script 
as the root user:
  
/usr/bin/ol_yum_configure.sh 

Fair enough, let’s run that and see what it does:

[root@orasvr01 ~]# /usr/bin/ol_yum_configure.sh
Repository ol7_UEKR5 already enabled
Repository ol7_latest already enabled 

Looks like we’re all set.

Task #4: Install X Windows.

PuTTY has its place, but I prefer working with a windowing interface meant for adults. The command to install xterm can be found here. I use X-Win32 as my PC-based X Windows server. It complained about not seeing xauth on the server side, so I installed that as well using this command:

[root@orasvr01 ~]# yum install xauth
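
For completeness, here’s a minimal sketch of what that boils down to, plus the server-side X11 forwarding check. The xterm package name and the sshd_config setting are the stock OL7 ones; adjust if your X server wants anything extra:

[root@orasvr01 ~]# yum install xterm
[root@orasvr01 ~]# grep -i x11forwarding /etc/ssh/sshd_config

If X11Forwarding isn’t set to yes, set it in /etc/ssh/sshd_config and restart sshd with systemctl restart sshd.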

Task #5: Disable SE Linux.

The simple edit to disable SE Linux can be found here.
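
In case that link isn’t handy, the edit boils down to one line in /etc/selinux/config followed by a reboot. A minimal sketch (use permissive instead of disabled if you’d rather keep SELinux logging):

[root@orasvr01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@orasvr01 ~]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled

The change takes effect after the next reboot. Until then, setenforce 0 will put SELinux into permissive mode.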

Task #6: Turn Off Linux Firewall.

Managing the firewall has changed in Oracle Linux 7. By default, the firewall is provided via a daemon (firewalld) and is controlled by the systemctl command. Go here for the steps to disable the firewall in Oracle Linux 7.
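
In short, firewalld is just another systemd service, so the sequence is (a sketch; skip this if you actually need a host firewall):

[root@orasvr01 ~]# systemctl stop firewalld
[root@orasvr01 ~]# systemctl disable firewalld
[root@orasvr01 ~]# systemctl status firewalld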

Task #7: Configure Storage Networking (eth1).

The orasvr01 and orasvr02 servers have 2 NICs each. We’ve already configured the public interface (eth0). Now it’s time to configure the NIC which will connect the server to the storage coming from Openfiler (eth1).

Before we do that, replace the /etc/hosts file with our standard one which lists all our infrastructure IP addresses. That file can be found here.
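
If you don’t have that file to hand, the entries this post actually relies on look something like the fragment below. This is illustrative only: the orasvr01-storage name is a placeholder I’m using for orasvr01’s eth1 address, and your own addresses and naming will differ.

# Storage network entries referenced later in this post (illustrative)
200.200.20.6     openfiler-storage
200.200.20.17    orasvr01-storage    # placeholder name for orasvr01's eth1 address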

Each NIC has a configuration file located in /etc/sysconfig/network-scripts. The file name follows the pattern ifcfg-ethN, where N is the number of the NIC you’re interested in. In this case, the file we want to edit is ifcfg-eth1. This is what the OVM/Oracle Linux installer gave us by default:

[root@orasvr01 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
UUID=4eb42d7b-adcf-4389-9e6e-a006b3011424
DEVICE=eth1
ONBOOT=no

Edit ifcfg-eth1 so it looks like this:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
UUID=4eb42d7b-adcf-4389-9e6e-a006b3011424
DEVICE=eth1
ONBOOT=yes
IPADDR=200.200.20.17
PREFIX=24
GATEWAY=200.200.10.1
DNS1=200.200.10.1

Next, check the NIC configuration for eth1. As you can see, not much going on:

[root@orasvr01 network-scripts]# ifconfig -a eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:21:f6:ef:3f:da  txqueuelen 1000  (Ethernet)
        RX packets 311  bytes 14326 (13.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 

Next, check the status of the network. Note eth0 gets a mention, but not eth1:

[root@orasvr01 network-scripts]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (exited) since Mon 2019-07-01 12:06:10 CDT; 2h 15min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1057 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
Jul 01 12:06:09 orasvr01.mynet.com systemd[1]: Starting LSB: Bring up/down networking…
Jul 01 12:06:10 orasvr01.mynet.com network[1057]: Bringing up loopback interface:  [  OK  ]
Jul 01 12:06:10 orasvr01.mynet.com network[1057]: Bringing up interface eth0:  [  OK  ]
Jul 01 12:06:10 orasvr01.mynet.com systemd[1]: Started LSB: Bring up/down networking.

Start the eth1 interface:

[root@orasvr01 network-scripts]# ifup eth1

Now check the NIC configuration again:

[root@orasvr01 network-scripts]# ifconfig -a eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.200.20.17  netmask 255.255.255.0  broadcast 200.200.20.255
        inet6 fe80::908:e76e:8411:9051  prefixlen 64  scopeid 0x20<link>
        ether 00:21:f6:ef:3f:da  txqueuelen 1000  (Ethernet)
        RX packets 371  bytes 17086 (16.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 63  bytes 9898 (9.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Signs of life! The eth1 interface is now up. Re-check the status of the network:

[root@orasvr01 network-scripts]# systemctl status network
● network.service - LSB: Bring up/down networking    
    Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
    Active: active (exited) since Mon 2019-07-01 12:06:10 CDT; 2h 42min ago
      Docs: man:systemd-sysv-generator(8)
   Process: 1057 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
Jul 01 12:06:09 orasvr01.mynet.com systemd[1]: Starting LSB: Bring up/down networking…
Jul 01 12:06:10 orasvr01.mynet.com network[1057]: Bringing up loopback interface:  [  OK  ]
Jul 01 12:06:10 orasvr01.mynet.com network[1057]: Bringing up interface eth0:  [  OK  ]
Jul 01 12:06:10 orasvr01.mynet.com systemd[1]: Started LSB: Bring up/down networking.

Note the output still only references the eth0 NIC. Re-start the network then re-check its status:

[root@orasvr01 network-scripts]# systemctl restart network

[root@orasvr01 network-scripts]# systemctl status network
 ● network.service - LSB: Bring up/down networking
    Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
    Active: active (exited) since Mon 2019-07-01 14:51:59 CDT; 3s ago
      Docs: man:systemd-sysv-generator(8)
   Process: 1884 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
   Process: 2092 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
 Jul 01 14:51:58 orasvr01.mynet.com systemd[1]: Starting LSB: Bring up/down networking…
 Jul 01 14:51:59 orasvr01.mynet.com network[2092]: Bringing up loopback interface:  [  OK  ]
 Jul 01 14:51:59 orasvr01.mynet.com network[2092]: Bringing up interface eth0:  Connection successfully activated (D-Bus ac…ion/7)
 Jul 01 14:51:59 orasvr01.mynet.com network[2092]: [  OK  ]
 Jul 01 14:51:59 orasvr01.mynet.com network[2092]: Bringing up interface eth1:  Connection successfully activated (D-Bus ac…ion/8)
 Jul 01 14:51:59 orasvr01.mynet.com network[2092]: [  OK  ]
 Jul 01 14:51:59 orasvr01.mynet.com systemd[1]: Started LSB: Bring up/down networking.
 Hint: Some lines were ellipsized, use -l to show in full.

The output now references eth1 and all seems well. We should now be able to ping the openfiler-storage IP address using the eth1 interface:

[root@orasvr01 network-scripts]# ping -I eth1 openfiler-storage
PING openfiler-storage (200.200.20.6) from 200.200.20.17 eth1: 56(84) bytes of data.
64 bytes from openfiler-storage (200.200.20.6): icmp_seq=1 ttl=64 time=0.393 ms
64 bytes from openfiler-storage (200.200.20.6): icmp_seq=2 ttl=64 time=0.174 ms
64 bytes from openfiler-storage (200.200.20.6): icmp_seq=3 ttl=64 time=0.175 ms

Yes! Get in! Storage networking is sorted. Onto the next task.

Task #8: Add Users & Groups.

The easiest way to set up the users and groups necessary to run Oracle Database instances on your server is to use Oracle’s preinstallation package. Since we’re going to use Oracle Database 12c Release 2, 18c and 19c, we may as well go for the highest version available. That’ll be the one for Oracle Database 19c, then:

[root@orasvr01 ~]# yum install oracle-database-preinstall-19c

Amongst other things, this package creates the oracle user and a bunch of groups using a default UID and default GIDs.

In /etc/passwd:

oracle:x:54321:54321::/home/oracle:/bin/bash

In /etc/group:

oinstall:x:54321:oracle
dba:x:54322:oracle
oper:x:54323:oracle
backupdba:x:54324:oracle
dgdba:x:54325:oracle
kmdba:x:54326:oracle
racdba:x:54330:oracle

It does not create a grid user or the various ASM groups you’ll need to install Grid Infrastructure. To add those and to fix the default IDs, I used this script (modify for your own needs). The script makes the necessary changes and returns this result:

oracle user id:  uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1007(asmdba),1001(dba),1002(oper),1003(backupdba),1004(dgdba),1005(kmdba),1006(racdba)

grid user id:  uid=1001(grid) gid=1000(oinstall) groups=1000(oinstall),1007(asmdba),1008(asmadmin),1009(asmoper),1001(dba),1006(racdba)
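
For reference, here’s a sketch of the sort of thing that script does, using the IDs shown above. This isn’t the exact script behind the link, so adapt it to your own needs:

#!/bin/bash
# Re-number the groups created by the preinstall package.
groupmod -g 1000 oinstall
groupmod -g 1001 dba
groupmod -g 1002 oper
groupmod -g 1003 backupdba
groupmod -g 1004 dgdba
groupmod -g 1005 kmdba
groupmod -g 1006 racdba

# Re-number the oracle user and add the ASM groups.
usermod -u 1000 -g oinstall oracle
groupadd -g 1007 asmdba
groupadd -g 1008 asmadmin
groupadd -g 1009 asmoper

# Create the grid user and give oracle access to ASM.
useradd -u 1001 -g oinstall -G asmdba,asmadmin,asmoper,dba,racdba grid
usermod -a -G asmdba oracle

# Fix ownership of oracle's home directory after the UID/GID changes.
chown -R oracle:oinstall /home/oracle

id oracle
id grid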

Task #9: Modify Shell & Resource Limits.

For both the oracle and grid users, the value of umask must be one of 22, 022 or 0022.

[oracle@orasvr01 ~]$ umask
0022

If it’s not the correct value, set it explicitly in the ~/.bash_profile file:

umask 022

User resource limits are usually defined in /etc/security/limits.conf. When using the Oracle pre-installation package, these limits are created for the oracle user in this file instead:

/etc/security/limits.d/oracle-database-preinstall-19c.conf

These limits need to be replicated for the grid user, so add two sets of entries to /etc/security/limits.conf. Your values may be different depending upon your hardware configuration:

# resource limits for oracle user:
oracle   soft   nofile   1024
oracle   hard   nofile   65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock  134217728
oracle   soft   memlock  134217728

# resource limits for grid user:
grid   soft   nofile   1024
grid   hard   nofile   65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768
grid   hard   memlock  134217728
grid   soft   memlock  134217728

You can check these are operational by using this simple script.
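
The check amounts to asking bash for the effective limits as each user. A quick sketch (not the exact script behind the link):

for u in oracle grid
do
  echo "Limits for ${u}:"
  su - ${u} -c 'echo "nofile  (soft/hard): $(ulimit -Sn)/$(ulimit -Hn)"'
  su - ${u} -c 'echo "nproc   (soft/hard): $(ulimit -Su)/$(ulimit -Hu)"'
  su - ${u} -c 'echo "stack   (soft/hard): $(ulimit -Ss)/$(ulimit -Hs)"'
  su - ${u} -c 'echo "memlock (soft/hard): $(ulimit -Sl)/$(ulimit -Hl)"'
done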

Finally, if it so pleases you (and it does me), change the insane alias defaults for ls, vi and grep in the oracle and grid users’ ~/.bash_profile files:

unalias ls
unalias vi
unalias grep

Task #10: Configure iSCSI Storage.

There are options you can choose during the installation of Oracle Linux which will install the necessary iSCSI packages. However, here’s how to do it manually.

First check if the iscsi packages are installed:

[root@orasvr01 ~]# rpm -qa | grep iscsi
[root@orasvr01 ~]#

Nope, so let’s install them:

[root@orasvr01 ~]# yum install iscsi-initiator-utils

[root@orasvr01 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-iscsiuio-6.2.0.874-10.0.9.el7.x86_64
iscsi-initiator-utils-6.2.0.874-10.0.9.el7.x86_64

Next, enable and start the iscsid daemon:

[root@orasvr01 ~]# systemctl enable iscsid
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.

[root@orasvr01 ~]# systemctl start iscsid

[root@orasvr01 ~]# systemctl status iscsid
● iscsid.service - Open-iSCSI
    Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2019-07-04 17:45:10 CDT; 7min ago
      Docs: man:iscsid(8)
            man:iscsiadm(8)
  Main PID: 23489 (iscsid)
    Status: "Ready to process requests"
    CGroup: /system.slice/iscsid.service
            └─23489 /sbin/iscsid -f -d2
Jul 04 17:45:10 orasvr01.mynet.com systemd[1]: Starting Open-iSCSI…
Jul 04 17:45:10 orasvr01.mynet.com iscsid[23489]: iscsid: InitiatorName=iqn.1988-12.com.oracle:274b38e4651d
Jul 04 17:45:10 orasvr01.mynet.com iscsid[23489]: iscsid: InitiatorAlias=orasvr01.mynet.com
Jul 04 17:45:10 orasvr01.mynet.com iscsid[23489]: iscsid: Max file limits 1024 4096
Jul 04 17:45:10 orasvr01.mynet.com systemd[1]: Started Open-iSCSI.

I already carved up the /dev/sdc disk device in Openfiler (Western Digital 300 GB VelociRaptor SATA 3 drive) into two sets of 10 volumes. A set will be allocated to each server. Here’s a summary of the volume allocation:

Host       iSCSI Target                                  Disk Device (GB)   File System/ASM Disk
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol01    /dev/sda (20)      /u02
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol02    /dev/sdb (20)      /u03
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol03    /dev/sdc (20)      /u04
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol04    /dev/sdd (20)      /u05
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol05    /dev/sde (20)      /u06
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol06    /dev/sdf (20)      /u07
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol13    /dev/sdg (10)      N/A
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol15    /dev/sdh (1)       N/A
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol16    /dev/sdi (1)       N/A
orasvr01   iqn.2006-01.com.openfiler:orasvr01vg-vol17    /dev/sdj (1)       N/A
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol07    /dev/sdj (20)      DATA_000
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol08    /dev/sdi (20)      DATA_001
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol09    /dev/sdh (20)      DATA_002
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol10    /dev/sdg (20)      RECO_000
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol11    /dev/sdf (20)      RECO_001
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol12    /dev/sde (20)      RECO_002
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol14    /dev/sdd (10)      REDO_000
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol18    /dev/sdc (1)       N/A
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol19    /dev/sdb (1)       N/A
orasvr02   iqn.2006-01.com.openfiler:orasvr02vg-vol20    /dev/sda (1)       N/A

First, let’s discover the iSCSI targets allocated to orasvr01:

[root@orasvr01 ~]# iscsiadm -m discovery -t sendtargets -p openfiler-storage
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol17
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol17
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol16
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol16
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol15
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol15
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol13
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol13
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol06
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol06
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol05
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol05
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol04
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol04
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol03
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol03
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol02
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol02
200.200.20.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol01
200.200.10.6:3260,1 iqn.2006-01.com.openfiler:orasvr01vg-vol01

As before, orasvr01 sees the iSCSI targets on both the public network (200.200.10.x) and the storage network (200.200.20.x). I have no idea why, but needless to say we’re only interested in the targets on the storage network.

Next, we need to log into each iSCSI target:

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol01 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol01, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol01, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol02 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol02, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol02, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol03 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol03, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol03, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol04 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol04, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol04, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol05 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol05, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol05, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol06 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol06, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol06, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol13 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol13, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol13, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol15 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol15, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol15, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol16 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol16, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol16, portal: 200.200.20.6,3260] successful.

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol17 -p 200.200.20.6 -l
Logging in to iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol17, portal: 200.200.20.6,3260
Login to [iface: default, target: iqn.2006-01.com.openfiler:orasvr01vg-vol17, portal: 200.200.20.6,3260] successful.
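
Ten nearly identical commands is a lot of typing. If you prefer, a loop like this achieves the same thing (a sketch, assuming the storage portal is 200.200.20.6 as above):

for t in $(iscsiadm -m discovery -t sendtargets -p openfiler-storage | awk '/^200.200.20.6/ {print $2}' | sort -u)
do
  iscsiadm -m node -T ${t} -p 200.200.20.6 -l
done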

Logging in prompts Oracle Linux to create a disk device for each iSCSI target. We can see the initial iSCSI target to disk device mapping here:

[root@orasvr01 ~]# ls -l /dev/disk/by-path | grep iscsi
lrwxrwxrwx 1 root root 9 Jul  4 18:48 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol01-lun-0 -> ../../sda
lrwxrwxrwx 1 root root 9 Jul  4 18:49 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol02-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 9 Jul  4 18:49 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol03-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root 9 Jul  4 18:49 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol04-lun-0 -> ../../sdd
lrwxrwxrwx 1 root root 9 Jul  4 18:58 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol05-lun-0 -> ../../sde
lrwxrwxrwx 1 root root 9 Jul  4 18:58 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol06-lun-0 -> ../../sdf
lrwxrwxrwx 1 root root 9 Jul  4 18:59 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol13-lun-0 -> ../../sdg
lrwxrwxrwx 1 root root 9 Jul  4 18:59 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol15-lun-0 -> ../../sdh
lrwxrwxrwx 1 root root 9 Jul  4 18:59 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol16-lun-0 -> ../../sdi
lrwxrwxrwx 1 root root 9 Jul  4 18:59 ip-200.200.20.6:3260-iscsi-iqn.2006-01.com.openfiler:orasvr01vg-vol17-lun-0 -> ../../sdj

Next, we need to configure automatic iSCSI client login so the server will log into the iSCSI targets each time the system is started or rebooted:

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol01 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol02 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol03 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol04 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol05 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol06 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol13 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol15 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol16 -p 200.200.20.6 --op update -n node.startup -v automatic

[root@orasvr01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:orasvr01vg-vol17 -p 200.200.20.6 --op update -n node.startup -v automatic
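
This too can be wrapped in a loop, and it’s worth confirming the setting took (a sketch):

for t in $(iscsiadm -m node | awk '/200.200.20.6/ {print $2}' | sort -u)
do
  iscsiadm -m node -T ${t} -p 200.200.20.6 --op update -n node.startup -v automatic
  iscsiadm -m node -T ${t} -p 200.200.20.6 | grep node.startup
done

The grep should print node.startup = automatic for each target.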

Task #11: Partition Disks.

For now, I’ll only partition the six 20 GB disk devices. The basic sequence of steps is the same for each. Here’s how to do the first one:

[root@orasvr01 ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:    
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

This is what you end up with:

[root@orasvr01 ~]# fdisk -l /dev/sda
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfba01026

Device Boot      Start         End      Blocks   Id  System
/dev/sda1         2048    41943039    20970496   83  Linux
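
Rather than repeat the interactive fdisk dialog five more times, the same single whole-disk partition can be created non-interactively with parted. This is a sketch: it assumes /dev/sdb through /dev/sdf are still the remaining 20 GB Openfiler devices, so double-check against the by-path listing from Task #10 first.

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
do
  parted -s ${d} mklabel msdos mkpart primary 2048s 100%
done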

Task #12: Configure Persistent Disk Device Names for iSCSI Targets.

Each time the server is booted, the iSCSI targets could be assigned a different device name. That’s a problem because your file systems (and therefore your database files) could end up mounted on the wrong devices after a reboot. To ensure a given iSCSI target always maps to the same disk device, we can use udev rules. This has changed a little between Oracle Linux 6 and Oracle Linux 7.

Step 1 is to obtain the unique iSCSI ID of each disk device we need a udev rule for. To do that, use the scsi_id command. Its location has changed in OL7 and its path is no longer part of the root user’s default PATH. Here’s a quick alias and usage of the command:

[root@orasvr01 ~]# alias scsi_id='/usr/lib/udev/scsi_id'

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sda
14f504e46494c455236547044684f2d336333412d51514561

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sdb
14f504e46494c45526d31426172632d647932672d3450636f

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sdc
14f504e46494c45523647754338332d563959522d4343746c

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sdd
14f504e46494c4552716c344377502d474541532d4b56704d

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sde
14f504e46494c4552736b3773624e2d4932594b2d76426b65

[root@orasvr01 ~]# scsi_id -g -u -d /dev/sdf
14f504e46494c45523471744464762d3532784b2d37314169

Step 2 is to create a udev rules file in the /etc/udev/rules.d directory. The file can be called anything you like so long as it starts with a number and ends with “.rules”. As always, it’s worth naming it something meaningful. Here’s my file with the relevant syntax, one line per iSCSI target:

[root@orasvr01 rules.d]# ls -l
-rw-r--r-- 1 root root 1332 Jul  5 11:16 99-openfilerdevices.rules

[root@orasvr01 rules.d]# cat 99-openfilerdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455236547044684f2d336333412d51514561", SYMLINK+="orasvr01vg-vol01", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526d31426172632d647932672d3450636f", SYMLINK+="orasvr01vg-vol02", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45523647754338332d563959522d4343746c", SYMLINK+="orasvr01vg-vol03", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c4552716c344377502d474541532d4b56704d", SYMLINK+="orasvr01vg-vol04", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c4552736b3773624e2d4932594b2d76426b65", SYMLINK+="orasvr01vg-vol05", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45523471744464762d3532784b2d37314169", SYMLINK+="orasvr01vg-vol06", OWNER="root", GROUP="disk", MODE="0660"

A little explanation:

Parameter   Value                      Comment
KERNEL      sd?1                       Wildcard pattern matching the partition device name
SUBSYSTEM   block                      The /dev/sd?1 devices are block devices
PROGRAM     /usr/lib/udev/scsi_id …    Path to the scsi_id executable
RESULT      (iSCSI ID)                 Unique iSCSI identifier returned by scsi_id
SYMLINK+    (Name)                     The symbolic link name which points to the disk device
OWNER       root                       By default the root user owns disk devices
GROUP       disk                       By default the OS group is disk
MODE        0660                       Default permissions mask for the disk device

Step 3 is to test the resolution of each line in the rules file. This is important because running a test actually creates the symbolic link. We already know what the current iSCSI target to disk device mappings are:

iSCSI Target Disk Device
iqn.2006-01.com.openfiler:orasvr01vg-vol01 /dev/sda
iqn.2006-01.com.openfiler:orasvr01vg-vol02 /dev/sdb
iqn.2006-01.com.openfiler:orasvr01vg-vol03 /dev/sdc
iqn.2006-01.com.openfiler:orasvr01vg-vol04 /dev/sdd
iqn.2006-01.com.openfiler:orasvr01vg-vol05 /dev/sde
iqn.2006-01.com.openfiler:orasvr01vg-vol06 /dev/sdf

Taking /dev/sda as an example:

[root@orasvr01 ~]# ls -l /dev | grep sda
brw-rw---- 1 root disk      8,   0 Jul  4 19:35 sda
brw-rw---- 1 root disk      8,   1 Jul  4 19:35 sda1

We see they are block devices (b), their permissions mask is 0660 (rw-rw----), they’re owned by root and they belong to the disk group.

Let’s run the test for /dev/sda1:

[root@orasvr01 ~]# ls -l /dev | grep orasvr01
(no output)

[root@orasvr01 ~]# udevadm test /block/sda/sda1

The output is quite verbose, but can be seen in its entirety here. Once the test completes, check to see if a symbolic link has shown up:

[root@orasvr01 ~]# ls -l /dev | grep orasvr01
lrwxrwxrwx 1 root root           4 Jul  5 12:08 orasvr01vg-vol01 -> sda1

Hurrah! By referencing the symbolic link and trusting that it always points to the correct disk device, we’re all set either to build file systems on orasvr01 or to create ASM disks on orasvr02. Don’t forget to run the test for all the other disk devices to ensure their symbolic links get created.
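
As an alternative to testing device by device, you can ask udev to re-read the rules and re-process all block devices in one go (a sketch):

[root@orasvr01 ~]# udevadm control --reload-rules
[root@orasvr01 ~]# udevadm trigger --type=devices --action=change
[root@orasvr01 ~]# ls -l /dev | grep orasvr01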

Task #13a: Create File Systems (orasvr01).

First we need to build the /u01 file system, whose storage comes from the VM_Filesystems_Repo storage repository. Linux disk devices presented by OVM follow the naming convention /dev/xvd<letter>, where <letter> starts with a, then b, and so on. The /dev/xvda disk has already been used for the Linux OS, so we should have /dev/xvdb waiting for us. Let’s check:

[root@orasvr01 ~]# fdisk -l /dev/xvdb
Disk /dev/xvdb: 64.4 GB, 64424509440 bytes, 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Let’s create a primary partition, then build the file system:

[root@orasvr01 ~]# fdisk /dev/xvdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x49a8eb2a.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-125829119, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-125829119, default 125829119): 
Using default value 125829119
Partition 1 of type Linux and of size 60 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/xvdb1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3932160 inodes, 15728384 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2164260864
480 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
         4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Let’s go ahead and build file systems using the symbolic links which point to the 6 disk devices whose storage is coming from Openfiler:

[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol01
[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol02
[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol03
[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol04
[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol05
[root@orasvr01 ~]# mkfs -t ext4 -m 0 /dev/orasvr01vg-vol06

Next, create the mount point directories for these file systems:

[root@orasvr01 ~]# cd /
[root@orasvr01 /]# mkdir /u01 /u02 /u03 /u04 /u05 /u06 /u07

[root@orasvr01 /]# ls -l
lrwxrwxrwx.   1 root root    7 Jun 29 11:35 bin -> usr/bin
dr-xr-xr-x.   4 root root 4096 Jun 29 12:40 boot
drwxr-xr-x   20 root root 3780 Jul  5 15:05 dev
drwxr-xr-x.  88 root root 8192 Jul  4 17:44 etc
drwxr-xr-x.   5 root root   44 Jul  4 13:55 home
lrwxrwxrwx.   1 root root    7 Jun 29 11:35 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Jun 29 11:35 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Apr 10  2018 media
drwxr-xr-x.   2 root root    6 Apr 10  2018 mnt
drwxr-xr-x.   3 root root   16 Jun 29 11:37 opt
dr-xr-xr-x  164 root root    0 Jul  1 12:05 proc
dr-xr-x---.   5 root root 4096 Jul  5 10:49 root
drwxr-xr-x   28 root root  820 Jul  4 17:44 run
lrwxrwxrwx.   1 root root    8 Jun 29 11:35 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Apr 10  2018 srv
dr-xr-xr-x   13 root root    0 Jul  4 15:35 sys
drwxrwxrwt.   7 root root 4096 Jul  5 03:10 tmp
drwxr-xr-x    2 root root    6 Jul  5 15:07 u01
drwxr-xr-x    2 root root    6 Jul  5 15:07 u02
drwxr-xr-x    2 root root    6 Jul  5 15:07 u03
drwxr-xr-x    2 root root    6 Jul  5 15:07 u04
drwxr-xr-x    2 root root    6 Jul  5 15:07 u05
drwxr-xr-x    2 root root    6 Jul  5 15:07 u06
drwxr-xr-x    2 root root    6 Jul  5 15:07 u07
drwxr-xr-x.  13 root root 4096 Jun 29 11:35 usr
drwxr-xr-x.  20 root root 4096 Jun 29 12:30 var

Next, add the relevant entries to the /etc/fstab file:

/dev/xvdb1              /u01                    ext4    defaults        0 0
/dev/orasvr01vg-vol01   /u02                    ext4    defaults        0 0
/dev/orasvr01vg-vol02   /u03                    ext4    defaults        0 0
/dev/orasvr01vg-vol03   /u04                    ext4    defaults        0 0
/dev/orasvr01vg-vol04   /u05                    ext4    defaults        0 0
/dev/orasvr01vg-vol05   /u06                    ext4    defaults        0 0
/dev/orasvr01vg-vol06   /u07                    ext4    defaults        0 0

Finally, mount all the file systems and check they’re available:

[root@orasvr01 /]# mount -a
[root@orasvr01 /]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             3.9G     0  3.9G   0% /dev
tmpfs                3.9G     0  3.9G   0% /dev/shm
tmpfs                3.9G  8.6M  3.9G   1% /run
tmpfs                3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/ol-root   35G  1.8G   34G   5% /
/dev/xvda1          1014M  235M  780M  24% /boot
tmpfs                797M     0  797M   0% /run/user/0
/dev/xvdb1            59G   53M   59G   1% /u01
/dev/sda1             20G   45M   20G   1% /u02
/dev/sdb1             20G   45M   20G   1% /u03
/dev/sdc1             20G   45M   20G   1% /u04
/dev/sdd1             20G   45M   20G   1% /u05
/dev/sde1             20G   45M   20G   1% /u06
/dev/sdf1             20G   45M   20G   1% /u07

That’s it! We’re now ready to copy the Oracle Database code sets to /u01, install them and build databases using the storage in /u02 through /u07. That’s a post for another time. Ciao!