
Data Guard is the gold standard for replicating entire Oracle databases. It has significant advantages over storage-based replication, many of which are documented here.
There are four ways to create and manage a Data Guard configuration, and we'll cover them in this installment of the Build Your Own Oracle Infrastructure series. We'll also cover some basic testing and some more interesting configurations when integrating with Oracle RAC.
Quick links to all the tasks:
Task #1: Configure Data Guard Using SQL.
Task #1a. Data Guard Setup.
Step #1: PRIMARY – Enable FORCE LOGGING.
Step #2: PRIMARY – Add Standby Redo Log Files.
Step #3: PRIMARY – Set Instance Parameters.
Step #4: PRIMARY – Backup the Primary Database.
Step #5: PRIMARY – Create a Standby Control File.
Step #6: PRIMARY – Create a PFILE from the SPFILE.
Step #7: STANDBY – Copy and Edit the PFILE.
Step #8: STANDBY – Create Directories.
Step #9: STANDBY – Copy the Standby Control File.
Step #10: STANDBY – Copy the Primary Database Password File.
Step #11: STANDBY – Create a Listener for the Standby Instance.
Step #12: STANDBY/PRIMARY – Check Connectivity.
Step #13: STANDBY – Edit /etc/oratab.
Step #14: STANDBY – Create the SPFILE from the PFILE.
Step #15: STANDBY – Restore the Primary Database Backup.
Step #16: STANDBY – Enable Managed Recovery.
Task #1b. Check the Configuration.
Check #1: Check Recovery Mode.
Check #2: Check Database Roles.
Check #3: Check Redo Transport and Apply.
Task #1c. Test the Configuration.
Test #1: Primary Database CDB Structure Change.
Test #2: Primary Database Data Change and Switchover.
Test #3: Primary Database PDB Structure Change and Switchover.
Task #2: Configure Data Guard Using Oracle Enterprise Manager. (coming soon)
Task #3: Configure Data Guard Using Database Configuration Assistant. (coming soon)
Task #4: Configure Data Guard Using Data Guard Broker. (coming soon)
Task #1: Configure Data Guard Using SQL.
For this task we’ll use an Oracle Database 18c Release 3 CDB called D183 as our primary database. It runs on server orasvr01 whose OS is Oracle Linux 7. The database uses regular file systems for database storage and Oracle Managed Files (OMF) for Oracle file management.
Our physical standby database will be called D183DR and will run on server orasvr03. This server also runs Oracle Linux 7 and uses regular file systems for database storage. The file system configuration is slightly (and deliberately) different compared to orasvr01. The standby database will also use OMF.
Setting things up manually is the best way to understand what’s involved and what’s actually happening behind the scenes. With that in mind, it’s absolutely crucial we execute the correct commands on the correct servers and databases. The OS prompt will help identify which server we’re on and by modifying the SQL prompt, we can keep track of which database we’re logged into. To that end, you might want to add this line to your $ORACLE_HOME/sqlplus/admin/glogin.sql file:
set sqlprompt "_CONNECT_IDENTIFIER SQL> "
It’s a good idea to know what you’re starting with and what you plan to configure. So let’s review what the PRIMARY setup looks like:
Item | Details |
---|---|
Primary Database Server | orasvr01 |
Primary Database | D183 (DB_UNIQUE_NAME = D183) |
Oracle Home | /u01/app/oracle/product/18.3.0/dbhome_1 |
Data Files | /u03/oradata/D183/datafile/* |
Control Files | /u03/oradata/D183/controlfile/* |
 | /u07/oradata/fast_recovery_area/D183/controlfile/* |
Temp Files | /u03/oradata/D183/datafile/* |
Online & Standby Redo Log Files | /u03/oradata/D183/onlinelog/* |
 | /u07/oradata/fast_recovery_area/D183/onlinelog/* |
Seed PDB Files | /u03/oradata/D183/9F364D2DF48F78DBE0531100A8C0FDEE/datafile/* |
D183_PDB1 PDB Files | /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/* |
Archive Destination | USE_DB_RECOVERY_FILE_DEST |
db_recovery_file_dest | /u07/oradata/fast_recovery_area |
db_create_file_dest | /u03/oradata |
db_create_online_log_dest_1 | /u03/oradata |
db_create_online_log_dest_2 | /u07/oradata/fast_recovery_area |
local_listener | listener_d183 (Port 1522) |
Backup Location | /nas/backups/D183 (mounted via NFS) |
Similarly, let’s review what the STANDBY side will look like:
Item | Details |
---|---|
Standby Database Server | orasvr03 |
Standby Database | D183 (DB_UNIQUE_NAME = D183DR) |
Oracle Home | /u01/app/oracle/product/18.3.0/dbhome_1 |
Data Files | /u02/oradata/D183/datafile/* |
Control Files | /u02/oradata/D183/controlfile/* |
 | /u03/oradata/FRA/D183/controlfile/* |
Temp Files | /u02/oradata/D183/datafile/* |
Online & Standby Redo Log Files | /u02/oradata/D183/onlinelog/* |
 | /u03/oradata/FRA/D183/onlinelog/* |
Seed PDB Files | /u02/oradata/D183/9F364D2DF48F78DBE0531100A8C0FDEE/datafile/* |
D183_PDB1 PDB Files | /u02/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/* |
Archive Destination | USE_DB_RECOVERY_FILE_DEST |
db_recovery_file_dest | /u03/oradata/FRA |
db_create_file_dest | /u02/oradata |
db_create_online_log_dest_1 | /u02/oradata |
db_create_online_log_dest_2 | /u03/oradata/FRA |
local_listener | listener_d183dr (Port 1522) |
Task #1a. Data Guard Setup.
Step #1: PRIMARY – Enable FORCE LOGGING.
By default, some database operations are minimally logged while others can practically avoid logging altogether. In a Data Guard configuration, we always want what happens on the primary database to be replicated on the standby database. Hence, FORCE LOGGING needs to be enabled:
D183 SQL> select name, force_logging from v$database;

NAME      FORCE_LOGGING
--------- ---------------------------------------
D183      NO

D183 SQL> alter database force logging;

Database altered.
Step #2: PRIMARY – Add Standby Redo Log Files.
Strictly speaking, this step is only necessary if you expect your primary database to transition to the standby database role at some point. If so, then the primary database needs to have some standby redo log files. It is also an Oracle best practice. The size of the standby redo log files should be at least as large as the largest online redo log file of the source database. You also need to have one more standby redo log group per thread than the number of online redo log file groups. In a single instance database configuration there is only one thread. Multiple threads only pertain to Oracle RAC configurations. We’ll cover that later. Let’s find out how many standby redo log groups we’ll need and what size the members will need to be:
D183 SQL> select thread#, group#, members, ((bytes/1024)/1024) "MB" from gv$log;

   THREAD#     GROUP#    MEMBERS         MB
---------- ---------- ---------- ----------
         1          1          2        200
         1          2          2        200
         1          3          2        200
Based upon this information, we know we’re going to need 4 standby redo log groups. This is how you create them and query what you end up with:
D183 SQL> alter database add standby logfile size 200M;

Database altered.

D183 SQL> alter database add standby logfile size 200M;

Database altered.

D183 SQL> alter database add standby logfile size 200M;

Database altered.

D183 SQL> alter database add standby logfile size 200M;

Database altered.

D183 SQL> set linesize 120
D183 SQL> col member format a70
D183 SQL> select sl.group#, lf.type, lf.member, (sl.bytes/1024)/1024 "Size MB"
          from v$standby_log sl, v$logfile lf
          where sl.group# = lf.group#
          order by sl.group#;

    GROUP# TYPE     MEMBER                                                                    Size MB
---------- -------- ---------------------------------------------------------------------- ----------
         4 STANDBY  /u03/oradata/D183/onlinelog/o1_mf_4_h5lkqz7x_.log                             200
         4 STANDBY  /u07/oradata/fast_recovery_area/D183/onlinelog/o1_mf_4_h5lkr2jw_.log          200
         5 STANDBY  /u03/oradata/D183/onlinelog/o1_mf_5_h5lkyw06_.log                             200
         5 STANDBY  /u07/oradata/fast_recovery_area/D183/onlinelog/o1_mf_5_h5ll0vkf_.log          200
         6 STANDBY  /u03/oradata/D183/onlinelog/o1_mf_6_h5ll3x6y_.log                             200
         6 STANDBY  /u07/oradata/fast_recovery_area/D183/onlinelog/o1_mf_6_h5ll4026_.log          200
         7 STANDBY  /u03/oradata/D183/onlinelog/o1_mf_7_h5ll90qc_.log                             200
         7 STANDBY  /u07/oradata/fast_recovery_area/D183/onlinelog/o1_mf_7_h5ll93cw_.log          200
Step #3: PRIMARY – Set Instance Parameters.
Up to 10 additional instance parameters need to be set for the primary database. They configure the redo transport services for when the primary database has either the primary database role or the standby database role. Additional details can be found in the Oracle documentation.
Step #3a. DB_UNIQUE_NAME.
In a Physical Standby configuration, the primary database and the physical standby database will both have the same DB_NAME. However, we need to differentiate one from the other and that’s done with the DB_UNIQUE_NAME parameter. This should have been set correctly (to the same value as DB_NAME) when the database was created.
D183 SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      D183
Step #3b. LOG_ARCHIVE_CONFIG.
This is a comma separated list of all the DB_UNIQUE_NAME values within the Data Guard configuration. In our case, the DB_UNIQUE_NAME of the primary database is D183 and for the physical standby database it will be D183DR.
D183 SQL> show parameter log_archive_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string

D183 SQL> alter system set log_archive_config='dg_config=(D183,D183DR)' scope=both;

System altered.

D183 SQL> show parameter log_archive_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      dg_config=(D183,D183DR)
Step #3c. LOG_ARCHIVE_DEST_1.
This is the location where the primary database’s archived redo log files are written. We already know the archived redo log files are written to the directory specified by the DB_RECOVERY_FILE_DEST parameter. Hence, the LOG_ARCHIVE_DEST_1 parameter is likely blank, but needs to be set correctly in a Data Guard configuration. All the different elements which make up the LOG_ARCHIVE_DEST_n parameter are specified in the Oracle documentation. Note, when setting this parameter use a single line to do so. Everything between the single quotes must be on a single line, even if it wraps on your screen:
D183 SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------------------
db_recovery_file_dest                string      /u07/oradata/fast_recovery_area
db_recovery_file_dest_size           big integer 12918M

D183 SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string

D183 SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=D183' scope=both;

System altered.

D183 SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      location=USE_DB_RECOVERY_FILE_
                                                 DEST valid_for=(ALL_LOGFILES,A
                                                 LL_ROLES) db_unique_name=D183
The VALID_FOR clause references a redo log type and a database role. The value ALL_LOGFILES means this archive log destination is valid for archiving either online redo log files or standby redo log files. The value ALL_ROLES means this destination is valid when the database has either the primary role or the standby role.
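If you want to see how the database has interpreted the VALID_FOR settings, the V$ARCHIVE_DEST view exposes them per destination. This is an optional sanity check, not one of the original steps:

```sql
-- Show how each configured archive destination's VALID_FOR clause
-- was parsed, and whether the destination is valid for the current role.
select dest_name, valid_type, valid_role, valid_now
from   v$archive_dest
where  destination is not null;
```

A VALID_NOW value of YES means the destination is usable given the database's current role and the log file type being archived.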
Step #3d. LOG_ARCHIVE_DEST_2.
This parameter tells the primary database where and how to send redo data to the standby database. This is quite a complicated parameter with plenty of options. Before we get to that, create a TNS connect string in the primary database’s tnsnames.ora file which we will use to connect to the standby database (D183DR). You will need to reference that connect string when setting the LOG_ARCHIVE_DEST_2 parameter:
D183DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr03.mynet.com)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = D183DR.mynet.com)
    )
  )
Again, when setting this parameter use a single line to do so. Everything between the single quotes must be on a single line, even if it wraps on your screen:
D183 SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string

D183 SQL> alter system set log_archive_dest_2='service=D183DR ASYNC NOAFFIRM db_unique_name=D183DR valid_for=(ALL_LOGFILES, ALL_ROLES)' scope=both;

System altered.

D183 SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service=D183DR ASYNC NOAFFIRM
                                                 db_unique_name=D183DR valid_fo
                                                 r=(ALL_LOGFILES, ALL_ROLES)
The ASYNC keyword specifies asynchronous redo transport between the primary database and the standby database. ASYNC is the default. The NOAFFIRM keyword specifies that the primary database will not wait for an acknowledgement that a standby database has received redo data and has written it to its standby redo log files. NOAFFIRM is the default. The SERVICE parameter component is set to the TNS connect string you just created in the primary database’s tnsnames.ora file. The DB_UNIQUE_NAME is set to the unique database name of the standby database (which doesn’t exist quite yet).
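For reference, the synchronous alternative looks like this. We are not using it in this build (ASYNC NOAFFIRM gives us Maximum Performance mode), but SYNC AFFIRM is what you would set for Maximum Availability at the cost of commit latency on the primary. As above, everything between the single quotes must be entered on one line:

```sql
-- Hypothetical alternative (not used in this build): synchronous,
-- acknowledged redo transport for Maximum Availability protection mode.
alter system set log_archive_dest_2='service=D183DR SYNC AFFIRM db_unique_name=D183DR valid_for=(ALL_LOGFILES, ALL_ROLES)' scope=both;
```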
Step #3e. REMOTE_LOGIN_PASSWORDFILE.
This must be set to either EXCLUSIVE or SHARED if a password file is used to authenticate redo transport sessions. SHARED means one or more databases can use the password file. There are some limitations with this option and it’s rarely used. EXCLUSIVE is more common and means the password file can only be used by a single database. A couple of things to note. First, from Oracle Database 12c Release 2, any changes to the primary database’s password file are automatically propagated to standby databases (with the exception of far sync). Second, redo transport sessions can also be authenticated using the Secure Sockets Layer (SSL) protocol. Using that mechanism instead of a password file requires Oracle Internet Directory and an Oracle Wallet.
D183 SQL> show parameter remote_login_passwordfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE
Step #3f. LOG_ARCHIVE_FORMAT.
This should be set anyway since the primary database is running in archivelog mode. If you want/need to change it, be aware it’s not a dynamic parameter. I’ve never had any problems leaving its value as the default:
D183 SQL> show parameter log_archive_format

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.arc
Step #3g. FAL_SERVER.
This needs to be set to the TNS connect string of the standby database and comes into play in the event the primary database becomes the standby database:
D183 SQL> show parameter fal_server

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_server                           string

D183 SQL> alter system set fal_server='D183DR' scope=both;

System altered.

D183 SQL> show parameter fal_server

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_server                           string      D183DR
Step #3h. STANDBY_FILE_MANAGEMENT.
When set to AUTO, this parameter ensures physical structure changes to the primary database (file creation/deletion) are replicated on the standby database. The default is MANUAL which we don’t want:
D183 SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL

D183 SQL> alter system set standby_file_management=AUTO scope=both;

System altered.

D183 SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
Step #3i. DB_FILE_NAME_CONVERT.
Since we are using OMF, we do not need to set this parameter because the parameter DB_CREATE_FILE_DEST can handle file placement.
If we were not using OMF, we would need to set this parameter. It ties in with STANDBY_FILE_MANAGEMENT in the sense that the standby database needs to know where to replicate physical structure changes which occurred on the primary database. This is the method by which you navigate around the issue of the primary database and the standby database having different directory structures. The parameter is made up of directory path pairs. The first directory path is the path to the data files on the primary database and the second directory path is the path to the data files on the standby database. Obviously you can define more than one pair of directory paths if the primary database’s files are stored in more than one location, which they will be in the case of a CDB. Note, this is not a dynamic parameter and will require an instance bounce for it to take effect.
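As an illustration only (we don't set this parameter in this build because we use OMF), a non-OMF configuration matching our directory layouts might look like this. The paths are hypothetical pairings based on this environment:

```sql
-- Hypothetical non-OMF example: map primary data file paths to standby paths.
-- Pairs are 'primary_path','standby_path'. Requires an instance restart.
alter system set db_file_name_convert='/u03/oradata/D183','/u02/oradata/D183' scope=spfile;
```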
Step #3j. LOG_FILE_NAME_CONVERT.
Even though we are using OMF, if this parameter is not set correctly, the standby database instantiation runs into some issues. The data files are created in the relevant directories correctly, but the online and standby redo log files do not get created. In addition, the standby database alert log complains bitterly about these missing files and keeps referring to the path they had on the primary database server. This is not a dynamic parameter so we can only update the SPFILE until the instance is re-started. OMF will append <DB_UNIQUE_NAME>/onlinelog to these directory paths:
D183 SQL> alter system set log_file_name_convert='/u03/oradata','/u02/oradata','/u07/oradata/fast_recovery_area','/u03/oradata/FRA' scope=spfile;

System altered.
Step #4: PRIMARY – Backup the Primary Database.
Literally any full primary database backup would work, but the more recent it is the less roll forward with archived redo logs you’ll need to do. In our environment, the RMAN backup files are written to /nas/backups/D183 which is an NFS share, accessible to both the primary and standby database servers. This eliminates the need to copy backup files from one server to the other. By using a preexisting backup we eliminate the overhead of an active duplication of the primary database. In a production environment, you probably don’t want to touch the primary database more than you have to.
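If you don't already have a recent backup, something along these lines would produce one. This is a sketch; the FORMAT location is this environment's NFS share and your channel configuration may differ:

```sql
-- Hedged sketch: full backup of the primary plus archived redo logs,
-- written to the NFS share visible to both servers.
RMAN> backup database plus archivelog format '/nas/backups/D183/%U';
```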
Step #5: PRIMARY – Create a Standby Control File.
To create a standby database you need a standby control file. You generate one using the primary database. This will need to be copied to the standby server in a later step:
D183 SQL> alter database create standby controlfile as '/nas/backups/D183/tmp/D183_standby_ctrlfile.ctl';

Database altered.

[oracle@orasvr01 tmp]$ pwd
/nas/backups/D183/tmp
[oracle@orasvr01 tmp]$ ls -l
-rw-r----- 1 oracle oinstall 18825216 Feb 20 19:22 D183_standby_ctrlfile.ctl
Step #6: PRIMARY – Create a PFILE from the SPFILE.
Before the standby database instance can be started, its parameter file will need to be edited. So we need a text based version first and the easiest way to get that is to generate it from the primary database:
D183 SQL> create pfile='/nas/backups/D183/tmp/initD183DR.ora' from spfile;

File created.

[oracle@orasvr01 tmp]$ pwd
/nas/backups/D183/tmp
[oracle@orasvr01 tmp]$ ls -l init*
-rw-r--r-- 1 oracle oinstall 2588 Feb 20 19:28 initD183DR.ora
Step #7: STANDBY – Copy and Edit the PFILE.
The text based PFILE needs to be copied to $ORACLE_HOME/dbs on the standby database server and edited to make it specific to the directory paths on orasvr03. Here is a list of the parameters I needed to change:
*.audit_file_dest='/u01/app/oracle/admin/D183DR/adump'
*.control_files='/u02/oradata/D183DR/controlfile/D183DR_ctrl_1.ctl','/u03/oradata/FRA/D183DR/controlfile/D183DR_ctrl_2.ctl'
*.db_create_file_dest='/u02/oradata'
*.db_create_online_log_dest_1='/u02/oradata'
*.db_create_online_log_dest_2='/u03/oradata/FRA'
*.db_unique_name='D183DR'
*.db_recovery_file_dest='/u03/oradata/FRA'
*.fal_server='D183'
*.local_listener='(ADDRESS = (PROTOCOL = TCP)(HOST = orasvr03.mynet.com)(PORT = 1522))'
*.log_file_name_convert='/u03/oradata','/u02/oradata','/u07/oradata/fast_recovery_area','/u03/oradata/FRA'
Step #8: STANDBY – Create Directories.
In preparation for the standby database to be restored onto the standby database server, these directories need to be pre-created:
[oracle@orasvr03 ~]$ mkdir -p /u01/app/oracle/admin/D183DR/adump
[oracle@orasvr03 ~]$ mkdir -p /u01/app/oracle/admin/D183DR/dpdump
[oracle@orasvr03 ~]$ mkdir -p /u01/app/oracle/admin/D183DR/pfile
[oracle@orasvr03 ~]$ mkdir -p /u02/oradata/D183DR/controlfile
[oracle@orasvr03 ~]$ mkdir -p /u03/oradata/FRA/D183DR/controlfile
Step #9: STANDBY – Copy the Standby Control File.
Copy the standby control file created in Step #5 to the locations specified by the control_files parameter in Step #7:
[oracle@orasvr03 ~]$ cp /nas/backups/D183/tmp/D183_standby_ctrlfile.ctl /u02/oradata/D183DR/controlfile/D183DR_ctrl_1.ctl
[oracle@orasvr03 ~]$ cp /nas/backups/D183/tmp/D183_standby_ctrlfile.ctl /u03/oradata/FRA/D183DR/controlfile/D183DR_ctrl_2.ctl
Step #10: STANDBY – Copy the Primary Database Password File.
Since I’m using NFS, I copied this file from the primary database server to a NAS location, then from the NAS location to the standby database server. Using scp to copy the file directly from one server to the other works just as well.
[oracle@orasvr01 ~]$ cd $ORACLE_HOME/dbs
[oracle@orasvr01 dbs]$ cp orapwD183 /nas/backups/D183/tmp

[oracle@orasvr03 ~]$ cd /u01/app/oracle/product/18.3.0/dbhome_1/dbs
[oracle@orasvr03 dbs]$ cp /nas/backups/D183/tmp/orapwD183 ./orapwD183DR
Step #11: STANDBY – Create a Listener for the Standby Instance.
Either edit the listener.ora file in $ORACLE_HOME/network/admin and use lsnrctl to start the listener, or use the Net Configuration Assistant (netca) to create it. When it’s up and running, check its status:
[oracle@orasvr03 ~]$ lsnrctl status LISTENER_D183DR

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 28-FEB-2020 12:23:23

Copyright (c) 1991, 2018, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr03.mynet.com)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_D183DR
Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
Start Date                28-FEB-2020 12:22:40
Uptime                    0 days 0 hr. 0 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/18.3.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/orasvr03/listener_d183dr/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=orasvr03.mynet.com)(PORT=1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
The listener supports no services
The command completed successfully
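If you go the listener.ora route, an entry consistent with the status output above might look like the following. This is a sketch, not the exact file from this build; the SID_LIST static registration is an optional addition that lets you connect remotely while the standby is down or mounted, which is handy later for switchover and Broker work:

```sql
-- listener.ora sketch for orasvr03 (static SID_LIST entry is an assumption,
-- not shown in the original build).
LISTENER_D183DR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr03.mynet.com)(PORT = 1522))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
  )

SID_LIST_LISTENER_D183DR =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = D183DR.mynet.com)
      (SID_NAME = D183DR)
      (ORACLE_HOME = /u01/app/oracle/product/18.3.0/dbhome_1)
    )
  )
```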
Step #12: STANDBY/PRIMARY – Check Connectivity.
Add TNS connect strings to the tnsnames.ora file in $ORACLE_HOME/network/admin on both the primary and standby servers:
D183.MYNET.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr01.mynet.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = D183.mynet.com)
    )
  )

D183DR.MYNET.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr03.mynet.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = D183DR.mynet.com)
    )
  )
Next, use tnsping to make sure the connection resolves correctly from primary to standby and from standby to primary:
[oracle@orasvr01 admin]$ tnsping d183dr

TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 28-FEB-2020 12:28:38

Copyright (c) 1997, 2018, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/product/18.3.0/dbhome_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr03.mynet.com)(PORT = 1522))) (CONNECT_DATA = (SERVICE_NAME = D183DR.mynet.com)))
OK (0 msec)

[oracle@orasvr03 admin]$ tnsping d183

TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 28-FEB-2020 12:28:04

Copyright (c) 1997, 2018, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/product/18.3.0/dbhome_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = orasvr01.mynet.com)(PORT = 1522))) (CONNECT_DATA = (SERVICE_NAME = D183.mynet.com)))
OK (0 msec)
Step #13: STANDBY – Edit /etc/oratab.
Add the following entry to the /etc/oratab file on the standby server:
D183DR:/u01/app/oracle/product/18.3.0/dbhome_1:N
Step #14: STANDBY – Create the SPFILE from the PFILE.
[oracle@orasvr03 ~]$ . oraenv
ORACLE_SID = [] ? D183DR
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@orasvr03 ~]$ cd $ORACLE_HOME/dbs
[oracle@orasvr03 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Feb 28 12:34:52 2020
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='./initD183DR.ora';

File created.
Step #15: STANDBY – Restore the Primary Database Backup.
Log into RMAN, start the instance and mount the (non-existent) database using the standby control files, then restore the database:
[oracle@orasvr03 ~]$ rman target /

Recovery Manager: Release 18.0.0.0.0 - Production on Sat Feb 29 12:12:47 2020
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup mount

Oracle instance started
database mounted

Total System Global Area    2147481064 bytes

Fixed Size                     8898024 bytes
Variable Size                603979776 bytes
Database Buffers            1526726656 bytes
Redo Buffers                   7876608 bytes
As a sanity check, verify the instance has been started using the standby control files you copied (if you’re paranoid like I am):
RMAN> select * from v$controlfile;

using target database control file instead of recovery catalog
STATUS
-------
NAME
--------------------------------------------------------------------------------
IS_ BLOCK_SIZE FILE_SIZE_BLKS     CON_ID
--- ---------- -------------- ----------

/u02/oradata/D183DR/controlfile/D183DR_ctrl_1.ctl
NO       16384           1148          0

/u03/oradata/FRA/D183DR/controlfile/D183DR_ctrl_2.ctl
NO       16384           1148          0
All looks good, so proceed with the database restore:
RMAN> restore database;

Starting restore at 29-FEB-20
Starting implicit crosscheck backup at 29-FEB-20
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=260 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=26 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=261 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=27 device type=DISK
Crosschecked 42 objects
Crosschecked 6 objects
Finished implicit crosscheck backup at 29-FEB-20

Starting implicit crosscheck copy at 29-FEB-20
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
Crosschecked 2 objects
Finished implicit crosscheck copy at 29-FEB-20

searching for all files in the recovery area
cataloging files...
no files cataloged

using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5443fg3_.dbf
channel ORA_DISK_1: restoring datafile 00012 to /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
channel ORA_DISK_1: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1dupobv9_1_1.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00009 to /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5443ffj_.dbf
channel ORA_DISK_2: restoring datafile 00011 to /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5443fg5_.dbf
channel ORA_DISK_2: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1eupoc25_1_1.bkp
channel ORA_DISK_3: starting datafile backup set restore
channel ORA_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_DISK_3: restoring datafile 00003 to /u03/oradata/D183/datafile/o1_mf_sysaux_h53n98rc_.dbf
channel ORA_DISK_3: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1bupobv6_1_1.bkp
channel ORA_DISK_4: starting datafile backup set restore
channel ORA_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_DISK_4: restoring datafile 00001 to /u03/oradata/D183/datafile/o1_mf_system_h53myhfo_.dbf
channel ORA_DISK_4: restoring datafile 00007 to /u03/oradata/D183/datafile/o1_mf_users_h53ngx9r_.dbf
channel ORA_DISK_4: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1cupobv7_1_1.bkp
channel ORA_DISK_1: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1dupobv9_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:47
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /u03/oradata/D183/datafile/o1_mf_undotbs1_h53ngg2g_.dbf
channel ORA_DISK_1: restoring datafile 00013 to /u03/oradata/D183/datafile/o1_mf_users_h5m4qttn_.dbf
channel ORA_DISK_1: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1gupoc42_1_1.bkp
channel ORA_DISK_2: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1eupoc25_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:48
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00008 to /u03/oradata/D183/datafile/o1_mf_undotbs1_h53ss2rg_.dbf
channel ORA_DISK_2: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1iupoc4p_1_1.bkp
channel ORA_DISK_3: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1bupobv6_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:00:54
channel ORA_DISK_3: starting datafile backup set restore
channel ORA_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_DISK_3: restoring datafile 00005 to /u03/oradata/D183/datafile/o1_mf_system_h53ss1r7_.dbf
channel ORA_DISK_3: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1hupoc46_1_1.bkp
channel ORA_DISK_4: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1cupobv7_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_4: restored backup piece 1
channel ORA_DISK_4: restore complete, elapsed time: 00:00:54
channel ORA_DISK_4: starting datafile backup set restore
channel ORA_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_DISK_4: restoring datafile 00006 to /u03/oradata/D183/datafile/o1_mf_sysaux_h53ss0rr_.dbf
channel ORA_DISK_4: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1fupoc41_1_1.bkp
channel ORA_DISK_1: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1gupoc42_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:10
channel ORA_DISK_2: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1iupoc4p_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:09
channel ORA_DISK_3: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1hupoc46_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:00:16
channel ORA_DISK_4: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1fupoc41_1_1.bkp tag=TAG20200229T115411
channel ORA_DISK_4: restored backup piece 1
channel ORA_DISK_4: restore complete, elapsed time: 00:00:25
Finished restore at 29-FEB-20
As a final step, recover the new standby database to bring it almost in sync with the primary database:
RMAN> recover database;

Starting recover at 29-FEB-20
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
starting media recovery
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=39
channel ORA_DISK_1: reading from backup piece /nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1jupoc6f_1_1.bkp
channel ORA_DISK_1: piece handle=/nas/backups/D183/2020-02-29/2020-02-29_11:52/D183_1jupoc6f_1_1.bkp tag=TAG20200229T115806
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u03/oradata/FRA/D183DR/archivelog/2020_02_29/o1_mf_1_39_h5obrowx_.arc thread=1 sequence=39
channel default: deleting archived log(s)
archived log file name=/u03/oradata/FRA/D183DR/archivelog/2020_02_29/o1_mf_1_39_h5obrowx_.arc RECID=1 STAMP=1033647541
media recovery complete, elapsed time: 00:00:01
Finished recover at 29-FEB-20
Step #16: STANDBY – Enable Managed Recovery.
Everything is now in place, so it just remains to enable managed recovery. This starts the mechanism whereby redo data (in the form of archived redo log files) is transferred from the primary database server to the standby database server and applied to the standby database:
D183DR SQL> alter database recover managed standby database disconnect from session;

Database altered.
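Note that managed recovery started this way applies redo only once each archived redo log file has arrived in full. If you would rather have the standby apply redo as it is received (real-time apply, which is the form we use later after the switchovers), you can restart managed recovery with the USING CURRENT LOGFILE clause. A hedged sketch; the standby redo log files we added in Step #2 are what make real-time apply possible:

```sql
-- Stop redo apply if it's already running.
alter database recover managed standby database cancel;

-- Restart it with real-time apply. Redo is applied from the standby
-- redo log files as it arrives, rather than waiting for each archived
-- redo log file to be completed.
alter database recover managed standby database using current logfile disconnect from session;
```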
Task #1b. Check the Configuration.
Now that the configuration is complete, let’s run through some queries to see what’s going on.
Check #1: Check Recovery Mode.
Let’s ask the primary database what it thinks is going on:
D183 SQL> select DB_UNIQUE_NAME,RECOVERY_MODE,SYNCHRONIZATION_STATUS,SYNCHRONIZED
          from v$archive_dest_status
          where DB_UNIQUE_NAME in ('D183','D183DR');

DB_UNIQUE_NAME RECOVERY_MODE           SYNCHRONIZATION_STATUS SYN
-------------- ----------------------- ---------------------- ---
D183           IDLE                    CHECK CONFIGURATION    NO
D183DR         MANAGED REAL TIME APPLY CHECK CONFIGURATION    NO
For a Maximum Performance Mode configuration (ASYNC redo transport), this looks good.
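For reference, Maximum Performance mode follows from how the redo transport destination was configured on the primary back in Step #3. A minimal, hedged sketch of the relevant primary-side parameter (the SERVICE name D183DR stands in for whatever TNS alias you use for the standby):

```sql
-- ASYNC transport means LGWR does not wait for an acknowledgement
-- from the standby before committing, which is what gives us
-- Maximum Performance mode.
alter system set log_archive_dest_2 =
  'SERVICE=D183DR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=D183DR'
  scope=both;
```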
Check #2: Check Database Roles.
Let’s query each database to see what role it thinks it has in the configuration:
D183 SQL> select DB_UNIQUE_NAME, DBID, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;

DB_UNIQUE_NAME DBID      OPEN_MODE  PROTECTION_MODE     DATABASE_ROLE    SWITCHOVER_STATUS
-------------- --------- ---------- ------------------- ---------------- -----------------
D183           615698974 READ WRITE MAXIMUM PERFORMANCE PRIMARY          TO STANDBY

D183DR SQL> select DB_UNIQUE_NAME, DBID, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;

DB_UNIQUE_NAME DBID      OPEN_MODE  PROTECTION_MODE     DATABASE_ROLE    SWITCHOVER_STATUS
-------------- --------- ---------- ------------------- ---------------- -----------------
D183DR         615698974 MOUNTED    MAXIMUM PERFORMANCE PHYSICAL STANDBY NOT ALLOWED
The SWITCHOVER_STATUS of NOT ALLOWED for the standby database is normal.
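If you want more than a role check, the standby can also report how far behind the primary it is. A hedged example using the v$dataguard_stats view:

```sql
-- Run on the standby. Transport lag is redo generated on the primary
-- but not yet received; apply lag is redo received but not yet applied.
select name, value, time_computed
from   v$dataguard_stats
where  name in ('transport lag','apply lag');
```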
Check #3: Check Redo Transport and Apply.
Let’s force an online redo log file switch, which will generate a new archived redo log file. That file should be copied to the Fast Recovery Area (FRA) on the standby server and applied to the standby database.
First, check what’s in the FRA on the standby server (currently just archived redo log file sequence #46):
[oracle@orasvr03 2020_03_02]$ pwd
/u03/oradata/FRA/D183DR/archivelog/2020_03_02

[oracle@orasvr03 2020_03_02]$ ls -l
-rw-r----- 1 oracle oinstall 181873152 Mar  2 00:15 o1_mf_1_46_h5s946f8_.arc
Next, identify the current online redo log file on the primary database, then force a log switch:
D183 SQL> select GROUP#, THREAD#, SEQUENCE#, STATUS
from v$log
order by 1;
    GROUP#    THREAD#  SEQUENCE# STATUS
---------- ---------- ---------- ----------------
         1          1         46 INACTIVE
         2          1         47 CURRENT
         3          1         45 INACTIVE
D183 SQL> alter system switch logfile;
System altered.
Next, check what’s in the FRA on the standby server:
[oracle@orasvr03 2020_03_02]$ pwd
/u03/oradata/FRA/D183DR/archivelog/2020_03_02
[oracle@orasvr03 2020_03_02]$ ls -l
-rw-r----- 1 oracle oinstall 181873152 Mar 2 00:15 o1_mf_1_46_h5s946f8_.arc
-rw-r----- 1 oracle oinstall 108407808 Mar 2 15:01 o1_mf_1_47_h5tx0w94_.arc
Next, check what’s been applied to the standby database:
D183DR SQL> select CLIENT_PROCESS,PROCESS,SEQUENCE#,STATUS
from v$managed_standby;
CLIENT_P PROCESS    SEQUENCE# STATUS
-------- --------- ---------- ------------
N/A      DGRD               0 ALLOCATED
ARCH     ARCH               0 CONNECTED
N/A      DGRD               0 ALLOCATED
ARCH     ARCH              46 CLOSING
ARCH     ARCH              44 CLOSING
ARCH     ARCH              47 CLOSING
Archival RFS                0 IDLE
LGWR     RFS               48 IDLE
UNKNOWN  RFS                0 IDLE
UNKNOWN  RFS                0 IDLE
UNKNOWN  RFS                0 IDLE
LNS      LNS                0 CONNECTED
LNS      LNS                0 CONNECTED
N/A      MRP0              48 APPLYING_LOG
The Managed Recovery Process (MRP0) has already applied archived redo log file sequence #47 and is waiting for sequence #48. Looks good!
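An alternative way to confirm the same thing is to ask the standby which archived redo log file sequences it has applied, for example:

```sql
-- Run on the standby. APPLIED = YES means the Managed Recovery
-- Process has applied that archived redo log file.
select sequence#, applied
from   v$archived_log
order  by sequence#;
```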
Task #1c. Test the Configuration.
Things seem to be working, but let’s run a few simple tests just to make sure.
Test #1: Primary Database CDB Structure Change.
If we add a data file to the primary database, the corresponding data file should appear on the standby database.
D183 SQL> select t.name, substr(df.name,1,90) filename
          from v$tablespace t, v$datafile df
          where t.ts# = df.ts#
          and t.name = 'USERS'
          order by 1,2;

NAME         FILENAME
------------ ---------------------------------------------------------------------------------------
USERS        /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
USERS        /u03/oradata/D183/datafile/o1_mf_users_h53ngx9r_.dbf

D183DR SQL> select t.name, substr(df.name,1,90) filename
            from v$tablespace t, v$datafile df
            where t.ts# = df.ts#
            and t.name = 'USERS'
            order by 1,2;

NAME         FILENAME
------------ ---------------------------------------------------------------------------------------
USERS        /u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h5obj68f_.dbf
USERS        /u02/oradata/D183DR/datafile/o1_mf_users_h5objbs4_.dbf

D183 SQL> alter tablespace users add datafile;

Tablespace altered.

D183 SQL> alter system switch logfile;

System altered.

D183 SQL> select t.name, substr(df.name,1,90) filename
          from v$tablespace t, v$datafile df
          where t.ts# = df.ts#
          and t.name = 'USERS'
          order by 1,2;

NAME         FILENAME
------------ ---------------------------------------------------------------------------------------
USERS        /u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
USERS        /u03/oradata/D183/datafile/o1_mf_users_h53ngx9r_.dbf
USERS        /u03/oradata/D183/datafile/o1_mf_users_h5od6r6z_.dbf

D183DR SQL> select t.name, substr(df.name,1,90) filename
            from v$tablespace t, v$datafile df
            where t.ts# = df.ts#
            and t.name = 'USERS'
            order by 1,2;

NAME         FILENAME
------------ ---------------------------------------------------------------------------------------
USERS        /u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h5obj68f_.dbf
USERS        /u02/oradata/D183DR/datafile/o1_mf_users_h5objbs4_.dbf
USERS        /u02/oradata/D183DR/datafile/o1_mf_users_h5odw0x1_.dbf
As you can see, the new data file was created on the standby server. OMF handled the file placement and naming convention.
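Since both databases use OMF, there is no need for a DB_FILE_NAME_CONVERT mapping here; each instance simply creates the file under its own DB_CREATE_FILE_DEST. You can confirm what each side is using with a quick check like this:

```sql
-- Judging by the file paths shown above, this should return
-- /u03/oradata on the primary and /u02/oradata on the standby.
show parameter db_create_file_dest
```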
Test #2: Primary Database Data Change and Switchover.
We will now add some data to the primary database and verify it has been applied to the standby database by performing a switchover and querying the data. Since this is a CDB, let’s populate the PDB with some data just to make sure it’s joining in with the Data Guard fun.
First, let’s run a Data Pump Import into D183_PDB1 in D183:
[oracle@orasvr01 dp]$ impdp sfrancis@d183_pdb1 parfile=impdp_media_01242020.parfile

Import: Release 18.0.0.0.0 - Production on Tue Mar 3 10:50:20 2020
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
Password: <password>

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "SFRANCIS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SFRANCIS"."SYS_IMPORT_SCHEMA_01":  sfrancis/@d183_pdb1 parfile=impdp_media_01242020.parfile
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "MEDIA"."TITLES"                      183.6 KB    1896 rows
. . imported "MEDIA"."FORMATS"                     7.585 KB      55 rows
. . imported "MEDIA"."GENRES"                      5.523 KB       4 rows
. . imported "MEDIA"."MEDIA_TYPES"                 6.242 KB      14 rows
. . imported "MEDIA"."RECORDING_ARTISTS"           19.10 KB     628 rows
. . imported "MEDIA"."RELEASES"                    5.953 KB       3 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SFRANCIS"."SYS_IMPORT_SCHEMA_01" successfully completed at Tue Mar 3 10:51:30 2020 elapsed 0 00:01:05
Next, we’ll login to the MEDIA account and check there’s some data:
[oracle@orasvr01 ~]$ sqlplus media@d183_pdb1

d183_pdb1 SQL> show con_name

CON_NAME
---------
D183_PDB1

d183_pdb1 SQL> show user
USER is "MEDIA"

d183_pdb1 SQL> select count(*) from TITLES;

  COUNT(*)
----------
      1896
Next, we’ll force a log switch on the primary and verify the standby database has caught up:
D183 SQL> alter system switch logfile;

System altered.

D183 SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     49
Next log sequence to archive   51
Current log sequence           51

D183DR SQL> select CLIENT_PROCESS,PROCESS,SEQUENCE#,STATUS from v$managed_standby;

CLIENT_P PROCESS    SEQUENCE# STATUS
-------- --------- ---------- ------------
N/A      DGRD               0 ALLOCATED
ARCH     ARCH              50 CLOSING
N/A      DGRD               0 ALLOCATED
ARCH     ARCH              46 CLOSING
ARCH     ARCH              48 CLOSING
ARCH     ARCH              47 CLOSING
Archival RFS                0 IDLE
LGWR     RFS               51 IDLE
UNKNOWN  RFS                0 IDLE
UNKNOWN  RFS                0 IDLE
UNKNOWN  RFS                0 IDLE
LNS      LNS                0 CONNECTED
LNS      LNS                0 CONNECTED
N/A      MRP0              51 APPLYING_LOG
Archived redo log file sequence #50 has been transferred to the standby server and applied to the standby database. It’s now waiting to apply sequence #51. Looks good. Now it’s time to perform the switchover. This is always initiated from the primary database:
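Before issuing the switchover command, it’s also worth confirming the primary considers itself ready. A hedged sketch:

```sql
-- Run on the primary. TO STANDBY means a switchover is possible;
-- SESSIONS ACTIVE is also acceptable provided the switchover command
-- includes WITH SESSION SHUTDOWN.
select switchover_status from v$database;
```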
D183 SQL> alter database commit to switchover to physical standby with session shutdown;

Database altered.

D183DR SQL> alter database commit to switchover to primary with session shutdown;

Database altered.

D183DR SQL> alter database open;

Database altered.

D183 SQL> startup mount
ORACLE instance started.

Total System Global Area 3221221872 bytes
Fixed Size                  8901104 bytes
Variable Size             771751936 bytes
Database Buffers         2432696320 bytes
Redo Buffers                7872512 bytes
Database mounted.

D183 SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.
Next, we’ll check that the databases have the roles we expect:
D183DR SQL> select DB_UNIQUE_NAME, DBID, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;

DB_UNIQUE_NAME DBID      OPEN_MODE  PROTECTION_MODE     DATABASE_ROLE    SWITCHOVER_STATUS
-------------- --------- ---------- ------------------- ---------------- -----------------
D183DR         615698974 READ WRITE MAXIMUM PERFORMANCE PRIMARY          TO STANDBY

D183 SQL> select DB_UNIQUE_NAME, DBID, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;

DB_UNIQUE_NAME DBID      OPEN_MODE  PROTECTION_MODE     DATABASE_ROLE    SWITCHOVER_STATUS
-------------- --------- ---------- ------------------- ---------------- -----------------
D183           615698974 MOUNTED    MAXIMUM PERFORMANCE PHYSICAL STANDBY NOT ALLOWED
The databases have switched roles, so the new primary is D183DR and the new standby is D183. Next, we need to check that the data we imported is actually in the PDB in the new primary:
D183DR SQL> alter pluggable database d183_pdb1 open;

Pluggable database altered.

D183DR SQL> alter session set container=d183_pdb1;

Session altered.

D183DR SQL> select table_name from dba_tables where owner = 'MEDIA';

TABLE_NAME
--------------------------------------------------------------------------------
GENRES
MEDIA_TYPES
FORMATS
RECORDING_ARTISTS
RELEASES
TITLES

D183DR SQL> select count(*) from media.titles;

  COUNT(*)
----------
      1896
The data arrived intact, so that test worked as well. Things are looking good.
Test #3: Primary Database PDB Structure Change and Switchover.
As a final test, let’s add a tablespace to the PDB, verify the data file shows up on the standby database and then switch back to the original configuration (D183 as primary, D183DR as standby).
First, let’s check the data files belonging to D183_PDB1 in D183DR (new primary):
D183DR SQL> select name from v$datafile where con_id = 3;

NAME
-------------------------------------------------------------------------------------------
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5ok0gwq_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5ok0gmb_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5ok0gz3_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h5ok0gok_.dbf
Next, we’ll check the same set of files belonging to D183_PDB1 in D183 (new standby):
D183 SQL> select name from v$datafile where con_id = 3;

NAME
--------------------------------------------------------------------------------------------
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5443ffj_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5443fg3_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5443fg5_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
Next, we’ll add a tablespace to D183_PDB1 in D183DR, check the files, and then force a log switch (which must be done from the CDB$ROOT container):
D183DR SQL> show con_name
CON_NAME
------------------------------
D183_PDB1
D183DR SQL> create tablespace media_data;
Tablespace created.
D183DR SQL> conn / as sysdba
Connected.
D183DR SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
D183DR SQL> alter system switch logfile;
System altered.
D183DR SQL> select name from v$datafile where con_id = 3;
NAME
----------------------------------------------------------------------------------------------------
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5ok0gwq_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5ok0gmb_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5ok0gz3_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h5ok0gok_.dbf
/u02/oradata/D183DR/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_media_da_h5xf0tk1_.dbf
Next, we’ll check to see if the data file has been added to D183_PDB1 in D183 (new standby):
D183 SQL> select name from v$datafile where con_id = 3;
NAME
----------------------------------------------------------------------------------------------
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5443ffj_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5443fg3_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5443fg5_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_media_da_h5xf109p_.dbf
That worked. I should note that for a few seconds I saw the following file on the standby server before it disappeared and became the OMF file referenced above:
/u01/app/oracle/product/18.3.0/dbhome_1/dbs/UNNAMED00015
It would be interesting to know what would happen if you added a really big file and the /u01 file system did not have sufficient space to accommodate an UNNAMED file. Answers on a postcard to…
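The transient UNNAMED file is related to the STANDBY_FILE_MANAGEMENT parameter. With it set to AUTO (and OMF in play), the standby creates the real data file itself. Were it set to MANUAL, or if file creation failed on the standby, the new data file would stay registered as an UNNAMEDnnnnn placeholder under $ORACLE_HOME/dbs and you’d need to resolve it by hand. A hedged sketch of the manual fix (the file number 15 is simply taken from the UNNAMED00015 example above):

```sql
-- Check the current setting (AUTO is what you want for Data Guard).
show parameter standby_file_management

-- If a placeholder file gets stuck, stop redo apply, create the real
-- data file (AS NEW lets OMF pick the name and location), then
-- restart managed recovery.
alter database recover managed standby database cancel;
alter database create datafile 15 as new;
alter database recover managed standby database disconnect from session;
```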
Time to switch roles again and verify everything is back working in the original configuration. As usual, ensure the primary and standby databases are in sync. Force a log switch if you need to:
D183DR SQL> alter system switch logfile;

System altered.

D183DR SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     53
Next log sequence to archive   55
Current log sequence           55

D183 SQL> select CLIENT_PROCESS,PROCESS,SEQUENCE#,STATUS from v$managed_standby;

CLIENT_P PROCESS    SEQUENCE# STATUS
-------- --------- ---------- ------------
ARCH     ARCH               0 CONNECTED
N/A      DGRD               0 ALLOCATED
N/A      DGRD               0 ALLOCATED
ARCH     ARCH               0 CONNECTED
ARCH     ARCH               0 CONNECTED
ARCH     ARCH              54 CLOSING
Archival RFS                0 IDLE
LGWR     RFS               55 IDLE
LNS      LNS                0 CONNECTED
LNS      LNS                0 CONNECTED
UNKNOWN  RFS                0 IDLE
N/A      MRP0              55 APPLYING_LOG
Archived redo log file sequence #54 has been applied to the standby and it’s now waiting to apply the next sequence to be generated (#55). As always, we initiate the switchover from the primary database:
D183DR SQL> alter database commit to switchover to physical standby with session shutdown;

Database altered.

D183 SQL> alter database commit to switchover to primary with session shutdown;

Database altered.

D183 SQL> alter database open;

Database altered.

D183DR SQL> startup mount
ORACLE instance started.

Total System Global Area 2147481064 bytes
Fixed Size                  8898024 bytes
Variable Size             603979776 bytes
Database Buffers         1526726656 bytes
Redo Buffers                7876608 bytes
Database mounted.

D183DR SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.
Finally, let’s verify the respective database roles and open D183_PDB1 in D183 just to make sure it can verify the data file that was added while D183 was the standby database:
D183 SQL> select db_unique_name, database_role from v$database;

DB_UNIQUE_NAME                 DATABASE_ROLE
------------------------------ ----------------
D183                           PRIMARY

D183DR SQL> select db_unique_name, database_role from v$database;

DB_UNIQUE_NAME                 DATABASE_ROLE
------------------------------ ----------------
D183DR                         PHYSICAL STANDBY

D183 SQL> alter pluggable database d183_pdb1 open;

Pluggable database altered.

D183 SQL> alter session set container=d183_pdb1;

Session altered.

D183 SQL> select file_name from dba_data_files;

FILE_NAME
-------------------------------------------------------------------------------------------
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_system_h5443ffj_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_sysaux_h5443fg3_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_undotbs1_h5443fg5_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_users_h544foyw_.dbf
/u03/oradata/D183/9F38B30ACC493CC6E0531100A8C0D68A/datafile/o1_mf_media_da_h5xf109p_.dbf
So there you have it: how to configure Data Guard manually using SQL commands, with different file system layouts and OMF.