I had been working today on an AQ issue where messages were not moving from the WAIT to the READY state. This was on an 11.2.0.3 database running on RHEL 5u6 (64-bit), and I thought I would quickly document it for easy reference. When you enqueue a record, you can specify a delay after which the record becomes ready for dequeue. During this period MSG_STATE is 'WAIT', and it changes to 'READY' once the current time reaches the delay timestamp. All the stuck records had a delay_timestamp in the past, which can be verified by querying AQ$&lt;queue_table&gt;.
In the example below, you can see that MSG_ID 'D2DA56B9746E2E59E043096F93452D8A' has a delay_timestamp of '10-JAN-13 05.42.07.835125 AM', which is earlier than the current time (9:52 AM), yet the message is still in WAIT state.
09:52:00 SQL > select msg_id,msg_state,enq_timestamp,delay_timestamp from AQ$TEST_QTABLE where rownum <5;
MSG_ID MSG_STATE ENQ_TIMESTAMP DELAY_TIMESTAMP
-------------------------------- ---------- ------------------------------ ------------------------------
D2DA56B94EF02E59E043096F93452D8A WAIT 09-JAN-13 07.37.07.595334 AM 23-FEB-13 07.37.07.595334 AM
D2DA56B9746E2E59E043096F93452D8A WAIT 09-JAN-13 07.42.07.835125 AM 10-JAN-13 05.42.07.835125 AM
D2DA56B94ED32E59E043096F93452D8A WAIT 09-JAN-13 07.37.05.274447 AM 23-FEB-13 07.37.05.274447 AM
D2F1EDF5112D59A1E043096F93451A42 READY 10-JAN-13 07.35.33.089265 AM 10-JAN-13 07.35.34.089265 AM
Along with this, there were a lot of messages in the PROCESSED state. My first thought was that retention for this queue was set to a high value, but on checking I found that it was set to 0. You can read my previous article on retention and processed messages in a queue.
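Both points can be confirmed quickly with queries along these lines (TEST_Q and TEST_QTABLE are the queue and queue-table names from this example; substitute your own):

```sql
-- Retention setting for the queue (0 means processed messages
-- should be purged by qmon immediately)
SELECT name, retention
FROM   dba_queues
WHERE  name = 'TEST_Q';

-- Message counts per state in the queue table
SELECT msg_state, COUNT(*)
FROM   aq$test_qtable
GROUP  BY msg_state;
```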
V$AQ revealed that there were two more queues with pending processed messages. As per the documentation, the READY column should only count messages in the READY state, but here it included the PROCESSED count too (not sure if this is also a bug).
I verified that the aq_tm_processes parameter was not set (Oracle recommends leaving it unset in 10g and above so that it can be auto-tuned) and that two queue monitor processes were running.
To dig further, I checked the alert log for ORA- errors and found a few entries for ORA-01405.
ORA-01405: fetched column value is NULL
On checking the trace directory, the queue process trace file had logged the same ORA-01405 error:
kwqdlScanDeqLog during stmt fetch: retval -1, errnum 1405, errbuf ORA-01405: fetched column value is NULL
A search on My Oracle Support located note 1162862.1 – ORA-01405: Fetched Column Value Is Null For An Advanced Queue.
Oracle 11.2 introduced a new dequeue log table named AQ$_QT_L (where QT is the queue table name), which is used for storing the message identifiers of committed dequeue operations on the queue. Due to bug 12899768, a few message ids in this table were '00000000000000000000000000000000', which caused the PROCESSED messages to get stuck.
Note 12899768.8 indicates that this is a bug in 11.2.0.2 and 11.2.0.3 databases.
Symptoms include:
– ORA-01405 errors for the qmon process
– A lot of messages in PROCESSED state in the queue
– Messages with message ids like '00000000000000000000000000000000'
We need to either apply the patch for this bug or set the event '10852 trace name context forever, level 16384'. Searching for this event turned up Note 1365655.1 – Dequeue by msgid of a message in WAIT state does not work properly in 11.2.
This note matched our main problem, where messages were not changing state from WAIT to READY. It also suggested setting event 10852 followed by a database restart. To correct the issue we followed the steps below:
a) Stopped the apps to prevent enqueue/dequeue activity. You can also stop the queue using dbms_aqadm.stop_queue(), but this would generate errors in the app.
b) Deleted messages with msgid '00000000000000000000000000000000' from the new dequeue log table:
delete from AQ$TEST_TABLE_L where msgid ='00000000000000000000000000000000';
commit;
c) Set the event in the spfile and restarted the database:
conn / as sysdba
alter system set event='10852 trace name context forever, level 16384' scope=spfile;
After the restart, messages with a delay_timestamp older than the current time changed state to READY, and the count in V$AQ started dropping, indicating that the qmon process was purging the processed messages.
Online patching (also called hot patching) is a feature introduced in 11.1.0.6, and Oracle started delivering online patches with 11.2.0.2.
It allows you to apply a patch to a live database, i.e. we do not need to shut the database down. This feature is only available for the RDBMS home and can be installed/enabled using OPatch.
RDBMS Online Patching Aka Hot Patching [ID 761111.1] explains the internals of how online patching works.
I tested it while working on a database corruption issue caused by Bug 10205230 – ORA-600 / corruption possible during shutdown in RAC. This bug is present in 11.2.0.2 and causes corruption when you use shutdown immediate/normal/transactional to shut down a RAC database. As per MOS note 1318986.1, data corruption occurs around the shutdown of one or more of the RAC instances.
* One of the following ORA-600 asserts:
– ORA-600 [kclchkblk_3]
– ORA-600 [kclwcrs_6]
– ORA-600 [ktubko_1]
– ORA-600 [kcratr_scan_lostwrt]
– ORA-600[3020] on the standby database
The workaround is to perform a local checkpoint and do a shutdown abort. Patch 10205230 was available for our platform, and I downloaded it directly to the server using a download command that takes the following parameters:
mos_user_id – Metalink (MOS) user id (email id)
h_passwd – password
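For reference, a direct-to-server download with these two variables typically looks like the sketch below. The URL is purely illustrative; the exact download link must be copied from the patch's download page on MOS.

```shell
# Illustrative only: the URL must come from the patch download page on MOS
wget --http-user="$mos_user_id" --http-password="$h_passwd" \
     --no-check-certificate \
     -O p10205230_112020_Linux-x86-64.zip \
     "https://updates.oracle.com/Orion/Download/process_form/p10205230_112020_Linux-x86-64.zip"
```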
To check whether a patch is an online patch, go to the patch directory and run the following:
$opatch query -all online
Invoking OPatch 11.2.0.1.6
Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation. All rights reserved.
Oracle Home : /home/oracle/product/11.2
Central Inventory : /home/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.6
OUI version : 11.2.0.2.0
Log file location : /home/oracle/product/11.2/cfgtoollogs/opatch/opatch2012-06-20_16-43-21PM.log
--------------------------------------------------------------------------------
Patch created on 2 Apr 2011, 12:57:14 hrs PST8PDT
Need to shutdown Oracle instances: false
Patch is roll-backable: true
Patch is a "Patchset Update": false
Patch is a rolling patch: true
Patch has sql related actions: false
Patch is an online patch: true
Patch is a portal patch: false
Patch is an "auto-enabled" patch: false
List of platforms supported:
226: Linux x86-64
List of bugs to be fixed:
10205230: TB_X64: HIT ORA-00600: [KCLWCRS_6]
This patch is a "singleton" patch.
This patch belongs to the "db" product family
List of executables affected:
ORACLE_HOME/bin/oracle
List of optional components:
oracle.rdbms: 11.2.0.2.0
List of optional actions:
Patch the Database instances with Online Patch hpatch/bug10205230.pch
Possible XML representation of the patch:
<ONEOFF REF_ID="10205230" ROLLBACK="T" XML_INV_LOC="oneoffs/10205230/" ACT_INST_VER="11.2.0.2.0" INSTALL_TIME="2012.Jun.20 16:43:21 PDT">
<DESC></DESC>
<REF_LIST>
<REF NAME="oracle.rdbms" VER="11.2.0.2.0" HOME_IDX="0"/>
</REF_LIST>
<BUG_LIST>
<BUG>10205230</BUG>
</BUG_LIST>
<FILE_LIST/>
</ONEOFF>
--------------------------------------------------------------------------------
OPatch succeeded.
"Patch is an online patch: true" indicates that the patch can be applied to a live database.
Since we are patching a RAC database, we use the -connectString syntax:
$ opatch apply online -connectString orcl01d1:sys:oracle123:prod03,orcl01d2:sys:oracle123:prod04
Invoking OPatch 11.2.0.1.6
Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation. All rights reserved.
Oracle Home : /home/oracle/product/11.2
Central Inventory : /home/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.6
OUI version : 11.2.0.2.0
Log file location : /home/oracle/product/11.2/cfgtoollogs/opatch/opatch2012-06-20_16-53-36PM.log
The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '10205230' to OH '/home/oracle/product/11.2'
Verifying environment and performing prerequisite checks...
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Backing up files...
Patching component oracle.rdbms, 11.2.0.2.0...
The patch will be installed on active database instances.
Installing and enabling the online patch 'bug10205230.pch', on database 'orcl01d1'.
Patching in all-node mode.
Updating nodes 'prod04'
Apply-related files are:
FP = "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_files.txt"
DP = "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_dirs.txt"
MP = "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/make_cmds.txt"
RC = "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/remote_cmds.txt"
Instantiating the file "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/home/oracle/product/11.2/.patch_storage/10205230_Apr_2_2011_12_57_14/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Installing and enabling the online patch 'bug10205230.pch', on database 'orcl01d2' on node 'prod04'.
Patch 10205230 successfully applied
Log file location: /home/oracle/product/11.2/cfgtoollogs/opatch/opatch2012-06-20_16-53-36PM.log
OPatch succeeded.
You can verify that the patch is applied and enabled:
SQL> oradebug patch list
Patch File Name State
================ =========
bug10205230.pch ENABLED
If you have multiple Oracle databases running out of the same RDBMS home, you can enable the patch for the other databases using the enableonlinepatch option.
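A sketch of enabling the already-installed patch for a second database running from the same home; the SID and credentials below are hypothetical, and the exact syntax should be confirmed with `opatch util -help` for your OPatch version.

```shell
# Hypothetical SID/credentials: enable the online patch for another
# instance running from the same ORACLE_HOME
opatch util enableonlinepatch \
    -connectString orcl02d1:sys:oracle123:prod03 \
    -id 10205230
```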
The alert log is updated with the following entries during patch application:
Patch bug10205230.pch Installed - Update #1
Patch bug10205230.pch Enabled - Update #2
Thu Jun 21 02:13:17 2012
Online patch bug10205230.pch has been installed
Online patch bug10205230.pch has been enabled
Online patching is a very useful feature introduced in 11.2.0.2, as it allows you to apply patches without any downtime. Note that if you apply an online patch using a normal opatch apply, it will not be enabled; you can confirm this with the oradebug patch list command.
Oracle 11.2.0.2 has introduced a new column, REASON, in V$SQL_SHARED_CURSOR. I came across this extremely useful column while debugging a multiple-child-cursor issue (11.2.0.2 seems to have quite a few bugs that can cause multiple child cursors).
As per the documentation, the REASON column was introduced in 11.2.0.2:
REASON - CLOB Child number, id, and reason the cursor is not shared. The content of this column is structured using XML.
The REASON column is helpful because we can easily interpret why child cursors are not being shared.
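For example, a query along these lines (using the sql_id and child number from this case) pulls the XML reason for a given child:

```sql
-- The REASON column is an XML document naming the sharing mismatch
SELECT child_number, reason
FROM   v$sql_shared_cursor
WHERE  sql_id = 'b9uz0akdcx58q'
AND    child_number = 151;
```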
We can verify the same thing by checking the PQ_SLAVE_MISMATCH column:
select sql_id,PQ_SLAVE_MISMATCH,OPTIMIZER_MODE_MISMATCH from V$SQL_SHARED_CURSOR where sql_id='b9uz0akdcx58q' and child_number=151;
SQL_ID P O
------------- - -
b9uz0akdcx58q Y N
There is one more new column, PURGED_CURSOR (I believe it came in 11.2.0.1), which tells whether the cursor has been purged using the dbms_shared_pool.purge procedure.
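For completeness, a sketch of purging a cursor. The address/hash_value pair must be taken from V$SQLAREA for your statement; the values in the PURGE call below are placeholders, not real addresses.

```sql
-- Find the address and hash_value for the statement
SELECT address, hash_value
FROM   v$sqlarea
WHERE  sql_id = 'b9uz0akdcx58q';

-- Purge it ('C' indicates the object is a cursor);
-- the address/hash_value below are placeholders
EXEC DBMS_SHARED_POOL.PURGE('000000085FD77CF0, 1234567890', 'C');
```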
I had installed an 11.2.0.2 RAC setup a few days back with an incorrect timezone: it had to be PDT, but I installed with UTC.
Stopping and starting clusterware with the correct timezone in the environment didn't solve the issue.
In 11.2.0.2, Oracle stores timezone information in the file $GRID_HOME/crs/install/s_crsconfig_&lt;hostname&gt;.txt. In my case the file looked like this:
cd /oragrid/product/11.2/crs/install
cat s_crsconfig_prod1.txt
TZ=UTC
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
TNS_ADMIN=
ORACLE_BASE=
To resolve the issue, we need to change TZ to US/Pacific on all nodes and restart clusterware, so the entry becomes:
TZ=US/Pacific
After restarting clusterware, both clusterware and the database start with the correct timezone.
If you wish to set a different timezone for the Oracle database only, that is possible using a srvctl command.
This requires a database bounce. Also note that if the database is started manually rather than via srvctl, it will not start with the correct timezone. The parameter can later be removed with another srvctl command.
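A sketch of the srvctl approach; the database name "orcl" below is hypothetical.

```shell
# Set TZ for the database resource only, then verify it
srvctl setenv database -d orcl -t "TZ=US/Pacific"
srvctl getenv database -d orcl

# Bounce the database for the change to take effect
srvctl stop database -d orcl
srvctl start database -d orcl

# To unset the parameter later
srvctl unsetenv database -d orcl -t TZ
```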
Sharing a post from my friend “Carthik” on 11gR2 RAC in-place upgrade.
Overview
Starting with Oracle 11gR2, database and clusterware upgrades are done via an "out-of-place upgrade", which is the easier way to perform your upgrade. However, the intent of this post is to explain how an "in-place upgrade" of a RAC database, the traditional method of upgrading an Oracle database, is done in 11gR2, along with the advantages, disadvantages, prerequisites, and pain points involved. I have chosen to upgrade a 2-node 11.2.0.1 RAC database to 11.2.0.2.
Clearly, the advantage is that you save space, since you do not install a new Oracle home.
The greatest disadvantage is that you need to back up the Oracle home and run detach home commands, which could potentially damage your Oracle binaries. When done carefully, however, it should not cause a problem.
The pain points include:
1. Backing up the Oracle home.
2. Restoring the Oracle home from backup in case of installation failure.
3. Attaching the restored home and then bringing up the instance.
Clearly, there is a lot of manual intervention, which is a major pain point.
The idea behind an in-place upgrade is to save space, and the method requires a significant amount of time. The only time one should use it is when you lack space on your server, though you can also use it in test and development environments to save space. Since this was the traditional method of doing things, I thought it worth checking how it works in 11gR2.
NOTE: If you have an existing Oracle Clusterware installation, you upgrade the cluster by performing an out-of-place upgrade; you cannot perform an in-place upgrade of Oracle Clusterware. You can, however, perform an in-place upgrade of the database, which is elaborated in this post. A prerequisite for the 11.2.0.2 installation is to apply patch 9655006 to the 11.2.0.1 GI home before upgrading from 11.2.0.1 to 11.2.0.2 (see Bug 9413827 on MOS). For additional details refer to "Pre-requisite for 11.2.0.1 to 11.2.0.2 ASM Rolling Upgrade", MOS Note 1274629.1, and "How to Manually Apply A Grid Infrastructure PSU Without Opatch Auto", MOS Note 1210964.1.
Performing an In-place upgrade of a RAC DB from 11.2.0.1 to 11.2.0.2
In order to upgrade a RAC database from 11.2.0.1 to 11.2.0.2 you need to upgrade:
1. The grid infrastructure first. 2. Then, the Oracle Database.
Environment setup used in this post:
— 2-node RAC on Red Hat Linux 5.5 with rack servers (R710).
— Applies to any storage.
Latest OPatch: It is recommended to use the latest version of OPatch. Unzip the zip file and copy the OPatch folder to $ORACLE_HOME and $GI_HOME after renaming the earlier OPatch directory. Refer to MOS Note 274526.1 for how to download and install OPatch.
Pre- Requisite Patch:
First, let's discuss the mandatory patch required before upgrading to 11.2.0.2. Patch 9655006 is required for the upgrade to succeed; without it, rootupgrade.sh will fail. The patch is downloadable from http://www.metalink.oracle.com/; for information on the bug, refer to MOS article 9655006.8. Download the patch and unzip it to a stage directory (it can be any directory); once you unzip the downloaded patch, two folders are created. In this example I have unzipped the patch to the /patches directory.
Now, let’s see how to patch the $GI_HOME with Patch 9655006.
Before the patch is installed, we need to perform a prerequisite check:
1. [root@rac1 patches]# su – oracle
2. [oracle@rac1 ~]$ cd /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/
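The conflict check itself is invoked from the OPatch directory along the lines below; the exact invocation is not captured in the listing, and the patch subdirectory name under /patches is an assumption.

```shell
# Run from $ORACLE_HOME/OPatch; -ph points at the unzipped patch directory
./opatch prereq CheckConflictAgainstOHWithDetail -ph /patches/9655006
```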
Invoking OPatch 11.2.0.1.6
Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /opt/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /opt/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.6
OUI version : 11.2.0.1.0
Log file location : /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2011-08-16_19-36-09PM.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
Patching the $GI_HOME
1. Stop the instance running on Node1:
srvctl stop instance -d upgrade -i upgrade1
2. As the root user, run opatch auto from the $GI_HOME:
./opatch auto /patches
Note: opatch auto takes care of patching both the Grid Infrastructure home and the Oracle home with the mandatory patch.
Once the patching is done on Node1, start the instance on Node1
3. Start the instance on Node1:
srvctl start instance -d upgrade -i upgrade1
Repeat the prerequisite check and patching on Node2.
Once the mandatory patch is applied, we can proceed with the upgrade of the grid infrastructure home.
Patches required:
The software/Patch can be downloaded from My Oracle support: patch 10098816. Select p10098816_112020_Linux-x86-64_3of7.zip for grid infrastructure download. Once downloaded, unzip them.
Upgrading Grid Infrastructure:
Unzip the patches downloaded and invoke runInstaller from the unzipped grid folder. You will be taken to the welcome screen.
Choose Skip Software updates
Choose Upgrade Grid Infrastructure or Oracle ASM
Choose the Language
The Nodes present are selected by default, Click Next.
Leave the OS groups to Defaults
Choose the New Location where the Grid Infrastructure should be installed
The Pre-Requisite checks are performed, click next
The summary screen appears click next
Click on Install
Run rootupgrade.sh on both the nodes as specified in the screenshot
Upgrading the Database via In-place upgrade:
Patches Required: The software/Patch can be downloaded from My Oracle support: patch 10098816. Select p10098816_112020_Linux-x86-64_1of7.zip and p10098816_112020_Linux-x86-64_2of7.zip for database patch/software download. Once downloaded, unzip them.
In-place upgrades (things to do before performing an in-place upgrade)
When performing an in-place upgrade, which uses the same Oracle home location, an error message appears stating that the installer detects Oracle Database software in the location you specified.
Message: The installer has detected that the software location you have specified contains Oracle Database software release 11.2.0.1. Oracle recommends that when upgrading to 11.2.0.2, you perform an out-of-place installation of the software into a new Oracle home and then upgrade the database using the new software binaries.
Cause: The installer has detected that the software location you have specified contains Oracle Database software release 11.2.0.1.
Action: Either perform an in-place upgrade (Steps provided in this section), or perform an out-of-place upgrade
Performing an In-Place Upgrade for an Oracle RAC Database
To perform an in-place upgrade for Oracle RAC Database instances:
1. Back up the configuration data by backing up the following directories on all cluster nodes:
o ORACLE_HOME/dbs
o ORACLE_HOME/network/admin
o ORACLE_HOME/hostname_dbname
o ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_dbname
2. Run the following command on each of the nodes to detach the 11.2.0.1.0 Oracle RAC ORACLE_HOME:
$ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=&lt;11.2.0.1.0 software location&gt;
3. Rename the 11.2.0.1.0 Oracle RAC ORACLE_HOME directory on all the nodes to a temporary name.
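Steps 2 and 3 together look roughly like this, using the home path from earlier in this post; adjust the paths to your environment.

```shell
# Detach the 11.2.0.1 home from the central inventory (run on each node)
/opt/app/oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller -detachHome \
    ORACLE_HOME=/opt/app/oracle/product/11.2.0/dbhome_1

# Rename the old home to a temporary name
mv /opt/app/oracle/product/11.2.0/dbhome_1 \
   /opt/app/oracle/product/11.2.0/dbhome_1.old
```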
4. Install release 11.2.0.2 Software Only on all nodes:
From the unzipped folder, invoke runInstaller from the database folder.
The welcome screen appears, uncheck the security updates and click next
Choose Skip Software Updates
Select Install Database Software Only and click next
Choose Oracle Real Application Clusters DB Installation and Select the Nodes and click next
Choose the Language and click next
Select Enterprise Edition and click next
Specify the location of the old home, and click next
Provide the Operating System groups and click next
Once the Pre-requisite checks are performed, click next
The summary screen appears, verify the settings and click next
The installation proceeds
Once the installation is done, run root.sh on both nodes as instructed and click OK.
Now, on all nodes, restore into the 11.2.0.2 ORACLE_HOME software location the backed up configuration data files (from the backups you made of /dbs and network/admin), and also restore the following two directories: /hostname_dbname and /oc4j/j2ee/OC4J_DBConsole_hostname_dbname. Specify the actual name for hostname_dbname.
Database Upgrade using DBUA:
Run DBUA from the 11.2.0.2 ORACLE_HOME/bin directory on the local node and select the 11.2.0.1.0 Oracle RAC database instance to upgrade it to release 11.2.0.2.0.
The welcome screen appears once you invoke the DBUA, click next to proceed
DBUA Lists the databases that can be upgraded, select the one that you would like to upgrade
You can choose the Degree of parallelism and upgrading the time zone version and click next
Skip this screen by clicking next
The summary screen appears, click on finish for the upgrade to proceed.
The upgrade proceeds
NOTE: The only time one should use this method (in-place upgrade) is when you lack space on your server. However, you can use this method for your test and development environments to save space.
Since this method was the traditional way of doing things, I thought it worth checking how it works in 11gR2. During the entire upgrade process I never ran into any issues, apart from the slightly higher downtime in comparison to an out-of-place upgrade. However, Oracle doesn't recommend this method; you can refer to MOS note 1291682.1.
This post continues the Grid Infrastructure upgrade post written earlier. Ensure that you have downloaded the software and have upgraded the Grid Infrastructure.
The following files contain the database software for 11.2.0.2:
p10098816_112020_Linux-x86-64_1of7.zip
p10098816_112020_Linux-x86-64_2of7.zip
Please note that this is a full release, so if you are installing new software you can directly install 11.2.0.2 without first installing 11.2.0.1. Unzip the software and start runInstaller from the database directory.
cd $SW_HOME/database
./runInstaller
We are presented with an OUI screen asking for MOS credentials. Note that we will be installing the software into a new ORACLE_HOME, which is a new feature called out-of-place upgrade.
The next screen asks again for MOS credentials, which can be used to check whether any patches need to be applied before the software is upgraded. Since we have already applied PSU2, we choose Skip Software Updates.
We are now presented with 3 options
a) Create and configure database – installs the software and creates a database using dbca
b) Install database software only – only installs the 11.2.0.2 software
c) Upgrade existing database – installs the software and launches dbua to upgrade the database
We chose option b), i.e. Install database software only.
Next screen presents you with 3 options
a)Single Instance database installation
b)Oracle RAC Installation
c)Oracle RAC One Node database Installation
I will discuss RAC One Node in a future post. For our installation we are upgrading a RAC database and require the RAC software.
Next screen gives option to choose between Enterprise edition and Standard Edition
The next screen asks for the database software installation directory. Unlike Grid Infrastructure, where it is mandatory to install into a new ORACLE_HOME, the RAC database software can be installed in the existing home. But we will choose an out-of-place upgrade, i.e. install into a new home.
Choose the OSDBA and OSOPER group
The next screen checks the prerequisites. You can ask Oracle to create a fixup script; refer to my earlier post for details.
Finally we are prompted to run root.sh from both nodes
After you have run root.sh on both nodes, you can use dbua to upgrade the database. Note that we have not yet brought down the database; this is a great benefit of the out-of-place upgrade. A second benefit is that you are not touching the existing binaries, so you are not required to back them up and can easily roll back the changes if required. You can also continue using the old binaries for databases for which you cannot get downtime, and upgrade them later.
Set ORACLE_HOME and PATH to include the 11.2.0.2 software location and start dbua. Take a backup of the database before you start the upgrade process.
$dbua
We are presented with welcome screen. Press next
DBUA presents a list of the databases currently registered in /etc/oratab. We select db11g, which we need to upgrade.
The next screen asks for the following options:
a) Option to recompile invalid objects and the degree of parallelism for running utlrp.sql; we keep the default value of 3.
b) Option to disable archiving during the upgrade.
c) Option to upgrade the timezone file; we have not selected it and will do the upgrade manually.
Next screen presents with option to select Fast Recovery area and size
Next we are presented with a summary screen and asked to take a backup of the database; if you have not taken one yet, this is a good time. The database will now be stopped and started from the new Oracle home. Please ensure that you have appropriate settings for JAVA_POOL_SIZE and SHARED_POOL_SIZE during the upgrade, or use SGA_TARGET/MEMORY_TARGET, to avoid ORA-04031.
Following screen shows database upgrade progress screen.
Once the upgrade completes, it gives a summary of the upgrade process. As you can see, it warned about the DST (timezone) upgrade.
Oracle 11.2.0.2 ships with version 14 of the timezone file. Starting with 11g, you can have multiple databases running out of a single Oracle home with different timezone file versions.
You can refer to Updating the RDBMS DST version in 11gR2 (11.2.0.1 and up) using DBMS_DST [ID 977512.1]
Here is a summary of the steps we took to upgrade from version 11 to version 14.
---Check the current version of file ---
SQL> select version from V$TIMEZONE_FILE;
VERSION
----------
11
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
PROPERTY_NAME VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION 11
DST_SECONDARY_TT_VERSION 0
DST_UPGRADE_STATE NONE
---Confirm if $ORACLE_HOME/oracore/zoneinfo contains timezlrg_14.dat i.e version 14 file
--Prepare for upgrade
SQL> exec DBMS_DST.BEGIN_PREPARE(14);
PL/SQL procedure successfully completed.
--Confirm status
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
PROPERTY_NAME VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION 11
DST_SECONDARY_TT_VERSION 14
DST_UPGRADE_STATE PREPARE
---Re-running prepare statement will give below error.
SQL> exec DBMS_DST.BEGIN_PREPARE(14);
BEGIN DBMS_DST.BEGIN_PREPARE(14); END;
*
ERROR at line 1:
ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading of a secondary time zone data file is in
an active state
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DST", line 1340
ORA-06512: at line 1
-- truncate logging tables if they exist.
TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;
-- log affected data
BEGIN
DBMS_DST.FIND_AFFECTED_TABLES
(affected_tables => 'sys.dst$affected_tables',
log_errors => TRUE,
log_errors_table => 'sys.dst$error_table');
END;
/
---Query table to check if any affected table
SQL> SELECT * FROM sys.dst$affected_tables;
no rows selected
SQL>
SQL> SELECT * FROM sys.dst$error_table;
no rows selected
-- End the prepare phase
SQL> EXEC DBMS_DST.END_PREPARE;
A prepare window has been successfully ended.
--Startup the database in upgrade mode. Ensure you have set cluster_database=false
SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);
An upgrade window has been successfully started.
PL/SQL procedure successfully completed.
-- To confirm whether it has been upgraded to version 14
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
PROPERTY_NAME VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION 14
DST_SECONDARY_TT_VERSION 11
DST_UPGRADE_STATE UPGRADE
--Restart the database after setting cluster_database back to true
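After the restart, the upgrade window still has to be closed. The remaining calls are not shown in the listing above; a sketch, with parameter names as per MOS note 977512.1, looks roughly like this:

```sql
-- Convert affected TSTZ data to the new timezone version
VARIABLE numfail NUMBER
EXEC DBMS_DST.UPGRADE_DATABASE(:numfail, parallel => TRUE, -
  log_errors => TRUE, log_errors_table => 'SYS.DST$ERROR_TABLE', -
  log_triggers_table => 'SYS.DST$TRIGGER_TABLE', -
  error_on_overlap_time => FALSE, error_on_nonexisting_time => FALSE);
PRINT numfail

-- End the upgrade window; DST_UPGRADE_STATE should return to NONE
-- and numfail should be 0
EXEC DBMS_DST.END_UPGRADE(:numfail);
PRINT numfail
```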
This completes the database upgrade process. I would recommend dbua for upgrading the database, as it takes care of copying the init.ora and password files, modifies the OCR to point to the new Oracle home, and upgrades the srvctl version. The Oracle Upgrade Guide does not document the steps to be taken for srvctl; I referred to the Oracle Database Upgrade documentation for the upgrade.
If you use the manual method and run srvctl modify database to point to the new Oracle home, you will get errors like the ones below:
srvctl stop database -d test11g
PRCD-1027 : Failed to retrieve database test11g
PRCD-1229 : An attempt to access configuration of database test11g was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.2.0. Instead run the program from /oracle/product/app/11.2.0/dbhome_1.
You can use following command to upgrade the srvctl version
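The command in question is along the lines below; the database name comes from this post's example, and the new home path is hypothetical, so substitute your own.

```shell
# Run from the new 11.2.0.2 home to update the database's OCR
# configuration to the new version (new home path is illustrative)
srvctl upgrade database -d test11g -o /oracle/product/app/11.2.0/dbhome_2
```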
This blog reflects our own views and does not necessarily represent the views of our current or previous employers.
The contents of this blog are from our experience; you may use them at your own risk, but you are strongly advised to cross-reference with product documentation and test before deploying to production environments.