Yesterday, I faced an interesting scenario while upgrading a 2-node RAC database. I tried setting CLUSTER_DATABASE=FALSE in the spfile from Node 1, but the value displayed as TRUE after restarting the database. The same was true for the database startup entries in the alert log. If I made the same setting on Node 2 and started the database from Node 2, it started in shared mode. I was using a shared spfile (on OCFS) for both nodes.
CLUSTER_DATABASE is a Real Application Clusters parameter that specifies whether Real Application Clusters is enabled. It is mostly used for starting the database in exclusive mode during operations that require updating the data dictionary, e.g. upgrading the database, enabling archivelog mode, or changing the database character set.
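As a rough sketch, the typical flow for such maintenance looks like this (the database name orcl and the exact shutdown options here are illustrative, not taken from my setup):

-- On one node, disable clustering in the shared spfile
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;

-- Stop all instances, then start a single instance, which mounts in exclusive mode
$ srvctl stop database -d orcl
SQL> STARTUP MOUNT;

-- ... perform the maintenance (upgrade, ALTER DATABASE ARCHIVELOG, etc.) ...

-- Switch back to shared mode and restart the cluster database
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
$ srvctl start database -d orcl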
I had ignored this error a few times, but today I wanted to find the cause and resolve it. To diagnose further, I set CLUSTER_DATABASE=FALSE from Node 1 with the following command:
ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
Then I used srvctl to start the database:
[oracle@blrraclnx1 bdump]$ srvctl start database -d orcl
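Note that srvctl start database starts every instance of the database, which is why both nodes tried to come up. To start a single instance instead, one could use something like the following (the instance name orcl1 is assumed for illustration):

$ srvctl start instance -d orcl -i orcl1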
Node 1's alert log had CLUSTER_DATABASE=TRUE:
cluster_database          = TRUE
cluster_database_instances= 2
db_create_file_dest       = +DATA
db_recovery_file_dest     = +FRA
db_recovery_file_dest_size= 4294967296
thread                    = 1
instance_number           = 1
But on checking the alert log from Node 2, I found CLUSTER_DATABASE=FALSE:
cluster_database          = FALSE
cluster_database_instances= 1
db_create_file_dest       = +DATA
db_recovery_file_dest     = +FRA
db_recovery_file_dest_size= 4294967296
thread                    = 2
instance_number           = 2
Moreover, Instance 2 had crashed with the following errors:
Tue Sep 2 21:10:14 2008
lmon registered with NM - instance id 2 (internal mem no 1)
Warning: cluster_database_instances (1) is <= my node id (1)
This instance wants to mount exclusive when instance 0 has mounted shared. Exiting!
Tue Sep 2 21:10:15 2008
USER: terminating instance due to error 29707
Instance terminated by USER, pid = 2807
The error says that Instance 0 (Node 1) had already mounted the database in shared mode, while this instance (Node 2), having read CLUSTER_DATABASE=FALSE, wanted to mount it in exclusive mode, so it terminated itself with error 29707.
To check the values, I decided to use the V$SPPARAMETER view, which shows what is actually recorded in the spfile. Until now I had been using the show parameter cluster_database command to check the values.
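For comparison, show parameter is effectively a query against V$PARAMETER, which reflects only the in-memory values of the instance you are connected to; something like:

SQL> SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'cluster_database';

Querying V$SPPARAMETER instead shows the spfile entries, per instance: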
SQL> SELECT SID,NAME,VALUE,DISPLAY_VALUE FROM V$SPPARAMETER WHERE NAME='cluster_database';

SID        NAME                           VALUE           DISPLAY_VALUE
---------- ------------------------------ --------------- ---------------
orcl1      cluster_database               TRUE            FALSE
*          cluster_database               FALSE           FALSE
This is strange!! I had never specified any instance-specific value for Node 1. In an spfile, a SID-specific entry (here orcl1) takes precedence over the '*' (all instances) entry for that instance, which explains why Node 1 kept starting with TRUE while Node 2 picked up FALSE. Anyway, to resolve it, I used the reset command to clear the orcl1 entry from the spfile:
SQL> alter system reset cluster_database sid='orcl1';

System altered.
Then I re-checked the V$SPPARAMETER view
SQL> SELECT SID,NAME,VALUE FROM V$SPPARAMETER WHERE NAME='cluster_database';

SID      NAME               VALUE
-------- ------------------ --------
*        cluster_database   FALSE
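As an additional sanity check, V$SPPARAMETER has an ISSPECIFIED column, so a query along these lines should list any remaining instance-specific overrides in the spfile (illustrative, but the columns are standard):

SQL> SELECT SID, NAME, VALUE FROM V$SPPARAMETER WHERE ISSPECIFIED = 'TRUE' AND SID <> '*';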
The next restart brought the database up in exclusive mode. The issue was resolved, but the question remained: why was the CLUSTER_DATABASE value different for Node 1?
I remember that as part of the setup I had created Node 1 first and added Node 2 later. Some steps could have been missed there, or this parameter could have been set explicitly with the sid='orcl1' option, as shown below. I really have no clue why it was like that. If anyone has experienced this, do let me know.
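For the record, a SID-specific entry like the one I found would be created by a command of this form (hypothetical; I never knowingly ran it):

SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE SID='orcl1';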