Upgrading Grid Infrastructure 11.2.0.1 to 11.2.0.2 on RHEL4

The 11.2.0.2 patch set was released a few days back. I decided to upgrade a test RAC database to 11.2.0.2 yesterday and was able to do it successfully. I am documenting the steps here for easy reference.

Environment Details

2 node RAC on Red Hat Enterprise Linux AS release 4 (Nahant Update 8), 64 bit

To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2, you must first do at least one of the following:

– Patch the release 11.2.0.1 Oracle Grid Infrastructure home with the fix for bug 9413827.
– Install Oracle Grid Infrastructure Patch Set Update 1 (GI PSU1) or Oracle Grid Infrastructure Patch Set Update 2 (GI PSU2).

I will be using Oracle Grid Infrastructure Patch Set Update 2 (GI PSU2).

Software and Patches

1) Download Patch 9655006 – 11.2.0.1.2 for Grid Infrastructure (GI) Patch Set Update from MOS

2) Latest OPatch for 11.2 – Patch 6880880: OPatch 11.2 (Universal Installer patch)

You can refer to How To Download And Install OPatch [ID 274526.1]

3) 11.2.0.2 patch set files. You can find the method to download them directly to the server here

p10098816_112020_Linux-x86-64_1of7.zip  – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_2of7.zip – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_3of7.zip – 11.2.0.2 Grid Installation Files

Let's get started.

Install Latest Opatch

Unzip the downloaded file and copy the OPatch folder into $ORACLE_HOME, after renaming the existing OPatch directory. You can refer to How To Download And Install OPatch [ID 274526.1]
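A rough sketch of this step, assuming the OPatch zip was downloaded to ~/software (the file name below is indicative) and $ORACLE_HOME points to the home being patched:

$ cd $ORACLE_HOME
$ mv OPatch OPatch.orig
$ unzip ~/software/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
$ OPatch/opatch version

Repeat the same for the Grid home as well, since opatch auto works against both homes.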

Apply Grid Infrastructure (GI) Patch Set Update – 9655006

This applies patches 9654983 and 9655006 to both the Database and Grid homes. We proceed by checking for any patch conflicts:
[oracle@oradbdev01]~/software/11.2.0.2/psu% opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9655006
Invoking OPatch 11.2.0.1.3

Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /oracle/product/app/11.2.0/dbhome_1
Central Inventory : /oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.3
OUI version       : 11.2.0.1.0
OUI location      : /oracle/product/app/11.2.0/dbhome_1/oui
Log file location : /oracle/product/app/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2010-10-02_06-03-34AM.log

Patch history file: /oracle/product/app/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

We need to use opatch auto to patch both the Grid Infrastructure and RAC database homes. It can also be used to patch them separately, but I am going with patching both together. This is a great improvement over the 10g CRS patch bundles, which had a lot of steps to be run as root or as the oracle software owner. (I had myself messed things up once by running a command as root instead of oracle 🙁 ) opatch auto starts by patching the database home and then patches the Grid Infrastructure home. You can stop the database instance running out of the Oracle home beforehand (opatch asks for the database to be shut down, so if you do not stop it, don't worry, it will warn you).

Note: If you are on RHEL5/OEL5 and using ACFS for database volumes, it is recommended to use opatch auto <loc> -oh <grid home> to patch the Grid home first instead of patching both homes together, as shown below.
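With the paths used in this post, that invocation would look something like:

# opatch auto /oracle/software/gips2 -oh /oracle/product/grid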

Unzip patch 9655006 to a directory, say /oracle/software/gips2. It will create two directories, one for patch 9654983 and one for 9655006. Connect as the root user, set ORACLE_HOME to the Grid home and add the OPatch directory to the PATH (a sketch follows after the stop command below). Then stop the database instance running on the node:

srvctl stop instance -d db11g -i db11g1
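For reference, the preparation might look roughly like this (the zip name is abbreviated with a wildcard; the Grid home path is the one from this environment):

$ unzip p9655006*.zip -d /oracle/software/gips2

Then, as root:

# export ORACLE_HOME=/oracle/product/grid
# export PATH=/oracle/product/grid/OPatch:$PATH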

Next, run the opatch auto command as root:

#opatch auto /oracle/software/gips2
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

UTIL session

Oracle Home       : /oracle/product/grid
Central Inventory : /oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.3
OUI version       : 11.2.0.1.0
OUI location      : /oracle/product/grid/oui
Log file location : /oracle/product/grid/cfgtoollogs/opatch/opatch2010-10-02_06-35-03AM.log

Patch history file: /oracle/product/grid/cfgtoollogs/opatch/opatch_history.txt

Invoking utility "napply"
Checking conflict among patches...
Checking if Oracle Home has components required by patches...
Checking conflicts against Oracle Home...
OPatch continues with these patches:   9654983  

Do you want to proceed? [y|n]
y
User Responded with: Y

Running prerequisite checks...
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: 

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y

You selected -local option, hence OPatch will patch the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/oracle/product/grid')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files affected by the patch 'NApply' for restore. This might take a while...

I have not copied the whole output. It asks you to specify MOS credentials for setting up Oracle Configuration Manager; you can skip this, as shown above. You will find the following errors at the end:

OPatch succeeded.
ADVM/ACFS is not supported on Redhat 4

Failure at scls_process_spawn with code 1
Internal Error Information:
  Category: -1
 Operation: fail
  Location: canexec2
  Other: no exe permission, file [/oracle/product/grid/bin/ohasd]
  System Dependent Information: 0

CRS-4000: Command Start failed, or completed with errors.

Timed out waiting for the CRS stack to start.

This is a known issue and is discussed in 11.2.0.X Grid Infrastructure PSU Known Issues [ID 1082394.1].

To solve this issue, connect as the root user and execute the following command from your Grid Infrastructure home:

cd $GRID_HOME

# ./crs/install/rootcrs.pl -patch
2010-10-02 06:59:54: Parsing the host name
2010-10-02 06:59:54: Checking for super user privileges
2010-10-02 06:59:54: User has super user privileges
Using configuration parameter file: crs/install/crsconfig_params
ADVM/ACFS is not supported on Redhat 4

CRS-4123: Oracle High Availability Services has been started.

Start the Oracle database instance on the node that has been patched:

srvctl start instance -d db11g -i db11g1

Repeat the above steps for node 2. It took me 40 minutes per node to complete the opatch auto activity. Once it is done, execute the following from one node, for each database, to complete the patch installation:

cd $ORACLE_HOME/rdbms/admin

sqlplus /nolog

SQL> CONNECT / AS SYSDBA

SQL> @catbundle.sql psu apply

SQL> QUIT
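Optionally, you can verify that the PSU has been registered; a quick check along these lines (the patch number is the one applied above):

SQL> SELECT action, version, comments FROM dba_registry_history ORDER BY action_time;

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 9655006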

This completes the application of the Grid Infrastructure PSU (GI PSU2).

Upgrading Grid Infrastructure home to 11.2.0.2

Ensure the following environment variables are not set: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, TNS_ADMIN

# echo $ORA_CRS_HOME; echo $ORACLE_HOME; echo $ORA_NLS10; echo $TNS_ADMIN

Starting with 11.2.0.2, the Grid Infrastructure (Clusterware and ASM home) upgrade is an out-of-place upgrade, i.e. we install into a new ORACLE_HOME. Unlike the database home, we cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM into the existing home.

Unset the following variables too:

$ unset ORACLE_BASE

$ unset ORACLE_HOME

$ unset ORACLE_SID

Relax permissions for GRID_HOME

#chmod -R 755 /oracle/product/grid

#chown -R oracle /oracle/product/grid

#chown oracle /oracle/product

Start the runInstaller from the 11.2.0.2 grid software directory.
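Assuming the 11.2.0.2 zips were staged alongside the earlier patches (the staging path below is just the one used in this environment), this amounts to roughly:

$ cd ~/software/11.2.0.2
$ unzip p10098816_112020_Linux-x86-64_3of7.zip
$ cd grid
$ ./runInstaller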

On the first screen, it asks for MOS details. I chose to skip the software update.

Since we are upgrading an existing installation, choose “Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management”. This will install the software and also configure ASM.

The next screen displays the nodes which OUI will patch.

Select the OSDBA, OSASM and OSOPER groups. I have chosen dba for all of them.

Since I have chosen all three groups to be the same, it gives a warning. Select Yes to continue.

Specify the grid software installation directory. I have used /oracle/product/oragrid/11.2.0.2 (it is better to include the release in the path, as future patch sets will also be out-of-place upgrades).
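If you prefer to pre-create the new home with the right ownership, a sketch using the oracle owner and the dba group chosen above (OUI can also create the directory for you):

# mkdir -p /oracle/product/oragrid/11.2.0.2
# chown -R oracle:dba /oracle/product/oragrid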

OUI reports some issues with the swap size, shmmax and NTP. You can fix them or choose to ignore them; OUI can create a fixup script for you.

The installation starts copying files. After the files have been copied to both nodes, it asks you to run the rootupgrade.sh script on all nodes.

Stop the database instance running on the node at this point and then run the script as root. Please note that ASM and Clusterware should not be stopped, as rootupgrade.sh requires them to be up and takes care of shutting them down and restarting them from the new home. In my case, rootupgrade.sh succeeded on node 1 but hung on node 2. I cancelled it and re-ran it. Pasting the contents of the second run on node 2:

[root@oradbdev02 logs]# /oracle/product/oragrid/11.2.0.2/rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/product/oragrid/11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/product/oragrid/11.2.0.2/crs/install/crsconfig_params
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-1115: Oracle Clusterware has already been upgraded.

ASM upgrade has finished on last node.

Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

On clicking Next in OUI, it reported that cluvfy had failed and some of the components were not installed properly. To verify, I ran cluvfy manually. Pasting the checks that failed:

./cluvfy stage -post crsinst -n oradbdev01,oradbdev02 -verbose 

-------------------------------truncated output -------------------

ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file "/etc/oracle/ocr.loc"...

ERROR:
PRVF-4175 : OCR config file "/etc/oracle/ocr.loc" check failed on the following nodes:

	oradbdev02:Group of file "/etc/oracle/ocr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/etc/oracle/ocr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

Disk group for ocr location "+DG_DATA01" available on all the nodes

OCR integrity check failed
-------------------------------truncated output -------------------

Checking OLR config file...

ERROR: 

PRVF-4184 : OLR config file check failed on the following nodes:

	oradbdev02:Group of file "/etc/oracle/olr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/etc/oracle/olr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

Checking OLR file attributes...

ERROR: 

PRVF-4187 : OLR file check failed on the following nodes:

	oradbdev02:Group of file "/oracle/product/oragrid/11.2.0.2/cdata/oradbdev02.olr" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/oracle/product/oragrid/11.2.0.2/cdata/oradbdev01.olr" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

OLR integrity check failed

-------------------------------truncated output -------------------
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  oradbdev02                            no
  oradbdev01                            no
Result:
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed

PRVF-9652 : Cluster Time Synchronization Services check failed

We see that cluvfy reports an error because it expected the oinstall group but found the dba group. I had not specified the oinstall group during installation, so this can be ignored. For NTP, you can correct the issue by setting the following in /etc/sysconfig/ntpd and restarting the ntpd daemon:

grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"
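After editing, restart the daemon on each node; the clock synchronization check can then be re-run with cluvfy (a sketch):

# service ntpd restart

$ cluvfy comp clocksync -n all -verbose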

Refer to How to Configure NTP to Resolve CLUVFY Error PRVF-5436 PRVF-9652 [ID 1056693.1].

At this point, Grid Infrastructure has been successfully upgraded to 11.2.0.2. Next, we will upgrade the RAC database home; you can refer to Steps to Upgrade 11.2.0.1 RAC to 11.2.0.2.