October 2010 Blogroll Report

It’s been a while since we saw a Log Buffer edition (the last one was published on 1st October 2010) or Coskan’s weekly blogroll report.

I found them really useful for catching up on the top blog articles of the week, so I decided to tag a few articles posted in October 2010 which I had read and found useful. I am subscribed to orana.info, so most of these articles are already aggregated there (barring a few). Unlike Pythian’s Log Buffer, I will be concentrating only on Oracle Database related posts. Let’s start.

The 11.2.0.2 patchset introduced a lot of new features in Grid Infrastructure. Martin Bach discusses the Redundant Interconnect feature introduced in 11.2.0.2 in the following post

Then there is his series on setting up Stretched RAC (with 4 parts published so far)

Julian Dyke has started his blog and has posted quite a number of articles related to 11.2.0.2.
He discussed an issue related to multicasting (required to enable the HAIP interconnect feature) in the post below

Julian also published a test program to check whether multicasting is enabled on your box

Ritesh has posted a handy script to map ASM devices on EMC to their disk numbers, along with a tweak to the max sector size configuration on RHEL5 which can boost ASM performance. Read more below


Doug writes about an issue related to SQL*Net events and the methodology he used to debug the ‘SQL*Net more data from client’ wait event

tkprof has undergone a surprising change in 11.2.0.2. Read Christian’s article about the tkprof changes in 11.2.0.2

Another post from Christian highlights changes related to Deferred Segment Creation in 11.2.0.2


Jarneil writes about SQL Plan Management in 11g and discusses a case where an index added to a table is not picked up automatically by the CBO

How do you answer the question “What is the difference between Nested Loop and Hash Joins“? Tanel Poder says “Hash joins can not look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source, nested loops can.”
Read the article below (make sure to check the comments section to see what Jonathan Lewis and Christo Kutrovsky had to say)


Check out the article on reading parallel execution plans with Bloom pruning by Greg Rahn


In 11.2 you can query the bind values of a running query instead of dumping an errorstack. Tanel Poder describes how, with an example


Dominic talks about NLS (I know many of you would be scared, but believe me, you can’t avoid it). Read about using NLS_LENGTH_SEMANTICS

Kellyn troubleshoots an ORA-600 raised by RMAN and is able to resolve it without applying a patch.

Meanwhile she is also trying out setting up a WordPress blog on dbakevlar.com (which is so much better than Blogspot)

Craig Shallahamer explains the correlation between database commit time and log file sync time

Do you have export scripts that you are scared to migrate to Data Pump? The legacy mode of Data Pump (an 11gR2 feature) can probably help you. Aman Sharma explains its usage in the post below
Happy Reading!

Steps to Upgrade 11.2.0.1 RAC to 11.2.0.2

This post is a continuation of the Grid Infrastructure upgrade post written earlier. Ensure that you have downloaded the software and have already upgraded the Grid Infrastructure.

The following files contain the database software for 11.2.0.2:

p10098816_112020_Linux-x86-64_1of7.zip

p10098816_112020_Linux-x86-64_2of7.zip

Please note that this is a full release, so if you are installing new software you can install 11.2.0.2 directly without first installing the 11.2.0.1 database software. Unzip the software and start runInstaller from the database directory.
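A minimal sketch of staging the software, assuming both zips were downloaded to a common software directory (paths are placeholders; both archives unpack into a single database directory):

# unpack both parts into the same staging area
unzip p10098816_112020_Linux-x86-64_1of7.zip -d $SW_HOME
unzip p10098816_112020_Linux-x86-64_2of7.zip -d $SW_HOME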

cd $SW_HOME/database

./runInstaller

We are presented with an OUI screen asking for MOS credentials. Please note that we will be installing the software into a new ORACLE_HOME, which is a new feature called out-of-place upgrade.

The next screen asks again for MOS credentials. These can be used to check whether there are any patches which need to be applied before the software can be upgraded. Since we have already applied PSU2, we choose Skip Software Updates.

We are now presented with 3 options:

a) Create and Configure database – installs the software and creates a database using dbca

b) Install database software only – only installs the 11.2.0.2 software

c) Upgrade existing database – installs the software and launches dbua to upgrade the database

We choose option b), i.e. Install database software only.

The next screen presents you with 3 options:

a) Single instance database installation

b) Oracle RAC installation

c) Oracle RAC One Node database installation

I will be discussing RAC One Node in a future post. For our installation we are upgrading a RAC database and therefore require the RAC software.

The next screen gives the option to choose between Enterprise Edition and Standard Edition.

The next screen asks for the database software installation directory. Unlike Grid Infrastructure, where it is mandatory to install into a new ORACLE_HOME, the RAC database software can be installed into the existing home. But we will be choosing an out-of-place upgrade, i.e. installing into a new home.

Choose the OSDBA and OSOPER group

The next screen checks the prerequisites. You can ask Oracle to create a fixup script; refer to my earlier post for details.

Finally we are prompted to run root.sh from both nodes.
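The script lives in the new home and is run as root on each node in turn; roughly (the new home path here is the one used for the srvctl upgrade command later in this post):

# as root, first on node 1 and then on node 2
/oracle/product/app/11.2.0.2/dbhome_1/root.sh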

After you have run root.sh on both nodes, you can use dbua to upgrade the database. Note that we have not yet brought down the database; this is a great benefit of the out-of-place upgrade. A second benefit is that you are not touching the existing binaries, so you are not required to back them up and can easily roll back the changes if required. You can also continue using the old binaries for databases for which you cannot get downtime, and upgrade them later.

Set ORACLE_HOME and PATH to include the 11.2.0.2 software location and start dbua. Take a backup of the database before you start the upgrade process with dbua.
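A small sketch of the environment setup, assuming the new home used later in this post for the srvctl upgrade command:

export ORACLE_HOME=/oracle/product/app/11.2.0.2/dbhome_1   # new 11.2.0.2 home
export PATH=$ORACLE_HOME/bin:$PATH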

$dbua

We are presented with the welcome screen. Press Next.

DBUA presents the list of databases currently registered in /etc/oratab. We select db11g, which we need to upgrade.

The next screen asks for the following options:

a) Option to recompile invalid objects and the degree of parallelism for running utlrp.sql. We keep the default value of 3.

b) Option to disable archiving during the upgrade.

c) Option to upgrade the timezone file. We have not selected it and will do the timezone upgrade manually.

The next screen presents the option to select the Fast Recovery Area and its size.

Next we are presented with the summary screen and asked to take a backup of the database. If you have not taken one yet, this is a good time to do so. From this point the database will be stopped and restarted from the new Oracle home. Please ensure that you have appropriate settings for JAVA_POOL_SIZE and SHARED_POOL_SIZE during the upgrade, or use SGA_TARGET/MEMORY_TARGET, to avoid ORA-4031.
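If the memory parameters need adjusting beforehand, something along these lines works; the sizes below are purely illustrative:

-- run against the database being upgraded; adjust sizes for your system
SQL> alter system set sga_target=1536M scope=spfile sid='*';
-- or, if not using automatic SGA management, size the pools explicitly
SQL> alter system set shared_pool_size=600M scope=spfile sid='*';
SQL> alter system set java_pool_size=256M scope=spfile sid='*';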

The following screen shows the upgrade progress.

Once the upgrade completes, it gives a summary of the upgrade process. As you can see, it has warned about the DST (timezone) upgrade.

Oracle 11.2.0.2 ships with version 14 of the timezone file. Starting with 11g, multiple databases running out of a single Oracle home can use different timezone file versions.

You can refer to Updating the RDBMS DST version in 11gR2 (11.2.0.1 and up) using DBMS_DST [ID 977512.1]

Below is a summary of the steps we took to upgrade from version 11 to version 14.

---Check the current version of file ---
SQL> select version from V$TIMEZONE_FILE;

   VERSION
----------
	11

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME		       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION	       11
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE	       NONE

---Confirm if $ORACLE_HOME/oracore/zoneinfo contains timezlrg_14.dat i.e version 14 file

--Prepare for upgrade
SQL> exec DBMS_DST.BEGIN_PREPARE(14);

PL/SQL procedure successfully completed.

--Confirm status

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME		       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION	       11
DST_SECONDARY_TT_VERSION       14
DST_UPGRADE_STATE	       PREPARE

---Re-running prepare statement will give below error.
SQL> exec DBMS_DST.BEGIN_PREPARE(14);
BEGIN DBMS_DST.BEGIN_PREPARE(14); END;

*
ERROR at line 1:
ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading of a secondary time zone data file is in
an active state
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DST", line 1340
ORA-06512: at line 1

-- truncate logging tables if they exist.

TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;

-- log affected data

BEGIN
DBMS_DST.FIND_AFFECTED_TABLES
(affected_tables => 'sys.dst$affected_tables',
log_errors => TRUE,
log_errors_table => 'sys.dst$error_table');
END;
/
---Query tables to check whether any tables are affected
SQL> SELECT * FROM sys.dst$affected_tables;

no rows selected

SQL>
SQL> SELECT * FROM sys.dst$error_table;

no rows selected

-- End the prepare phase
SQL> EXEC DBMS_DST.END_PREPARE;
A prepare window has been successfully ended.

--Startup the database in upgrade mode. Ensure you have set cluster_database=false
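-- e.g. (a sketch for a RAC database: set cluster_database to FALSE, then restart in upgrade mode)
SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup upgrade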

SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);
An upgrade window has been successfully started.

PL/SQL procedure successfully completed.

-- To confirm whether it has been upgraded to version 14
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME		       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION	       14
DST_SECONDARY_TT_VERSION       11
DST_UPGRADE_STATE	       UPGRADE

--Restart the database normally after resetting cluster_database (set it back to TRUE or remove the FALSE setting)
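-- After the normal restart, the upgrade window is closed as described in the MOS note
-- referenced above (a sketch only; verify the exact calls and options against the note)
VAR numfail NUMBER
BEGIN
  DBMS_DST.UPGRADE_DATABASE(:numfail,
      parallel                  => TRUE,
      log_errors                => TRUE,
      log_errors_table          => 'SYS.DST$ERROR_TABLE',
      log_triggers_table        => 'SYS.DST$TRIGGER_TABLE',
      error_on_overlap_time     => FALSE,
      error_on_nonexisting_time => FALSE);
END;
/
PRINT numfail

VAR fail NUMBER
EXEC DBMS_DST.END_UPGRADE(:fail);
PRINT fail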

This completes the database upgrade process. I would recommend dbua for upgrading the database, as it takes care of copying the init.ora and password files, modifies the OCR to point to the new Oracle home, and upgrades the srvctl configuration version. The Oracle Upgrade Guide does not document the steps to be taken for srvctl. I referred to the Oracle Database Upgrade documentation linked below for the upgrade.

http://download.oracle.com/docs/cd/E11882_01/server.112/e17222/upgrade.htm#i1011981

If you use the manual method and use srvctl modify database to point to the new Oracle home, you will get errors like the ones below.

srvctl stop database -d test11g
PRCD-1027 : Failed to retrieve database test11g
PRCD-1229 : An attempt to access configuration of database test11g was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.2.0. Instead run the program from /oracle/product/app/11.2.0/dbhome_1.

You can use the following command to upgrade the srvctl configuration version:

srvctl upgrade database -d test11g -o /oracle/product/app/11.2.0.2/dbhome_1

Happy Upgrading.

Upgrading Grid Infrastructure 11.2.0.1 to 11.2.0.2 on RHEL4

The 11.2.0.2 patchset was released a few days back. I decided to upgrade a test RAC database to 11.2.0.2 yesterday and was able to do it successfully. I am documenting the steps here for easy reference.

Environment Details

2 node RAC on Red Hat Enterprise Linux AS release 4 (Nahant Update 8), 64 bit

To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2, you must first do at least one of the following:

– Patch the release 11.2.0.1 Oracle Grid Infrastructure home with the fix for bug 9413827.
– Install Oracle Grid Infrastructure Patch Set 1 (GIPS1) or Oracle Grid Infrastructure Patch Set 2 (GIPS2).
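One quick way to check whether the existing 11.2.0.1 grid home already carries the fix for bug 9413827 is to grep the OPatch inventory, which lists the bugs fixed by each applied interim patch:

$GRID_HOME/OPatch/opatch lsinventory | grep 9413827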

I will be using Oracle Grid Infrastructure Patch set 2.

Software and Patches

1) Download Patch 9655006 – 11.2.0.1.2 for Grid Infrastructure (GI) Patch Set Update from MOS

2) Latest Opatch for 11.2 – 6880880 Universal Installer: Patch OPatch 11.2

You can refer to How To Download And Install OPatch [ID 274526.1]

3) 11.2.0.2 patchset files. You can find a method to download them directly to the server here

p10098816_112020_Linux-x86-64_1of7.zip  – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_2of7.zip – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_3of7.zip – 11.2.0.2 Grid Installation Files

Let’s get started.

Install Latest Opatch

Unzip the zip file and copy the OPatch folder to $ORACLE_HOME after renaming the existing OPatch directory. You can refer to How To Download And Install OPatch [ID 274526.1]
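Roughly, assuming the OPatch zip for patch 6880880 was downloaded to a software directory (the zip name below is the usual one; confirm it against what you actually downloaded):

cd $ORACLE_HOME
mv OPatch OPatch.orig                                                # keep the old OPatch as a backup
unzip ~/software/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME    # extracts a fresh OPatch directory
$ORACLE_HOME/OPatch/opatch version                                   # should report 11.2.0.1.3 or later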

Apply Grid Infrastructure (GI) Patch Set Update –  9655006

It applies patches 9654983 and 9655006 to both the database and grid homes. We proceed by checking for any patch conflicts:
[oracle@oradbdev01]~/software/11.2.0.2/psu% opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9655006
Invoking OPatch 11.2.0.1.3

Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /oracle/product/app/11.2.0/dbhome_1
Central Inventory : /oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.3
OUI version       : 11.2.0.1.0
OUI location      : /oracle/product/app/11.2.0/dbhome_1/oui
Log file location : /oracle/product/app/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2010-10-02_06-03-34AM.log

Patch history file: /oracle/product/app/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

We need to use opatch auto to patch both the Grid Infrastructure and RAC database homes. They can be patched separately, but I am going with patching both together. This is a great improvement over the 10g CRS patch bundles, which had a lot of steps to be run as root or as the Oracle software owner (I once messed things up myself by running a command as root instead of oracle 🙁 ). opatch auto starts by patching the database home and then patches the Grid Infrastructure home. You can stop the database instance running out of the Oracle home beforehand (opatch asks about shutting down the database, so if you do not stop it, don’t worry, it will warn you).

Note: if you are on RHEL5/OEL5 and using ACFS for database volumes, it is recommended to use opatch auto <loc> -oh <grid home> to patch the grid home first instead of patching both homes together.

Unzip patch 9655006 to a directory, say /oracle/software/gips2; it will create two directories, one for patch 9654983 and one for 9655006. Connect as the root user and set the Oracle home and OPatch directory, then stop the database instance running on the node:

srvctl stop instance -d db11g -i db11g1
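For the root session itself, the setup might look roughly like this (exact patch zip name and paths assumed):

# as root: unpack the PSU and point ORACLE_HOME/PATH at the grid home and its OPatch
unzip p9655006_*.zip -d /oracle/software/gips2
export ORACLE_HOME=/oracle/product/grid        # grid home, as seen in the opatch log below
export PATH=$ORACLE_HOME/OPatch:$PATH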

Next, run the opatch auto command as root:

#opatch auto /oracle/software/gips2
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

UTIL session

Oracle Home       : /oracle/product/grid
Central Inventory : /oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.3
OUI version       : 11.2.0.1.0
OUI location      : /oracle/product/grid/oui
Log file location : /oracle/product/grid/cfgtoollogs/opatch/opatch2010-10-02_06-35-03AM.log

Patch history file: /oracle/product/grid/cfgtoollogs/opatch/opatch_history.txt

Invoking utility "napply"
Checking conflict among patches...
Checking if Oracle Home has components required by patches...
Checking conflicts against Oracle Home...
OPatch continues with these patches:   9654983  

Do you want to proceed? [y|n]
y
User Responded with: Y

Running prerequisite checks...
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: 

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y

You selected -local option, hence OPatch will patch the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/oracle/product/grid')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files affected by the patch 'NApply' for restore. This might take a while...

I have not copied the whole output. It asks you to specify MOS credentials for setting up Oracle Configuration Manager; you can skip this as shown above. You will find the following errors at the end:

OPatch succeeded.
ADVM/ACFS is not supported on Redhat 4

Failure at scls_process_spawn with code 1
Internal Error Information:
  Category: -1
 Operation: fail
  Location: canexec2
  Other: no exe permission, file [/oracle/product/grid/bin/ohasd]
  System Dependent Information: 0

CRS-4000: Command Start failed, or completed with errors.

Timed out waiting for the CRS stack to start.

This is a known issue and discussed in 11.2.0.X Grid Infrastructure PSU Known Issues [ID 1082394.1]

To solve this issue, connect as the root user and execute the following command from your Grid Infrastructure home:

cd $GRID_HOME

# ./crs/install/rootcrs.pl -patch
2010-10-02 06:59:54: Parsing the host name
2010-10-02 06:59:54: Checking for super user privileges
2010-10-02 06:59:54: User has super user privileges
Using configuration parameter file: crs/install/crsconfig_params
ADVM/ACFS is not supported on Redhat 4

CRS-4123: Oracle High Availability Services has been started.

Start the Oracle database instance on the node which has been patched:

srvctl start instance -d db11g -i db11g1

Repeat the above steps for node 2. It took me 40 minutes per node to complete the opatch auto activity. Once it is done, execute the following from one node for all databases to complete the patch installation:

cd $ORACLE_HOME/rdbms/admin

sqlplus /nolog

SQL> CONNECT / AS SYSDBA

SQL> @catbundle.sql psu apply

SQL> QUIT

This completes the installation of the Grid Infrastructure PSU2 patch.

Upgrading Grid Infrastructure home to 11.2.0.2

Ensure the following environment variables are not set: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, TNS_ADMIN.

# echo $ORA_CRS_HOME; echo $ORACLE_HOME; echo $ORA_NLS10; echo $TNS_ADMIN

Starting with 11.2.0.2, the Grid Infrastructure (Clusterware and ASM home) upgrade is an out-of-place upgrade, i.e. we install into a new ORACLE_HOME. Unlike the database home, we cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM into the existing home.

Unset the following variables too:

$ unset ORACLE_BASE

$ unset ORACLE_HOME

$ unset ORACLE_SID

Relax permissions for GRID_HOME

#chmod -R 755 /oracle/product/grid

#chown -R oracle /oracle/product/grid

#chown oracle /oracle/product

Start the runInstaller from the 11.2.0.2 grid software directory.
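For example, if the 3of7 grid zip was unpacked under the same staging area used earlier (path assumed; the archive extracts into a grid directory):

cd $SW_HOME/grid
./runInstaller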

On the first screen, it will ask for MOS details. I have chosen Skip Software Updates.

Since we are upgrading an existing installation, choose “Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management”. This will install the software and also configure ASM.

The next screen displays the nodes which OUI will patch.

Select the OSDBA, OSASM and OSOPER groups. I have chosen dba for all of them.

Since all 3 groups are the same, it gives a warning. Select Yes to continue.

Specify the grid software installation directory. I have used /oracle/product/oragrid/11.2.0.2 (it is better to include the release in the path, as future patchsets will also be out-of-place upgrades).

OUI reports some issues with swap size, shmmax and NTP. You can fix them or choose to ignore them; OUI can create a fixup script for you.

The installation starts copying files. After the files have been copied to both nodes, it asks you to run the rootupgrade.sh script on all nodes.

Stop the database instance running on the node at this time and then run the script as root. Please note that ASM and the clusterware should not be stopped: rootupgrade.sh requires them to be up and takes care of shutting them down and restarting them from the new home. In my case, rootupgrade.sh succeeded on node 1 but hung on node 2; I cancelled it and re-ran it. Below are the contents from the second run on node 2.

[root@oradbdev02 logs]# /oracle/product/oragrid/11.2.0.2/rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/product/oragrid/11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/product/oragrid/11.2.0.2/crs/install/crsconfig_params
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-1115: Oracle Clusterware has already been upgraded.

ASM upgrade has finished on last node.

Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

On clicking Next in OUI, it reported that cluvfy had failed and some components were not installed properly. To verify, I ran cluvfy manually. Pasting the portions for which it failed:

./cluvfy stage -post crsinst -n oradbdev01,oradbdev02 -verbose 

<span style="font-family: Georgia, 'Times New Roman', 'Bitstream Charter', Times, serif; line-height: 19px; white-space: normal; font-size: 13px;">
<pre style="font: normal normal normal 12px/18px Consolas, Monaco, 'Courier New', Courier, monospace;">-------------------------------truncated output -------------------

ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file "/etc/oracle/ocr.loc"...

ERROR:
PRVF-4175 : OCR config file "/etc/oracle/ocr.loc" check failed on the following nodes:

	oradbdev02:Group of file "/etc/oracle/ocr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/etc/oracle/ocr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

Disk group for ocr location "+DG_DATA01" available on all the nodes

OCR integrity check failed
-------------------------------truncated output -------------------

Checking OLR config file...

ERROR: 

PRVF-4184 : OLR config file check failed on the following nodes:

	oradbdev02:Group of file "/etc/oracle/olr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/etc/oracle/olr.loc" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

Checking OLR file attributes...

ERROR: 

PRVF-4187 : OLR file check failed on the following nodes:

	oradbdev02:Group of file "/oracle/product/oragrid/11.2.0.2/cdata/oradbdev02.olr" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

	oradbdev01:Group of file "/oracle/product/oragrid/11.2.0.2/cdata/oradbdev01.olr" did not match the expected value. [Expected = "oinstall" ; Found = "dba"]

OLR integrity check failed

-------------------------------truncated output -------------------
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  oradbdev02                            no
  oradbdev01                            no
Result:
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed

PRVF-9652 : Cluster Time Synchronization Services check failed

We see that cluvfy reports an error that it expected the oinstall group but found dba. I had not used the oinstall group during installation, so this can be ignored. For NTP, you can correct it by setting the following in /etc/sysconfig/ntpd and restarting the ntpd daemon:

grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"
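After editing the file, restart the daemon so the slewing option takes effect:

service ntpd restart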

Refer How to Configure NTP to Resolve CLUVFY Error PRVF-5436 PRVF-9652 [ID 1056693.1]

At this point Grid Infrastructure has been successfully upgraded to 11.2.0.2. Next we will upgrade the RAC database home; you can refer to Steps to Upgrade 11.2.0.1 RAC to 11.2.0.2.

11.2.0.2 Patchset available. Download directly to server

The 11.2.0.2 software is now available for download. It contains 4GB+ of binaries, as you now perform an out-of-place upgrade, i.e. the software is installed into a separate home from the current one. You can also use it to install 11.2.0.2 directly instead of upgrading from 11.2.0.1. Use wget to download it directly to the server (thus avoiding copying it to your machine and then to the server).

Please find below a description of each file. Download whatever is necessary.

p10098816_112020_Linux-x86-64_1of7.zip  – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_2of7.zip – 11.2.0.2 Database Installation Files

p10098816_112020_Linux-x86-64_3of7.zip – 11.2.0.2 Grid Installation Files

p10098816_112020_Linux-x86-64_4of7.zip – 11.2.0.2  Client Installation Files

p10098816_112020_Linux-x86-64_5of7.zip – 11.2.0.2 Gateway Installation Files

p10098816_112020_Linux-x86-64_6of7.zip – Examples file

p10098816_112020_Linux-x86-64_7of7.zip – 11.2.0.2 Deinstall Utility Files

You need to copy the link location for the patchset (right-click on the download link and copy the link location) and then supply your MOS username and password using the --http-user and --http-password options.

The syntax is:

wget --http-user="<username>" --http-password="<passwd>" --output-document="<patchname>" "link"

Note: there is a double dash before each parameter (HTML formatting sometimes collapses it into a single dash); see the example below.

wget --http-user="[email protected]" --http-password="amit123" --output-document="p10098816_112020_Linux-x86-64_1of7.zip"
"http://updates.oracle.com/Orion/Download/process_form/p10098816_112020_Linux-x86-64_1of7.zip?file_id=32675713&aru=12918649&[email protected]&[email protected]&patch_password=&patch_file=p10098816_112020_Linux-x86-64_1of7.zip"

The above credentials will not work, they are just dummy values 😛

100%[====================================================================================================================>] 1,307,536,804   27.87M/s    ETA 00:00

05:21:26 (25.76 MB/s) - `p10098816_112020_Linux-x86-64_1of7.zip' saved [1307536804/1307536804]

Wow!! That was fast.
Happy downloading!

Renaming Diskgroup containing Voting Disk and OCR

Recently we encountered an issue where we had to rename a diskgroup containing the OCR and voting disk on an 11.2 RAC cluster. At this point we had not yet created a database.

I had done a similar activity on an 11.2 single instance with database files (see this post) using renamedg, so we tried the same approach here.

A prerequisite for renamedg is that the diskgroup must be dismounted. As this diskgroup was storing the OCR and voting disk, it was not possible to dismount it, so we stopped the cluster on both nodes and tried running the renamedg command.
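For reference, stopping the full stack on a node in 11.2 is a single root command:

# as root, on each node
crsctl stop crs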

Old Diskgroup name – DATA
New Diskgroup name – DG_DATA01

[oracle@prod-001]~% renamedg phase=both dgname=DATA newdgname=DG_DATA01 config=/oracle/renamedg.out asm_diskstring='ORCL:DISK*' verbose=true keep_voting_files=true

Parsing parameters..

Parameters in effect:

 	 Old DG name       : DATA
	 New DG name          : DG_DATA01
	 Phases               :
	 	 Phase 1
	 	 Phase 2
	 Discovery str        : ORCL:DISK*
	 Keep_voting_files  : TRUE
	 Clean              : TRUE
	 Raw only           : TRUE
renamedg operation: phase=both dgname=DATA newdgname=DG_DATA01 config=/oracle/renamedg.out asm_diskstring=ORCL:DISK* verbose=true keep_voting_files=true
Executing phase 1
Discovering the group
Performing discovery with string:ORCL:DISK*
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK1 with disk number:0 and timestamp (32941163 273325056)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK2 with disk number:1 and timestamp (32941163 273325056)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK3 with disk number:2 and timestamp (32941163 273325056)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:ORCL:DISK*
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK1 with disk number:0 and timestamp (32941163 273325056)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK2 with disk number:1 and timestamp (32941163 273325056)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DISK3 with disk number:2 and timestamp (32941163 273325056)
Checking if the diskgroup is mounted
Checking disk number:0
Checking disk number:1
Checking disk number:2
KFNDG-00600: file not found; arguments: [600] [Failed to initialize CSS context
] []
LEM-00031: Error encountered in lempgmh after calling lmserr.
KFNDG-00600: file not found; arguments: [600] [Failed to initialize CSS context
] []

Terminating kgfd context 0x2b592002f0a0

As we can see, it failed because CSS was not up. We then tried the renamedg command with the cluster started (and the ASM diskgroup mounted), but it also failed with an error:

Checking if the diskgroup is mounted
Checking disk number:0
KFNDG-00405: file not found; arguments: [DATA]

The description of KFNDG-00405 clearly states that the diskgroup needs to be dismounted:

KFNDG-00405: specified disk group string appears to be mounted
Cause: Disk group was mounted.
Action: Unmount the disk group and retry renamed

At this point we thought we could not proceed and would have to re-install the clusterware. Thinking the problem over, I decided to move the OCR and voting disk to a different diskgroup and then rename the old diskgroup.

Checking the documentation, we found that there is a command for replacing the voting disk, and in 11gR2 you can run it while the cluster is up and running. So we created a new diskgroup, DG_DATA01, and used crsctl replace votedisk to move the voting files into it.
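The diskgroup creation itself was a plain external-redundancy create from the ASM instance, roughly as below (the disk name is the one that later shows up in the votedisk listing; adjust it to your own discovery string):

SQL> create diskgroup DG_DATA01 external redundancy disk 'ORCL:DISK4';

With DG_DATA01 mounted on both nodes, the voting files were then moved: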

[oracle@prod-001]/dev/oracleasm/disks% crsctl replace votedisk +DG_DATA01
Failed to create voting files on disk group DG_DATA01.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.

Oops, this failed. Checking the ASM alert log gave the exact cause of the error: as you can see, it is asking for compatible.asm to be set to 11.2 or higher.

NOTE: Creating voting files in diskgroup DG_DATA01
NOTE: Voting File refresh pending for group 2/0x88b6d50 (DG_DATA01)
NOTE: Attempting voting file creation in diskgroup DG_DATA01
ERROR: Voting file allocation failed for group DG_DATA01
Errors in file /oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_4272.trc:
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher

Checking the compatibility confirmed that it was set to 10.1:

SQL> select name,COMPATIBILITY,DATABASE_COMPATIBILITY from V$ASM_DISKGROUP where name='DG_DATA01';

NAME       COMPATIBILITY      DATABASE_COMPATIBILITY
--------  ----------------   --------------------------
DG_DATA01  10.1.0.0.0        10.1.0.0.0

You can use SET ATTRIBUTE to change the compatibility, e.g.:

SQL> alter diskgroup DG_DATA01 SET ATTRIBUTE 'compatible.asm'='11.2';

Diskgroup altered.

Retrying the command succeeded

[oracle@prod-001]/dev/oracleasm/disks% crsctl replace votedisk +DG_DATA01
Successful addition of voting disk 241b1e0a36344f7bbfaca4a576d514e9.
Successful deletion of voting disk 72e47e5e3afb4fe9bfb502b1b4340503.
Successfully replaced voting disk group with +DG_DATA01.
CRS-4266: Voting file(s) successfully replaced

We verified the same using the command below:

[root@prod-001 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   241b1e0a36344f7bbfaca4a576d514e9 (ORCL:DISK4) [DG_DATA01]
Located 1 voting disk(s).

This is an external redundancy diskgroup, and Oracle allows only one voting disk in an external redundancy diskgroup.

Next we had to change the OCR location. This can be done using ocrconfig, which needs to be run as the root user. We tried the replace command first, but it did not work because this was the only OCR location configured on the cluster.

[root@prod-001 ~]# ocrconfig -replace +DATA -replacement +DG_DATA01
PROT-28: Cannot delete or replace the only configured Oracle Cluster Registry location

So we added an additional OCR location on the new diskgroup and then dropped the old one.

[root@prod-001 ~]# ocrconfig -add +DG_DATA01
[root@prod-001 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2424
	 Available space (kbytes) :     259696
	 ID                       :  579998313
	 Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
	 Device/File Name         : +DG_DATA01
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

	 Cluster registry integrity check succeeded

	 Logical corruption check succeeded

[root@prod-001 ~]# ocrconfig -delete +DATA
[root@prod-001 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2424
	 Available space (kbytes) :     259696
	 ID                       :  579998313
	 Device/File Name         : +DG_DATA01
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

	 Cluster registry integrity check succeeded

	 Logical corruption check succeeded

We then tried dropping diskgroup DATA

SQL> drop diskgroup data including contents;
drop diskgroup data including contents
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15027: active use of diskgroup "DATA" precludes its dismount

It failed because the ASM instance was using a parameter file kept in that diskgroup. So we created a new pfile and then re-created the spfile on the new diskgroup:

SQL> create pfile from spfile;
File created.

SQL> create spfile='+DG_DATA01' from pfile;

File created.

To drop the diskgroup, you will need to restart the cluster on both nodes, as the ASM instance is still using the old spfile. Run the following commands on both nodes:

crsctl stop cluster
crsctl start cluster

Verify that the OCR location points to the correct diskgroup on all nodes before starting the cluster. We faced an issue in another environment where, for some reason, the OCR location was not properly updated on the second node. In that case we modified it manually (after confirming the OCR location using ocrcheck):

[root@prod-002 ~]# cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +DG_DATA01
ocrconfig_loc=+DG_DATA01
local_only=false

You can verify the spfile and drop the old diskgroup now

SQL> sho parameter spfile

NAME				     TYPE			       VALUE
------------------------------------ --------------------------------- ------------------------------
spfile				     string			       +DG_DATA01/prod-cluster/asmp
								       arameterfile/registry.253.7290
								       69769
SQL> drop diskgroup data including contents;

Diskgroup dropped.

In case you don’t wish to drop this diskgroup because it contains datafiles and would instead like to rename it, you can refer to my earlier post 11gR2: Steps to Rename ASM Diskgroup with DB files.

Effect Of Multiple SHMMAX Settings

Last week I saw a warning message at database startup time saying:

WARNING: EINVAL creating segment of size 0x000000000f0020xx
fix shm parameters in /etc/system or equivalent 

It is an Oracle 10.2.0.4 database running on Solaris.

Searching MOS for the exact meaning of this warning shows that it is raised when an additional shared memory segment has to be created to accommodate the SGA.

As the message indicated, I opened the /etc/system file to verify the SHMMAX setting and found a SHMMAX value of 4GB. I stopped at this point and closed /etc/system. The next thing to check was the number of Oracle instances running on the server and the size of the largest SGA.

There were two instances running on the server; the largest SGA was set to 1.8GB and the other to 700MB.

With these sizes there should be no need to create an additional shared memory segment. So I checked the /etc/system file again, but this time with the following command:

<span style="font-size: small;"><span style="font-family: arial,helvetica,sans-serif;">$ cat /etc/system | grep shmmax</span></span>
<span style="font-size: small;"><span style="font-family: arial,helvetica,sans-serif;">set shmsys:shminfo_shmmax=4000000000  ==&gt;4GB</span></span>
<span style="font-size: small;"><span style="font-family: arial,helvetica,sans-serif;">set shmsys:shminfo_shmmax=500000000  ==&gt;500M</span></span><span style="font-size: small;"><span style="font-family: arial,helvetica,sans-serif;">

There were two different values set for SHMMAX parameter.

The cause of the warning turned out to be the duplicate entry: since a later setting in /etc/system overrides an earlier one, the server was using a SHMMAX of 500M and ignoring the 4GB value. With SHMMAX smaller than the SGA, Oracle had to create an additional shared memory segment, hence the warning.

After commenting out the 500M SHMMAX entry, the warning message disappeared.
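A quick way to see the effect is to look at the shared memory segments Oracle actually allocated before and after the change; with SHMMAX at least as large as the SGA, the SGA comes up as a single segment instead of being split across several:

$ ipcs -ma        # list shared memory segments with their sizes and owners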