11g Release 2

In-Place Upgrade 11gR2 RAC : 11.2.0.1 To 11.2.0.2

Sharing a post from my friend “Carthik” on 11gR2 RAC in-place upgrade.

Overview
========
Starting with Oracle 11gR2, Oracle Database and Clusterware upgrades are done via an “out-of-place upgrade”, which is the easier way to perform your upgrade. However, the intent of this blog is to explain how an “in-place upgrade” of a RAC database is done in 11gR2, along with the advantages, disadvantages, pre-requisites and pain points involved in this traditional method of upgrading an Oracle database. I have chosen to upgrade a 2-node 11.2.0.1 RAC database to 11.2.0.2.

Clearly, the advantage is that you save space, since you do not install a new Oracle Home.

The greatest disadvantage is that you need to back up the Oracle home and run the detach Oracle home commands. This could potentially damage your Oracle binaries; however, when done carefully it should not cause a problem.

The pain points include:
1. Backing up the Oracle home.
2. Restoring the Oracle home from backup in case of installation failure.
3. Attaching the restored home, and then bringing up the instance.
Clearly, there is a lot of manual intervention, which is a major pain point.

The idea behind an in-place upgrade is to save space, and the method requires a significant amount of time. The only time you should use it is when you lack space on your server, though you can also use it in test and development environments to save space. Since this was the traditional method of doing things, I thought it worth checking how it works in 11gR2.

NOTE: If you have an existing Oracle Clusterware installation, then you upgrade your existing cluster by performing an out-of-place upgrade; you cannot perform an in-place upgrade of Oracle Clusterware. However, you can perform an in-place upgrade of the database, which is what this blog elaborates on.
A prerequisite for the Oracle 11gR2 11.2.0.2 installation is to apply patch 9655006 to the 11.2.0.1 GI home before upgrading from 11.2.0.1 to 11.2.0.2; see Bug 9413827 on MOS. For additional details, refer to the Metalink article “Pre-requisite for 11.2.0.1 to 11.2.0.2 ASM Rolling Upgrade”, Metalink Note 1274629.1. Additionally, refer to “How to Manually Apply A Grid Infrastructure PSU Without Opatch Auto”, Metalink Note 1210964.1.

Performing an In-place upgrade of a RAC DB from 11.2.0.1 to 11.2.0.2

In order to upgrade a RAC database from 11.2.0.1 to 11.2.0.2 you need to upgrade:

1. The grid infrastructure first.
2. Then, the Oracle Database.

Environment Setup Details used in this post:
— 2-node RAC on Red Hat Linux 5.5 with R710 rack servers.
— Applies to any storage.

Latest OPatch
It is recommended to use the latest version of OPatch. Unzip the zip file and copy the OPatch folder to $ORACLE_HOME and $GI_HOME, after renaming
the earlier OPatch directory. Refer to Metalink ID 274526.1 for how to download and install OPatch.
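
A minimal sketch of the refresh, assuming the latest OPatch was downloaded as patch 6880880 (the generic OPatch patch on MOS; the zip file name below is illustrative):

$ mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.old     # set the old OPatch aside
$ unzip -q p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
$ $ORACLE_HOME/OPatch/opatch version                 # confirm the new version

Repeat the same for $GI_HOME.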

Pre- Requisite Patch:

First, let’s discuss the mandatory patch required before upgrading to 11.2.0.2. Patch 9655006 is required for the upgrade
to succeed; without it, rootupgrade.sh will fail.
Patch 9655006 is downloadable from http://www.metalink.oracle.com/. For information on Bug 9655006, refer to Metalink article ID 9655006.8.
Download the patch and unzip it to a stage directory (it can be any directory); once you unzip the downloaded patch, two folders will be created. In this example I have unzipped the patch to the /patches directory.
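
For example (the zip file name below follows the usual MOS naming convention and is an assumption; use the name of the file you actually downloaded):

$ mkdir -p /patches
$ unzip -q p9655006_112010_Linux-x86-64.zip -d /patches
$ ls /patches     # the two patch folders should be listed here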

Now, let’s see how to patch the $GI_HOME with Patch 9655006.

Before the patch is installed, we need to perform a pre-req check. Let’s see how to do this.
1. [root@rac1 patches]# su - oracle

2. [oracle@rac1 ~]$ cd /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/

3. [oracle@rac1 OPatch]$ ./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /patches

Invoking OPatch 11.2.0.1.6
Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /opt/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /opt/app/oraInventory
 from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.6
OUI version       : 11.2.0.1.0
Log file location : /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2011-08-16_19-36-09PM.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

Patching the $GI_HOME
1. Stop the Instance running on Node1
srvctl stop instance -d upgrade -i upgrade1

2. As the root user, run opatch auto from the $GI_HOME
./opatch auto /patches

Note: opatch auto takes care of patching both the Grid Infrastructure home and the Oracle home with the mandatory patch.
Once the patching is done on Node1, start the instance on Node1.

3. Starting the Instance on Node1
srvctl start instance -d upgrade -i upgrade1

Repeat the pre-req check and patching process on Node2.

Once the mandatory patch is applied, we can proceed with the upgrade of the grid infrastructure home.

Patches required:

The software/patch can be downloaded from My Oracle Support: patch 10098816. Select p10098816_112020_Linux-x86-64_3of7.zip
for the grid infrastructure download. Once downloaded, unzip it.

Upgrading Grid Infrastructure:

Unzip the patches downloaded and invoke runInstaller from the unzipped grid folder. You will be taken to the welcome screen.

Choose Skip Software Updates

Choose Upgrade Grid Infrastructure or Oracle ASM

Choose the language

The nodes present are selected by default; click Next

Leave the OS groups at the defaults

Choose the new location where the Grid Infrastructure should be installed

The pre-requisite checks are performed; click Next

The summary screen appears; click Next

Click on Install

Run rootupgrade.sh on both the nodes as specified in the screenshot

Upgrading the Database via In-place upgrade:

Patches Required:
The software/patch can be downloaded from My Oracle Support: patch 10098816.
Select p10098816_112020_Linux-x86-64_1of7.zip and p10098816_112020_Linux-x86-64_2of7.zip for the database patch/software download.
Once downloaded, unzip them.

In-Place upgrades (things to do before performing an in-place upgrade)

When performing an in-place upgrade, which uses the same Oracle home location, an error message appears stating that the installer
has detected Oracle Database software in the location that you specified.

Message: The installer has detected that the software location you have specified contains Oracle Database software release 11.2.0.1.
Oracle recommends that when upgrading to 11.2.0.2, you perform an out-of-place installation of the software into a new Oracle home and
then upgrade the database using the new software binaries.

Cause: The installer has detected that the software location you have specified contains Oracle Database software release 11.2.0.1.
Action: Either perform an in-place upgrade (Steps provided in this section), or perform an out-of-place upgrade

Performing an In-Place Upgrade for an Oracle RAC Database

To perform an in-place upgrade for Oracle RAC Database instances:
1. Back up the configuration data by backing up the following directories on all cluster nodes:
o ORACLE_HOME/dbs
o ORACLE_HOME/network/admin
o ORACLE_HOME/hostname_dbname
o ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_dbname

2. Run the following command on each of the nodes to detach the 11.2.0.1.0 Oracle RAC ORACLE_HOME:
$ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=<11.2.0.1.0 software location>

3. Rename the 11.2.0.1.0 Oracle RAC ORACLE_HOME directory on all the nodes to a temporary name (a combined sketch of steps 1 to 3 is shown below).
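
As a rough sketch, steps 1 to 3 on one node could look like the following; the paths match this environment, rac1_upgrade stands in for your actual hostname_dbname directory, and /backup is an assumed staging area, so treat this as illustrative rather than exact:

$ OH=/opt/app/oracle/product/11.2.0/dbhome_1
$ tar -czf /backup/oh_config.tar.gz $OH/dbs $OH/network/admin \
      $OH/rac1_upgrade $OH/oc4j/j2ee/OC4J_DBConsole_rac1_upgrade    # step 1: back up the config
$ $OH/oui/bin/runInstaller -detachHome ORACLE_HOME=$OH              # step 2: detach the home
$ mv $OH ${OH}.11201                                                # step 3: rename it aside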

4. Install release 11.2.0.2 Software Only on all nodes:

From the unzipped folder, invoke runInstaller from the database folder

The welcome screen appears; uncheck the security updates and click Next

Choose Skip Software Updates

Select Install Database Software Only and click Next

Choose Oracle Real Application Clusters database installation, select the nodes and click Next

Choose the language and click Next

Select Enterprise Edition and click Next

Specify the location of the old home, and click Next

Provide the operating system groups and click Next

Once the pre-requisite checks are performed, click Next

The summary screen appears; verify the settings and click Next

The installation proceeds

Once the installation is done, run root.sh on both the nodes as instructed and click OK.

Now, on all nodes, restore the backed-up configuration data files into the 11.2.0.2 ORACLE_HOME software location
(from the backups you made of /dbs and network/admin), and also restore the following two directories:
/hostname_dbname and /oc4j/j2ee/OC4J_DBConsole_hostname_dbname. Substitute the actual values for hostname_dbname.
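
Continuing the sketch above, the restore on each node might look like this (GNU tar strips the leading “/” when archiving, so extracting at / puts the files back in place; rac1_upgrade is again a stand-in):

$ tar -xzf /backup/oh_config.tar.gz -C /    # restores dbs, network/admin, rac1_upgrade
                                            # and the OC4J_DBConsole directory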

Database Upgrade using DBUA:

Run DBUA from the 11.2.0.2 ORACLE_HOME/bin directory on the local node and select the 11.2.0.1.0 Oracle RAC database instance to
upgrade it to release 11.2.0.2.0.

The welcome screen appears once you invoke DBUA; click Next to proceed

DBUA lists the databases that can be upgraded; select the one that you would like to upgrade

You can choose the degree of parallelism and whether to upgrade the time zone version, then click Next

Skip the next screen by clicking Next

The summary screen appears; click Finish for the upgrade to proceed

The upgrade proceeds

NOTE: The only time you should use this method (in-place upgrade) is when you lack space on your server.
However, you can also use it in your test and development environments to save space.

Since this method was the traditional way of doing things and is still available, I thought it worth checking how it works in 11gR2.
During the entire upgrade process I never ran into any issues, apart from the slightly higher downtime in
comparison to the out-of-place upgrade. However, Oracle doesn't recommend this method; you can refer to Metalink note 1291682.1.

11gR2: Steps to Rename ASM Diskgroup with DB files

11gR2 introduced the renamedg utility to rename diskgroups. I will discuss renaming a diskgroup on a standalone Grid Infrastructure installation with Oracle Restart. Before you use the command, ensure that the ASM diskgroup is dismounted.
Below is the syntax for the command; you can refer to the 11gR2 docs for details.

renamedg
[phase={ one|two |both } ] dgname=diskgroup
newdgname=newdiskgroup [config=configfile]
[ asm_diskstring=discoverystring, discoverystring ... ]
[ clean={true|false} ] [ check={true|false} ]
[ confirm={true|false}] [ verbose={ true|false} ]
[ keep_voting_files={true|false}]

The renamedg command works in two phases: phase 1 generates a configuration file to be used by phase 2, and in phase 2 the actual renaming of the diskgroup is done. An example follows below.
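
For example, using the diskgroup names from later in this post, the two phases can be run separately; phase one writes the configuration file and phase two consumes it (the config file path is arbitrary):

$ renamedg phase=one dgname=data newdgname=dg config=/tmp/renamedg.cfg \
      asm_diskstring='/dev/oracleasm/disks/VOL*' verbose=true
$ renamedg phase=two dgname=data newdgname=dg config=/tmp/renamedg.cfg verbose=true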

Note: I have tried these steps in a test environment and have tried to cover most components, but I would recommend that you cross-verify the steps in a test environment before implementing them in a production environment.

Let’s get started!!

1) Gather all the information for the Oracle Restart configuration and DB parameters

Database Configuration

[oracle@db11g bin]$ srvctl config database -d test11r2
Database unique name: test11r2
Database name: test11r2
Oracle home: /u02/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/test11r2/spfiletest11r2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:

ASM Information

[oracle@db11g bin]$ srvctl config asm
ASM home: /u02/app/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +DATA/asm/asmparameterfile/registry.253.696489255
ASM diskgroup discovery string: /dev/oracleasm/disks/VOL*

[oracle@db11g bin]$ srvctl config listener
Name: LISTENER
Home: /u02/app/oracle/product/11.2.0/grid
End points: TCP:1521

Create a database parameter file

sql> create pfile='/tmp/param.txt' from spfile;

Oracle Restart configuration

Disable Oracle Restart so that it does not restart any component. ‘crsctl status resource’ can be used to check the status of resources. As you can see below, I have stopped the database but the ASM diskgroup DATA is still mounted.

[oracle@db11g bin]$ crsctl disable has
[oracle@db11g bin]$ crsctl status resource
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.FLASH_ARC.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.cssd
TYPE=ora.cssd.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.diskmon
TYPE=ora.diskmon.type
TARGET=ONLINE
STATE=ONLINE on db11g

NAME=ora.test11r2.db
TYPE=ora.database.type
TARGET=OFFLINE
STATE=OFFLINE

2) We are renaming the DATA diskgroup to DG.

[oracle@db11g bin]$ renamedg dgname=data newdgname=dg asm_diskstring='/dev/oracleasm/disks/VOL*' verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : DATA
         New DG name          : DG
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : /dev/oracleasm/disks/VOL*
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: dgname=data newdgname=dg asm_diskstring=/dev/oracleasm/disks/VOL* verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/VOL*
Identified disk UFS:/dev/oracleasm/disks/VOL11 with disk number:2 and timestamp (32924744 398242816)
Identified disk UFS:/dev/oracleasm/disks/VOL12 with disk number:1 and timestamp (32924741 928525312)
Identified disk UFS:/dev/oracleasm/disks/VOL21 with disk number:0 and timestamp (32924744 362846208)
Identified disk UFS:/dev/oracleasm/disks/VOL22 with disk number:3 and timestamp (32924741 928525312)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/VOL*
Identified disk UFS:/dev/oracleasm/disks/VOL11 with disk number:2 and timestamp (32924744 398242816)
Identified disk UFS:/dev/oracleasm/disks/VOL12 with disk number:1 and timestamp (32924741 928525312)
Identified disk UFS:/dev/oracleasm/disks/VOL21 with disk number:0 and timestamp (32924744 362846208)
Identified disk UFS:/dev/oracleasm/disks/VOL22 with disk number:3 and timestamp (32924741 928525312)
Checking if the diskgroup is mounted
Checking disk number:2
Checking disk number:1
Checking disk number:0
Checking disk number:3
Checking if diskgroup is used by CSS
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/VOL11
Modifying the header
Looking for /dev/oracleasm/disks/VOL12
Modifying the header
Looking for /dev/oracleasm/disks/VOL21
Modifying the header
Looking for /dev/oracleasm/disks/VOL22
Modifying the header
Completed phase 2
Terminating kgfd context 0xb7e7a050

3) Modify the ASM configuration to correct the spfile location

$srvctl modify asm -p +DG/asm/asmparameterfile/registry.253.696489255
$srvctl start asm

Connect to the ASM instance and mount the DG diskgroup.

sqlplus "/ as sysasm"
sql> alter diskgroup DG mount;

At this moment, you will see additional ASM diskgroup resource ora.DG.dg in crsctl configuration. Remove the old ora.DATA.dg resource

[oracle@db11g ~]$ crsctl delete resource ora.DATA.dg
CRS-2730: Resource 'ora.test11r2.db' depends on resource 'ora.DATA.dg'
CRS-4000: Command Delete failed, or completed with errors.

We get the above error because the test11r2 database configuration lists ora.DATA.dg as a dependent resource. Use the -f option to drop it by force, or first modify the database configuration and drop the resource later.

[oracle@db11g ~]$ crsctl delete resource ora.DATA.dg -f

4) Start up the database.

To do that we need to modify the parameter file, as the control_files parameter and the other *_file_dest parameters need to be changed: change all occurrences of ‘+DATA’ to ‘+DG’. Open the database in mount mode, as we have to rename the database files and redo logs before opening the database.

SQL> startup nomount pfile='/tmp/param.txt'
ORACLE instance started.

Total System Global Area  343154688 bytes
Fixed Size                  1336428 bytes
Variable Size             218106772 bytes
Database Buffers          117440512 bytes
Redo Buffers                6270976 bytes

SQL>alter database mount;

Use the commands below to generate the SQL for renaming the datafiles/redo logs to the new diskgroup:

select 'alter database rename file '''||name||''' to ''+DG'||substr(name,instr(name,'/',1,1))||''';' from V$DATAFILE;

select 'alter database rename file '''||member||''' to ''+DG'||substr(member,instr(member,'/',1,1))||''';' from V$LOGFILE;

The V$RECOVER_FILE view can be used to check for any issues with datafiles not being available.
If everything is fine, you can open the database. You may have noticed that I have not done anything for the temp files; checking the alert log, we see that they are renamed automatically:

 Re-creating tempfile +DATA/test11r2/tempfile/temp.268.696494395 as +DG/test11r2/tempfile/temp.272.704195475

Now we need to re-create the spfile on the ASM diskgroup, using the following steps:

$cd $ORACLE_HOME/dbs
$cat inittest11r2.ora
SPFILE='+DATA/test11r2/spfiletest11r2.ora'
$mv inittest11r2.ora inittest11r2.ora.new
$ cp /tmp/param.txt $ORACLE_HOME/dbs/inittest11r2.ora
$sqlplus "/ as sysdba"
SQL>create spfile='+DG/test11r2/spfiletest11r2.ora' from pfile;
exit
$mv inittest11r2.ora inittest11r2.ora.old

--- Modify inittest11r2.ora.new and change the diskgroup to +DG ---
$cat inittest11r2.ora.new
SPFILE='+DG/test11r2/spfiletest11r2.ora'
$mv inittest11r2.ora.new inittest11r2.ora

Now we need to modify the database configuration for Oracle Restart:

[oracle@db11g ~]$ srvctl config database -d test11r2
Database unique name: test11r2
Database name: test11r2
Oracle home: /u02/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/test11r2/spfiletest11r2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
PRCD-1012 : Failed to retrieve disk group list for database test11r2.
PRCR-1035 : Failed to look up CRS resource ora.DATA.dg for test11r2
PRCR-1001 : Resource ora.DATA.dg does not exist

As there is no DATA diskgroup available, it gives an error. If you look at the initial output (before renaming the diskgroup), there was an entry for the DATA diskgroup. We also need to modify the entry for the spfile.

[oracle@db11g ~]$ srvctl modify database -d test11r2 -p +DG/test11r2/spfiletest11r2.ora
[oracle@db11g ~]$ srvctl modify database -d test11r2 -a DG
[oracle@db11g dbs]$ srvctl config database -d test11r2
Database unique name: test11r2
Database name: test11r2
Oracle home: /u02/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DG/test11r2/spfiletest11r2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DG
Services:

You can now shut down the database and use srvctl to start it. If you have not dropped the ora.DATA.dg resource earlier (due to errors), you can drop it now without errors:

 [oracle@db11g ~]$ crsctl delete resource ora.DATA.dg
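
The stop/start itself through srvctl is simply:

$ srvctl stop database -d test11r2
$ srvctl start database -d test11r2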

You can now enable Oracle Restart:

[oracle@db11g dbs]$ crsctl enable has
CRS-4622: Oracle High Availability Services autostart is enabled.

Phew!! This was quite a posting after a long time!!

Link: 11gR2 RAC installation steps on OEL4

If you are looking for the steps for an 11gR2 RAC installation, you can refer to this article by Rajeev Ramdas at Dbastreet.com. The article lists the steps for installing 11gR2 RAC on 64-bit Oracle Enterprise Linux 4 (OEL4) using ASM for storage. As raw devices are no longer supported, the OCR and Voting Disks are also stored on ASM. Yes, this is one more cool new feature available in 11gR2.

11gR2: Monitoring Real Time SQL Plan Execution from Oracle Enterprise Manager

Ever since 11gR2 was released, it has created a lot of buzz in the Oracle blogosphere, with various posts on 11g Release 2 (11gR2) new features, primarily ASM new features. I would like to introduce you all to a feature which has been incorporated into 11gR2 Oracle Enterprise Manager to display Real-Time SQL Monitoring information. I would say that this was a much-awaited feature for OEM, and it comes as a handy tool for DBAs to tune and monitor SQL execution.

Oracle 11gR1 introduced two new views, V$SQL_MONITOR and V$SQL_PLAN_MONITOR, to provide runtime execution statistics. This was part of a feature called Real-Time SQL Monitoring, which allows you to monitor SQL in near real time as the statement executes.

To recap, please find below a description of both views.

V$SQL_MONITOR displays SQL statements whose execution has been (or is being) monitored by Oracle. An entry is created in V$SQL_MONITOR every time the execution of a SQL statement is monitored. SQL monitoring is automatically started when a SQL statement runs in parallel or when it has consumed at least 5 seconds of CPU or I/O time. An example query follows below.
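
A quick way to see what is currently being monitored, straight from SQL*Plus (the columns below all exist in the 11g V$SQL_MONITOR view):

select sql_id, status, username, sql_exec_id, elapsed_time/1e6 elapsed_secs
from v$sql_monitor
order by sql_exec_start desc;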

V$SQL_PLAN_MONITOR displays plan-level monitoring statistics for each SQL statement found in V$SQL_MONITOR. Each row in V$SQL_PLAN_MONITOR corresponds to an operation of the execution plan being monitored. Timing information can be viewed by joining V$SQL_PLAN_MONITOR with V$ACTIVE_SESSION_HISTORY on SQL_ID, SQL_EXEC_START, SQL_EXEC_ID, and SQL_PLAN_LINE_ID (simply named PLAN_LINE_ID in V$SQL_PLAN_MONITOR).

In 11gR1, you could use DBMS_SQLTUNE.REPORT_SQL_MONITOR to generate HTML/text reports. Greg Rahn has written a post describing it in detail.
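
For reference, the text report can be pulled like this; substitute the SQL_ID you are interested in:

set long 1000000 longchunksize 1000000
select dbms_sqltune.report_sql_monitor(sql_id => '<sql_id>', type => 'TEXT', report_level => 'ALL') from dual;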

11gR2 Oracle Enterprise Manager provides a graphical interface which makes the job easier.

Click on the “Top Activity” link under the Performance page to view the active session details. This shows wait events classified by wait class, along with Top SQL and Top Sessions activity.

[Screenshot: Top Activity]

Click on the “SQL Monitoring” link under the Performance page to view the monitored SQL. SQLs with an execution time over 5 seconds, and parallel queries, are displayed here. You can also use the MONITOR hint to monitor a SQL statement, as shown below.
A green circle shows that the SQL is currently executing. Click on the symbol to view the plan.
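
For example, to force monitoring of a statement that would otherwise finish too quickly to be picked up automatically:

select /*+ MONITOR */ count(*) from dba_objects;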

[Screenshot: sql_monitor_1]
This shows the following details:

Overview – SQL_ID and user details
Time and wait statistics – gives the duration of the query, DB time and wait activity %. Moving the mouse over the wait activity graph gives you the breakup of wait events.
I/O statistics – this section gives the I/O details
Details – this section includes two subheadings:

Plan Statistics

This gives the execution plan details.
[Screenshot: sql_monitor_2]

These graphs are dynamic and are refreshed as the SQL is being executed.

Activity

This gives the ASH report for the session.

[Screenshot: ash_screenshot]

You can also get a report by clicking on the Report link.
[Screenshot: sql_report]
The screenshots below show the execution plan for a parallel SQL query execution:

[Screenshot: sql_monitor_parallel_execution]

[Screenshot: parallel_execution_plan]
This has an additional section under the Details section called “Parallel”, which gives parallel server details: DB time, wait activity %, I/O requests and buffer gets.
[Screenshot: Parallel Servers Breakdown]

You have the option to save/mail the report.
[Screenshot: 11gr2_sql_monitor2]

Moving the mouse pointer over the user under the “Overview” section also shows session information.

[Screenshot: session_info]
Right-click on the report to get the session details.

[Screenshot: session_details]

To enable SQL Monitoring, the STATISTICS_LEVEL parameter needs to be set to ALL or TYPICAL. Additionally, the CONTROL_MANAGEMENT_PACK_ACCESS parameter must be set to DIAGNOSTIC+TUNING, which requires you to have the Oracle Database Tuning Pack.
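
To check and set these from SQL*Plus (the ALTER SYSTEM below assumes you are licensed for the Tuning Pack):

show parameter statistics_level
show parameter control_management_pack_access
alter system set control_management_pack_access='DIAGNOSTIC+TUNING' scope=both;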

 

11gR2 Database Installation with ASM on OEL5

I will be listing the steps for an 11g Release 2 installation on Linux x86. There have been a few changes in 11gR2 which must be given due consideration before starting the installation. The ASM instance now runs from a new Oracle home called Grid Infrastructure.

If you wish to use ASM for storing database files, then you must first install Oracle Grid Infrastructure for a standalone server. It is also required if you wish to use the Oracle Restart feature (which provides automatic restart of the DB and other components after a hardware or software failure, or whenever your database host computer restarts) for a single instance.

Environment used for this setup:
Operating System – Oracle Enterprise Linux Release 5 (OEL5) on VMware
Physical RAM – 1 GB

ASMLib is used for labeling the ASM disks. Refer to the following OTN article for configuring ASMLib and creating disks.

The software can be downloaded from OTN at the following links:

Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86

Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86

All the pre-requisites for installation can be found here
http://download.oracle.com/docs/cd/E11882_01/install.112/e10840/pre_install.htm

I have tried to shorten them by keeping the things which are relevant to our setup.

Check Packages

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

If you have yum installed on your machine, then it is very easy to install the packages: copy the list of packages into a text file, insert ‘yum install ’ at the beginning and execute it as root. This will install any missing packages. To know more about setting up yum, refer to my previous post.
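
For example, as root (the package names are taken from the list above; yum resolves the appropriate versions):

# yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
    glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp \
    libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel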

If the kernel parameters are not set up correctly, the Oracle 11g OUI provides you with a fixup script, and also the ability to resume the installation process without needing to re-run the earlier steps. Please refer to the article here.

Users, Roles and Groups for Installation

OSDBA group for ASM – Membership in the OSDBA group allows access to the files managed by Automatic Storage Management. It can be the same as the database OSDBA group. We will be using the group dba in our setup.
OSASM group – Members of the OSASM group can use SQL to connect to an Automatic Storage Management instance as SYSASM using operating system authentication. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks. SYSASM privileges provide no access privileges on an RDBMS instance. We will use asmadmin.

OSOPER group – Similar to the SYSOPER privilege in the RDBMS, this is used for limited privileges like starting up and stopping the Oracle ASM instance. We will not be using a separate group for it; we will select dba in our setup.

oinstall – This is similar to previous releases, and is used as the Oracle Inventory group.

I am using the oracle user for installing the Grid Infrastructure and Oracle Database software. You can have a separate user, say oragrid, with oinstall membership and permission on the Oracle inventory directory.

Create the base directory where the software will be installed. I used /u02/app/oracle.

Add users and groups

/usr/sbin/groupadd -g 501 dba
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd oinstall

/usr/sbin/useradd -u 502 -g oinstall -G dba,asmadmin oracle

Add the following entries to /etc/security/limits.conf:

oracle              soft    nproc   2047
oracle              hard    nproc   16384
oracle              soft    nofile  1024
oracle              hard    nofile  65536

Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so

Add/replace the following entries in /etc/sysctl.conf:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

Execute following command as root to change the current kernel parameters:

/sbin/sysctl -p

Copy the following into your login profile:

if [ $USER = "oracle" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
umask 022
fi

We need to start with the Grid Infrastructure installation. Unzip the contents and execute runInstaller. This requires 2.8 GB of disk space.

$./runInstaller

Screenshots for the 11gR2 Grid Infrastructure installation:

[Screenshot: Grid_infra_install]

Execute root.sh from the Grid ORACLE_HOME as the root user:

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-09-02 05:07:26: Checking for super user privileges
2009-09-02 05:07:26: User has super user privileges
2009-09-02 05:07:26: Parsing the host name
Using configuration parameter file: /u02/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node db11g successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting

db11g     2009/09/02 05:09:24     /u02/app/oracle/product/11.2.0/grid/cdata/db11g/backup_20090902_050924.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2399 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

To install the database software, unzip the contents of both zip files and execute runInstaller. Ensure you have 3.95 GB of free disk space.

$./runInstaller

Screenshots for the 11gR2 database installation:

[Screenshot: 11g_db_install]

You are done 🙂

Pre-requisite summary (component, value and command to check):

- Physical RAM: > 1 GB (1 GB in this setup) – grep MemTotal /proc/meminfo
- Swap: 1.5 times the RAM (1.5 GB in our case) – grep SwapTotal /proc/meminfo
- /dev/shm: must be configured large enough for MEMORY_TARGET – df -h /dev/shm/
- /tmp: 1 GB of free space – df -h /tmp
- Disk space for the Grid Infrastructure software (Enterprise Edition): 2.95 GB
- Disk space for the Database software (Enterprise Edition): 3.95 GB
- Packages for OEL5: as listed under “Check Packages” earlier in this post
