During the configuration phase of Grid Infrastructure for a cluster, CVU failed while performing its post-checks.
The following message is displayed in the installation log file:
Checking Single Client Access Name (SCAN)...
WARNING:
PRVF-5056 : Scan Listener "LISTENER" not running
Checking name resolution setup for "scan-test.abc.com"...
Verification of SCAN VIP and Listener setup failed
The LISTENER from the Grid Infrastructure home is running fine:
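(The status output itself is not preserved in this copy of the post; a check along these lines, assuming the Grid home's bin directory is in the PATH, confirms it:)
$ lsnrctl status LISTENER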
The same error is observed by manually running CVU:
$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
WARNING:
PRVF-5056 : Scan Listener "LISTENER" not running
Checking name resolution setup for "scan-test.abc.com"...
Verification of SCAN VIP and Listener setup failed
Verification of scan was unsuccessful on all the specified nodes.
Checking the status of SCAN using SRVCTL gives correct results:
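For reference, the SRVCTL checks would have been along these lines (commands as provided by 11gR2; the output is not preserved here):
$ srvctl status scan
$ srvctl status scan_listener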
After this, I clicked the Retry button for the CVU post-checks, and this time it succeeded.
Though it worked fine after this, I am still not sure why I had to manually start the SCAN listeners!
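The manual start itself is a single srvctl call, run from the Grid Infrastructure home:
$ srvctl start scan_listener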
I am creating an 11gR2 RAC setup for one of my clients. Following the Oracle documentation for storage, I opted for Oracle ASM and asked the storage team for new physical devices.
The storage admin thus provided me with a set of LUNs instead of actual physical device names like /dev/sdcxxx.
Now the major task is to get the actual device name associated with each LUN.
In OEL4 this is easy to get by issuing:
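The exact command did not survive in this copy of the post; a sketch of the usual OEL4 approach, matching each device's WWID against the LUN IDs reported by the storage team, is:
# scsi_id -g -s /block/sdc
Running this for each /dev/sd* device and comparing the printed WWIDs with the LUN IDs from the storage team gives the mapping.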
This time I am sharing an experience from my personal life: I got married on 2nd Dec 2009 🙂
I had been on holiday since last month and have just joined back. During this time I visited the beautiful island of Koh Samui (Thailand).
Right now I am finding it hard to concentrate on work 🙂
As 11gR2 is out for Linux, I decided to upgrade one of my existing 10.2.0.3 databases to 11.2.0.1 to get the look and feel of 11gR2. Direct upgrade to 11gR2 is supported from 9.2.0.8 or higher, 10.1.0.5 or higher, 10.2.0.2 or higher, and 11.1.0.6 or higher. If you have a 9.2.0.6 database, you first need to upgrade to an intermediate release, i.e. 9.2.0.8, and then to 11.2.0.1.
I will discuss how to upgrade an existing single-instance 10.2.0.3 database with ASM (both running from the same ORACLE_HOME) to 11gR2 with ASM. The upgrade needs to be performed in two phases:
1. Upgrade the ASM instance
2. Upgrade the database
Upgrade ASM Instance:
===============
There are three ways to upgrade an existing ASM instance:
- Using the OUI of Grid Infrastructure
- Using the ASM Configuration Assistant
- Manual upgrade
The recommended method to upgrade an ASM instance is to use the OUI of Grid Infrastructure, which is what I have used.
STEP 1:
- Create the OSASM group:
If you want, you can create a separate group for the ASM instance; here it is named asmadmin.
# groupadd asmadmin
# usermod -a -G asmadmin oracle
STEP 2:
Before upgrading an ASM instance to 11gR2 it is mandatory to add a user/password combination to the password file that is local to the node’s ASM instance. Log in to the database instance with “/ as sysdba”:
SQL> create user sood identified by oracle;
SQL> grant sysdba to sood;
SQL> select * from v$pwfile_users;
USERNAME SYSDB SYSOP
------------------------------ ----- -----
SYS TRUE TRUE
SOOD TRUE FALSE
STEP 3:
From 11gR2 onwards ASM is part of Grid Infrastructure, so we need to download the Grid Infrastructure software first. The 11gR2 Grid Infrastructure software is available from Oracle's download site.
Start the RunInstaller:
./runInstaller
1. The installer automatically defaults to “Upgrade Mode”.
Select “Upgrade Grid Infrastructure” and click Next.
2. On clicking Next, it will detect the already existing ASM instance. Shut down the database and the ASM instance at this point.
Click “Yes”.
3. Select the Language
Click “Next”.
4. Enter a password for the ASMSNMP user. The password can be anything you want, though Oracle will ask you to set a password that adheres to Oracle’s standards; otherwise a red cross will be shown in the tab on the left-hand side. Do not worry about that cross 🙂
Click “Next”.
5. Provide the Group details
Click “Next”.
Click “Yes”.
6. Provide the Base and Home location for Grid Infrastructure Home
Click “Next”.
7. It will perform the prerequisite checks here. For more information on this, see the Installation Fixup script post. I have selected “Ignore All”.
Click “Next”.
8. Now you will see “Summary” page, make sure that the Installation Option is shown as “Upgrade Grid Infrastructure” and Migrate ASM as “True”.
Click “Finish”.
9. Now the setup for “Grid Infrastructure” is started.
Run the rootupgrade.sh
# ./rootupgrade.sh
[root@localhost ~]# cd /u01/11g/oracle/product/11.2.0/grid/
[root@localhost grid]# pwd
/u01/11g/oracle/product/11.2.0/grid
[root@localhost grid]# ./rootupgrade.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/11g/oracle/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-09-05 11:46:25: Checking for super user privileges
2009-09-05 11:46:25: User has super user privileges
2009-09-05 11:46:25: Parsing the host name
Using configuration parameter file: /u01/11g/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CSS appears healthy
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
CRS-4664: Node localhost successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
localhost 2009/09/05 11:49:02 /u01/11g/oracle/product/11.2.0/grid/cdata/localhost/backup_20090905_114902.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 885 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/10g/oraInventory
'UpdateNodeList' was successful.
[root@localhost grid]#
10. After the upgrade I checked the /etc/oratab file and found the ASM entry pointing to the new home, i.e. ASM is now a part of Grid Infrastructure:
+ASM:/u01/11g/oracle/product/11.2.0/grid:N
Upgrade Database Instance:
===================
NOTE: DO NOT SHUT DOWN THE DATABASE BEFORE RUNNING DBUA.
STEP 1: Install The Software:
Download the Oracle Database 11gR2 software and execute runInstaller with the “Software Only” installation option; you can follow my earlier installation post for the detailed steps. Make sure that you select the “software only” option; the rest of the steps are the same as described in that post.
STEP 2: Run Pre-Upgrade Information tool
I have installed the software under “/u01/11g/oracle/product/11.2.0/dbhome_1”. Once the software is installed, go to $ORACLE_HOME/rdbms/admin of the new 11gR2 home and copy the utlu112i.sql script to the /tmp directory. Now log in to the 10g database with “/ as sysdba”, start it up, and then run the script:
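The invocation itself is not preserved in this copy of the post, but given the “spool off” at the end of the output below, it would have been along these lines (the spool file name is just an example):
SQL> spool /tmp/upgrade_info.log
SQL> @/tmp/utlu112i.sql
SQL> spool off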
Following is the output of this script from my database:
Oracle Database 11.2 Pre-Upgrade Information Tool 09-04-2009 01:54:32
.
**********************************************************************
Database:
**********************************************************************
--> name: ORCL10G
--> version: 10.2.0.3.0
--> compatible: 10.2.0.3.0
--> blocksize: 8192
--> platform: Linux IA (32-bit)
--> timezone file: V3
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 724 MB
.... AUTOEXTEND additional space required: 244 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 464 MB
.... AUTOEXTEND additional space required: 439 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 447 MB
.... AUTOEXTEND additional space required: 207 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
.... AUTOEXTEND additional space required: 41 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
WARNING: --> "sga_target" needs to be increased to at least 336 MB
WARNING: --> "java_pool_size" needs to be increased to at least 64 MB
WARNING: --> "pga_aggregate_target" needs to be increased to at least 24 MB
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest 11.1 DEPRECATED replaced by
"diagnostic_dest"
--> user_dump_dest 11.1 DEPRECATED replaced by
"diagnostic_dest"
--> core_dump_dest 11.1 DEPRECATED replaced by
"diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] VALID
--> OLAP Catalog [upgrade] VALID
--> EM Repository [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle interMedia [upgrade] VALID
--> Spatial [upgrade] VALID
--> Data Mining [upgrade] VALID
--> Expression Filter [upgrade] VALID
--> Rule Manager [upgrade] VALID
--> Oracle OLAP API [upgrade] VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.3.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains schemas with stale optimizer statistics.
.... Refer to the Upgrade Guide for instructions to update
.... schema statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.... XDB
WARNING: --> Database contains schemas with objects dependent on network
packages.
.... Refer to the Upgrade Guide for instructions to configure Network ACLs.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.
WARNING:--> recycle bin in use.
.... Your recycle bin turned on.
.... It is REQUIRED
.... that the recycle bin is empty prior to upgrading
.... your database.
.... The command: PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
PL/SQL procedure successfully completed.
SQL> spool off
Address the warnings shown by the Pre-Upgrade Information tool before proceeding.
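For example, two of the warnings above can be cleared as follows; the PURGE command is quoted verbatim in the tool's output, and gathering dictionary statistics is one standard way to refresh the stale SYS/XDB statistics:
SQL> PURGE DBA_RECYCLEBIN;
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;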
STEP 3: Upgrade Using DBUA
Execute DBUA from the 11gR2 software home:
$ cd $ORACLE_HOME/bin
$ ./dbua
1. The DBUA Welcome screen is displayed.
Click “Next”
2. Select the database that you want to upgrade
Click “Next”
3. Here DBUA will show the warnings that were not resolved after running the Pre-Upgrade Information tool.
Click “Yes”
4. Turn OFF archiving while upgrading
Click “Next”
5. Choose whether you want to move the datafiles during the upgrade, though the “move datafiles” check box was not enabled when I upgraded.
Click “Next”.
6. Specify “FRA” and “Diagnostic Destination”
Click “Next”.
7. Check configuration for EM
Click “Next”.
8. Check “Summary” page
Click “Finish”
The upgrade process starts.
Check the results.
Congratulations!! The upgrade is successful!
Now you are ready to use the most powerful database!! 🙂
While setting up OCFS2 for OCR and voting disk storage, I launched the console with the following command:
# ocfs2console
After clicking Cluster ==> Configure Nodes, I got a pop-up saying:
"Could not start cluster stack. This must be resolved before any OCFS2 filesystem can be mounted."
Soon I realized that something which takes only a few minutes to install was going to give me a tough time.
/var/log/messages shows the following details:
Aug 17 14:53:40 rac1 modprobe: FATAL: Module configfs not found.
Aug 17 14:55:23 rac1 modprobe: FATAL: Module configfs not found.
Aug 17 14:56:56 rac1 modprobe: FATAL: Module configfs not found.
This prevents the configuration of OCFS2’s cluster stack, but it is mandatory to have the OCFS2 cluster stack “O2CB” running before we can do anything with an OCFS2 filesystem.
The stack includes the following services:
* NM: Node manager that keeps track of all the nodes in cluster.conf
* HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
* TCP: Handles communication between the nodes
* DLM: Distributed lock manager that keeps track of all locks, their owners, and their status
* CONFIGFS: User-space-driven configuration filesystem mounted at /config
* DLMFS: User-space interface to the kernel-space DLM
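As a quick sanity check, the o2cb init script shipped with the OCFS2 tools reports whether the stack is loaded and online:
# /etc/init.d/o2cb status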
The error “modprobe: FATAL: Module configfs not found” can occur for the following reasons:
1. SELinux is enabled.
2. A mismatch between the kernel and the OCFS2 module.
1. To check for SELinux:
# sestatus
Or
# vi /etc/sysconfig/selinux
Make sure that SELinux is DISABLED here.
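That is, the file should contain a line like the following (a reboot is required for a change here to take effect):
SELINUX=disabled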
2. To check for a mismatch:
# uname -a (gives the exact kernel version of the OS)
2.6.9-42.ELsmp
# rpm -qa | grep ocfs2 (shows the OCFS2 package currently installed)
ocfs2-2.6.9-89.EL
Here it can be seen that the installed OCFS2 module was built for kernel 2.6.9-89, not for the running kernel 2.6.9-42.
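The fix is to install the OCFS2 package built for the running kernel, i.e. the one matching `uname -r`; the package version below is only illustrative:
# rpm -ivh ocfs2-2.6.9-42.ELsmp-1.2.9-1.el4.i686.rpm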
There are situations where we see “temporary segments” in permanent tablespaces hanging around and not getting cleaned up.
These temporary segments in a permanent tablespace can be created by DDL operations like CTAS and “alter index ... rebuild”, because
the new object is created as a temporary segment in the target tablespace, and when the DDL finishes it is changed to the permanent type.
These temporary segments take up actual disk space when SMON fails to perform its assigned job of cleaning up stray temporary segments.
The following query finds these segments:
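The query itself did not survive in this copy of the post; a typical form, summing temporary segments in permanent tablespaces, is:
SQL> select tablespace_name, owner, sum(bytes)/1024/1024 size_mb
  2  from dba_segments
  3  where segment_type = 'TEMPORARY'
  4  group by tablespace_name, owner
  5  order by size_mb desc;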
Here we can see that the tablespaces KMRPT_DATA, SPCT_INDEX, and SPCT_DATA have large temporary segments.
To know if any DDL is active which can create temporary segments we can use the following:
SQL> conn / as sysdba
SQL> select owner FROM dba_segments WHERE segment_name='345.87';
SQL> select pid from v$process where username='owner from above query';
SQL> alter session set tracefile_identifier='TEMPORARY_SEGMENTS';
SQL> oradebug setorapid <pid obtained>
SQL> oradebug dump errorstack 3
SQL> oradebug tracefile_name
It will give you the trace file name; open that file and check the “current sql”.
If it is a DDL like CTAS or an index rebuild, then wait for the operation to complete. If no pid is
returned, then these segments are “stray segments” and need to be cleaned up manually.
There are two ways to force the drop of temporary segments:
1. Using event DROP_SEGMENTS
2. Corrupting the segments and dropping these corrupted segments.
1. Using DROP_SEGMENTS:
Find out the tablespace number (ts#) which contains temporary segments:
SQL> select ts# from sys.ts$ where name = 'tablespace name';
Suppose it comes out to be 10; use the following command to clean up the temporary segments:
SQL> alter session set events 'immediate trace name DROP_SEGMENTS level 11';
The level is ts#+1, i.e. 10+1=11 in this case.
2. Corrupting temporary segments for drop:
For this, the following procedures are used:
– DBMS_SPACE_ADMIN.TABLESPACE_VERIFY
– DBMS_SPACE_ADMIN.SEGMENT_CORRUPT
– DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT
- Verify the tablespace that contains the temporary segments (in this case it is KMRPT_DATA):
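The actual calls were lost with the original formatting; a sketch of the documented sequence, using the file# and block# encoded in the temporary segment's name (a name like 345.87 means the segment header is at file 345, block 87), is:
SQL> exec DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('KMRPT_DATA');
SQL> exec DBMS_SPACE_ADMIN.SEGMENT_CORRUPT('KMRPT_DATA', 345, 87);
SQL> exec DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT('KMRPT_DATA', 345, 87);
Run TABLESPACE_VERIFY again afterwards to confirm the stray segment is gone.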