
Checking Database Feature Usage Stats

Today I came across the view DBA_FEATURE_USAGE_STATISTICS (10g), which lets us know whether a particular database feature has been used so far. If it has, the view also tells us when the feature was used for the first time and when it was last used. This can be helpful for checking whether anyone is using a database feature that is not licensed.

I have used the filter DETECTED_USAGES > 0 to check the features that have been used in this database.

<span style="font-size: x-small; font-family: helvetica;"><span style="font-size: small;">SQL&gt;  SELECT NAME,DETECTED_USAGES AS "USAGE",CURRENTLY_USED,FIRST_USAGE_DATE,LAST_USAGE_DATE
  2  FROM DBA_FEATURE_USAGE_STATISTICS WHERE DETECTED_USAGES &gt;0 order by 1;

NAME                                               USAGE CURRE FIRST_USA LAST_USAG
--------------------------------------------- ---------- ----- --------- ---------
Automatic SQL Execution Memory                         2 TRUE  14-AUG-08 21-AUG-08
Automatic SQL Execution Memory                         1 TRUE  13-AUG-08 13-AUG-08
Automatic Segment Space Management (system)            2 TRUE  14-AUG-08 21-AUG-08
Automatic Segment Space Management (system)            1 TRUE  13-AUG-08 13-AUG-08
Automatic Segment Space Management (user)              1 TRUE  13-AUG-08 13-AUG-08
Automatic Storage Manager                              2 TRUE  14-AUG-08 21-AUG-08
Automatic Storage Manager                              1 TRUE  13-AUG-08 13-AUG-08
Automatic Undo Management                              2 TRUE  14-AUG-08 21-AUG-08
Automatic Undo Management                              1 TRUE  13-AUG-08 13-AUG-08
Character Set                                          2 TRUE  14-AUG-08 21-AUG-08
Character Set                                          1 TRUE  13-AUG-08 13-AUG-08
Dynamic SGA                                            1 TRUE  13-AUG-08 13-AUG-08
Internode Parallel Execution                           2 TRUE  14-AUG-08 21-AUG-08
Locally Managed Tablespaces (system)                   1 TRUE  13-AUG-08 13-AUG-08
Locally Managed Tablespaces (system)                   2 TRUE  14-AUG-08 21-AUG-08
Locally Managed Tablespaces (user)                     1 TRUE  13-AUG-08 13-AUG-08
Locally Managed Tablespaces (user)                     2 TRUE  14-AUG-08 21-AUG-08
Parallel SQL Query Execution                           2 TRUE  14-AUG-08 21-AUG-08
Partitioning (system)                                  1 TRUE  13-AUG-08 13-AUG-08
Partitioning (system)                                  2 TRUE  14-AUG-08 21-AUG-08
Protection Mode - Maximum Performance                  1 TRUE  13-AUG-08 13-AUG-08
Protection Mode - Maximum Performance                  2 TRUE  14-AUG-08 21-AUG-08
Real Application Clusters (RAC)                        2 TRUE  14-AUG-08 21-AUG-08
Real Application Clusters (RAC)                        1 TRUE  13-AUG-08 13-AUG-08
Recovery Area                                          1 TRUE  13-AUG-08 13-AUG-08
Recovery Area                                          2 TRUE  14-AUG-08 21-AUG-08
Segment Advisor                                        2 TRUE  14-AUG-08 21-AUG-08
Server Parameter File                                  2 TRUE  14-AUG-08 21-AUG-08
Server Parameter File                                  1 TRUE  13-AUG-08 13-AUG-08
Streams (system)                                       2 TRUE  14-AUG-08 21-AUG-08
Streams (system)                                       1 TRUE  13-AUG-08 13-AUG-08
Streams (user)                                         1 TRUE  13-AUG-08 13-AUG-08
Streams (user)                                         2 TRUE  14-AUG-08 21-AUG-08
Virtual Private Database (VPD)                         2 TRUE  14-AUG-08 21-AUG-08
Virtual Private Database (VPD)                         1 TRUE  13-AUG-08 13-AUG-08
XDB                                                    2 TRUE  14-AUG-08 21-AUG-08

36 rows selected.


So be careful when you use any licensed feature (unless you have already bought it), like Partitioning, AWR or Database Replay, as this auditing is enabled by default 🙂
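If you want to focus only on the extra-cost options, a variation of the same query can be used. This is only a sketch, and the feature names in the filter are illustrative examples; verify the exact names in the view and the list of licensable options against your license agreement.

SELECT name, detected_usages, currently_used, first_usage_date, last_usage_date
FROM dba_feature_usage_statistics
WHERE detected_usages > 0
  AND (name LIKE 'Partitioning%'                       -- example: Partitioning option
       OR name LIKE 'Real Application Clusters%'       -- example: RAC option
       OR name LIKE 'Automatic Workload Repository%')  -- example: Diagnostics Pack
ORDER BY name;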

10.2.0.4 Windows Patchset Overwrites sqlnet.ora

Not sure how many of you are aware of this alert. Oracle has published an alert document, Note 726418.1 – ALERT: The 10.2.0.4 Windows Patchset Overwrites %ORACLE_HOME%\network\admin\sqlnet.ora.

According to it, Patch 6810189, the 10.2.0.4 RDBMS patchset for Microsoft Windows (32-bit) and Microsoft Windows (AMD64 and EM64T), overwrites the %ORACLE_HOME%\network\admin\sqlnet.ora file.

In case you downloaded (and installed) the 10.2.0.4 patchset for Windows before 10 July 2008, please download the software again. If you have not customized the sqlnet.ora file, no action is needed.

Simplified Approach to Resolve ORA-4031

After writing a few case studies and other related articles, I will now share my approach for resolving the ORA-4031 error. First let us see what ORA-4031 actually means.

04031, 00000, "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")"
// *Cause:  More shared memory is needed than was allocated in the shared pool.
// *Action: If the shared pool is out of memory, either use the
//          dbms_shared_pool package to pin large packages,
//          reduce your use of shared memory, or increase the amount of
//          available shared memory by increasing the value of the
//          INIT.ORA parameters "shared_pool_reserved_size" and
//          "shared_pool_size".
//          If the large pool is out of memory, increase the INIT.ORA
//          parameter "large_pool_size".

The ORA-4031 error is encountered when we do not have sufficient memory available in the shared pool/large pool to service a memory request. In fact, ORA-4031 can be encountered in any of these areas:

1) Shared pool
2) Large pool
3) Java pool
4) Streams pool (new to 10g)

This brings us to the first step in our pursuit of finding the cause of ORA-4031.

Step 1: Identify the pool associated with the error

As with any other Oracle error, we first need to check the database alert log and any trace files generated at that time in user_dump_dest or background_dump_dest. There are cases, though, when the ORA-4031 error is not recorded in alert.log. Starting from 9.2.0.5, you should be able to see trace files generated in the udump/bdump location (depending on whether a background process or a user process encountered the error).
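If you are not sure where these trace files go, the dump destinations can be checked from the instance itself. A minimal sketch (the actual paths will obviously differ on your system):

SELECT name, value
FROM v$parameter
WHERE name IN ('background_dump_dest', 'user_dump_dest');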

ORA-4031 basically has three arguments:

1) Size requested
2) Area
3) Comment

ORA-4031: unable to allocate <size requested> bytes of shared memory ("area","comment")

e.g. ORA-4031: unable to allocate 2196 bytes of shared memory
("shared pool","JOB$","KGLS heap","KGLS MEM BLOCK")

So we see from the above that the error has occurred in the shared pool. This is a very important step, because for the other pools ORA-4031 errors are resolved by increasing JAVA_POOL_SIZE or STREAMS_POOL_SIZE.

In this article I will mostly discuss errors encountered in the shared pool, with a small section on the large pool.

Step 2: What is the value of SHARED_POOL_SIZE?

The current settings for shared pool related parameters can be found using the query below:

SQL>col name for a50
SQL>col value for a10
SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE from x$ksppi nam, x$ksppsv val
where nam.indx = val.indx and nam.ksppinm like '%shared_pool%' order by 1;

NAME                                               VALUE
-------------------------------------------------- ----------
__shared_pool_size                                 654311424
_dm_max_shared_pool_pct                            1
_enable_shared_pool_durations                      TRUE
_io_shared_pool_size                               4194304
_shared_pool_max_size                              0
_shared_pool_minsize_on                            FALSE
_shared_pool_reserved_min_alloc                    4400
_shared_pool_reserved_pct                          5
shared_pool_reserved_size                          19293798
shared_pool_size                                   0

You can use the following note to check the minimum shared pool size:

Note 105813.1 – SCRIPT TO SUGGEST MINIMUM SHARED POOL SIZE

In case of 10g, you can use the SGA_TARGET parameter for managing the values of the shared pool, large pool, streams pool, Java pool and buffer cache (DB_CACHE_SIZE). The following note can be used for 10g:

Note 270935.1 – Shared pool sizing in 10g

It is recommended to set a lower limit for the SHARED_POOL_SIZE parameter even when SGA_TARGET is in use.

You can also use the V$LIBRARYCACHE view (the AWR/Statspack report also has this section) and check whether there are a lot of reloads for the SQL AREA and TABLE/PROCEDURE namespaces. This gives an indication that the shared pool is not appropriately sized. In case you see a high value for invalidations, this could be due to executing DDL against the objects, gathering stats (DBMS_STATS), or granting/revoking privileges.
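A minimal sketch of such a check against V$LIBRARYCACHE is given below. The namespaces of interest here are SQL AREA and TABLE/PROCEDURE, and the absolute numbers are only meaningful when compared over a period of time (or between AWR/Statspack snapshots):

SELECT namespace, gets, gethitratio, pins, reloads, invalidations
FROM v$librarycache
WHERE namespace IN ('SQL AREA', 'TABLE/PROCEDURE');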

A high value for hard parses in the AWR/Statspack report can also be caused by shared pool sizing issues, but it cannot be used as the sole criterion, since high hard parsing can also be caused by the use of literals and the presence of version counts/child cursors. This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Some more key points related to shared pool sizing:

- Shared pool memory consumption varies from release to release

- 10g might fail with a shared pool of 300 MB even though 8i was working fine with it

- Some part of the memory is allocated to fixed structures. Parameters like DB_FILES, OPEN_CURSORS and PROCESSES contribute to this overhead. When you use the "show sga" command, you will see that "Variable Size" is more than the sum of Shared Pool + Large Pool + Java Pool. This is attributed to the values of these parameters.

Please note that in case you specify a low value for SGA_MAX_SIZE, you will see Oracle bumping it up to a higher value so as to accommodate the overhead memory.

Starting from 10g, overhead memory is accommodated within SHARED_POOL_SIZE.

e.g. if you specify SHARED_POOL_SIZE as 200 MB and your internal overhead is 100 MB, then the actual shared pool available to the instance is only 100 MB.

You can read Note:351018.1 – Minimum for SHARED_POOL_SIZE Parameter in 10.2 Version for more information.
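To get a rough idea of how the SGA "Variable Size" is split between the pools, you can sum V$SGASTAT by pool and compare the result with the "show sga" output. This is only a sketch; rows with a NULL pool belong to memory areas outside the pools (fixed SGA, log buffer, buffer cache), and the numbers will differ on your system:

SELECT pool, ROUND(SUM(bytes)/1024/1024) AS size_mb
FROM v$sgastat
GROUP BY pool;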

Shared Pool Fragmentation

Shared pool fragmentation can also cause ORA-4031. It occurs when your queries are not being shared and you see a lot of reloads and hard parses in the Statspack report. In this case, check the request failure size:

ORA-4031: unable to allocate 16400 bytes of shared memory

We see that the failure size is 16K. In this case, check whether you are using the SHARED_POOL_RESERVED_SIZE parameter to define a shared pool reserved area. The memory allocation algorithm first tries to get memory from the shared pool; then, if the requested size is greater than _shared_pool_reserved_min_alloc, it gets the memory from the shared pool reserved area. By default this value is set to 4400 bytes. In case the failure size is, say, 4200, you can try reducing the value of this parameter to reduce the occurrences, though this is not the complete solution. Read Tweaking _Shared_pool_reserved_min_alloc and ORA-4031 for more details.
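To see how the reserved area is doing, V$SHARED_POOL_RESERVED can be queried; REQUEST_FAILURES and LAST_FAILURE_SIZE are the columns of interest when chasing ORA-4031. A minimal sketch:

SELECT free_space, avg_free_size, requests, request_misses,
       request_failures, last_failure_size
FROM v$shared_pool_reserved;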

You can also identify shared pool fragmentation by querying X$KSMSP:

select 'sga heap('||KSMCHIDX||',0)' sga_heap, ksmchcom ChunkComment,
       decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',4,'4-5K',
              5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K') "Size",
       count(*), ksmchcls "Status", sum(ksmchsiz) "Bytes"
from   x$ksmsp
where  ksmchcom = 'free memory'
group by 'sga heap('||KSMCHIDX||',0)', ksmchcom, ksmchcls,
         decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',4,'4-5K',
                5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K');

SGA_HEAP       CHUNKCOMMENT     Size    COUNT(*) Status          Bytes
-------------- ---------------- ----- ---------- ---------- ----------
sga heap(1,0)  free memory      > 10K        393 free         11296600
sga heap(1,0)  free memory      3-4K         256 free           781928
sga heap(1,0)  free memory      8-9k          63 free           510656
sga heap(1,0)  free memory      6-7k          60 free           367076
sga heap(1,0)  free memory      2-3K         555 free          1071448
sga heap(1,0)  free memory      1-2K        1818 free          1397244
sga heap(1,0)  free memory      0-1K        3418 free           348344
sga heap(1,0)  free memory      9-10k         30 free           269820
sga heap(1,0)  free memory      4-5K         154 free           640332
sga heap(1,0)  free memory      5-6k          75 free           381920
sga heap(1,0)  free memory      > 10K         39 R-free        8302632
sga heap(1,0)  free memory      7-8k          22 free           152328

If you see a lot of memory chunks in the 1-4K range and very few in the buckets above 5K, it indicates shared pool fragmentation. In this case you also need to look at hard parses (Statspack/AWR report). This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Note: It is not recommended to run queries on X$KSMSP frequently, as they can lead to latching issues. (I have seen people scheduling them as part of hourly Oracle jobs; this should be avoided.)

Step 3: Is it MTS? If yes, are you using LARGE_POOL_SIZE?

LARGE_POOL_SIZE is recommended for many Oracle features that are designed to use large shared memory chunks, such as:

– Recovery Manager (RMAN)

– Parallel processing/IO slave processing (e.g. the px msg pool consuming more memory)

– Shared Server Configuration

The UGA is allocated from the shared pool in case the large pool is not configured, and this can cause issues when using Shared Server mode (MTS). Ensure that you are using the LARGE_POOL_SIZE parameter or SGA_TARGET.
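To check whether the large pool is configured and how much of it is free, a quick sketch against V$SGASTAT can be used (no rows returned would mean the large pool is not configured):

SELECT name, ROUND(bytes/1024/1024, 1) AS size_mb
FROM v$sgastat
WHERE pool = 'large pool';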

Step 4: Are you using multiple subpools?

The subpool concept was introduced in 9i Release 2. Instead of one big shared pool, memory is divided into several subpools. To determine the number of subpools, you can use the query below:

SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE from x$ksppi nam, x$ksppsv val
where nam.indx = val.indx and nam.ksppinm like '%kghdsidx%' order by 1;

NAME                           VALUE
------------------------------ --------------------
_kghdsidx_count                4

The above query indicates that there are 4 subpools.

In case you get ORA-4031 and a trace file is generated, the trace file can also be used to find out how many subpools are configured. To do this, search for "Memory Utilization of Subpool", e.g.:
Memory Utilization of Subpool 1
========================
free memory 10485760
Memory Utilization of Subpool 2
========================

free memory 20971520

This means that there are two subpools configured for your database.

Oracle suggests having 500 MB as the minimum subpool size. I would say that unless you are facing serious shared pool latch contention, 2 subpools should be sufficient (though I believe most contention issues can be solved by tuning the application). To change the number of subpools, we need to set the parameter _kghdsidx_count in the pfile or spfile and restart the database.

In case of an spfile:

alter system set "_kghdsidx_count"=1 scope=spfile;

A restart of the database is required, as this is a static parameter. Please note that the large pool has the same number of subpools as the shared pool, so you might be required to change the number of subpools in case you are observing ORA-4031 in the large pool.

You can read more about Shared Subpools in my earlier post

Step 5: Is SQLAREA consuming a lot of memory?

Actually this can also be categorized as "bad application design", since most cases are caused by the way applications have been designed. A high value for sqlarea in V$SGASTAT (or the AWR/Statspack report) can be attributed to the following causes:

Using Literals Instead of Bind Variables

This is the most common cause of ORA-4031. Tom Kyte explains it in one of his posts on the consequences of not using bind variables:

If you do not use bind variables and you flood the server with
hundreds/thousands of unique queries you will
- run dog slow
- consume a ton of RAM (and maybe run out)
- not scale beyond a handful of users, if that

among other really bad side effects.

The above statement is true, and you can find a lot of cases where not using bind variables caused excessive parsing (leading to CPU contention) and ORA-4031 errors. One way to locate such statements is by running the following query:
SELECT substr(sql_text,1,90) "SQL",count(*) "SQL Copies",
   sum(executions) "TotExecs", sum(sharable_mem) "TotMemory"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,90) HAVING count(*) > 30
ORDER BY 2;

I personally prefer to use a script from the AskTom website to find these statements. You can find more information by clicking here.

create table t1 as select sql_text from v$sqlarea;
alter table t1 add sql_text_wo_constants varchar2(1000);
create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query long;
    l_char  varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2
/

The above query will give you the statements that use literals and should be modified to use bind variables. Sometimes it is not possible to modify the application; in that case you can use CURSOR_SHARING=SIMILAR/FORCE to force the application to use bind variables. Please note that this can cause issues (especially CURSOR_SHARING=SIMILAR), so it is recommended to test the application in a test environment before implementing it in production. Applications like Oracle Apps do not certify the use of this parameter, so also check with your application vendor whether it can be used.
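For reference, the parameter can be changed dynamically as in the sketch below (scope=both assumes an spfile is in use); again, test this before using it in production:

alter system set cursor_sharing='FORCE' scope=both;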

You can refer to the following articles, where I have discussed similar issues:

ORA-4031 – A Case Study

Application Design and ORA-4031

Multiple Child Cursors/High Version Count

This is also one of the causes of high memory usage in the SQLAREA region. Child cursors are generated in the shared pool when the SQL text is the same but Oracle cannot share the cursor because the underlying objects are different, the optimizer settings differ, etc. To know more about child cursors, refer to the following Metalink note:

Note 296377.1 – Handling and resolving unshared cursors/large version_counts

In case of Oracle 10g, you can use the Statspack/AWR report for finding child cursors under the category "SQL ordered by Version Counts". The following statements can also be run to identify whether child cursors are being generated in your database.


For 10g

SQL> select sa.sql_text, sa.version_count, ss.* from v$sqlarea sa, v$sql_shared_cursor ss
where sa.address=ss.address and sa.version_count > 50 order by sa.version_count;

For 8i/9i

select sa.sql_text, sa.version_count, ss.* from v$sqlarea sa, v$sql_shared_cursor ss
where sa.address=ss.KGLHDPAR and sa.version_count > 50 order by sa.version_count;

The results returned by the above queries report SQL that is not being shared for some reason. Look for columns with value Y to find the cause. Most of these issues are encountered while using CURSOR_SHARING=SIMILAR; in case you are using this parameter with columns having histograms, this is expected behavior. Read more about cursor issues related to histograms in Note 261020.1 – High Version Count with CURSOR_SHARING = SIMILAR or FORCE.

There are cases where none of the columns returns a Y value. In most of these cases you need to work with Oracle Support to find the cause, as this could be a bug.

Child cursors are problematic as they increase shared pool memory consumption and parsing. Also, as the number of child cursors increases, Oracle takes more time to scan all the child cursors to check whether one can be reused, and if not it spawns a new child cursor. This results in high parse times and CPU contention.

High Sharable Memory per SQL

One more cause of a high value of SQLAREA in V$SGASTAT is high memory consumption by individual SQL statements. This can be due to poorly written SQL or due to Oracle bugs.

In case of Oracle 10g, you can use the Statspack/AWR report for finding statements with a high value of sharable memory. You can also use the SHARABLE_MEM column in V$SQLAREA to find these queries.
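A small sketch of such a check is given below; the 1 MB threshold is only an arbitrary example, and the SQL_ID column assumes 10g (on 8i/9i you can use ADDRESS and HASH_VALUE instead):

SELECT sql_id, version_count, sharable_mem, substr(sql_text, 1, 60) sql_text
FROM v$sqlarea
WHERE sharable_mem > 1048576   -- example threshold: 1 MB
ORDER BY sharable_mem DESC;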

Step 6: What next?

You have followed all the above steps and found everything OK. Now what do we check next?

We can look for any trace file that was generated at the time of the error and see which component was taking more memory. You can try searching Metalink for that component. Otherwise, you can take a heapdump at the time of the error and upload the file to Oracle Support.
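Before (or instead of) taking a heapdump, a quick look at the top consumers in V$SGASTAT can already point to the component; a minimal sketch:

SELECT * FROM (
  SELECT name, ROUND(bytes/1024/1024, 1) AS size_mb
  FROM v$sgastat
  WHERE pool = 'shared pool'
  ORDER BY bytes DESC)
WHERE rownum <= 10;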

Heapdump event
The Heapdump event is used to dump memory from different subheaps. ORA-4030 errors are associated with problems in the PGA, UGA or CGA heaps, while ORA-4031 relates only to problems with the shared pool/large pool/Java pool/streams pool.

command  -> alter system set events '4031 trace name heapdump level 2';
init.ora -> events='4031 trace name heapdump, level 2'
SQL>oradebug setmypid
SQL>oradebug dump heapdump 2
SQL>oradebug tracefile_name

Starting from 9.2.0.5, level 536870914 can be used for generating a heapdump, which gathers more diagnostic information for Support to diagnose the cause.

Also, it is not recommended to set the Heapdump event in the init.ora or spfile, since it will force multiple dumps at the time of shared pool memory issues. Oracle requires the shared pool latch for dumping a heapdump, so this can worsen the latching situation. You can set the Errorstack event to generate a trace file at the time of the ORA-4031 error:

alter system set events '4031 trace name errorstack level 3';

Alternatively, use the immediate trace option or the oradebug command at the time of the error:

SQL> connect / as sysdba
SQL> alter session set events 'immediate trace name heapdump level 536870914';

OR

sqlplus "/ as sysdba"
oradebug setmypid
oradebug unlimit
oradebug dump heapdump 536870914
oradebug tracefile_name
exit

Upload the tracefile to Oracle support.

Using the above approach will help you resolve ORA-4031 errors in the shared pool.

Large Pool

While working on ORA-4031 in the large pool, you need to follow the approach below:

1) Check the size of LARGE_POOL_SIZE. If possible, increase it.

2) Check the number of subpools. Ensure that you have sufficient memory in each subpool. _kghdsidx_count controls the number of subpools in the large pool as well, so you would have to either increase the memory available in each subpool or decrease the count.

3) In case of MTS, check whether any session is consuming a lot of memory. It is a case where, instead of getting ORA-4030, you get ORA-4031 in the large pool (in MTS, the UGA is part of the large pool).

4) If all the above suggestions have been tried, capture a heapdump and upload the file to Oracle Support. You can use level 32 or 536870944, i.e.

SQL> connect / as sysdba
SQL> alter session set events 'immediate trace name heapdump level 32';
or
SQL> alter session set events 'immediate trace name heapdump level 536870944';

I hope this article helps in following a methodology for resolving ORA-4031. At present this is not an exhaustive article on the error, and it will be most useful as an approach to follow after you have gone through the Metalink notes below.

Note:62143.1 – Understanding and Tuning the Shared Pool

Note:396940.1 – Troubleshooting and Diagnosing ORA-4031 Error

Note:146599.1 – Diagnosing and Resolving Error ORA-04031

Oracle Database 11G Installation on Solaris 10

I installed Oracle Database 11g on Solaris 10 a few days back. I thought of documenting the steps for easy reference. Please find the steps below.

Installation Prerequisites

Refer to the Oracle Database Installation Guide 11g Release 1 (11.1) for Solaris Operating System to check the hardware and software requirements.

User Creation and Environment Settings

1) Create groups for the Oracle account

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">#groupadd oinstall
#groupadd dba
#groupadd oper</span>

2) Create the Oracle default home directory

<span style="font-family: arial,helvetica,sans-serif; font-size: small;"># mkdir /export/home
# mkdir /export/home/oracle</span>

3) Create the Oracle user

<span style="font-family: arial,helvetica,sans-serif; font-size: small;"># useradd -g oinstall -G dba -d /export/home/oracle -s /usr/bin/bash oracle
# chown oracle:oinstall /export/home/oracle</span>

4) Create a project for Oracle for setting the kernel parameters

In case of Solaris 10, you can use projects to configure the kernel parameters instead of the /etc/system file. This can be done as follows:

<span style="font-family: arial,helvetica,sans-serif; font-size: small;"># projadd -U oracle -K "project.max-shm-memory=(priv,4g,deny)" oracle
# projmod -sK "project.max-sem-nsems=(priv,256,deny)" oracle
# projmod -sK "project.max-sem-ids=(priv,100,deny)" oracle
# projmod -sK "project.max-shm-ids=(priv,100,deny)" oracle</span>

There are many more ways of creating project entries, such as group.group-name or user.user-name. For more details, refer to the Solaris administration documents.

Update

The last three settings made by the projmod command are not required, as these values are lower than the defaults. This was pointed out by Mike Madland, who also provided a Sun documentation link:

http://docs.sun.com/app/docs/doc/819-2724/6n50b0795?l=en&a=view#chapter1-33
You can check the values for max-sem-ids and max-shm-ids with this command:

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">prctl -n project.max-sem-ids -i task `ps -o taskid= -p $$`</span>

5) Create .bash_profile for the Oracle user

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">#Oracle Environment Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u03/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1; export ORACLE_HOME
ORACLE_SID=TESTDB11G; export ORACLE_SID
PATH=$PATH:/usr/local/bin:/usr/ccs/bin:/usr/sfw/bin:$ORACLE_HOME/bin</span>

Now set the DISPLAY to an X Windows enabled system.

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">$ export DISPLAY=192.168.4.47:0.0</span>

Also allow the host to accept the connection:

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">$xhost +</span>

Oracle Software Installation

Go to the Oracle software dump location and run runInstaller as the oracle user:

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">$./runInstaller</span>

This will open the Oracle Universal Installer (OUI) screen. If the Oracle Universal Installer is not displayed, ensure the DISPLAY variable is set correctly. Select the "Software only" option and install the software. If any of the prerequisites are not met, the installation will fail and you will be required to make the necessary changes to proceed.

Database Creation

We will be using ASM for the database files. For this we need to perform some configuration.

1) Prepare the raw device for use as an ASM disk

<span style="font-family: arial,helvetica,sans-serif; font-size: small;"># ls -l
total 0
crw------- 1 root root 125, 1 Jun 20 10:39 1</span>

The disk should be owned by the oracle user and should have its permissions set to 660:

<span style="font-family: arial,helvetica,sans-serif; font-size: small;"># chown oracle:dba 1
# chmod 660 1

- # ls -ltr
total 0
crw-rw---- 1 oracle dba 125, 1 Jun 20 10:39 1</span>

2) Configure the CSS service

In case of Solaris 10, we need to use the Service Management Facility (SMF) for configuring the CSS service, else it will not start. Refer to my earlier post for this step, i.e. 11.1.0.6 ASM installation on Solaris fails -II.

3) Configure ASM Instance

a) Go to $ORACLE_HOME/bin

b) Execute dbca from this directory (ensure the environment is properly set)

<span style="font-family: arial,helvetica,sans-serif; font-size: small;">$./dbca</span>

c) Select the Configure ASM Instance option. This will create the ASM instance for you. After this you can create diskgroups using the GUI, or else use SQL*Plus to do the same (see the sketch at the end of this post).

4) Now continue creating the database normally and enter the diskgroup name after selecting Oracle Managed Files as the database file location.

While you navigate through the GUI screens, it will prompt you to specify security settings:

– Keep the enhanced 11g security settings(recommended)
– Revert to pre 11g settings

Select the 11g settings, which will enable auditing by default and also enable case-sensitive passwords with a stronger password hashing algorithm.

I have not discussed the GUI screens for DBCA and OUI in this article, as these are pretty much standard screens. In case you need more information about them, you can refer to the Oracle 11g install guide for Solaris.
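As mentioned in step 3(c), a diskgroup can also be created from SQL*Plus while connected to the ASM instance. The sketch below assumes external redundancy and uses a hypothetical disk path; replace it with the raw device you prepared earlier (the path must also be covered by ASM_DISKSTRING):

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
DISK '/dev/rdsk/c1t1d0s6';   -- hypothetical disk path, adjust for your system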

Effect of OS Terminal Setting “STTY” on Oracle Database

A few days back, while trying to create a synonym over a dblink, I experienced the following error:

SQL> create synonym synonymn_name for dual@DBLINK9i;
SP2-0042: unknown command "DBLINK9i" - rest of line ignored

Strange error !!!

Also while trying to perform a simple select statement over another dblink:

SQL > select * from dual@dblink10g;
SP2-0042: unknown command "DBLINK10g" - rest of line ignored.

It also failed with a similar error, though the select over this dblink was working fine earlier. The tnsnames.ora entry was correct and double checked. So what happened to this dblink now?

From another database I was able to perform a simple select over the same dblink.

While I was researching this, I came to know that SQL queries had also started failing:

SQL> UPDATE HRRECORD_TEST SET EMAIL_ADDRESS = 'xyz@123.com' WHERE NAME = 'XYZ';
SP2-0734: unknown command beginning "123.com'..." - rest of line ignored.
SQL>

One notable thing was that all the queries that failed contained the "@" symbol, and I also got this error while creating a synonym over a dblink (which has an "@" symbol).

These two issues were related. Then I tried to create a synonym locally and it went fine. This led to the conclusion that the SQL prompt was not recognising the "@" symbol.

So it was clear that the problem was with the terminal settings at the OS level for the oracle user.

Searching Google for terminal settings at the OS level, I found a command that is used to set options for your terminal, i.e. "stty". By using stty we can change the terminal settings.

Now checking the current terminal settings using:

# stty -a

speed 9600 baud; line = 0;
rows = 36; columns = 80
min = 4; time = 0;
intr = DEL; quit = ^\; erase = DEL; kill = @
eof = ^D; eol = ^@; eol2 <undef>; swtch <undef>
stop = ^S; start = ^Q; susp <undef>; dsusp <undef>
werase <undef>; lnext <undef>
parenb -parodd cs7 -cstopb hupcl -cread -clocal -loblk -crts

From this output I found that there were two settings causing the problem:

1) kill = @

2) eol = ^@

Then, to overcome this, I added the following two lines to the oracle user's .profile:

stty kill ^U

stty eol ^E

The above change forces the oracle user to use ^U as the line-kill character and ^E as the end-of-line character, instead of @ for these purposes.

After making these changes in the oracle user's .profile, everything went fine. In the end it turned out to be an OS issue, where the settings had been modified by the sysadmin. 😉

High Database Sessions

Yesterday I was working on an issue where our monitoring system reported an increased number of database connections. We were already aware of this issue, and the application team had been asked to look into it.

On checking GV$RESOURCE_LIMIT, we found that sessions had touched the limit value on instance 1 (since startup).

SELECT INST_ID "ID", RESOURCE_NAME "RESOURCE", CURRENT_UTILIZATION "CURRENT",
MAX_UTILIZATION "MAX", INITIAL_ALLOCATION "INITIAL", LIMIT_VALUE
FROM GV$RESOURCE_LIMIT WHERE RESOURCE_NAME='sessions';

ID RESOURCE  CURRENT  MAX  INITIAL  LIMIT_VALUE
-- --------  -------  ---  -------  -----------
 1 sessions      232  335      335          335
 2 sessions      245  314      335          335

This was quite surprising, as we did not receive any critical tickets. (I think this is because the tool's polling interval for this parameter was set to 10 minutes.)

I checked the alert log for any ORA-18 errors on instance 1, but could not find any. I had faced this situation some time back (with a different client) when a user complained about getting an ORA-18 error, and at that time too there were no entries in the alert log. It could be that ORA-18 errors are simply not recorded (unlike ORA-20, the error for exceeding the number of processes).

Since it was not recorded in the alert log (nor in any trace file), I turned to AWR to help me out. In this case the view WRH$_RESOURCE_LIMIT was the one we needed to look at, as it stores the statistics captured by AWR snapshots.

SQL> col BEGIN_INTERVAL_TIME for a28
SQL> col END_INTERVAL_TIME for a28
SQL>col RESOURCE_NAME for a10
SELECT A.SNAP_ID,A.INSTANCE_NUMBER "ID",B.BEGIN_INTERVAL_TIME,B.END_INTERVAL_TIME,A.RESOURCE_NAME,
CURRENT_UTILIZATION "CURRENT",MAX_UTILIZATION "MAX"
FROM WRH$_RESOURCE_LIMIT A, WRM$_SNAPSHOT B
WHERE A.RESOURCE_NAME LIKE '%session%'
AND A.SNAP_ID=B.SNAP_ID
AND A.INSTANCE_NUMBER= B.INSTANCE_NUMBER
AND ((A.INSTANCE_NUMBER=1
AND B.BEGIN_INTERVAL_TIME > (SELECT STARTUP_TIME FROM GV$INSTANCE WHERE INSTANCE_NUMBER=1)) OR
(A.INSTANCE_NUMBER=2 AND B.BEGIN_INTERVAL_TIME >
(SELECT STARTUP_TIME FROM GV$INSTANCE WHERE INSTANCE_NUMBER=2)))
order by A.INSTANCE_NUMBER,A.SNAP_ID;

SNAP ID BEGIN_INTERVAL_TIME        END_INTERVAL_TIME          RESOURCE  CUR  MAX
---- -- -------------------------- -------------------------- --------- ---- ----

936  1 28-JUN-08 05.00.04.133 AM  28-JUN-08 06.00.14.342 AM  sessions  212  297
937  1 28-JUN-08 06.00.14.342 AM  28-JUN-08 07.00.54.865 AM  sessions  255  323
938  1 28-JUN-08 07.00.54.865 AM  28-JUN-08 08.00.18.080 AM  sessions  227  323
939  1 28-JUN-08 08.00.18.080 AM  28-JUN-08 09.00.29.881 AM  sessions  101  335

936  2 28-JUN-08 05.00.03.797 AM  28-JUN-08 06.00.14.117 AM  sessions  233  292
937  2 28-JUN-08 06.00.14.117 AM  28-JUN-08 07.00.54.521 AM  sessions  247  310
938  2 28-JUN-08 07.00.54.521 AM  28-JUN-08 08.00.17.725 AM  sessions  253  314
939  2 28-JUN-08 08.00.17.725 AM  28-JUN-08 09.00.29.648 AM  sessions  101  314

We cannot get the exact time, but we can see that the number of database sessions reached 335 on instance 1 between 28-JUN-08 08.00.18.080 AM and 28-JUN-08 09.00.29.881 AM.

Similarly, for instance 2 the number of database sessions reached 314 between 28-JUN-08 07.00.54.521 AM and 28-JUN-08 08.00.17.725 AM.

This information was sufficient at this point to ask the application support team to check from their end, though it would have been really nice to find the exact time. Also, finding which users had the most sessions would have helped to point to the responsible module.
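For future occurrences, something as simple as the sketch below (sessions grouped by user across both instances) could be captured by the monitoring script when the alert fires:

SELECT inst_id, username, COUNT(*) AS sessions
FROM gv$session
WHERE username IS NOT NULL   -- exclude background processes
GROUP BY inst_id, username
ORDER BY sessions DESC;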