Amit Bansal

ORA-01873 error running SAP pre-upgrade scripts

While trying to run the pre_upgrade_status.sql script for a DB upgrade (10.2.0.4 to 11.2.0.4), we got the following error:

@pre_upgrade_status.sql
ERROR at line 1:
ORA-01873: the leading precision of the interval is too small
ORA-06512: at line 8

On debugging further, we found that it was due to the limit imposed by the “INTERVAL DAY TO SECOND” datatype. By default the day precision is 2, i.e. INTERVAL DAY TO SECOND is equivalent to INTERVAL DAY(2) TO SECOND(6). As per Oracle this is not a bug, and you should use the datatype DSINTERVAL_UNCONSTRAINED instead.
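
The limit can be illustrated outside the database. Below is a conceptual Python sketch (not Oracle code) of why a retention crossing 99 days breaks DAY(2) precision; the function name is purely illustrative:

```python
# Conceptual model of ORA-01873: an INTERVAL DAY(2) TO SECOND value can
# hold at most 99 days, so an AWR retention of 100+ days overflows it.
def fits_day_precision(retention_minutes, day_precision=2):
    days = retention_minutes // (24 * 60)   # whole days in the interval
    return days < 10 ** day_precision       # DAY(2) => days must be <= 99

print(fits_day_precision(99 * 1440))   # True  (99 days is fine)
print(fits_day_precision(101 * 1440))  # False (101 days raises ORA-01873 in Oracle)
```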

I couldn’t find any hits on the web for this, as the error is only encountered when the AWR retention is set to 100 or more days. It is easily reproducible:

--Set retention to >100 days
execute dbms_workload_repository.modify_snapshot_settings (interval => 60,retention =>145440)

--execute following code

declare
sap_awr_small      BOOLEAN  := FALSE;
sap_awr_retention_days NUMBER;
sap_awr_retention  INTERVAL DAY TO SECOND;
 BEGIN
       sap_awr_small := FALSE;

       SELECT RETENTION INTO sap_awr_retention from DBA_HIST_WR_CONTROL WHERE  dbid = (select dbid from v$database);
       -- dbms_output.put_line(sap_awr_retention);
       sap_awr_retention_days := extract (day from sap_awr_retention);
       -- dbms_output.put_line(sap_awr_retention_days);
       if (sap_awr_retention_days < 42) THEN
         sap_awr_small := TRUE;
       END IF;
end;
/

ERROR at line 1:
ORA-01873: the leading precision of the interval is too small
ORA-06512: at line 8

An easy workaround is to set the retention to a lower value, say 99 days, and revert it post upgrade:

execute dbms_workload_repository.modify_snapshot_settings (interval => 60,retention =>142560)
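
Note that the retention argument is specified in minutes. A quick illustrative conversion in Python (the helper is hypothetical, not part of any Oracle API):

```python
MINUTES_PER_DAY = 24 * 60

def retention_minutes(days):
    """AWR retention for dbms_workload_repository is specified in minutes."""
    return days * MINUTES_PER_DAY

print(retention_minutes(99))   # 142560 -> the workaround value above
print(retention_minutes(101))  # 145440 -> the value that reproduces ORA-01873
```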

In case you are facing this error in your application code, refer to the My Oracle Support article ORA-1873 : When Passing Datetime Interval with Precision > 2 in a Parameter (Doc ID 394260.1) for modifying your code to use a custom type.

LGWR terminating instance due to error 338

Recently we came across an issue where our DB crashed with an ORA-00338 error.

Errors in file /oracle/diag/rdbms/orcl11g/orc11g/trace/orc11g_lgwr_24118.trc:
ORA-00338: log 2 of thread 1 is more recent than control file
ORA-00312: online log 2 thread 1: '/oracle/oradata/orcl11g/redo02.log'
LGWR (ospid: 24118): terminating the instance due to error 338

The DB could not be restarted, as it gave the same errors while opening. With multiplexed redo log files, the error is reported for both members.

Error Description (Reference: http://psoug.org/oraerror/ORA-00338.htm):
Log string of thread string is more recent than control file

Error Cause:

The control file change sequence number in the log file is greater than the number in the control file. This implies that the wrong control file is being used. Note that repeatedly causing this error can make it stop happening without correcting the real problem. Every attempt to open the database will advance the control file change sequence number until it is great enough.
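
The comparison described in the error cause can be modeled in a few lines. This is only a conceptual Python sketch of the check (the function and the sequence numbers are illustrative, not Oracle internals):

```python
# Each redo log header stores the controlfile change sequence number that was
# current when the log was written (the "Control Seq" field in a log dump).
# If the log's copy is newer than the mounted controlfile's, the controlfile
# being used must be stale.
def check_log_vs_controlfile(log_ctl_seq, controlfile_seq):
    if log_ctl_seq > controlfile_seq:
        return "ORA-00338: log is more recent than control file"
    return "OK"

print(check_log_vs_controlfile(log_ctl_seq=5200, controlfile_seq=5124))
print(check_log_vs_controlfile(log_ctl_seq=5124, controlfile_seq=5124))
```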

Action:
Use the current control file or do backup control file recovery to make the control file current. Be sure to follow all restrictions on doing a backup control file recovery.

The above explanation suggests that there is a problem with the controlfile. Normally these errors are seen when doing incomplete recovery. To troubleshoot it, we took dumps of the redo log files (on the recommendation of Oracle Support).

Note: The logs below are from a test system which was used to reproduce the issue.

SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo01.log' validate;
System altered.
SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo02.log' validate;
ERROR at line 1:
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/oracle/oradata/orcl11g/redo02.log'
SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo03.log' validate;
*
ERROR at line 1:
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/oracle/oradata/orcl11g/redo03.log'

The redo02.log and redo03.log dumps failed with errors saying they do not contain any redo. Since the redo01.log dump was successful, we looked at its trace.

DUMP OF REDO FROM FILE '/oracle/oradata/orcl11g/redo01.log'
 Opcodes *.*
 RBAs: 0x000000.00000000.0000 thru 0xffffffff.ffffffff.ffff
 SCNs: scn: 0x0000.00000000 thru scn: 0xffff.ffffffff
 Times: creation thru eternity
 VALIDATE ONLY
 FILE HEADER:
 Compatibility Vsn = 186647552=0xb200400
 Db ID=970369526=0x39d6a9f6, Db Name='TESTDB'
 Activation ID=2650290266=0x9df8385a
 Control Seq=5124=0x1404, File size=102400=0x19000
 File Number=1, Blksiz=512, File Type=2 LOG
 descrip:"Thread 0001, Seq# 0000000001, SCN 0x00000016f528-0xffffffffffff"
 thread: 1 nab: 0xffffffff seq: 0x00000001 hws: 0x3 eot: 1 dis: 0
 resetlogs count: 0x33acc28a scn: 0x0000.0016f528 (1504552)
 prev resetlogs count: 0x3377bd37 scn: 0x0000.000e2006 (925702)
 Low scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Next scn: 0xffff.ffffffff 01/01/1988 00:00:00
 Enabled scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Thread closed scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Disk cksum: 0xcec6 Calc cksum: 0xcec6
 Terminal recovery stop scn: 0x0000.00000000
 Terminal recovery 01/01/1988 00:00:00
 Most recent redo scn: 0x0000.00000000
 Largest LWN: 0 blocks
 End-of-redo stream : No
 Unprotected mode
 Miscellaneous flags: 0x800000
 Thread internal enable indicator: thr: 0, seq: 0 scn: 0x0000.00000000
 Zero blocks: 0
 Format ID is 2
 redo log key is 1679de3ad36cdd2684143daaa1635b8
 redo log key flag is 5
 Enabled redo threads: 1
END OF REDO DUMP
----- Redo read statistics for thread 1 -----

If you look at the dump file, it says the sequence is 1 and the DB name is 'TESTDB'. Our instance name is orc11g and the last sequence# was more than 1 (this can also be confirmed from v$log). This indicated that our redo logs had been overwritten by some other process. The DB name in the redo log hinted that it was done by a reporting clone's refresh process. The problem was that the redo log volume was cross-mounted on a reporting clone, which overwrote the redo logs during the refresh.

As a fix, a new volume was provisioned for the clone's redo logs and the volume export was revoked. Since the current redo log had also been overwritten, we had to restore the last hot backup and perform incomplete recovery up to the last ETL start time (this was a data warehouse DB). In an OLTP database this would have caused data loss.

This issue is easily reproducible: if you clone an instance using the same paths for the redo logs and open the clone, you will receive the error in your source database. Data files are protected by DBWR, so we are protected from the risk of another database opening the datafiles; that attempt fails with ORA-01157:

ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: '/oracle/oradata/orcl11g/undotbs01.dbf'

MGMTDB: Grid Infrastructure Management Repository

MGMTDB is a new database instance used for storing Cluster Health Monitor (CHM) data. In 11g this data was stored in a Berkeley DB database, but starting with Oracle Database 12c it is configured as an Oracle database instance.
In 11g, the .bdb files were stored in $GRID_HOME/crf/db/hostname and used to take up a lot of space (>100G) due to a bug in 11.2.0.2.

During 12c Grid Infrastructure installation, there is an option to configure the Grid Infrastructure Management Repository.


If you choose YES, you will see an instance named -MGMTDB running on one of the nodes in your cluster.

[oracle@oradbdev02]~% ps -ef|grep mdb_pmon
oracle    7580     1  0 04:57 ?        00:00:00 mdb_pmon_-MGMTDB

This is a single-instance Oracle database managed by Grid Infrastructure, and it fails over to a surviving node if its current node crashes. You can identify the current master using the command below:

-bash-4.1$ oclumon manage -get MASTER

Master = oradbdev02

This DB instance can be managed using srvctl commands. The current master can also be identified using the status command:

$srvctl status mgmtdb 
Database is enabled
Instance -MGMTDB is running on node oradbdev02

We can look at the mgmtdb configuration using:

$srvctl config mgmtdb
Database unique name: _mgmtdb
Database name: 
Oracle home: /home/oragrid
Oracle user: oracle
Spfile: +VDISK/_mgmtdb/spfile-MGMTDB.ora
Password file: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

Replace config with start/stop to start/stop the database.
Database files for the repository database are stored in the same location as the OCR/voting disk:

SQL> select file_name from dba_data_files union select member file_name from V$logfile;

FILE_NAME
------------------------------------------------------------
+VDISK/_MGMTDB/DATAFILE/sysaux.258.819384615
+VDISK/_MGMTDB/DATAFILE/sysgridhomedata.261.819384761
+VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687
+VDISK/_MGMTDB/DATAFILE/system.259.819384641
+VDISK/_MGMTDB/DATAFILE/undotbs1.257.819384613
+VDISK/_MGMTDB/ONLINELOG/group_1.263.819384803
+VDISK/_MGMTDB/ONLINELOG/group_2.264.819384805
+VDISK/_MGMTDB/ONLINELOG/group_3.265.819384807

We can verify the same using the oclumon command:

-bash-4.1$ oclumon manage -get reppath

CHM Repository Path = +VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687

Since this is stored at the same location as the voting disk, if you opt to configure the Management database you will need a voting disk of size >5G (3.2G+ is used by MGMTDB). During GI installation, I had tried adding a voting disk of 2G, but it failed saying it was of insufficient size. The error did not indicate that the space was needed for the Management repository, but now I think this was because the repository shares its location with the OCR/voting disk.
The default (and also minimum) size for the CHM repository is 2048 M. We can increase the repository size by issuing the following command:

-bash-4.1$ oclumon manage -repos changerepossize 4000
The Cluster Health Monitor repository was successfully resized.The new retention is 266160 seconds.

This command internally runs a resize command on the datafile, and we can see that it changed the datafile size from 2G to 4G:

SQL> select file_name,bytes/1024/1024,maxbytes/1024/1024,autoextensible from dba_data_files;

FILE_NAME					   BYTES/1024/1024 MAXBYTES/1024/1024 AUT
-------------------------------------------------- --------------- ------------------ ---
+VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687	      4000		    0 NO

If we try to reduce the size from 4G to 3G, it warns that all repository data will be deleted and, upon user confirmation, drops it:

-bash-4.1$ oclumon manage -repos changerepossize 3000
Warning: Entire data in Cluster Health Monitor repository will be deleted.Do you want to continue(Yes/No)?
Yes
The Cluster Health Monitor repository was successfully resized.The new retention is 199620 seconds.
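
From the two resize outputs (266160 s for 4000 MB, 199620 s for 3000 MB), the retention appears to scale linearly at roughly 66.54 seconds per MB. A small Python sketch, with the ratio inferred only from these two data points:

```python
# Retention appears proportional to repository size: 66.54 s per MB
# (266160 / 4000 = 199620 / 3000 = 66.54). Integer math avoids float rounding.
def chm_retention_seconds(repos_size_mb):
    return repos_size_mb * 6654 // 100

print(chm_retention_seconds(4000))  # 266160, matching the first resize
print(chm_retention_seconds(3000))  # 199620, matching the second resize
```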

Trace files for the DB are stored under DIAG_HOME/_mgmtdb/-MGMTDB/trace, and the instance alert log can also be found at this location. Since the file names start with -MGMTDB*, we need to prefix them with ./ to access them, e.g.

[oracle@oradbdev02]~/diag/rdbms/_mgmtdb/-MGMTDB/trace% vi -MGMTDB_mmon_7670.trc
VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 17 2012 10:23:31)
Unknown option argument: "-MGMTDB_mmon_7670.trc"
More info with: "vim -h"
[oracle@oradbdev02]~/diag/rdbms/_mgmtdb/-MGMTDB/trace% vi ./-MGMTDB_mmon_7670.trc

Sample output from a 3-node RAC setup:

[oracle@oradbdev02]~% oclumon dumpnodeview -allnodes

----------------------------------------
Node: oradbdev02 Clock: '13-07-23 07.19.00' SerialNo:1707 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 5.15 cpuq: 1 physmemfree: 469504 physmemtotal: 7928104 mcache: 5196464 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 51 ios: 9 swpin: 0 swpout: 0 pgin: 134 pgout: 140 netr: 223.768 netw: 176.523 procs: 461 rtprocs: 25 #fds: 24704 #sysfdlimit: 779448 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'oraagent.bin(7090) 2.59' topprivmem: 'java(7247) 149464' topshm: 'ora_mman_snowy1(7783) 380608' topfd: 'ocssd.bin(6249) 273' topthread: 'crsd.bin(6969) 42' 

----------------------------------------
Node: oradbdev03 Clock: '13-07-23 07.19.02' SerialNo:47 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 3.65 cpuq: 2 physmemfree: 1924468 physmemtotal: 7928104 mcache: 4529232 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 1 iow: 83 ios: 17 swpin: 0 swpout: 0 pgin: 45 pgout: 55 netr: 67.086 netw: 55.042 procs: 373 rtprocs: 22 #fds: 21280 #sysfdlimit: 779448 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'osysmond.bin(19281) 1.99' topprivmem: 'ocssd.bin(19323) 83528' topshm: 'ora_mman_snowy2(20306) 261508' topfd: 'ocssd.bin(19323) 249' topthread: 'crsd.bin(19617) 40' 

----------------------------------------
Node: oradbdev04 Clock: '13-07-23 07.18.58' SerialNo:1520 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 3.15 cpuq: 1 physmemfree: 1982828 physmemtotal: 7928104 mcache: 4390440 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 25 ios: 4 swpin: 0 swpout: 0 pgin: 57 pgout: 27 netr: 81.148 netw: 41.761 procs: 355 rtprocs: 24 #fds: 20064 #sysfdlimit: 779450 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'ocssd.bin(6745) 2.00' topprivmem: 'ocssd.bin(6745) 83408' topshm: 'ora_mman_snowy3(8168) 381768' topfd: 'ocssd.bin(6745) 247' topthread: 'crsd.bin(7202) 40'

You can learn more about oclumon usage by referring to Oclumon Command Reference

I faced an error in my setup where I was getting an ORA-28000 error while using the oclumon command. I tried unlocking the account, but it didn't succeed:

oclumon dumpnodeview

dumpnodeview: Node name not given. Querying for the local host
CRS-9118-Grid Infrastructure Management Repository connection error 
 ORA-28000: the account is locked

SQL> alter user chm account unlock;

User altered.

dumpnodeview: Node name not given. Querying for the local host
CRS-9118-Grid Infrastructure Management Repository connection error 
 ORA-01017: invalid username/password; logon denied

This issue occurred because post-configuration tasks had failed during GI installation. The solution is to run mgmtca from the grid home, which fixed the issue by unlocking the users and setting their passwords. A wallet is configured so that oclumon can access the repository without hard-coding the password.

[main] [ 2013-07-23 05:32:41.619 UTC ] [Mgmtca.main:102]  Running mgmtca
[main] [ 2013-07-23 05:32:41.651 UTC ] [Mgmtca.execute:192]  Adding internal user1
[main] [ 2013-07-23 05:32:41.653 UTC ] [Mgmtca.execute:194]  Adding internal user2
[main] [ 2013-07-23 05:32:42.028 UTC ] [Mgmtca.isMgmtdbOnCurrentNode:306]  Management DB is running on blr-devdb-003local node is blr-devdb-003
[main] [ 2013-07-23 05:32:42.074 UTC ] [MgmtWallet.createWallet:54]  Wallet created
[main] [ 2013-07-23 05:32:42.084 UTC ] [Mgmtca.execute:213]  MGMTDB Wallet created
[main] [ 2013-07-23 05:32:42.085 UTC ] [Mgmtca.execute:214]  Adding user/passwd to MGMTDB Wallet
[main] [ 2013-07-23 05:32:42.210 UTC ] [MgmtWallet.terminate:122]  Wallet closed
[main] [ 2013-07-23 05:32:42.211 UTC ] [Mgmtca.execute:227]  Unlocking user and setting password in database
[main] [ 2013-07-23 05:32:42.211 UTC ] [Mgmtjdbc.connect:66]  Connection String=jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/home/oragrid/bin/oracle)(ARGV0=oracle-MGMTDB)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(ENVS='ORACLE_HOME=/home/oragrid,ORACLE_SID=-MGMTDB')))
[main] [ 2013-07-23 05:32:42.823 UTC ] [Mgmtjdbc.connect:72]  Connection Established

These are the two internal users referred to above:

select username,account_status from dba_users where username like 'CH%';

USERNAME		       ACCOUNT_STATUS
------------------------------ --------------------------------
CHM			       OPEN
CHA			       OPEN

12c: Exporting Database Views as Tables

Starting with Oracle Database 12c, you can export a view to be imported as a table. There is no need to individually export each underlying table; Data Pump dumps a table with the same columns as the view and with row data fetched from the view.
It also exports objects dependent on the view, such as grants and constraints. To use this feature, we need the parameter VIEWS_AS_TABLES.

Let’s see this feature in action using an example.

We have created a view on the EMP and DEPT tables to show employee details along with the manager name:

create view emp_view as select emp.EMPNO, emp.ENAME , emp.JOB,mgr.ename MGRNAME, emp.HIREDATE,emp.SAL ,emp.COMM,dept.DNAME DEPTNAME FROM
EMP emp,DEPT dept,EMP mgr
where emp.deptno=dept.deptno 
and mgr.empno(+)=emp.mgr order by 1;

Create a directory object to store the Data Pump dump file:

create directory dpdir as '/home/oracle/datapump';

Now we take a Data Pump export, specifying the view name in VIEWS_AS_TABLES:

$ expdp system views_as_tables=scott.emp_view directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_exp.log

Export: Release 12.1.0.1.0 - Production on Mon Jul 22 12:05:26 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** views_as_tables=scott.emp_view directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_exp.log 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "SCOTT"."EMP_VIEW"                          8.781 KB      14 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /home/oracle/datapump/emp_view.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jul 22 12:05:52 2013 elapsed 0 00:00:20

We can see that this has exported 14 rows. To see if it actually works, we will import it into a different schema (REMAP_SCHEMA does the trick here):

$impdp system remap_schema=scott:amitbans directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_imp.log

Import: Release 12.1.0.1.0 - Production on Mon Jul 22 12:36:33 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** remap_schema=scott:amitbans directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_imp.log 
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
. . imported "AMITBANS"."EMP_VIEW"                       8.781 KB      14 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Mon Jul 22 12:36:48 2013 elapsed 0 00:00:09

Let’s verify the data

SQL> show user
USER is "AMITBANS"
SQL> select * from emp_view;

     EMPNO ENAME      JOB	MGRNAME    HIREDATE	    SAL       COMM DEPTNAME
---------- ---------- --------- ---------- --------- ---------- ---------- --------------
      7369 SMITH      CLERK	FORD	   17-DEC-80	    800 	   RESEARCH
      7499 ALLEN      SALESMAN	BLAKE	   20-FEB-81	   1600        300 SALES
      7521 WARD       SALESMAN	BLAKE	   22-FEB-81	   1250        500 SALES
      7566 JONES      MANAGER	KING	   02-APR-81	   2975 	   RESEARCH
      7654 MARTIN     SALESMAN	BLAKE	   28-SEP-81	   1250       1400 SALES
      7698 BLAKE      MANAGER	KING	   01-MAY-81	   2850 	   SALES
      7782 CLARK      MANAGER	KING	   09-JUN-81	   2450 	   ACCOUNTING
      7788 SCOTT      ANALYST	JONES	   19-APR-87	   3000 	   RESEARCH
      7839 KING       PRESIDENT 	   17-NOV-81	   5000 	   ACCOUNTING
      7844 TURNER     SALESMAN	BLAKE	   08-SEP-81	   1500 	 0 SALES
      7876 ADAMS      CLERK	SCOTT	   23-MAY-87	   1100 	   RESEARCH
      7900 JAMES      CLERK	BLAKE	   03-DEC-81	    950 	   SALES
      7902 FORD       ANALYST	JONES	   03-DEC-81	   3000 	   RESEARCH
      7934 MILLER     CLERK	CLARK	   23-JAN-82	   1300 	   ACCOUNTING

14 rows selected.

We can see from the dictionary that this is now imported as a table and not a view:

SQL> select * from tab;

TNAME			       TABTYPE	CLUSTERID
------------------------------ ------- ----------
EMP_VIEW		       TABLE

There are a few restrictions on using this feature:

-The view must exist, and it must be a relational view with only scalar, non-LOB columns.
-The VIEWS_AS_TABLES parameter cannot be used with the TRANSPORTABLE=ALWAYS parameter.

Reference

http://docs.oracle.com/cd/E16655_01/server.121/e17639/dp_export.htm#BEHDIADG 

12c: SQL*Plus Displays Last Login Time for Non-SYS Users

The 12c database has introduced a pretty nifty security feature which displays the last login time for non-SYS users. For example, if I connect to the scott user as below, it displays that I last logged in at Mon Jul 22 2013 09:06:07 +00:00. The time is displayed in the local time zone (UTC in this case).

[oracle@oradbdev01]~% sqlplus scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:14:25 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Mon Jul 22 2013 09:06:07 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If you wish to disable this feature, you can use the -nologintime option:

 [oracle@oradbdev01]~% sqlplus -nologintime scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:16:37 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Using connect within SQL*Plus will not display the last login time, but it will still update the recorded last login time.

12c: Limiting PGA with pga_aggregate_limit

This is an interesting feature for Oracle DBAs. Until now there has been no way of limiting the total PGA used by an Oracle instance. _pga_max_size (a hidden parameter) could be used to limit per-process memory, and SGA_TARGET has always been a hard limit for the Oracle SGA, but PGA_AGGREGATE_TARGET only controlled the allocation of PGA memory to each process via Oracle's algorithm. (I know some folks who thought pga_aggregate_target was a hard limit, but that is incorrect.)

Because of this, many times we end up in a situation where we get paged for high CPU/low memory utilization and find that heavy swapping is being caused by excessive PGA usage. Since kswapd (Linux) is busy writing to swap, you will also see high CPU utilization.

At this moment, your memory might look like this:

$free -m
             total       used       free     shared    buffers     cached
Mem:         16057      15953        103          0          4        209
-/+ buffers/cache:      15739        317
Swap:        12287       5359       6928

PGA_AGGREGATE_LIMIT is supposed to put a hard limit on PGA memory usage. But unlike SGA_TARGET, this doesn’t mean that processes will reduce their memory consumption; instead, exceeding the limit leads to process/session failure.

As per the Oracle docs, Oracle will terminate sessions in the following order:

-Calls for sessions that are consuming the most untunable PGA memory are aborted.

-If PGA memory usage is still over the PGA_AGGREGATE_LIMIT, then the sessions and processes that are consuming the most untunable PGA memory are terminated.

Note: I don’t see much difference between the above conditions 🙂

PGA_AGGREGATE_LIMIT defaults to the greatest of the following values:

-2 GB (the absolute default)
-200% of the PGA_AGGREGATE_TARGET value
-3 MB times the value of the PROCESSES parameter

It will not, however, exceed 120% of the physical memory size minus the total SGA size.

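
Those defaulting rules can be sketched in Python (a rough model with illustrative values; the cap term is my reading of the "1.2 X RAM - SGA" rule above):

```python
GB = 1024 ** 3
MB = 1024 ** 2

def default_pga_aggregate_limit(pga_aggregate_target, processes,
                                physical_ram, sga_size):
    # Greatest of the three floors, capped at 120% of RAM minus the SGA.
    floor = max(2 * GB,                    # 2 GB absolute default
                2 * pga_aggregate_target,  # 200% of PGA_AGGREGATE_TARGET
                3 * MB * processes)        # 3 MB per process
    cap = int(1.2 * physical_ram) - sga_size
    return min(floor, cap)

# e.g. 4G target, 300 processes, 16G RAM, 8G SGA -> limited by the 8G floor
print(default_pga_aggregate_limit(4 * GB, 300, 16 * GB, 8 * GB) // GB)  # 8
```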
To simulate this, I ran the following test using big_table (code from AskTom):

create table big_table tablespace data_4m
as
select rownum id,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from all_objects a
 where 1=0
/

alter table big_table nologging;
--Load the table
declare
 l_cnt number;
 l_rows number := &1;
begin
 insert /*+ append */
 into big_table
 select rownum,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from all_objects a
 where rownum <= &1;
l_cnt := sql%rowcount;
commit;
while (l_cnt < l_rows)
 loop
 insert /*+ APPEND */ into big_table
 select rownum+l_cnt,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from big_table
 where rownum <= l_rows-l_cnt;
 l_cnt := l_cnt + sql%rowcount;
 commit;
 end loop;
end;
/ 
--gather stats
exec dbms_stats.gather_table_stats ( ownname => 'AMIT',tabname => 'BIG_TABLE' )

Let’s check the size of the table:

select owner,table_name,num_rows from dba_tables where table_name='BIG_TABLE';

OWNER		     TABLE_NAME 	    NUM_ROWS
-------------------- -------------------- ----------
AMIT          	     BIG_TABLE		   110000000

PGA_AGGREGATE_LIMIT is set to 2G (the default value). It can’t be lowered, as 2048M is the least value allowed:

SQL> alter system set pga_aggregate_limit=1g;
alter system set pga_aggregate_limit=1g
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00093: pga_aggregate_limit must be between 2048M and 100000G

Before proceeding with this test, PGA_AGGREGATE_TARGET was bumped up to 30G so that processes could get more PGA memory.

I created a shell script with the following code:

cat 1.sh
modname=$1
sqlplus -s amit/amit <<EOF
set pages 1000 lines 100
set autot traceonly
exec dbms_application_info.set_module(module_name=>'${modname}',action_name=>'')
 select * from big_table t1,big_table t2 where t1.object_id =t2.object_id;
exit
EOF

This script is executed via a for loop, launching 20 sessions:

for ((i=1;i<=20;i++))
do
nohup sh 1.sh PGA_LIMIT${i} &
done

In a separate session, I ran the following queries:

set pages 1000 lines 200
col event for a30
col module for a30
select s.sid,s.serial#,p.spid,p.pga_used_mem/1024/1024 PGA_MB,s.status,s.event,s.module from v$session s ,v$process p where s.paddr=p.addr and s.module like 'PGA_LIMIT%' and wait_class<>'Idle' order by module
/
select sum(pga_used_mem)/1024/1024 pga_used_mb from v$process;

The output of the query looked like this:

SQL> @pga

       SID    SERIAL# SPID			   PGA_MB STATUS   EVENT			  MODULE
---------- ---------- ------------------------ ---------- -------- ------------------------------ ------------------------------
       262	  243 20941		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT1
       390	   77 20957		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT10
       154	   11 20938		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT11
	35	   77 20921		       130.944379 ACTIVE   direct path write temp	  PGA_LIMIT12
       266	   35 20906		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT13
       141	   17 20917		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT14
       263	   59 20945		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT15
	25	   55 20943		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT16
       135	  119 20942		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT17
       383	  217 20923		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT18
       371	  391 20937		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT19
       382	  163 20955		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT2
	29	   35 20954		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT20
       144	  103 20947		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT3
       136	   15 20952		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT4
	38	    5 20926		       130.882039 ACTIVE   direct path write temp	  PGA_LIMIT5
       257	  139 20902		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT6
       260	   73 20939		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT7
       381	   13 20951		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT8
	23	  101 20915		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT9

20 rows selected.

Elapsed: 00:00:00.00

PGA_USED_MB
-----------
 2684.47882

Note that this is more than the 2G limit and we still didn’t see failures. The reason is that we had set pga_aggregate_target to 30G: checking the parameter value for pga_aggregate_limit still shows 2G, but it seems the effective limit has risen to a higher value (200% of the 30G pga_aggregate_target). At this point, I reduced pga_aggregate_target to 500M. Immediately, some of the DB sessions were automatically cleared. Excerpt from the alert log:

ALTER SYSTEM SET pga_aggregate_target=500M SCOPE=BOTH;
2013-07-07 11:27:34.015000 +00:00
Errors in file /home/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_20923.trc  (incident=12476):
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
Errors in file /home/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_20939.trc  (incident=12433):
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Checking V$SESSION confirms the same:

      SID    SERIAL# SPID			   PGA_MB STATUS   EVENT			  MODULE
---------- ---------- ------------------------ ---------- -------- ------------------------------ ------------------------------
       262	  243 20941		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT1
       390	   77 20957		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT10
       266	   35 20906		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT13
       141	   17 20917		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT14
	25	   55 20943		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT16
       135	  119 20942		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT17
       382	  163 20955		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT2
       144	  103 20947		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT3
       136	   15 20952		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT4
       257	  139 20902		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT6
       381	   13 20951		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT8
	23	  101 20915		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT9

12 rows selected.

PGA_USED_MB
-----------
 1637.30552

Since the limit was back to 2G, sessions were cleared automatically. login_time is not the criterion for killing sessions, as PGA_LIMIT7, 11 and 12 are among those missing from the remaining sessions.