
12c: SQL*Plus Displays Last Login Time For Non-SYS Users

Oracle Database 12c has introduced a pretty nifty security feature which displays the last login time for non-SYS users. For example, if I connect to the SCOTT user as shown below, SQL*Plus displays that I last logged in at Mon Jul 22 2013 09:06:07 +00:00. The time is displayed in the local time zone format (UTC in this case).

[oracle@oradbdev01]~% sqlplus scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:14:25 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Mon Jul 22 2013 09:06:07 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If you wish to disable this feature, you can use the -nologintime option:

 [oracle@oradbdev01]~% sqlplus -nologintime scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:16:37 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Using CONNECT within SQL*Plus will not display the last login time, but it will still update the last successful login time.
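
If you want to check the last login time from inside the database rather than from the SQL*Plus banner, 12c also exposes it through the DBA_USERS view. A minimal sketch (the SCOTT user here is just the example from above):

-- LAST_LOGIN is a new TIMESTAMP WITH TIME ZONE column in 12c;
-- it is NULL for users that have never logged in
select username,
       last_login
  from dba_users
 where username = 'SCOTT';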

12c: Limiting PGA with PGA_AGGREGATE_LIMIT

This is an interesting feature for Oracle DBAs. Until now there has been no way of limiting the total PGA used by an Oracle instance. The hidden parameter _pga_max_size could be used to limit per-process memory, and SGA_TARGET acted as a hard limit for the Oracle SGA, but PGA_AGGREGATE_TARGET only controlled how PGA memory was allocated to each process via Oracle's internal algorithm. (I know some folks who thought PGA_AGGREGATE_TARGET was a hard limit, but that is incorrect.)

Because of this, we often end up in a situation where we get paged for high CPU or low free memory and find that heavy swapping is being caused by excessive PGA usage. Since kswapd (Linux) is busy writing to swap, you will also see high CPU utilization.

At that point, your memory might look like this:

$ free -m
             total       used       free     shared    buffers     cached
Mem:         16057      15953        103          0          4        209
-/+ buffers/cache:      15739        317
Swap:        12287       5359       6928

PGA_AGGREGATE_LIMIT is supposed to put a hard limit on PGA memory usage. But unlike SGA_TARGET, this does not mean that processes will reduce their memory consumption; instead, exceeding the limit will lead to process/session failures.
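
To see how close an instance is to the limit, instance-wide PGA usage can be compared against the parameter with a quick query. A rough sketch using V$PGASTAT (verify the statistic names on your version):

-- instance-wide PGA usage vs. the configured limit
select name, round(value/1024/1024) mb
  from v$pgastat
 where name in ('total PGA allocated', 'total PGA inuse', 'maximum PGA allocated');

show parameter pga_aggregate_limit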

As per the Oracle documentation, Oracle will terminate sessions in the following order:

- Calls for sessions that are consuming the most untunable PGA memory are aborted.

- If PGA memory usage is still over PGA_AGGREGATE_LIMIT, then the sessions and processes that are consuming the most untunable PGA memory are terminated.

Note: I don’t see any difference in above conditions 🙂

PGA_AGGREGATE_LIMIT will be set to the highest of the following values:

- 2 GB (default value)
- 200% of the PGA_AGGREGATE_TARGET value
- 3 MB times the value of the PROCESSES parameter

It is, however, capped so that it does not exceed 120% of the physical memory size minus the total SGA size (roughly 1.2 x RAM - SGA).
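
To see which of these inputs applies on your instance, the relevant parameters can be pulled in one go (a simple sketch):

-- inputs that drive the default PGA_AGGREGATE_LIMIT
select name, display_value
  from v$parameter
 where name in ('pga_aggregate_limit', 'pga_aggregate_target', 'processes');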
To simulate this, I ran the following test using big_table (code from AskTom):

create table big_table tablespace data_4m
as
select rownum id,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from all_objects a
 where 1=0
/

alter table big_table nologging;
--Load the table
declare
 l_cnt number;
 l_rows number := &1;
begin
 insert /*+ append */
 into big_table
 select rownum,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from all_objects a
 where rownum <= &1;
l_cnt := sql%rowcount;
commit;
while (l_cnt < l_rows)
 loop
 insert /*+ APPEND */ into big_table
 select rownum+l_cnt,
 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 OBJECT_ID, DATA_OBJECT_ID,
 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 TIMESTAMP, STATUS, TEMPORARY,
 GENERATED, SECONDARY
 from big_table
 where rownum <= l_rows-l_cnt;
 l_cnt := l_cnt + sql%rowcount;
 commit;
 end loop;
end;
/ 
--gather stats
exec dbms_stats.gather_table_stats ( ownname => 'AMIT',tabname => 'BIG_TABLE' )

Let's check the size of the table:

select owner,table_name,num_rows from dba_tables where table_name='BIG_TABLE';

OWNER		     TABLE_NAME 	    NUM_ROWS
-------------------- -------------------- ----------
AMIT          	     BIG_TABLE		   110000000

PGA_AGGREGATE_LIMIT is set to 2G (the default value). It can't be lowered further, as 2G is the lowest value allowed for it:

SQL> alter system set pga_aggregate_limit=1g;
alter system set pga_aggregate_limit=1g
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00093: pga_aggregate_limit must be between 2048M and 100000G

Before proceeding with this test, PGA_AGGREGATE_TARGET was bumped up to 30G so that processes could get more PGA memory.
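
(The bump itself was just a dynamic parameter change, something along these lines:)

alter system set pga_aggregate_target=30g;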

I created a shell script with the following code:

cat 1.sh
modname=$1
sqlplus -s amit/amit <<EOF
set pages 1000 lines 100
set autot traceonly
exec dbms_application_info.set_module(module_name=>'${modname}',action_name=>'')
 select * from big_table t1,big_table t2 where t1.object_id =t2.object_id;
exit
EOF

This script is executed via a for loop to spawn 20 sessions:

for ((i=1;i<=20;i++))
do
nohup sh 1.sh PGA_LIMIT${i} &
done

In a separate session, I ran the following queries:

set pages 1000 lines 200
col event for a30
col module for a30
select s.sid, s.serial#, p.spid, p.pga_used_mem/1024/1024 pga_mb,
       s.status, s.event, s.module
  from v$session s, v$process p
 where s.paddr = p.addr
   and s.module like 'PGA_LIMIT%'
   and s.wait_class <> 'Idle'
 order by s.module
/
select sum(pga_used_mem)/1024/1024 pga_used_mb from v$process;

The output of the query looked like this:

SQL> @pga

       SID    SERIAL# SPID			   PGA_MB STATUS   EVENT			  MODULE
---------- ---------- ------------------------ ---------- -------- ------------------------------ ------------------------------
       262	  243 20941		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT1
       390	   77 20957		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT10
       154	   11 20938		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT11
	35	   77 20921		       130.944379 ACTIVE   direct path write temp	  PGA_LIMIT12
       266	   35 20906		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT13
       141	   17 20917		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT14
       263	   59 20945		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT15
	25	   55 20943		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT16
       135	  119 20942		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT17
       383	  217 20923		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT18
       371	  391 20937		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT19
       382	  163 20955		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT2
	29	   35 20954		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT20
       144	  103 20947		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT3
       136	   15 20952		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT4
	38	    5 20926		       130.882039 ACTIVE   direct path write temp	  PGA_LIMIT5
       257	  139 20902		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT6
       260	   73 20939		       130.816335 ACTIVE   direct path write temp	  PGA_LIMIT7
       381	   13 20951		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT8
	23	  101 20915		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT9

20 rows selected.

Elapsed: 00:00:00.00

PGA_USED_MB
-----------
 2684.47882

Note that this is more than the 2G limit, and we still didn't see any failures. The reason is that PGA_AGGREGATE_TARGET is now set to 30G: checking the parameter value of PGA_AGGREGATE_LIMIT still shows 2G, but the effective limit appears to have risen to a higher value (since 200% of PGA_AGGREGATE_TARGET is now much larger). At this point I reduced PGA_AGGREGATE_TARGET to 500M, and immediately some of the database sessions were automatically terminated. Excerpt from the alert log:

ALTER SYSTEM SET pga_aggregate_target=500M SCOPE=BOTH;
2013-07-07 11:27:34.015000 +00:00
Errors in file /home/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_20923.trc  (incident=12476):
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
Errors in file /home/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_20939.trc  (incident=12433):
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Checking V$SESSION confirms the same:

      SID    SERIAL# SPID			   PGA_MB STATUS   EVENT			  MODULE
---------- ---------- ------------------------ ---------- -------- ------------------------------ ------------------------------
       262	  243 20941		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT1
       390	   77 20957		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT10
       266	   35 20906		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT13
       141	   17 20917		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT14
	25	   55 20943		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT16
       135	  119 20942		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT17
       382	  163 20955		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT2
       144	  103 20947		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT3
       136	   15 20952		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT4
       257	  139 20902		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT6
       381	   13 20951		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT8
	23	  101 20915		       130.878675 ACTIVE   direct path write temp	  PGA_LIMIT9

12 rows selected.

PGA_USED_MB
-----------
 1637.30552

Since the limit is set to 2G, it has cleared sessions automatically. Login time does not appear to be the criterion used for killing sessions, as we can see PGA_LIMIT7, 11 and 12 (among others) missing from the remaining sessions.
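
To double-check that the kill order was not simply based on session age, the logon times of the surviving sessions can be compared with a quick query like this (a sketch):

select s.sid, s.module, to_char(s.logon_time, 'HH24:MI:SS') logon_time
  from v$session s
 where s.module like 'PGA_LIMIT%'
 order by s.logon_time;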

 

ORA-7445 core dump [kokscold()+849] from EM 12c

The first problem reported by my EM 12c is an ORA-7445 error. After checking the alert log, the following is the exact message:

Sun Feb 19 12:09:41 2012
Errors in file /u01/app/oracle/diag/rdbms/pkldb/pkldb/trace/pkldb_j035_14712.trc (incident=52961):
ORA-07445: exception encountered: core dump [kokscold()+849] [SIGSEGV] [ADDR:0x50] [PC:0x115E16F] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/pkldb/pkldb/incident/incdir_52961/pkldb_j035_14712_i52961.trc

From the incident trace file:

*** 2012-02-19 12:09:43.016
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x3, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=azmq8z48r7wk4) -----
/* SQL Analyze(117,1) */ MERGE INTO MGMT_ARU_PATCH_RECOMMENDATIONS p USING (SELECT :1  as patch_guid, :2  as PATCH_TYPE, :3  as BUG_ID, :4  as REQUEST_ID, :5  as PLATFORM_ID, :6  as PRODUCT_ID, :7  as RELEASE_ID , :8  as LANGUAGE_ID, :9  as CLASSIFICATION_ID, :10  as PLATFORM, :11  as PRODUCT, :12  as RELEASE, :13  as LANGUAGE, :14  as CLASSIFICATION, :15  as ARU_TARGET_TYPE, :16  as ABSTRACT, :17  as RELEASE_DATE, :18  as LOAD_TIMESTAMP, :19  as TEXT_CONTENT, :20  as SUPPORT_LEVEL, :21  as SUPPORT_LEVEL_ID FROM DUAL) n ON (p.patch_guid = n.patch_guid) WHEN MATCHED THEN UPDATE SET p.PATCH_TYPE = n.PATCH_TYPE, p.BUG_ID = n.BUG_ID, p.REQUEST_ID = n.REQUEST_ID, p.PLATFORM_ID = n.PLATFORM_ID, p.PRODUCT_ID = n.PRODUCT_ID, p.RELEASE_ID = n.RELEASE_ID, p.LANGUAGE_ID = n.LANGUAGE_ID, p.CLASSIFICATION_ID  = n.CLASSIFICATION_ID, p.PLATFORM = n.PLATFORM, p.PRODUCT = n.PRODUCT, p.RELEASE = n.RELEASE, p.LANGUAGE = n.LANGUAGE, p.CLASSIFICATION = n.CLASSIFICATION, p.ARU_TARGET_TYPE = n.ARU_TARGET_TYPE, p.ABSTRACT = n.ABSTRACT, p.RELEASE_DATE = n.RELEASE_DATE,p.LOAD_TIMESTAMP = n.LOAD_TIMESTAMP, p.TEXT_CONTENT = n.TEXT_CONTENT, p.SUPPORT_LEVEL = n.SUPPORT_LEVEL, p.SUPPORT_LEVEL_ID = n.SUPPORT_LEVEL_ID WHEN NOT MATCHED THEN  INSERT (PATCH_GUID, PATCH_TYPE, BUG_ID, REQUEST_ID, PLATFORM_ID, PRODUCT_ID, RELEASE_ID, LANGUAGE_ID, CLASSIFICATION_ID, PLATFORM, PRODUCT, RELEASE, LANGUAGE, CLASSIFICATION, ARU_TARGET_TYPE, ABSTRACT, RELEASE_DATE,LOAD_TIMESTAMP, TEXT_CONTENT, SUPPORT_LEVEL, SUPPORT_LEVEL_ID) VALUES (n.PATCH_GUID, n.PATCH_TYPE, n.BUG_ID, n.REQUEST_ID, n.PLATFORM_ID, n.PRODUCT_ID, n.RELEASE_ID, n.LANGUAGE_ID, n.CLASSIFICATION_ID, n.PLATFORM, n.PRODUCT, n.RELEASE, n.LANGUAGE, n.CLASSIFICATION, n.ARU_TARGET_TYPE, n.ABSTRACT, n.RELEASE_DATE,n.LOAD_TIMESTAMP, n.TEXT_CONTENT, n.SUPPORT_LEVEL, n.SUPPORT_LEVEL_ID)


----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x80bd47c0     11774  package body SYS.DBMS_SQLTUNE_INTERNAL
0x80b50a80         7  SYS.WRI$_ADV_SQLTUNE
0x82d550f8       545  package body SYS.PRVT_ADVISOR
0x82d550f8      2613  package body SYS.PRVT_ADVISOR
0x80bf0da8       241  package body SYS.DBMS_ADVISOR
0x81987268       772  package body SYS.DBMS_SQLTUNE
0x8187d290         4  anonymous block

*** 2012-02-19 12:09:42.839
> (0x115e16f) mov 0x50(%r11),%rdx
  (0x115e173) movzbl 0x1(%r13),%ecx
  (0x115e178) movzbl 0x1(%r15),%r9d
  (0x115e17d) mov 0x8(%rdx),%edx
  (0x115e180) lea 0x10(%r13),%r8

The error seems to be coming from an automated scheduled SQL Tuning task (I have yet to figure that out conclusively). Searching MOS gives a reference to bug 9594372, which shows that the problem is with the MERGE statement and says that applying the patch will fix the issue.
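
Until the patch can be applied, one possible workaround (my own suggestion, not from the MOS note) is to stop the automatic SQL Tuning Advisor task that triggers the failing statement. A sketch using DBMS_AUTO_TASK_ADMIN:

-- check whether the automatic SQL tuning task is currently enabled
select client_name, status
  from dba_autotask_client
 where client_name = 'sql tuning advisor';

-- disable it in all maintenance windows (DBMS_AUTO_TASK_ADMIN.ENABLE reverses this)
begin
  dbms_auto_task_admin.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
end;
/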

While checking the error from EM 12c, it also offers a button to search for the error on MOS (I have already installed the MOS plug-in), which I found very handy as I don't need to log in to a separate MOS window.

Copying Oracle Scheduler jobs with Arguments

Oracle provides the DBMS_SCHEDULER.COPY_JOB procedure to copy scheduler jobs, which works great. The problem is that it cannot copy jobs which have arguments.

The need to copy jobs can arise in the following scenarios:

1) Runaway jobs – when a scheduler job shows as running without any SID. This means no OS session is present, and the job will not clear on its own. Such jobs have to be recreated to resolve the issue.
2) Jobs that cannot be stopped – many times jobs cannot be stopped with dbms_scheduler.stop_job(<jobname>,true) as they error out with an "Internal error".
3) We need a similar job but under a different schema or with a different name.

I have written the code below, which prompts for the job owner and job name and recreates the job along with its arguments (using an intermediate copy):

declare
  l_owner   varchar2(30) := '&job_owner';
  l_old_job varchar2(30) := '&job_name';
  l_new_job varchar2(30);
  cnt       integer;
  -- arguments of the job, read back and re-applied after the copy
  cursor c_arg is
    select argument_position, value
      from dba_scheduler_job_args
     where job_name = l_old_job
       and owner = l_owner
     order by argument_position;
begin
  l_new_job := 'INTERMEDIATE_CPY';
  -- drop a leftover intermediate copy, if any
  select count(*) into cnt
    from dba_scheduler_jobs
   where job_name = l_new_job
     and owner = l_owner;
  if (cnt > 0) then
    dbms_output.put_line('dropping scheduler job INTERMEDIATE_CPY');
    dbms_scheduler.drop_job(l_owner || '.' || l_new_job, true);
  end if;
  -- copy original -> intermediate, drop original, copy intermediate -> original
  sys.dbms_scheduler.copy_job(l_owner || '.' || l_old_job, l_owner || '.' || l_new_job);
  sys.dbms_scheduler.drop_job(l_owner || '.' || l_old_job, true);
  sys.dbms_scheduler.copy_job(l_owner || '.' || l_new_job, l_owner || '.' || l_old_job);
  -- re-apply the argument values, which copy_job does not carry over
  for v_arg in c_arg loop
    dbms_output.put_line('Setting ARGUMENT_POSITION ' || v_arg.argument_position || ' to value ' || v_arg.value);
    sys.dbms_scheduler.set_job_argument_value(
      job_name          => l_owner || '.' || l_old_job,
      argument_position => v_arg.argument_position,
      argument_value    => v_arg.value);
  end loop;
  -- copy_job creates the job disabled, so enable it and clean up the intermediate copy
  sys.dbms_scheduler.enable(l_owner || '.' || l_old_job);
  sys.dbms_scheduler.drop_job(l_owner || '.' || l_new_job, true);
end;
/
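
After running the block, the recreated job and its arguments can be verified with something like this (a quick sanity check, reusing the same substitution variables):

select job_name, enabled, state
  from dba_scheduler_jobs
 where owner = '&job_owner' and job_name = '&job_name';

select argument_position, value
  from dba_scheduler_job_args
 where owner = '&job_owner' and job_name = '&job_name'
 order by argument_position;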

In case you don't copy the arguments, the job runs will fail with status STOPPED and the reason "Job slave process was terminated". They can also generate the following ORA-07445 error:

ORA-07445: exception encountered: core dump [kpubsuuc()+197] [SIGSEGV] [Address not mapped to object] [0x000000000] [] []

11g: Multiple Failed Login Attempts Can Block New Application Connections

In Oracle Database 11g, a new feature/enhancement has the potential to bring down an application when multiple sessions hit the database with a wrong password.

As per Bug 7715339 – Logon failures causes “row cache lock” waits – Allow disable of logon delay [ID 7715339.8]

In 11g there is an intentional delay between allowing failed logon attempts to retry. After 3 successive failures a sleep delay is introduced, starting at 3 seconds and extending to 10 seconds max. During each delay the user X row cache lock is held in exclusive mode, preventing any concurrent logon attempt as user X (and preventing any other operation which would need the row cache lock for user X).

Let's simulate this using a test case. I am using an 11.2.0.3 database. Open two or more connections to the database server.

Connection 1

create user amit identified by amit;
grant connect to amit;
alter user amit profile MONITORING_PROFILE;

MONITORING_PROFILE has failed_login_attempts set to unlimited.
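
You can confirm the profile setting with a quick query (a simple sketch):

select profile, resource_name, limit
  from dba_profiles
 where profile = 'MONITORING_PROFILE'
   and resource_name = 'FAILED_LOGIN_ATTEMPTS';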

Create two scripts

$cat 1.sh
sqlplus -s amit/test123 <<EOF
EOF
$cat lock_db.sh
#!/bin/bash
for i in {1..1000}
do
nohup sh 1.sh &
done

I have set the loop to 1000, which you can increase or decrease depending on your PROCESSES limit; this is a test database with the PROCESSES parameter set to 150. Execute the script:

sh lock_db.sh 

Now if you go to Connection 2, you will see the following issues:

a) The database will not allow fresh connections for a short time, as you will get "ORA-00020: maximum number of processes (150) exceeded" errors. After that, new database connections become possible again.
b) You cannot connect to the AMIT user for a considerable time, even with the correct password. All such sessions will be waiting on library cache lock:

 select event,count(*) from V$session group by event order by 2;

EVENT								   COUNT(*)
---------------------------------------------------------------- ----------
Streams AQ: waiting for time management or cleanup tasks		  1
VKTM Logical Idle Wait							  1
Space Manager: slave idle wait						  1
SQL*Net message from client						  1
ges remote message							  1
ASM background timer							  1
GCR sleep								  1
smon timer								  1
pmon timer								  1
Streams AQ: qmn coordinator idle wait					  1
asynch descriptor resize						  1
Streams AQ: qmn slave idle wait 					  1
PING									  1
class slave wait							  2
gcs remote message							  2
DIAG idle wait								  2
wait for unread message on broadcast channel				  2
rdbms ipc message							 18
library cache lock							101

select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,KSUSEMNM module,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,lk.kgllkcnt lck_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
 from
  x$kgllk lk,  x$kglob ob,x$ksuse ses
  , v$session_wait w
where lk.kgllkhdl in
(select kgllkhdl from x$kgllk where kgllkreq >0 )
and ob.kglhdadr = lk.kgllkhdl
and lk.kgllkuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

      SID    SERIAL# USERNAME	 MODULE 			     OBJ_OWNER	     OBJ_NAME		LCK_CNT  LOCK_MODE   LOCK_REQ STATE		  EVENT 			  WAIT_TIME SECONDS_IN_WAIT
---------- ---------- ---------- ----------------------------------- --------------- --------------- ---------- ---------- ---------- ------------------- ------------------------------ ---------- ---------------
       153	   77		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	36	  101		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	16	  137		testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
       143	  203		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	26	  261		 testbox01		     84 		      1 	 3	    0 WAITED KNOWN TIME   library cache lock			119		  3

You will notice that the SECONDS_IN_WAIT value keeps increasing. Also, 'alter system kill session' takes a long time to kill a session (it seems as if hung). As per the bug, any operation involving the row cache lock for that user will not be allowed; I confirmed this by trying to reset the user's profile to DEFAULT, and that session was stuck too.

Until all these sessions are cleared from the database, you will not be allowed to log in as that user. This can severely impact applications, as fresh connections will not be possible, although pre-existing sessions will keep working. If you repeat the same experiment in 10g, you will not face a similar issue.

To disable this behavior, i.e. remove the sleep delay, set the following event:

event="28401 trace name context forever, level 1"

You can also set it dynamically using:

alter system set events '28401 trace name context forever, level 1';

To go back to the original behavior:

alter system set events '28401 trace name context off';

Bug 7715339 mentions that if FAILED_LOGIN_ATTEMPTS is set to some value, further attempts to log in will correctly fail immediately with no delay.

I tested this and found that it is not entirely true: my script was again able to spawn 101 sessions waiting on library cache lock, and a fresh connection with the correct password was again stuck, while the DBA_USERS view showed that the account was locked. The sessions did clear fairly quickly here, though, compared to the test case with FAILED_LOGIN_ATTEMPTS set to UNLIMITED.
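
For reference, this is roughly how the limit can be set on the profile and the account unlocked once the failing connections stop (a sketch; the value 10 is just an example):

alter profile MONITORING_PROFILE limit failed_login_attempts 10;
-- once the offending clients stop retrying, unlock the account
alter user amit account unlock;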

Changing CRS/Database timezone in 11.2.0.2 post install

I had installed an 11.2.0.2 RAC setup a few days back with an incorrect timezone: it had to be PDT, but I installed it with UTC. Stopping and starting the clusterware with the correct timezone in the environment didn't solve the issue.

In 11.2.0.2, Oracle stores the timezone information in the file $GRID_HOME/crs/install/s_crsconfig_<hostname>.txt. In my case the file looked like this:

cd /oragrid/product/11.2/crs/install
cat s_crsconfig_prod1.txt
TZ=UTC
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
TNS_ADMIN=
ORACLE_BASE=

To resolve the issue, we need to change TZ to US/Pacific in this file on all nodes and restart the clusterware. The entry would then look like:

TZ=US/Pacific

After restarting the clusterware, both the database and the clusterware start with the correct timezone.
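
A quick way to sanity-check the timezone the instance came up with (assuming the database process inherited the new TZ) is:

-- SYSTIMESTAMP reflects the OS timezone of the instance process
select systimestamp from dual;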

In case you wish to set a different timezone only for the Oracle database, that is possible using the srvctl command, e.g.:

srvctl setenv database -d orcl -t TZ=US/Pacific

You can confirm this with the getenv command:

[oracle@prod1]~% srvctl getenv database -d orcl
orcl:
TZ=US/Pacific

This would require a database bounce. Also note that if the database is started manually (outside of srvctl), it will not start with this timezone. To unset the variable, use the following command:

[oracle@prod1]~% srvctl unsetenv database -d orcl -t TZ
[oracle@prod1]~% srvctl getenv database -d orcl
orcl:

Hope this helps