Amit Bansal

Mview Complete Refresh and Atomic_refresh parameter

This post discusses an issue I faced while working on materialized view (mview) creation at a new site (database). Starting with Oracle 10g, the way atomic_refresh works with a complete refresh has changed.

The atomic_refresh parameter controls whether either all of the tasks of a transaction are performed or none of them (atomicity). If set to TRUE, all refreshes are done in one transaction. If set to FALSE, the refresh of each specified materialized view is done in a separate transaction. If set to FALSE, Oracle can also optimize the refresh by using parallel DML and truncate DDL on the materialized views.

In 9i (atomic_refresh => true), if a single mview was refreshed, it was truncated and then refreshed (which was actually not atomic). If multiple mviews were refreshed (as part of a refresh group), rows were deleted and then the insert was done.

As part of the change in 10g, if atomic_refresh is set to true (the default), rows are deleted one by one and then inserted, whether it is a single mview or multiple mviews in a refresh group. So if you need the mview to be truncated first, set atomic_refresh => false while doing a complete refresh.

Due to this change you will run into issues like high redo generation and longer complete-refresh times for mviews refreshed with dbms_mview.refresh, as 10g now deletes the rows instead of truncating.

In our case a DBA issued a complete refresh, without setting atomic_refresh to false, for an mview of 195 GB (plus 8 indexes). After running for 16 hours, the refresh job was still deleting rows and had not started inserting them. At this point we decided to kill the refresh session. As the session had generated a lot of undo, SMON kicked in to recover the transaction.

On checking V$FAST_START_TRANSACTIONS, it reported that the rollback would take around 2 months to complete.

SQL> select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo",   decode(cputime,0,'unknown',sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400))   "Estimated time to complete"
  2     from v$fast_start_transactions;

       USN STATE                 Total       Done       ToDo Estimated time to co
---------- ---------------- ---------- ---------- ---------- --------------------
        51 RECOVERING          3514498        354    3514144 12-SEP-2009 02:52:36

SQL> select * from V$fast_start_servers;

STATE       UNDOBLOCKSDONE        PID XID
----------- -------------- ---------- ----------------
RECOVERING             354         32 0033000200001C81
RECOVERING               0         33 0033000200001C81
RECOVERING               0         85 0033000200001C81
RECOVERING               0         38 0033000200001C81
RECOVERING               0         39 0033000200001C81
RECOVERING               0         40 0033000200001C81
RECOVERING               0         42 0033000200001C81
RECOVERING               0         43 0033000200001C81
RECOVERING               0         44 0033000200001C81
RECOVERING               0         45 0033000200001C81
RECOVERING               0         46 0033000200001C81
---- some output truncated -------
RECOVERING               0         68 0033000200001C81
RECOVERING               0         69 0033000200001C81
RECOVERING               0         53 0033000200001C81
RECOVERING               0         78 0033000200001C81
RECOVERING               0         79 0033000200001C81
RECOVERING               0         80 0033000200001C81
RECOVERING               0         81 0033000200001C81
RECOVERING               0         82 0033000200001C81
RECOVERING               0         83 0033000200001C81

fast_start_parallel_rollback was set to LOW. We saw that 50 processes had been spawned but only one of the parallel servers was doing the recovery, indicating that the parallel servers might be interfering with each other. Metalink Note 144332.1 (Parallel Rollback may hang database, Parallel query servers get 100% cpu) discusses this behavior. I therefore decided to switch to serial recovery by changing fast_start_parallel_rollback to FALSE. But to do this, we first had to stop SMON from performing transaction recovery, using event 10513 at level 2.

SQL> oradebug setorapid  22
Unix process pid: 2728024, image: oracle@ods1 (SMON)
SQL> oradebug Event 10513 trace name context forever, level 2
Statement processed.
SQL> select 'kill -9 '||spid||' ' from V$process where pid in (select pid from V$FAST_START_SERVERS);

'KILL-9'||SPID||''
---------------------
kill -9 3014818
kill -9 3010772
kill -9 2916434
kill -9 2887716
kill -9 2678958
kill -9 2511030
kill -9 2224314
kill -9 2142210
kill -9 2822282
kill -9 2625696
kill -9 2506808
kill -9 2486520
kill -9 2314492
kill -9 2310186
kill -9 2752764
kill -9 2445478
kill -9 2326692
kill -9 2457716
kill -9 2654394
kill -9 2621630
kill -9 2580502
kill -9 2633960
kill -9 2412686

alter system set fast_start_parallel_rollback=false;

SQL>   select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo",   decode(cputime,0,'unknown',sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400))   "Estimated time to complete"
  2     from v$fast_start_transactions;

       USN STATE                 Total       Done       ToDo Estimated time to co
---------- ---------------- ---------- ---------- ---------- --------------------
        51 RECOVERING          3513444       6002    3507442 06-JUL-2009 17:58:03

SQL> select * from V$FAST_START_SERVERS;

no rows selected

Using serial recovery reduced the estimated time to 5 hours.

To summarize, if you are still in the process of upgrading a 9i database to 10g, revisit the shell scripts and dbms_jobs that perform complete refreshes of your mviews, and set atomic_refresh => false explicitly if you want the mview truncated before the refresh.
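
For example, a complete refresh that truncates rather than deletes could be invoked as below (the owner and mview name are placeholders):

SQL> exec dbms_mview.refresh('SCOTT.MV_SALES', method => 'C', atomic_refresh => false);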

Use awk/sed in vi

Thought of sharing some useful info which can help you do your work faster in vi. You can run awk/sed scripts against the current buffer using the following command in vi:

:%!scriptname

Here the script file should have execute privileges for the user. I used this to create a useful script that replaced what I had been doing by typing multiple substitution commands in vi.

e.g. your file contains a list of tables:
$cat t.lst
BANK005
BJSTM
BJS_ORG
CHAINED_ROWS
CORR_BAM
CORR_CAM
CORR_EIT
CORR_GAC
CORR_GAM
CORR_ITC
CORR_LDT
CORR_LHT
Create a script named quotes with the following command and give the user execute permission on it.

sed -e "s/^/'/g" -e "s/$/',/" $1 | awk '{printf ("%s",$0)}' | sed -e "s/^/(/g" -e "s/,$/)/g"

Open t.lst in vi and type :%!quotes
('BANK005','BJSTM','BJS_ORG','CHAINED_ROWS','CORR_BAM','CORR_CAM','CORR_EIT','CORR_GAC','CORR_GAM','CORR_ITC','CORR_LDT','CORR_LHT')

Similarly, if you wish to remove blank lines, create a file blank containing:

awk '!NF==0 {print $0}' $1

Blank lines can also be removed directly in vi using :g/^$/d
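
The same trick works with any external filter; for example, to sort the buffer and drop duplicate lines you can type:

:%!sort -u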
Isn’t it cool.. 🙂

dbms_stats.copy_table_stats does not alter low/high value

I was working on a performance issue caused by missing table stats on a new partition of a large partitioned table. This was a 10.2.0.4 database on IBM AIX 5L. To resolve the issue, I used dbms_stats.copy_table_stats to copy the stats from the previous partition to the new partition.

EXEC DBMS_STATS.COPY_TABLE_STATS ('owner', 'tabname', 'SourcePart', 'TargetPart')

You can find more details about it at the link below:

http://optimizermagic.blogspot.com/2009/02/maintaining-statistics-on-large.html

There is one more link explaining this with an example:

http://dioncho.wordpress.com/2009/02/12/how-to-copy-partition-stats-we-got-easier-way/

The article from the optimizer development group states that the low and high values for the target partition are set from the partition key values. But when I used dbms_stats.copy_table_stats, I found that this is not the case: it actually copies the low and high values from the previous partition.

You can use the following function to check the high and low values (thanks to Greg Rahn's article):

create or replace function display_raw (rawval raw, type varchar2)
return varchar2
is
   cn     number;
   cv     varchar2(32);
   cd     date;
   cnv    nvarchar2(32);
   cr     rowid;
   cc     char(32);
begin
   if (type = 'NUMBER') then
      dbms_stats.convert_raw_value(rawval, cn);
      return to_char(cn);
   elsif (type = 'VARCHAR2') then
      dbms_stats.convert_raw_value(rawval, cv);
      return to_char(cv);
   elsif (type = 'DATE') then
      dbms_stats.convert_raw_value(rawval, cd);
      return to_char(cd);
   elsif (type = 'NVARCHAR2') then
      dbms_stats.convert_raw_value(rawval, cnv);
      return to_char(cnv);
   elsif (type = 'ROWID') then
      dbms_stats.convert_raw_value(rawval, cr);
      return to_char(cr);
   elsif (type = 'CHAR') then
      dbms_stats.convert_raw_value(rawval, cc);
      return to_char(cc);
   else
      return 'UNKNOWN DATATYPE';
   end if;
end;
/

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
col DATA_TYPE for a20
col TABLE_NAME for a25
col low_val for a25
col high_val for a25
col PARTITION_NAME for a15
col COLUMN_NAME for a25
select a.table_name,
   a.column_name,
   display_raw(a.low_value,b.data_type) as low_val,
   display_raw(a.high_value,b.data_type) as high_val,
   b.data_type
from
   dba_tab_col_statistics a, dba_tab_cols b
where
   a.table_name ='&tabname' and
   a.table_name=b.table_name and
   a.column_name=b.column_name and a.owner='&owner' and a.column_name='&colname'
/

select a.table_name,a.partition_name,
   a.column_name,
   display_raw(a.low_value,b.data_type) as low_val,
   display_raw(a.high_value,b.data_type) as high_val,
   b.data_type
from
   dba_part_col_statistics a, dba_tab_cols b
where
   a.table_name ='&tab_name' and
   a.table_name=b.table_name and
   a.column_name=b.column_name and a.owner='&owner' and a.column_name='&colname' and a.partition_name='&part_name'
/

This is attributed to bug 8318020, for which you need to install a one-off patch. This is also mentioned in the article from the optimizer development group, which I quote:

“It is necessary to install the one off patch for bug 8318020 if you are using copy_table_stats procedure or APPROX_GLOBAL option in 10.2.0.4 (patch 8413548) or in 11.1.0.7 (patch 8411968).”

This is important because the CBO uses the low/high values while calculating selectivity. Here, since the values are stale (in my case last month's values), a query against the new month's data is treated as an out-of-range predicate, and the CBO can pick suboptimal plans.

e.g. you can see below, in the 10053 trace for the query, how the CBO estimated a very low selectivity for the index (based on the partitioned column) and thus a very low cost. Note that the CBO does not immediately pick an incorrect selectivity once a predicate goes out of range: it linearly prorates a value between the density of that column and 0.5/num_rows. The selectivity below is actually the extreme value, i.e. 0.5/num_rows.

Column (#1): PART_DATE(DATE)
    AvgLen: 8.00 NDV: 21 Nulls: 0 Density: 0.047619 Min: 2454953 Max: 2454977
  Using prorated density: 8.8550e-09 of col #1 as selectivity of out-of-range value pred
Access Path: index (RangeScan)
    Index: IDX_HIST_TABLE
    resc_io: 5.00  resc_cpu: 34401
    ix_sel: 8.8980e-09  ix_sel_with_filters: 8.8980e-09
		Resp: 1.00  Degree: 1

For now I have used dbms_stats.set_column_stats to set the values, but I will need to apply the fix so that I can use copy_table_stats, which is much easier to use. Note that you have to set the stats at both the partition level and the table level to move the high value up to the latest partition's upper bound.
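
A minimal sketch of that set_column_stats call for a date partition key (the owner, table, column and date values are placeholders) could look like this:

declare
  srec     dbms_stats.statrec;
  datevals dbms_stats.datearray;
begin
  srec.epc  := 2;
  srec.eavs := null;
  -- keep the existing minimum; move the maximum to the new partition's upper bound
  datevals := dbms_stats.datearray(
                to_date('01-06-2009 00:00:00','DD-MM-YYYY HH24:MI:SS'),
                to_date('30-06-2009 23:59:59','DD-MM-YYYY HH24:MI:SS'));
  dbms_stats.prepare_column_values(srec, datevals);
  dbms_stats.set_column_stats('OWNER','TABNAME','PART_DATE', srec => srec, no_invalidate => false);
end;
/

The same call with a partname argument sets the values at partition level; the full script below does both.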

Update:-
Please find the enclosed shell script (a combination of shell and PL/SQL.. can be better but I am still learning both 🙂 ) to copy index and table stats. It first uses dbms_stats.copy_table_stats and then alters the table's and partition's low/high values based on the partition definition. It then copies stats for the local indexes.
Inputs

1) Owner
2) Table name
3) Source partition name
4) Target partition name
5) Column name (has to be the date column on which the table is range partitioned)

One more requirement: when you execute the script, you should have write privilege on the directory where you have copied it, as the script creates temp files there. You can download the script here

#!/usr/bin/ksh

### Script to Set the stats for  a Range partitioned table which is partitioned on a Date Column#######
### Written by Amit Bansal ######
#### Please test in a test environment before executing in production database############

###### Environment Variables ########
echo "Enter the table owner for which stats need's to be copied : \t"
read l_tab_owner
echo "Enter the table name for which stats need's to be copied : \t"
read l_tab_name
echo "Enter the Source Partition_name from which stats need's to be copied  \t"
read l_src_part
echo "Enter the target Partition_name to which stats need's to be copied : \t"
read l_targ_part
echo "Enter the column_name for range partitioned table (date datatype only) for which stats need's to be copied : \t"
read l_col_type

###################End of User Input #############

echo "You have selected following options"
echo "Table owner \t" $l_tab_owner
echo "Table name \t" $l_tab_name
echo "Source partition_name \t" $l_src_part
echo "Target  partition_name \t" $l_targ_part
echo "Column name for range partititioned  table (date datatype) \t" $l_col_type
cat <<!

		Do you want to proceed Y/N  

!
read z
     case $z in
     Y|y) 
sqlplus -s /nolog <<EOF
conn / as sysdba
set head off
set feedb off
set trims on
set trim on
set long 10000
set longc 10000
spool ${l_src_part}
select HIGH_VALUE from dba_tab_partitions where table_name='${l_tab_name}' and table_owner='${l_tab_owner}' and partition_name in ('${l_src_part}');
spool ${l_targ_part}
select HIGH_VALUE from dba_tab_partitions where table_name='${l_tab_name}' and table_owner='${l_tab_owner}' and partition_name in ('${l_targ_part}');
exit
EOF

vi $l_src_part.lst <<!
:%j!
:wq
!
vi $l_targ_part.lst<<!
:%j!
:wq
!
l_targ_start_dt=`cat $l_src_part.lst`
l_targ_end_dt=`cat  $l_targ_part.lst`

echo $l_targ_start_dt 
echo $l_targ_end_dt

sqlplus -s /nolog <<EOF
conn / as sysdba
exec dbms_stats.copy_table_stats('${l_tab_owner}','${l_tab_name}','${l_src_part}','${l_targ_part}');

set serveroutput on
 declare
 l_srec sys.dbms_stats.statrec;
 l_distcnt number;
 l_density number;
 l_nullcnt number;
 l_avgclen number;
 l_statown varchar2(80);
minvv DATE;
maxvv DATE;
dt date;
maxrv raw(32) := l_srec.maxval;
minrv raw(32) := l_srec.minval;
SREC SYS.DBMS_STATS.STATREC;
DATEVALS SYS.DBMS_STATS.DATEARRAY;
 begin
sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);
--dbms_output.put_line('No of Distinct Values::::'||l_distcnt);
--dbms_output.put_line('Density              ::::'||l_density);
--dbms_output.put_line('NullCount            ::::'||l_nullcnt);
--dbms_output.put_line('Average Row Length   ::::'||l_avgclen);
--dbms_output.put_line('Stat Owner           ::::'||l_statown);
maxrv := l_srec.maxval;
minrv := l_srec.minval;
--dbms_output.put_line(maxrv) ;
dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;
dbms_output.put_line('PRE MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('PRE MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;
SREC.EPC:=2;
SREC.EAVS:=NULL;
--DATEVALS:= SYS.DBMS_STATS.DATEARRAY(to_date('18-06-2000 00:00:00','DD-MM-YYYY HH24:MI:SS'),${l_targ_end_dt} -1/(24*60*60));
DATEVALS:= SYS.DBMS_STATS.DATEARRAY(minvv,${l_targ_end_dt} -1/(24*60*60));
SYS.DBMS_STATS.PREPARE_COLUMN_VALUES(SREC,DATEVALS);
SYS.DBMS_STATS.SET_COLUMN_STATS('${l_tab_owner}','${l_tab_name}','${l_col_type}',SREC=>SREC,NO_INVALIDATE =>false);
COMMIT;
 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);

maxrv := l_srec.maxval;
minrv := l_srec.minval;

dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;

dbms_output.put_line('POST MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('POST MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;

END;
/

set serveroutput on
 declare
 l_srec sys.dbms_stats.statrec;
 l_distcnt number;
 l_density number;
 l_nullcnt number;
 l_avgclen number;
 l_statown varchar2(80);
minvv DATE;
maxvv DATE;
dt date;
maxrv raw(32) := l_srec.maxval;
minrv raw(32) := l_srec.minval;
SREC SYS.DBMS_STATS.STATREC;
DATEVALS SYS.DBMS_STATS.DATEARRAY;
 begin
 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_src_part}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);
--dbms_output.put_line('No of Distinct Values::::'||l_distcnt);
--dbms_output.put_line('Density              ::::'||l_density);
--dbms_output.put_line('NullCount            ::::'||l_nullcnt);
--dbms_output.put_line('Average Row Length   ::::'||l_avgclen);
--dbms_output.put_line('Stat Owner           ::::'||l_statown);
maxrv := l_srec.maxval;
minrv := l_srec.minval;
--dbms_output.put_line(maxrv) ;
dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;
dbms_output.put_line('PRE MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('PRE MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;
SREC.EPC:=2;
SREC.EAVS:=NULL;
--DATEVALS:= SYS.DBMS_STATS.DATEARRAY(to_date('01-10-2009 00:00:00','DD-MM-YYYY HH24:MI:SS'),to_date('30-11-2009 23:59:59','DD-MM-YYYY HH24:MI:SS'));
DATEVALS:= SYS.DBMS_STATS.DATEARRAY(${l_targ_start_dt},${l_targ_end_dt} -1/(24*60*60));
SYS.DBMS_STATS.PREPARE_COLUMN_VALUES(SREC,DATEVALS);
SYS.DBMS_STATS.SET_COLUMN_STATS('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_targ_part}',SREC=>SREC,NO_INVALIDATE =>false);

COMMIT;

 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_targ_part}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);

maxrv := l_srec.maxval;
minrv := l_srec.minval;

dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;

dbms_output.put_line('POST MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('POST MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;

END;
/

declare
 l_source_part varchar2(30);
 l_target_part varchar2(30);
n_numrows number;
n_numlblks number;
n_numdist number;
n_avglblk number;
n_avgdblk number;
n_clstfct number;
n_indlevel number;
n_guessq number;
cursor ind_list is select index_name from dba_part_indexes where table_name='${l_tab_name}' and owner='${l_tab_owner}' and LOCALITY='LOCAL';
ind_list_rec ind_list%ROWTYPE;
begin
open ind_list;
LOOP
FETCH ind_list into ind_list_rec;
EXIT WHEN ind_list%NOTFOUND;
select partition_name into l_source_part from dba_ind_partitions where index_name=ind_list_rec.index_name and index_owner='${l_tab_owner}' and partition_name like '%${l_src_part}';
select partition_name into l_target_part from dba_ind_partitions where index_name=ind_list_rec.index_name and index_owner='${l_tab_owner}' and partition_name like '%${l_targ_part}';
dbms_stats.get_index_stats
        (
          ownname      => '${l_tab_owner}',
          indname      => ind_list_rec.index_name,
          partname     => l_source_part,
          numrows      => n_numrows,
          numlblks     => n_numlblks,
          numdist      => n_numdist,
          avglblk      => n_avglblk,
          avgdblk      => n_avgdblk,
          clstfct      => n_clstfct,
          indlevel     => n_indlevel,
  guessq       => n_guessq  );
dbms_output.put_line('.');
     dbms_output.put_line('Setting Index Stats for index '||ind_list_rec.index_name||' partition '||l_target_part|| ' to following values');
	 dbms_output.put_line('----------------------------------------------------------------');
      dbms_output.put_line('.     NumRows = ' || n_numrows );
      dbms_output.put_line('.    NumLBlks = ' || n_numlblks);
      dbms_output.put_line('.   NumDstnct = ' || n_numdist);
      dbms_output.put_line('.     AvgLBlk = ' || n_avglblk);
      dbms_output.put_line('.     AvgDBlk = ' || n_avgdblk);
      dbms_output.put_line('.     ClstFct = ' || n_clstfct);
      dbms_output.put_line('.    IndLevel = ' || n_indlevel);
      dbms_output.put_line('.   GuessQual = ' || n_guessq);
      dbms_output.put_line('.');
			  dbms_stats.set_index_stats
        (
          ownname      => '${l_tab_owner}',
          indname      => ind_list_rec.index_name,
          partname     => l_target_part,
          numrows       => n_numrows,
          numlblks      => n_numlblks,
          numdist       => n_numdist,
          avglblk       => n_avglblk,
          avgdblk       => n_avgdblk,
          clstfct       => n_clstfct,
          indlevel      => n_indlevel,
          no_invalidate => false,
  guessq       => n_guessq  );
  end loop;
  close ind_list;
end;
/
exit
EOF
;;
N|n) echo "Exiting Script"
;;

*) sleep 1
;;

esac

Hope this helps

Link: Best practices for AIX RAC Database on OTN

A new whitepaper has been published on OTN which primarily discusses best practices on AIX 5.2, 5.3 and 6.1 for RAC databases, aimed at avoiding node evictions due to oprocd. It also covers recommended AIX VMO parameters and recommended patches for Oracle RAC.

You can find the article at the location below:

http://www.oracle.com/technology/products/database/clusterware/pdf/rac_aix_system_stability.pdf

WordPress 2.8 “Baker” Released

WordPress version 2.8 has been released. The following link contains a video demo of the changes in V2.8.

I found the option of installing themes directly from the WordPress theme directory the coolest feature. It is not new for people using free wordpress.com accounts, but people running WordPress on their own domains previously had to download a theme from the net and then ftp it to their hosting server (I am using Yahoo webhosting, which provides only ftp access, no ssh access).

I do not see any note on how to upgrade V2.7.1 to V2.8. If you try using the WordPress automatic upgrade tool in V2.7.1, it gives the message "You are already on the latest release". Anyway, I am looking forward to upgrading to V2.8 asap, as it gives me the option of trying out new themes 🙂 . In case anyone finds an article on how to do it, do let me know.

Update: The upgrade link has now come up on the Dashboard.. maybe the developers forgot about it initially 🙂

– Amit

Cloning database..Just do some Post checks !!

Well, I will be covering some points which I believe should be part of the post-clone checklist for any cloned database environment. This may not be a complete list, as it only contains points I have encountered or heard of.

1) Change database Name and Database Id

You should try to change the database name/instance name for the UAT/cloned environment. If there is a specific requirement to keep the same instance_name/db_name, then at least try changing the database id (dbid). If you use the RMAN DUPLICATE command, the dbid is changed automatically; but if you do not, the dbid remains the same.

The dbid change becomes very important if you are using an RMAN catalog database. If you connect to the RMAN catalog from the newly cloned DB (without changing the dbid), it will resync the resetlogs information into the catalog. The next time you try to take an RMAN backup of the production database, you will get the following errors

RMAN-06004: ORACLE error from recovery catalog database:
RMAN-20011: target database incarnation is not current in recovery catalog

The following Metalink notes discuss how to handle these issues:
Note 1070453.6 : RMAN: Point-in-Time Recovery of a Backup From Before Last Resetlogs
Note 237232.1 : How to Recover Through a Resetlogs Command Using RMAN


As you can see, you are better off changing the dbid in the first place. DBNEWID can be used to change the DB_NAME, the DBID, or both:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dbnewid.htm
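
As an illustration, a typical DBNEWID (nid) run to change both the name and the dbid might look like this (the credentials and new name are placeholders); the database must be mounted first, and afterwards you update DB_NAME in the parameter file, recreate the password file, and open the database with RESETLOGS:

$ nid TARGET=sys/password DBNAME=UATDB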

2) Check for database links

Check for database links present in the cloned environment. Ensure that these are select-only dblinks and will not perform any DML in the production databases. If you find any, you can either drop them, recreate them to point to a UAT database, or simply remove/hash out the tnsnames entries corresponding to those hosts. Also check for any hard-coded IP addresses in the HOST column of DBA_DB_LINKS.
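
A quick way to list them, as a DBA user:

SQL> select owner, db_link, host from dba_db_links;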

3) Remove or hash out any entries in tnsnames.ora pointing to the production database and recovery catalog database

This again is to avoid any dblink or RMAN catalog issues. Also note that when you hash out an entry in tnsnames.ora you need to place # in front of each line, not just the first one:

Incorrect:

#TESTDB10G =
(DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test10g)(PORT = 1521))
      )
      (CONNECT_DATA =
      (SERVICE_NAME = testdb10g)
    )
 )

Correct:

#TESTDB10G =
#(DESCRIPTION =
#   (ADDRESS_LIST =
#      (ADDRESS = (PROTOCOL = TCP)(HOST = test10g)(PORT = 1521))
#      )
#    (CONNECT_DATA =
#     (SERVICE_NAME = testdb10g)
#   )
#)

4) Check cronjobs for the oracle user

If you copied the cron entries from the production server (as part of a database migration activity), cross-check which jobs actually need to be enabled in the cloned environment. e.g. you can safely disable any cron jobs for RMAN database and archivelog backups.

5) Modify the listener.ora file

Modify the listener.ora file to include the new host entry and port number. This is quite important: if you copied it from the production server, you could shut down the production listener by mistake. Check the following Metalink note for details:

Note 460666.1 – How To Remotely Administer a Listener

6) Check the local_listener and remote_listener parameters

Check the local_listener and remote_listener parameters and modify them accordingly. Note that not changing the remote_listener parameter can lead to your UAT/test database registering itself with the listener running on the production server.
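
For example, to inspect and then clear remote_listener on the clone:

SQL> show parameter remote_listener
SQL> alter system set remote_listener='' scope=both;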

7) Modify /etc/hosts entries

Modify the /etc/hosts entries to remove entries for the production database. Also try to use /etc/hosts for resolving hosts instead of DNS.

8) Add tempfiles to the temporary tablespace

After you clone the environment, tempfiles need to be added to the database.
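
For example (the tablespace name, file path and size are placeholders):

SQL> alter tablespace TEMP add tempfile '/u01/oradata/uatdb/temp01.dbf' size 2g;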

9) Check for Archivelog mode

Generally archivelog mode is disabled for UAT/cloned databases. If your production database runs in archivelog mode, ensure that you disable archiving on the clone.
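
Assuming the clone should run in noarchivelog mode, the switch looks like this:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;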

10) Verify Initialization Parameters

Verify all initialization parameters, such as the *_dump_dest locations, utl_file_dir, sga_max_size, etc.