Mview Complete Refresh and Atomic_refresh parameter

This post discusses an issue I faced while working on materialized view (mview) creation at a new site (database). Starting with Oracle 10g, the way atomic_refresh works with a complete refresh has changed.

The atomic_refresh parameter ensures that either all of the tasks of a transaction are performed or none of them (atomicity). If set to TRUE, all refreshes are done in one transaction. If set to FALSE, the refresh of each specified materialized view is done in a separate transaction. With FALSE, Oracle can also optimize the refresh by using parallel DML and truncate DDL on the materialized views.

In 9i (atomic_refresh => true), a single mview being refreshed used to be truncated and then refreshed (which actually was not atomic). When multiple mviews were refreshed (as part of a refresh group), rows were deleted and then the insert was done.

This changed in 10g: if atomic_refresh is set to true (the default), rows are deleted one by one and then inserted, for both a single mview and multiple mviews in a refresh group. So if you need the mview to be truncated first, set atomic_refresh => false explicitly while doing the complete refresh.
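
For example, a complete refresh that is allowed to truncate can be requested as below (a minimal sketch; SCOTT.MV_SALES is a hypothetical mview name):

SQL> -- method => 'C' requests a complete refresh; atomic_refresh => false
SQL> -- lets Oracle truncate the mview first (note it appears empty to
SQL> -- queries while the refresh runs). SCOTT.MV_SALES is hypothetical.
SQL> exec dbms_mview.refresh('SCOTT.MV_SALES', method => 'C', atomic_refresh => false);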

Due to this change you will see issues like high redo generation and longer times for a complete refresh of mviews using dbms_mview.refresh, as 10g now deletes the rows instead of truncating.
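
To quantify this, the redo generated by the refreshing session can be checked with a query along these lines (run it from the session performing the refresh, since v$mystat covers only your own session):

SQL> select sn.name, ms.value
  2  from v$mystat ms, v$statname sn
  3  where ms.statistic# = sn.statistic#
  4  and sn.name = 'redo size';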

In our case a DBA issued a complete refresh, without setting atomic_refresh to false, for an mview of 195GB (plus 8 indexes). After running for 16 hours, the refresh job was still deleting rows and had not started inserting them. At this point we decided to kill the refresh session. As this session had generated a lot of undo, SMON kicked in to recover the transaction.

On checking V$FAST_START_TRANSACTIONS, it reported that the rollback would take around 2 months to complete.

SQL> select usn, state, undoblockstotal "Total", undoblocksdone "Done",
  2         undoblockstotal-undoblocksdone "ToDo",
  3         decode(cputime,0,'unknown',
  4           sysdate+(((undoblockstotal-undoblocksdone)/(undoblocksdone/cputime))/86400))
  5           "Estimated time to complete"
  6    from v$fast_start_transactions;

       USN STATE                 Total       Done       ToDo Estimated time to co
---------- ---------------- ---------- ---------- ---------- --------------------
        51 RECOVERING          3514498        354    3514144 12-SEP-2009 02:52:36

SQL> select * from V$fast_start_servers;

STATE       UNDOBLOCKSDONE        PID XID
----------- -------------- ---------- ----------------
RECOVERING             354         32 0033000200001C81
RECOVERING               0         33 0033000200001C81
RECOVERING               0         85 0033000200001C81
RECOVERING               0         38 0033000200001C81
RECOVERING               0         39 0033000200001C81
RECOVERING               0         40 0033000200001C81
RECOVERING               0         42 0033000200001C81
RECOVERING               0         43 0033000200001C81
RECOVERING               0         44 0033000200001C81
RECOVERING               0         45 0033000200001C81
RECOVERING               0         46 0033000200001C81
---- some output truncated -------
RECOVERING               0         68 0033000200001C81
RECOVERING               0         69 0033000200001C81
RECOVERING               0         53 0033000200001C81
RECOVERING               0         78 0033000200001C81
RECOVERING               0         79 0033000200001C81
RECOVERING               0         80 0033000200001C81
RECOVERING               0         81 0033000200001C81
RECOVERING               0         82 0033000200001C81
RECOVERING               0         83 0033000200001C81

fast_start_parallel_rollback was set to LOW. We can see that 50 processes were spawned, but only one of the parallel servers was doing the recovery, indicating that the parallel servers might be interfering with each other. Metalink Note 144332.1 (Parallel Rollback may hang database, Parallel query servers get 100% cpu) discusses this behavior. I therefore decided to use serial recovery by changing fast_start_parallel_rollback to FALSE. But to do this, we first had to stop SMON from performing transaction recovery, using event 10513 at level 2.

SQL> oradebug setorapid  22
Unix process pid: 2728024, image: oracle@ods1 (SMON)
SQL> oradebug Event 10513 trace name context forever, level 2
Statement processed.
SQL> select 'kill -9 '||spid||' ' from V$process where pid in (select pid from V$FAST_START_SERVERS);

'KILL-9'||SPID||''
---------------------
kill -9 3014818
kill -9 3010772
kill -9 2916434
kill -9 2887716
kill -9 2678958
kill -9 2511030
kill -9 2224314
kill -9 2142210
kill -9 2822282
kill -9 2625696
kill -9 2506808
kill -9 2486520
kill -9 2314492
kill -9 2310186
kill -9 2752764
kill -9 2445478
kill -9 2326692
kill -9 2457716
kill -9 2654394
kill -9 2621630
kill -9 2580502
kill -9 2633960
kill -9 2412686

SQL> alter system set fast_start_parallel_rollback=false;

SQL> select usn, state, undoblockstotal "Total", undoblocksdone "Done",
  2         undoblockstotal-undoblocksdone "ToDo",
  3         decode(cputime,0,'unknown',
  4           sysdate+(((undoblockstotal-undoblocksdone)/(undoblocksdone/cputime))/86400))
  5           "Estimated time to complete"
  6    from v$fast_start_transactions;

       USN STATE                 Total       Done       ToDo Estimated time to co
---------- ---------------- ---------- ---------- ---------- --------------------
        51 RECOVERING          3513444       6002    3507442 06-JUL-2009 17:58:03

SQL> select * from V$FAST_START_SERVERS;

no rows selected

Using serial recovery reduced the estimated time to around 5 hours.
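
Once the rollback completes, SMON's transaction recovery should be switched back on by turning the event off (a sketch, reusing the SMON pid from above):

SQL> oradebug setorapid 22
SQL> oradebug Event 10513 trace name context off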

To summarize, if you are still in the process of upgrading a 9i database to 10g, revisit the shell scripts and dbms_jobs used for performing complete refreshes of mviews, and set atomic_refresh => false explicitly wherever the mview should be truncated before the refresh.
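
To locate dbms_job-based refresh jobs that may be relying on the old truncate behavior, a query like this can help (a sketch; it simply searches the job definitions):

SQL> select job, what
  2  from dba_jobs
  3  where lower(what) like '%refresh%';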

Cleaning Up Temporary Segments Occupying Permanent Tablespaces

There are situations where we see “temporary segments” in permanent tablespaces hanging around and not getting cleaned up.
Such temporary segments can be created by DDL operations like CTAS and “alter index ... rebuild”, because
the new object is built as a temporary segment in the target tablespace and is converted to a permanent type only when the DDL finishes.
These segments occupy real disk space when SMON fails to perform its assigned job of cleaning up stray temporary segments.
The following query finds these segments:

SQL> select tablespace_name, owner, segment_name, sum(bytes/1024/1024)
     from dba_segments
     where segment_type = 'TEMPORARY'
     group by tablespace_name, owner, segment_name;

TABLESPACE_NAME     OWNER        SEGMENT_NAME  SUM(BYTES/1024/1024)
------------------- ------------ ------------- --------------------
xxxx_DATA           SYS          123.8365                    137706
BDEPST_INDEX        SYS          345.8756                      8910
KMRPT_DATA          SYS          345.87                   25284.875
BILL_INDEX          SYS                                        .25
DSS_DATA            SYS                                        798
MRKT_INDEX          SYS                                        208
SPCT_DATA           SYS                                   69642.25
SPCT_INDEX          SYS                                   956.4375

Here we can see that tablespaces KMRPT_DATA, SPCT_DATA and SPCT_INDEX have large temporary segments.

To check whether any DDL is active that could be creating these temporary segments, use the following:

SQL> conn / as sysdba
SQL> select owner from dba_segments where segment_name='345.87';
SQL> select pid from v$process where username='<owner from above query>';
SQL> alter session set tracefile_identifier='TEMPORARY_SEGMENTS';
SQL> oradebug setorapid <pid obtained above>
SQL> oradebug dump errorstack 3
SQL> oradebug tracefile_name

This gives you the trace file name; open that file and check the “current sql”.
If it is a DDL like CTAS or an index rebuild, wait for the operation to complete. If no pid
is returned, these segments are “stray segments” and need to be cleaned up manually.

There are two ways to force the drop of temporary segments:

1. Using event DROP_SEGMENTS
2. Corrupting the segments and dropping these corrupted segments.

1. Using the DROP_SEGMENTS event:

Find out the tablespace number (ts#) which contains temporary segments:
SQL> select ts# from sys.ts$ where name = '<tablespace name>';

Suppose it comes out to be 10; then use the following command to clean up the temporary segments:

SQL> alter session set events 'immediate trace name DROP_SEGMENTS level 11';

The level is ts#+1, i.e. 10+1=11 in this case.

2. Corrupting the temporary segments for drop:
The following procedures are used:
– DBMS_SPACE_ADMIN.TABLESPACE_VERIFY
– DBMS_SPACE_ADMIN.SEGMENT_CORRUPT
– DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT

-- Verify the tablespace that contains the temporary segments (in this case KMRPT_DATA)

SQL> exec DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('KMRPT_DATA');

-- Generate the statements that corrupt the temporary segments in tablespace KMRPT_DATA (then run the generated exec statements)

SQL> select 'exec DBMS_SPACE_ADMIN.SEGMENT_CORRUPT(' || chr(39) || tablespace_name || chr(39) || ',' || HEADER_FILE || ',' || HEADER_BLOCK || ');' from dba_segments where SEGMENT_TYPE like 'TEMP%' and tablespace_name = 'KMRPT_DATA';

-- Generate the statements that drop the corrupted temporary segments (again, run the output)

SQL> select 'exec DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT (' || chr(39) || tablespace_name || chr(39) || ',' || HEADER_FILE || ',' || HEADER_BLOCK || ');' from dba_segments where SEGMENT_TYPE like 'TEMP%' and tablespace_name = 'KMRPT_DATA';

-- Verify the tablespace again to update the dictionary information:

SQL> exec DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('KMRPT_DATA');

This removes the temporary segments from the permanent tablespace.
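
To confirm the cleanup, re-run the dba_segments query from the start of this post restricted to the affected tablespace; it should now return no rows:

SQL> select tablespace_name, owner, sum(bytes/1024/1024)
  2  from dba_segments
  3  where segment_type = 'TEMPORARY'
  4  and tablespace_name = 'KMRPT_DATA'
  5  group by tablespace_name, owner;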

Cheers!!!

Saurabh Sood

Use awk/sed in vi

Thought of sharing some useful info which can help you do your work faster in vi. You can run awk/sed scripts from within vi using the following command:

:%!scriptname

Here the scriptname file should have execute privileges for the user (e.g. chmod u+x scriptname). I used this to build a handy script replacing the multiple substitution commands I used to type in vi.

e.g. your file contains a list of tables:
$cat t.lst
BANK005
BJSTM
BJS_ORG
CHAINED_ROWS
CORR_BAM
CORR_CAM
CORR_EIT
CORR_GAC
CORR_GAM
CORR_ITC
CORR_LDT
CORR_LHT
Create a script named quotes with the following command and give the user execute permission on it.

sed -e "s/^/'/g" -e "s/$/',/" $1 | awk '{printf ("%s",$0)}' | sed -e "s/^/(/g" -e "s/,$/)/g"

Open t.lst in vi and type :%!quotes to get:
('BANK005','BJSTM','BJS_ORG','CHAINED_ROWS','CORR_BAM','CORR_CAM','CORR_EIT','CORR_GAC','CORR_GAM','CORR_ITC','CORR_LDT','CORR_LHT')

Similarly, if you wish to remove blank lines, create a file named blank containing:

awk 'NF > 0 {print $0}' $1
Blank lines can also be removed directly in vi using :g/^$/d
Isn't it cool? 🙂

dbms_stats.copy_table_stats does not alter low/high value

I was working on a performance issue arising from missing table stats on a new partition of a large partitioned table. This was a 10.2.0.4 database on IBM AIX 5L. To resolve the issue, I used dbms_stats.copy_table_stats to copy the stats from the previous partition to the new partition.

EXEC DBMS_STATS.COPY_TABLE_STATS ('owner', 'tabname', 'SourcePart', 'TargetPart')

You can find more details about it at the link below:

http://optimizermagic.blogspot.com/2009/02/maintaining-statistics-on-large.html

There is one more link explaining this with an example:

http://dioncho.wordpress.com/2009/02/12/how-to-copy-partition-stats-we-got-easier-way/

The article from the optimizer development group states that the low and high values for the target partition are set from the partition key values. But when I used dbms_stats.copy_table_stats, I found that this is not the case: it actually copies the low and high values from the previous partition.

You can use the following function to check the high and low values (thanks to Greg Rahn's article):

create or replace function display_raw (rawval raw, type varchar2)
return varchar2
is
   cn     number;
   cv     varchar2(32);
   cd     date;
   cnv    nvarchar2(32);
   cr     rowid;
   cc     char(32);
begin
   if (type = 'NUMBER') then
      dbms_stats.convert_raw_value(rawval, cn);
      return to_char(cn);
   elsif (type = 'VARCHAR2') then
      dbms_stats.convert_raw_value(rawval, cv);
      return to_char(cv);
   elsif (type = 'DATE') then
      dbms_stats.convert_raw_value(rawval, cd);
      return to_char(cd);
   elsif (type = 'NVARCHAR2') then
      dbms_stats.convert_raw_value(rawval, cnv);
      return to_char(cnv);
   elsif (type = 'ROWID') then
      dbms_stats.convert_raw_value(rawval, cr);
      return to_char(cr);
   elsif (type = 'CHAR') then
      dbms_stats.convert_raw_value(rawval, cc);
      return to_char(cc);
   else
      return 'UNKNOWN DATATYPE';
   end if;
end;
/

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
col DATA_TYPE for a20
col TABLE_NAME for a25
col low_val for a25
col high_val for a25
col PARTITION_NAME for a15
col COLUMN_NAME for a25
select a.table_name,
   a.column_name,
   display_raw(a.low_value,b.data_type) as low_val,
   display_raw(a.high_value,b.data_type) as high_val,
   b.data_type
from
   dba_tab_col_statistics a, dba_tab_cols b
where
   a.table_name ='&tabname' and
   a.table_name=b.table_name and
   a.column_name=b.column_name and a.owner='&owner' and a.column_name='&colname'
/

select a.table_name,a.partition_name,
   a.column_name,
   display_raw(a.low_value,b.data_type) as low_val,
   display_raw(a.high_value,b.data_type) as high_val,
   b.data_type
from
   dba_part_col_statistics a, dba_tab_cols b
where
   a.table_name ='&tab_name' and
   a.table_name=b.table_name and
   a.column_name=b.column_name and a.owner='&owner' and a.column_name='&colname' and a.partition_name='&part_name'
/

This is attributed to bug 8318020, and you need to install the one-off patch for it. This is also mentioned in the article from the optimizer development group; quoting it:

“It is necessary to install the one off patch for bug 8318020 if you are using copy_table_stats procedure or APPROX_GLOBAL option in 10.2.0.4 (patch 8413548) or in 11.1.0.7 (patch 8411968).”

This is important because the CBO uses the low/high values while calculating selectivity. Here, as the values are old (in my case, last month's values), when you query the new month's data the CBO treats the predicate as out of range and can pick suboptimal plans.
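
To see the real upper bound of the new partition (for comparison with the copied high value), the partition definition can be queried (a sketch; substitute your owner and table, and set long first so the LONG high_value column displays fully):

SQL> set long 10000
SQL> select partition_name, high_value
  2  from dba_tab_partitions
  3  where table_owner = '&owner' and table_name = '&tabname'
  4  order by partition_position;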

e.g. you can see below, when you enable a 10053 trace for the query, how the CBO estimated a very low selectivity for the index (based on the partitioned column) and hence a very low cost. Note that the CBO does not immediately pick an incorrect selectivity once a predicate goes out of range: it linearly prorates a value between the column's density and 0.5/num_rows. The selectivity below is actually the extreme value, i.e. 0.5/num_rows (the prorated density 8.8550e-09 equals 0.5/num_rows, which works out to roughly 5.6e7 rows).

Column (#1): PART_DATE(DATE)
    AvgLen: 8.00 NDV: 21 Nulls: 0 Density: 0.047619 Min: 2454953 Max: 2454977
  Using prorated density: 8.8550e-09 of col #1 as selectivity of out-of-range value pred
Access Path: index (RangeScan)
    Index: IDX_HIST_TABLE
    resc_io: 5.00  resc_cpu: 34401
    ix_sel: 8.8980e-09  ix_sel_with_filters: 8.8980e-09
    Resp: 1.00  Degree: 1

As of now I have used dbms_stats.set_column_stats to set the values, but I will need to apply the fix so that I can use copy_table_stats, which is much easier to use. Note that you have to set the stats at both the partition level and the table level, so that the table-level high value reflects the latest partition's upper bound.

Update:
Please find below a shell script (a combination of shell and PL/SQL; it could be better, but I am still learning both 🙂 ) to copy index and table stats. It first uses dbms_stats.copy_table_stats, then alters the table's and partition's low/high values based on the partition definition. It then copies the stats for the local indexes.

Inputs:

1) Owner
2) Table name
3) Source partition name
4) Target partition name
5) Column name (has to be the DATE column on which the table is range-partitioned)

One more requirement: when you execute the script, you need write privileges in the directory you run it from, as it creates temp files. The script follows:

#!/usr/bin/ksh

### Script to set the stats for a range-partitioned table partitioned on a DATE column ######
### Written by Amit Bansal ######
#### Please test in a test environment before executing in a production database ############

###### Environment Variables ########
echo "Enter the table owner for which stats need's to be copied : \t"
read l_tab_owner
echo "Enter the table name for which stats need's to be copied : \t"
read l_tab_name
echo "Enter the Source Partition_name from which stats need's to be copied  \t"
read l_src_part
echo "Enter the target Partition_name to which stats need's to be copied : \t"
read l_targ_part
echo "Enter the column_name for range partitioned table (date datatype only) for which stats need's to be copied : \t"
read l_col_type

###################End of User Input #############

echo "You have selected following options"
echo "Table owner \t" $l_tab_owner
echo "Table name \t" $l_tab_name
echo "Source partition_name \t" $l_src_part
echo "Target  partition_name \t" $l_targ_part
echo "Column name for range partititioned  table (date datatype) \t" $l_col_type
cat <<!

		Do you want to proceed Y/N  

!
read z
     case $z in
     Y|y) 
sqlplus -s /nolog <<EOF
conn / as sysdba
set head off
set feedb off
set trims on
set trim on
set long 10000
set longc 10000
spool ${l_src_part}
select HIGH_VALUE from dba_tab_partitions where table_name='${l_tab_name}' and table_owner='${l_tab_owner}' and partition_name in ('${l_src_part}');
spool ${l_targ_part}
select HIGH_VALUE from dba_tab_partitions where table_name='${l_tab_name}' and table_owner='${l_tab_owner}' and partition_name in ('${l_targ_part}');
exit
EOF

vi $l_src_part.lst <<!
:%j!
:wq
!
vi $l_targ_part.lst<<!
:%j!
:wq
!
l_targ_start_dt=`cat $l_src_part.lst`
l_targ_end_dt=`cat  $l_targ_part.lst`

echo $l_targ_start_dt 
echo $l_targ_end_dt

sqlplus -s /nolog <<EOF
conn / as sysdba
exec dbms_stats.copy_table_stats('${l_tab_owner}','${l_tab_name}','${l_src_part}','${l_targ_part}');

set serveroutput on
 declare
 l_srec sys.dbms_stats.statrec;
 l_distcnt number;
 l_density number;
 l_nullcnt number;
 l_avgclen number;
 l_statown varchar2(80);
minvv DATE;
maxvv DATE;
dt date;
maxrv raw(32) := l_srec.maxval;
minrv raw(32) := l_srec.minval;
SREC SYS.DBMS_STATS.STATREC;
DATEVALS SYS.DBMS_STATS.DATEARRAY;
 begin
sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);
--dbms_output.put_line('No of Distinct Values::::'||l_distcnt);
--dbms_output.put_line('Density              ::::'||l_density);
--dbms_output.put_line('NullCount            ::::'||l_nullcnt);
--dbms_output.put_line('Average Row Length   ::::'||l_avgclen);
--dbms_output.put_line('Stat Owner           ::::'||l_statown);
maxrv := l_srec.maxval;
minrv := l_srec.minval;
--dbms_output.put_line(maxrv) ;
dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;
dbms_output.put_line('PRE MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('PRE MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;
SREC.EPC:=2;
SREC.EAVS:=NULL;
--DATEVALS:= SYS.DBMS_STATS.DATEARRAY(to_date('18-06-2000 00:00:00','DD-MM-YYYY HH24:MI:SS'),${l_targ_end_dt} -1/(24*60*60));
DATEVALS:= SYS.DBMS_STATS.DATEARRAY(minvv,${l_targ_end_dt} -1/(24*60*60));
SYS.DBMS_STATS.PREPARE_COLUMN_VALUES(SREC,DATEVALS);
SYS.DBMS_STATS.SET_COLUMN_STATS('${l_tab_owner}','${l_tab_name}','${l_col_type}',SREC=>SREC,NO_INVALIDATE =>false);
COMMIT;
 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);

maxrv := l_srec.maxval;
minrv := l_srec.minval;

dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;

dbms_output.put_line('POST MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('POST MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;

END;
/

set serveroutput on
 declare
 l_srec sys.dbms_stats.statrec;
 l_distcnt number;
 l_density number;
 l_nullcnt number;
 l_avgclen number;
 l_statown varchar2(80);
minvv DATE;
maxvv DATE;
dt date;
maxrv raw(32) := l_srec.maxval;
minrv raw(32) := l_srec.minval;
SREC SYS.DBMS_STATS.STATREC;
DATEVALS SYS.DBMS_STATS.DATEARRAY;
 begin
 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_src_part}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);
--dbms_output.put_line('No of Distinct Values::::'||l_distcnt);
--dbms_output.put_line('Density              ::::'||l_density);
--dbms_output.put_line('NullCount            ::::'||l_nullcnt);
--dbms_output.put_line('Average Row Length   ::::'||l_avgclen);
--dbms_output.put_line('Stat Owner           ::::'||l_statown);
maxrv := l_srec.maxval;
minrv := l_srec.minval;
--dbms_output.put_line(maxrv) ;
dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;
dbms_output.put_line('PRE MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('PRE MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;
SREC.EPC:=2;
SREC.EAVS:=NULL;
--DATEVALS:= SYS.DBMS_STATS.DATEARRAY(to_date('01-10-2009 00:00:00','DD-MM-YYYY HH24:MI:SS'),to_date('30-11-2009 23:59:59','DD-MM-YYYY HH24:MI:SS'));
DATEVALS:= SYS.DBMS_STATS.DATEARRAY(${l_targ_start_dt},${l_targ_end_dt} -1/(24*60*60));
SYS.DBMS_STATS.PREPARE_COLUMN_VALUES(SREC,DATEVALS);
SYS.DBMS_STATS.SET_COLUMN_STATS('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_targ_part}',SREC=>SREC,NO_INVALIDATE =>false);

COMMIT;

 sys.dbms_stats.get_column_stats('${l_tab_owner}','${l_tab_name}','${l_col_type}','${l_targ_part}',distcnt=>l_distcnt,density=>l_density,nullcnt=>l_nullcnt,srec=>l_srec,avgclen=>l_avgclen,statown=>l_statown);

maxrv := l_srec.maxval;
minrv := l_srec.minval;

dbms_stats.convert_raw_value(minrv,minvv) ;
dbms_stats.convert_raw_value(maxrv,maxvv) ;

dbms_output.put_line('POST MAX VALUE ::'||TO_CHAR(maxvv,'DD-MM-RRRR HH24:MI:SS')) ;
dbms_output.put_line('POST MIN VALUE ::'||TO_CHAR(minvv,'DD-MM-RRRR HH24:MI:SS')) ;

END;
/

declare
 l_source_part varchar2(30);
 l_target_part varchar2(30);
n_numrows number;
n_numlblks number;
n_numdist number;
n_avglblk number;
n_avgdblk number;
n_clstfct number;
n_indlevel number;
n_guessq number;
cursor ind_list is select index_name from dba_part_indexes where table_name='${l_tab_name}' and owner='${l_tab_owner}' and LOCALITY='LOCAL';
ind_list_rec ind_list%ROWTYPE;
begin
open ind_list;
LOOP
FETCH ind_list into ind_list_rec;
EXIT WHEN ind_list%NOTFOUND;
select partition_name into l_source_part from dba_ind_partitions where index_name=ind_list_rec.index_name and index_owner='${l_tab_owner}' and partition_name like '%${l_src_part}';
select partition_name into l_target_part from dba_ind_partitions where index_name=ind_list_rec.index_name and index_owner='${l_tab_owner}' and partition_name like '%${l_targ_part}';
dbms_stats.get_index_stats
        (
          ownname      => '${l_tab_owner}',
          indname      => ind_list_rec.index_name,
          partname     => l_source_part,
          numrows      => n_numrows,
          numlblks     => n_numlblks,
          numdist      => n_numdist,
          avglblk      => n_avglblk,
          avgdblk      => n_avgdblk,
          clstfct      => n_clstfct,
          indlevel     => n_indlevel,
  guessq       => n_guessq  );
dbms_output.put_line('.');
     dbms_output.put_line('Setting Index Stats for index '||ind_list_rec.index_name||' partition '||l_target_part|| ' to following values');
	 dbms_output.put_line('----------------------------------------------------------------');
      dbms_output.put_line('.     NumRows = ' || n_numrows );
      dbms_output.put_line('.    NumLBlks = ' || n_numlblks);
      dbms_output.put_line('.   NumDstnct = ' || n_numdist);
      dbms_output.put_line('.     AvgLBlk = ' || n_avglblk);
      dbms_output.put_line('.     AvgDBlk = ' || n_avgdblk);
      dbms_output.put_line('.     ClstFct = ' || n_clstfct);
      dbms_output.put_line('.    IndLevel = ' || n_indlevel);
      dbms_output.put_line('.   GuessQual = ' || n_guessq);
      dbms_output.put_line('.');
			  dbms_stats.set_index_stats
        (
          ownname      => '${l_tab_owner}',
          indname      => ind_list_rec.index_name,
          partname     => l_target_part,
          numrows       => n_numrows,
          numlblks      => n_numlblks,
          numdist       => n_numdist,
          avglblk       => n_avglblk,
          avgdblk       => n_avgdblk,
          clstfct       => n_clstfct,
          indlevel      => n_indlevel,
          no_invalidate => false,
  guessq       => n_guessq  );
  end loop;
  close ind_list;
end;
/
exit
EOF
;;
N|n) echo "Exiting Script"
;;

*) sleep 1
;;

esac

Hope this helps

Generating Datapump Export Dump with Dumpfile name Containing Current Date and Time

A few days back I got a request from the development team to generate hourly export dumps of a few schemas.

Following were the requirements:

1. Dumpfile name should contain current date and timestamp information.
2. The generated dumpfile should be moved to a specific location.
3. All users should have read privileges on the export dumpfile.
4. The export dump should be taken on an hourly basis.

To accomplish this task I generated a shell script and scheduled it in crontab:

#!/bin/ksh
#Script to Perform Datapump Export Every Hour
################################################################
#Change History
#================
#DATE         AUTHOR                   CHANGE
#----------   ----------------------   ------------------
#23-JUN-2009  SAURABH SOOD             New script created
#
#
#
################################################################
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db
export ORACLE_SID=orcl
export PATH=$PATH:$ORACLE_HOME/bin
# Capture a single timestamp so the dump and log file names match
TS=`date '+%d%m%Y_%H%M%S'`
expdp username/password@orcl dumpfile=expdp-${TS}.dmp directory=DATA_PUMP_DIR logfile=expdp-${TS}.log schemas=SCHEMA_A,SCHEMA_B
mv /tmp/expdp*.dmp /u01/backup/daily_export_orcl/
mv /tmp/expdp*.log /u01/backup/daily_export_orcl/
chmod o+r /u01/backup/daily_export_orcl/*

This script does the following:

1. Sets ORACLE_HOME, ORACLE_SID and PATH in the environment.
2. Takes the datapump export to /tmp, as DATA_PUMP_DIR points to /tmp.
3. Moves the dump and log files to /u01/backup/daily_export_orcl/
4. Changes the permissions on the dumpfile so that any user can read it.

The main thing here is the dumpfile name format, which uses the following syntax:

DUMPFILE=expdp-`date '+%d%m%Y_%H%M%S'`.dmp

The dumpfiles are generated with names like expdp-23062009_090000.dmp, meaning the export dump was taken on 23rd June 2009 at 9 AM.

To schedule it on an hourly basis, the crontab was modified:
$ crontab -e

##############################################################
#Script Used To Create Hourly Exports Of orcl database Schemas
###############################################################
00 09-18 * * 1-6 /u01/backup/daily_export_orcl/export.sh >/dev/null

It takes the export every hour from 9 AM to 6 PM, every day except Sunday.

Cheers!!!

– Saurabh Sood