Monday, February 28, 2011

How to ... redo logs generation?

All around the Internet there are many articles explaining how to reduce redo log generation in Oracle. However, I still found myself asking what the real cause of redo log generation is. In this post I'll try to cover all aspects of the causes of redo log generation. The subject is especially interesting for those who run standby databases, where redo log transport to a remote location may cause many problems.

Who is generating redo logs now?

One of the first questions a firefighting DBA faces is: "Who is generating redo logs at this moment?" The idea is to determine the action on an ad hoc basis. For that I use the following script, which I call top_redo.sql:
/* -----------------------------------------------------------------------------
 Filename: top_redo.sql
 
 CR/TR#  : 

 Purpose : Shows current redo logs generation info (RAC-non RAC environment)
           
 Date    : 12.08.2008.
 Author  : Damir Vadas, damir.vadas@gmail.com
 
 Remarks : run as privileged user

 Changes (DD.MM.YYYY, Name, CR/TR#):
--------------------------------------------------------------------------- */
col machine for a15
col username for a10
col redo_MB for 999G990 heading "Redo |Size MB"
column sid_serial for a13;

select b.inst_id, 
       lpad((b.SID || ',' || lpad(b.serial#,5)),11) sid_serial, 
       b.username, 
       machine, 
       b.osuser, 
       b.status, 
       a.redo_mb  
from (select n.inst_id, sid, 
             round(value/1024/1024) redo_mb
        from gv$statname n, gv$sesstat s
        where n.inst_id=s.inst_id
              and n.name = 'redo size'
              and s.statistic# = n.statistic#
        order by value desc
     ) a,
     gv$session b
where b.inst_id=a.inst_id
  and a.sid = b.sid
and   rownum <= 30
;

PROMPT Top 30 from gv$sesstat view according to generated redo logs
Result is something like:
SQL> @top_redo
                                                                                               Redo
   INST_ID SID_SERIAL    USERNAME   MACHINE         OSUSER                         STATUS    Size MB
---------- ------------- ---------- --------------- ------------------------------ -------- --------
         1   788,    1              iis1            oracle                         ACTIVE      2,073
         4   788,    1              iis4            oracle                         ACTIVE      1,928
         1   792,    1              iis1            oracle                         ACTIVE      1,168
         1   791,    1              iis1            oracle                         ACTIVE      1,149
         3   788,    1              iis3            oracle                         ACTIVE      1,111
         4   792,    1              iis4            oracle                         ACTIVE      1,092
         1   785,    1              iis1            oracle                         ACTIVE      1,064
         4   791,    1              iis4            oracle                         ACTIVE      1,064
         3   792,    1              iis3            oracle                         ACTIVE        757
         3   791,    1              iis3            oracle                         ACTIVE        738
         3   785,    1              iis3            oracle                         ACTIVE        436
         4   785,    1              iis4            oracle                         ACTIVE        411
         1   764,    4   SYS        iis1            oracle                         ACTIVE        340
         1   737,61477   DBSNMP     iis1            oracle                         ACTIVE        117
         3   703,33361   DBSNMP     iis3            oracle                         ACTIVE        113
         4   677,30159   DBSNMP     iis4            oracle                         ACTIVE         86
         4   795,    1              iis4            oracle                         ACTIVE         81
         1   795,    1              iis1            oracle                         ACTIVE         77
         4   794,    1              iis4            oracle                         ACTIVE         76
         3   795,    1              iis3            oracle                         ACTIVE         75
         1   794,    1              iis1            oracle                         ACTIVE         74
         3   794,    1              iis3            oracle                         ACTIVE         70
         1   645, 5393   ANPI       USR\APINTARIC   apintaric                      INACTIVE       50
         3   758,    5              iis3            oracle                         ACTIVE         39
         3   755,    2              iis3            oracle                         ACTIVE         24
         3   754,    2              iis3            oracle                         ACTIVE         22
         3   756,    2              iis3            oracle                         ACTIVE         21
         3   757,    3              iis3            oracle                         ACTIVE         21
         4   774, 5835   JAGO       CLT\JGOLUZA     jgoluza                        INACTIVE       10
         1   619,61303   LIMI       NIO\LMIHALIC    lmihalic                       INACTIVE        9

30 rows selected.

Top 30 from gv$sesstat view according to generated redo logs
SQL> 
If you want to concentrate on real Oracle users (and exclude core Oracle background processes from the result), add the next condition to the outer WHERE clause, as shown below:
and b.username is not null
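With this filter in place, the tail of top_redo.sql would look like this (only the last condition is new):
where b.inst_id=a.inst_id
  and a.sid = b.sid
  and rownum <= 30
  and b.username is not null
;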

When and how much redo log generation occurred?

Besides the current analysis, a wider analysis/comparison over time is often even more interesting. So questions like:
  • When do we have most of redo log generation?
  • Where was the peak of log generation?
  • Did we have any "strange" redo log generation?
need a slightly different approach: querying the v$log_history view. It holds historic data whose retention period is initially controlled by MAXLOGHISTORY, defined while creating the database (fixed; not changeable without recreating the control file), and by CONTROL_FILE_RECORD_KEEP_TIME, which is changeable. In my case it was set to 31 days (the exact number of days of the longest month):
SQL> show parameter CONTROL_FILE_RECORD_KEEP_TIME

NAME_COL_PLUS_SHOW_PARAM       TYPE        VALUE_COL_PLUS_SHOW_PARAM
------------------------------ ----------- ---------------------------------------------
control_file_record_keep_time  integer     31
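If this retention proves too short for your analysis, the parameter can be raised online (a sketch, assuming an spfile is in use; 31 is just the value from my case):
SQL> alter system set control_file_record_keep_time = 31 scope=both;

System altered.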
The script to gather data over the mentioned period looks like this:
/* -----------------------------------------------------------------------------
 Filename: rl.sql
 
 CR/TR#  : 

 Purpose : redo logs distribution per hours on each day ...

 Date    : 07.08.2007.
 Author  : Damir Vadas, damir.vadas@hypo-alpe-adria.com
 
 Remarks : run as privileged user

 Changes (DD.MM.YYYY, Name, CR/TR#):
--------------------------------------------------------------------------- */
set pagesize 120;
set linesize 200;
col day for a8;
spool rl.txt
PROMPT Archive log distribution per hours on each day ...

select 
  to_char(first_time,'YY-MM-DD') day,
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
  COUNT(*) TOT
from v$log_history
group by to_char(first_time,'YY-MM-DD')
order by day
;
Result looks like:
SQL>@rl
Archive log distribution per hours on each day ...

DAY      00   01   02   03   04   05   06   07   08   09   10   11   12   13   14   15   16   17   18   19   20   21   22   23     TOT
-------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- -----
11-01-14    0    0   23   16   17   16   16   16   22   39   23   18   22   18   18   18   22   18   19   16   19   16   16   17   425
11-01-15   24   23   17   18   21   16   16   16   22   18   20   19   19   18   18   21   20   18   20   16   18   16   16   17   447
11-01-16   40   39   43   24   17   16   16   16   22   18   18   21   21   18   19   19   22   18   19   17   18   16   16   16   509
11-01-17   25   23   18   16   17   16   16   16   22   18   21   18   20   18   18   18   24   18   18   17   20   16   16   17   446
11-01-18   25   23   21   16   18   17   16   17   21   43   18   19   26   18   20   40   21   18   20   16   18   16   16   17   500
11-01-19   24   24   22   16   17   16   16   16   23   18   19   18   23   19   18   19   19   18   22   16   21   16   16   17   453
11-01-20   24   24   19   16   17   16   16   16   22   18   19   18   24   18   18   19   70   18   19   18   17   16   16   17   495
11-01-21   24   23   20   16   17   16   16   16   22   18   18   20   48   25   46   57   22   18   18   17   20   16   16   17   546
11-01-22   24   27   22   16   17   16   16   16   22   18   18   19   21   18   18   21   20   18   18   17   20   16   16   17   451
11-01-23   24   19   19   16   17   16   16   16   22   18   18   18   21   18   20   18   21   18   20   17   18   16   16   17   439
11-01-24   24   21   19   16   17   16   16   16   22   18   20   18   21   18   21   18   21   18   19   17   17   16   16   17   442
11-01-25   24   23   18   16   17   16   16   16   43   18   18   18   25   18  115   41   23   18   19   17   20   16   16   18   589
11-01-26   23   22   19   16   17   16   16   16   22   18   21   18   30   18   43   18   20   18   22   16   21   16   16   16   478
11-01-27   21   23   23   16   17   16   16   16   22   18   18   20   22   20   40   97  155  145  155   93  109  116  164  214  1556
11-01-28   93   24   23   16   17   16   16   16   29   18   19   19   30  101   19   18   75   30   19   16   20   16   22   16   688
11-01-29   21   16   16   16   17   16   27   20   22   18   18   18   31   18   18   19   23   18   19   18   18   16   16   17   456
11-01-30   24   22   17   16   17   16   16   16   22   18   18   19   23   18   19   18   23   18   18   17   20   16   16   17   444
11-01-31   24   19   21   16   17   16   16   16   23   18   19   18   22   19   18   20   20   18   18   16   21   16   16   17   444
11-02-01   24   25   22   16   17   16   16   16   23   18   18   20   24   18   18   19   21   18   18   17   19   16   16   17   452
11-02-02   24   22   20   16   17   16   16   16   23   18   18   40   23   18   39   18   21   18   22   16   21   16   16   17   491
11-02-03   24   23   22   16   18   16   16   16   22   18   19   18   24   18   20   18   20   18   19   18   19   16   16   17   451
11-02-04   24   23   23   16   17   16   16   16   22   18   19   19   48   18   20   20   23   18   18   17   19   16   16   17   479
11-02-05   24   23   17   17   17   16   16   16   22   18   20   19   23   18   19   20   22   18   18   17   20   16   16   17   449
11-02-06   24   23   17   16   18   16   16   16   22   18   19   19   21   18   19   18   24   18   20   16   21   16   16   17   448
11-02-07   24   22   18   16   18   16   16   16   22   18   52   18   44   25   29   24   21   18   18   16   19   16   16   17   519
11-02-08   24   23   19   16   17   16   16   16   22   18   20   19   28   18   19   18   22   18   19   17   34   32   32   31   514
11-02-09   36   39   35   31   37   31   32   32   32   34   34   34   39   34   34   34   36   34   38   32   37   31   32   33   821
11-02-10   37   36   38   31   33   32   31   32   37   34   34   34   41   34   35   35   37   34   34   31   34   32   31   33   820
11-02-11   39   35   38   31   33   32   31   32   37   34   34   34   38   34   34   34   35   34   34   32   34   32   31   32   814
11-02-12   40   34   34   33   35   32   31   32   37   34   34   34   36   34   35   34   36   34   34   31   36   32   31   32   815
11-02-13   40   34   35   31   34   32   31   32   37   34   34   34   38   34   34   34   37   34   34   32   36   32   31   32   816
11-02-14   40   34   33   31   35   32   32   32   37   66   34   35   38   34   34   34   37   34   34   31   36   32   31   33   849
11-02-15   48   50   50   44   48   32   32   32   35   34   34   34   38   34   37   34   39   34   34   32   36   31   32   33   887
11-02-16   39   34   33   32   33   31   32   32   37   53   37   34   38   38   34    2    0    0    0    0    0    0    0    0   539

34 rows selected.

SQL>
Redo log generation is grouped by hour, and the last column (TOT) is the sum of all log switches in one day. From this it is more than obvious where redo log generation was highest, so our interest can be focused on that point in time.
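Once a suspicious day and hour is spotted, the individual log switches inside that window can be listed too. Here is a small sketch against v$log_history (the window covers the 23h peak of day 11-01-27 from the output above):
select thread#, sequence#,
       to_char(first_time,'DD.MM.YYYY HH24:MI:SS') switch_time
  from v$log_history
 where first_time between to_date('27.01.2011 23:00','DD.MM.YYYY HH24:MI')
                      and to_date('28.01.2011 00:00','DD.MM.YYYY HH24:MI')
 order by thread#, first_time;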

How much is that in MB?

The total redo log size (and accordingly the archived log size) cannot be computed from the previous query, because not every redo log switch occurs when a redo log is full. For that you might want to use this very simple query:
SQL> select sum(value)/1048576 redo_MB from sys.gv_$sysstat where name = 'redo size';

   REDO_MB
----------
1074623.75

SQL>
If you want the result grouped per instance, then use this:
SQL> select inst_id, sum(value)/1048576 redo_MB from sys.gv_$sysstat where name = 'redo size'
  2  group by inst_id;

   INST_ID    REDO_MB
---------- ----------
         1 370325.298
         2   4712.567
         4 405129.283
         3 294457.100

SQL>
Both queries work on single-instance databases as well.
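Keep in mind that the 'redo size' statistic is cumulative since instance startup, so it cannot tell you how much was produced on a particular day. If the database runs in archivelog mode, a per-day volume can be approximated from v$archived_log instead (a sketch; with multiple archive destinations, filter on dest_id to avoid double counting):
select trunc(completion_time) day,
       round(sum(blocks * block_size)/1048576) archived_MB
  from v$archived_log
 where dest_id = 1
 group by trunc(completion_time)
 order by 1;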

Which segments are generating redo logs?

After we have found our point of interest (in my case, the period with the most redo log generation), it is very useful to find out which segments (not only tables) are causing the redo log generation. For that we need to use the "dba_hist" based tables, part of Oracle AWR (Automatic Workload Repository), whose usage I have described in the topic Automated AWR reports in Oracle 10g/11g. For this example I'll focus on data from the time period 11-01-28 13:00 - 11-01-28 14:00. The query for such a task would be:
SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
        dhso.object_name,
        sum(db_block_changes_delta) BLOCK_CHANGED
  FROM dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhso,
       dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhso.obj#
    AND dhss.dataobj# = dhso.dataobj#
    AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI') 
                                AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI')
  GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24'),
           dhso.object_name
  HAVING sum(db_block_changes_delta) > 0
ORDER BY sum(db_block_changes_delta) desc ;
A reduced result from the previously shown query would be:
SNAP_TIME   OBJECT_NAME                    BLOCK_CHANGED
----------- ------------------------------ -------------
11-01-28 13 USR_RACUNI_MV                        1410112
11-01-28 13 TROK_TAB_RESEAU_I                     734592
11-01-28 13 TROK_VOIE_I                           638496
11-01-28 13 TROK_DATUM_ULAZA_I                    434688
11-01-28 13 TROK_PAIEMENT_I                       428544
11-01-28 13 D_DPX_VP_RAD                          351760
11-01-28 13 TROK_SVE_OK_I                         161472
11-01-28 13 I_DATPBZ_S002                         135296
11-01-28 13 IDS2_DATUM_I                          129904
11-01-28 13 IDS2_PZNBR                            129632
11-01-28 13 IDS2_IDS1_FK_I                        128848
11-01-28 13 IDS2_DATTRAN_I                        127440
11-01-28 13 IDS2_DATSOC_I                         127152
11-01-28 13 IDS2_VRSTA_PROD_I                     122816
...
Let us focus on the first segment, "USR_RACUNI_MV", the one with the highest number of changed blocks (which directly means the highest redo log generation). Just for information, this is a materialized view.
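If you are not sure what kind of object such a segment belongs to, a quick look at dba_objects resolves it (a sketch):
select owner, object_name, object_type
  from dba_objects
 where object_name = 'USR_RACUNI_MV';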

Which SQL was causing redo log generation?

Now that we know when, how much, and what, it is time to find out how the redo logs are generated. In the next query, "USR_RACUNI_MV" and the mentioned period are hard coded, because we are focused on them. Note that statements starting with "SELECT" are not of interest to us, because they do not make any changes.
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') WHEN,
       dbms_lob.substr(sql_text,4000,1) SQL,
       dhss.instance_number INST_ID,
       dhss.sql_id,
       executions_delta exec_delta,
       rows_processed_delta rows_proc_delta
  FROM dba_hist_sqlstat dhss,
       dba_hist_snapshot dhs,
       dba_hist_sqltext dhst
  WHERE upper(dhst.sql_text) LIKE '%USR_RACUNI_MV%'
    AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
    AND dhss.snap_id=dhs.snap_id
    AND dhss.instance_number=dhs.instance_number
    AND dhss.sql_id=dhst.sql_id 
    AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI') 
                                AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI')
;
Result is like:
WHEN          SQL                                               inst_id       sql_id  exec_delta rows_proc_delta
------------- ------------------------------------------------- ------- ------------- ---------- ---------------
2011_01_28 13 DECLARE                                                 1 duwxbg5d1dw0q          0                0
                job BINARY_INTEGER := :job; 
                next_date DATE := :mydate;  
                broken BOOLEAN := FALSE; 
              BEGIN 
                dbms_refresh.refresh('"TAB"."USR_RACUNI_MV"'); 
                :mydate := next_date; 
                IF broken THEN :b := 1; 
                ELSE :b := 0; 
                END IF; 
              END;                                                    
2011_01_28 13 delete from "TAB"."USR_RACUNI_MV"                       1 5n375fxu0uv89          0                0
For both of these examples it was impossible to find out the number of rows changed by the operation that was performed. Let us see the output of another example (the NC_TRANSACTION_OK_T table), where we meet DDL that generates redo logs!
WHEN          SQL                                               inst_id       sql_id  exec_delta rows_proc_delta
------------- ------------------------------------------------- ------- ------------- ---------- ---------------
2011_01_28 13 alter table TAB.NC_TRANSACTION_OK_T                     4 g5gvacc8ngnb8          0               0 
              shrink space cascade                                
If you are focused on the pure number of changes, then you might want to run a query where inst_id and sql_id are irrelevant (excluded from the query). Here is a slightly modified version of the previous example, for the "Z_PLACENO" segment (a plain Oracle table):
SELECT when, sql, SUM(exec_delta) executions, SUM(rows_proc_delta) rows_processed
FROM (
      SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') when,
             dbms_lob.substr(sql_text,4000,1) sql,
             dhss.instance_number inst_id,
             dhss.sql_id,
             sum(executions_delta) exec_delta,
             sum(rows_processed_delta) rows_proc_delta
        FROM dba_hist_sqlstat dhss,
             dba_hist_snapshot dhs,
             dba_hist_sqltext dhst
        WHERE upper(dhst.sql_text) LIKE '%Z_PLACENO%' 
          AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
          AND dhss.snap_id=dhs.snap_id
          AND dhss.instance_Number=dhs.instance_number
          AND dhss.sql_id = dhst.sql_id 
          AND begin_interval_time BETWEEN to_date('11-01-25 14:00','YY-MM-DD HH24:MI') 
                                      AND to_date('11-01-25 15:00','YY-MM-DD HH24:MI')
        GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24'),
             dbms_lob.substr(sql_text,4000,1),
             dhss.instance_number,
             dhss.sql_id
)
group by when, sql;
Result is like:
WHEN          SQL                                                                    EXECUTIONS ROWS_PROCESSED
------------- ---------------------------------------------------------------------- ---------- ---------------
2011_01_25 14 DELETE FROM Z_PLACENO                                                           4         7250031
2011_01_25 14 INSERT INTO Z_PLACENO(OBP_ID,MT_SIFRA,A_TOT)                                    4         7250830
              SELECT P.OBP_ID,P.MT_SIFRA,SUM(P.OSNOVICA)   
                FROM (SELECT OPI.OBP_ID,
                              OPO.MT_SIFRA,
                              SUM(OPO.IZNKN) OSNOVICA
                        WHERE OPI.OBP_ID = OPO.OPI_OBP_ID  
                          AND OPI.RBR = OPO.OPI_RBR  
                          AND NVL(OPI.S_PRETPOREZA,'O') IN ( 'O','N','A','Z','S') 
                        GROUP BY OPI.OBP_ID,OPO.MT_SIFRA 
                      ) 
Here you can see directly the number of executions and the number of rows involved.

Query based on segment directly

Sometimes you do not want to focus on a period, so your investigation may start from the segment itself. For such tasks I use the next query. It is a small variation of the previous example, where the "USR_RACUNI_MV" segment is hard coded.
SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
       sum(db_block_changes_delta)
  FROM dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhso,
       dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhso.obj#
    AND dhss.dataobj# = dhso.dataobj#
    AND dhso.object_name = 'USR_RACUNI_MV'
  GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24')
  ORDER BY to_char(begin_interval_time,'YY-MM-DD HH24');
Reduced result is:
   SNAP_TIME   SUM(DB_BLOCK_CHANGES_DELTA)
   ----------- ---------------------------
   ...
   11-01-28 11                     1224240
   11-01-28 12                      702880
   11-01-28 13                     1410112   <= accented row
   11-01-28 14                      806416
   11-01-28 15                     2008912
   11-01-28 16                     1103648
   ...
As you can see in the accented row, the numbers are the same as at the beginning of the topic.

The End

Regardless of which approach you use, redo log generation has important effects on the proper management of Oracle RMAN backups as well as standby databases.

Cheers!

59 comments :

  1. Regardless that there were many topics on this subject, this one seems to be the most complete in covering the whole subject at once. Nice work.

    Replies
    1. 100% agree.
      Great work,

      Thanks,
      zafar

  2. Great Job!!!! Thanks

  3. Excellent Men!!!.... thank you very much!!!!....

  4. Yeaaahhh, Excellent topic.

    Thx !

  5. Glad to announce that this subject will be one of my two seminars at HROUG 2011, the Croatian Oracle users group conference.
    Cheers!

  6. Great work. Very useful scripts.

  7. Great and very useful scripts

  8. Just came across this post, this is an excellent set of scripts for monitoring redo generation.
    thanks!! Darryl

  9. Hi Damir

    Is using V$log_history ok even in RAC or should we use gv$log_history?

    Many thanks for great post!

    S

    Replies
    1. Hi Anonymous,

      querying v$log_history will give you results for the instance you are logged in to.
      querying gv$log_history will give you results for the whole RAC (all instances). Look at this RAC example:
      [code]
      SQL> select count(*) from v$log_history;

      COUNT(*)
      ----------
      39944

      1 row selected.

      Elapsed: 00:00:02.46
      SQL> select count(*) from gv$log_history;

      COUNT(*)
      ----------
      159776

      1 row selected.

      Elapsed: 00:00:00.56
      SQL>
      [/code]

  10. Thank you Damir.

    Indeed the most complete work I've seen on the subject.

    Although I knew which was the Redo generator, the complete work you've done here helped me indeed.

    Thanks again,
    Mike

  11. Hi,

    Very good post. Just have one question though.

    Is it safe to assume whichever session/process is causing the REDOLOG, this is the same one that is generating the FLASHBACK log ?

    Thanks.

  12. Correct.
    Flash back logs are pure redologs just saved in another place for online restore/recover.

  13. Hey very interesting blog!

  14. Thank you so much for compiling such a useful list of scripts and for explaining their usefulness so that both technical and business managers can understand and appreciate them.

    Great job here.

    Replies
    1. Glad to help you.

      Always happy when breaking distance between tech and managers ...
      :-)

  15. gv$log_history vs v$log_history
    GV$ Always gives the double(11g R1)

    If i query v$ from both nodes, it shows the same REDO Logs.

    Even in RAC, should we use v$log_history instead of gv$ ??

    Replies
    1. No, not at all!
      Please look at the example posted in the reply to comment 9 above on this page.

  16. My friend, a really awesome post. Helped me with a work issue. cheers and thanks, rob

  17. Super article, it would have been nice if you could please share historical redo generation using dba_hist tables

    Replies
    1. Do not understand what is dba_hist tables view?

  18. woww ... Damir you made it soo easy .. thanks ..

    can you provide me the script to capture the no of redos generated per hour and their size per hour in the last one month ..

    Thanks ,
    Sameer

  19. Great Job , thanks for sharing

  20. Hi Damir

    This is a very useful document and I have been following it for a long time to get information about redo generation. Currently I am struggling with the FND_LOB table and SYSLOB segment: there is high redo generation on these tables, but when I check the attachments they are hardly 2 to 3 MB uploaded during the high redo generation, yet 400 to 500 archivelog files were created, which is very strange.

    Can you please help me troubleshoot the issue, any queries related to this will help

    Thank you

    Regards
    shahrukh

  21. You nailed it. Excellent
    Thank

  22. Does "redo size" equate to, or is it proportional to, db_block_changes_delta for all database operations, including NOLOGGING operations?

    Replies
    1. Do not know. Let me know if you find out.

      BTW, it depends on Oracle version.

  23. Hats off....Great Work..Really Helpful...

  24. GREAT WORK! It helps me a lot.

    Is there problem with this script:

    If you are focused on pure number of changes, then you might to perform query where inst_id and sql_id are irrelevant (excluded from query). Here is a little modified previous example, for "Z_PLACENO" segment (pure oracle table):
    SELECT when, sql, SUM(sx) executions, sum (sd) rows_processed
    ...

    Replies
    1. Please explain a little bit more ... do not follow you.

    2. If you run this query, it can't find 'sx' or 'sd' column.

    3. Yap,
      Change first line of query to:
      SELECT when, sql, SUM(exec_delta) executions, sum (rows_proc_delta) rows_processed

      Hope now is ok....

    4. Hi Damir,
      Thanks a lot for this queries. The query SELECT when, sql, SUM(exec_delta) executions, sum (rows_proc_delta) rows_processed .....(full query) when i execute it is getting executed but not giving any output. What might be the reason? The database is 11.2.0.4.

      Regards,
      Imran

  25. Hi Damir,

    Your blogs are now officially my favorite. Would be nice if your scripts can be downloaded in one zip file though. :-) [ wishful thinking ]

    Anyway, can you please advise how to modify the rl.sql script so that it shows DAY as the header and HOUR as the column.

    Example as below:

    HOUR 2011-01-14 2011-01-15 .... and so on
    00 0 24
    01 0 23
    02
    ... and so on

    Replies
    1. this will be a rows-to-columns (or crosstab) query... so grep on google for such a solution

  26. Great, thank you!

  27. Dear Damir,

    excellent work

    please let me know %Z_PLACENO% in query

    Replies
    1. It is explained ... with its sql ... just grep by this string

  28. Really cool scripts.
    Shared with all of my friends.

  29. Hi Damir,
    Again its very good work .Looks like you need to make some correction Inbelow query which i have taken from your above post. Please look at first line , looks like the variables used are not what inner query refer.(SUM(sx) executions, sum (sd) rows_processed )



    SELECT when, sql, SUM(sx) executions, sum (sd) rows_processed
    FROM (
    SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') when,
    dbms_lob.substr(sql_text,4000,1) sql,
    dhss.instance_number inst_id,
    dhss.sql_id,
    sum(executions_delta) exec_delta,
    sum(rows_processed_delta) rows_proc_delta
    FROM dba_hist_sqlstat dhss,
    dba_hist_snapshot dhs,
    dba_hist_sqltext dhst
    WHERE upper(dhst.sql_text) LIKE '%Z_PLACENO%'
    AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
    AND dhss.snap_id=dhs.snap_id
    AND dhss.instance_Number=dhs.instance_number
    AND dhss.sql_id = dhst.sql_id
    AND begin_interval_time BETWEEN to_date('11-01-25 14:00','YY-MM-DD HH24:MI')
    AND to_date('11-01-25 15:00','YY-MM-DD HH24:MI')
    GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24'),
    dbms_lob.substr(sql_text,4000,1),
    dhss.instance_number,
    dhss.sql_id
    )
    group by when, sql;


    Thanks
    Zafar

  30. Great work and thanks for sharing with us.

  31. Excellent scripts, very good work.

  32. This is the best redo/archivelog investigation manual!

