
Monday, October 17, 2011

Monitor database size (part II)


In my previous post on this subject, Monitor database size, I showed a way to monitor database size. In that blog post, beside the core methods, all results were retrieved through plain sqlplus calls.

At this year's 16th annual HROUG (Croatian Oracle Users Group) conference, I had a chance to present this solution publicly. As a small extension to the previous post, an Apex visual interface has been developed. With the new Apex approach, monitoring and analyzing database size becomes much easier, because problems are found and presented through a graphical interface (as opposed to the sqlplus-only solution until now). This doesn't mean that the old "sqlplus" interface is obsolete; quite the opposite, it has been enhanced with all the methods the Apex interface has.

Before you continue reading, I encourage you to read that post first and then come back here, because some of the information is explained in detail there and is only referenced here without much explanation.

Installation

Because some new methods have been implemented (i.e. owner history, table set size), some changes had to be made to the previous objects, and some new objects had to be added. For those who implemented the Database Size Stat solution before, the good news is that the table itself has not changed (only some indexes were added), so upgrading to this new Apex based solution can be done on existing data.

User and required grants

The user (in my case TOOLS) must have some roles and privileges to be able to collect and show data (a consolidated grant script is sketched after the list):
  • grant connect, resource
  • grant select on dba_segments
  • grant select on dba_indexes
  • grant select on dba_tab_partitions
  • grant select on dba_ind_partitions
  • grant select on dba_lob_partitions
  • grant execute on dbms_sql
  • grant execute on dbms_utility
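If you prefer to run these as one script, here is a minimal sketch, assuming the owner schema is named TOOLS (as in my case) and that it is run from a sysdba session:

-- run as a user with sysdba privilege; TOOLS is the assumed owner schema
grant connect, resource            to tools;
grant select on dba_segments       to tools;
grant select on dba_indexes        to tools;
grant select on dba_tab_partitions to tools;
grant select on dba_ind_partitions to tools;
grant select on dba_lob_partitions to tools;
grant execute on dbms_sql          to tools;
grant execute on dbms_utility      to tools;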

Other objects

Here are the other objects that must be created in the Apex schema owner:
  1. required types
  2. db_size table sequence
  3. db_size table (if you have previously running table, there is no need to run this script)
  4. db_size table indexes
  5. db_size_gtt global temporary table (if you have previously running table, there is no need to run this script)
  6. HIST_ONE_SEGMENT_VW view
  7. HIST_ONE_TBLSPC_VW view
  8. DB_SIZE_PKG package source
  9. DB_SIZE_PKG package body
  10. Apex application source
For those who want to use the Apex packaged application feature, the installation steps are bundled in one Apex export file (Unix format).
However, because everything mentioned in "User and required grants" needs to be done as a user with the sysdba privilege, do that before importing this file. If you make the grants afterwards, recompile the db_size_pkg package.
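For reference, a late recompile could look like this; a minimal sketch, assuming the package was imported into the TOOLS schema:

-- connect as the schema owner (TOOLS assumed) and recompile after late grants
ALTER PACKAGE db_size_pkg COMPILE;
ALTER PACKAGE db_size_pkg COMPILE BODY;
-- verify that nothing is left invalid
SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name = 'DB_SIZE_PKG';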

Do not forget to set up the db size stat data collector (explained in the previous post under point 5) if you wish to start collecting data; a scheduling sketch follows.
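For illustration only, a daily collector run could be scheduled with DBMS_SCHEDULER along these lines. The job name and schedule are my own choices, and create_snapshot is assumed to be a parameterless function returning a status code (the post calls it a function); check the previous post and the package source for the exact call:

BEGIN
  -- hypothetical job definition; adjust name, schedule and schema to your environment
  -- if create_snapshot is a procedure in your version, drop the assignment
  DBMS_SCHEDULER.create_job (
    job_name        => 'DB_SIZE_DAILY_SNAPSHOT',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE l_ret NUMBER; BEGIN l_ret := db_size_pkg.create_snapshot; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=23;BYMINUTE=55',
    enabled         => TRUE,
    comments        => 'Daily db_size snapshot (one period per day, YYYYMMDD)');
END;
/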

About application

For those not too familiar with Apex, just a small notice: the Apex export was done with application id 1002 (which can be changed) and main page nr. 1. In that case the app URL looks like:
http://server:port/apex/f?p=1002:1
where server and port are defined by your Apex configuration. The application interface is divided into 5 main tabs:
  1. Home
  2. History
  3. Top N segments
  4. Top N tablespace
  5. Difference
Many of the resulting columns are links to detailed explanations of the chosen value. In this way a quick data drill-down is enabled on all levels.

Home tab

This tab has 4 pages which show you some general data. Some of them depend on the chosen period (defined in YYYYMMDD format) or on the chosen data (table set page).
Db size stat's main page (home page) is where you choose data from all collected periods. It is a full view of the db_size table filtered by periods:
Almost every column of this report is a link which leads you to a detailed explanation of the chosen value.

The second page on the Home tab is the Owners page, which shows cumulative data for the chosen period based on owner statistics:

The third page is the general db size page, with total amounts over the collected periods:
Keep in mind that on every other page where graphs are shown, blue represents size and orange represents growth values, contrary to this one example where orange represents records.

The fourth page is the so-called "table set" page, which you reach indirectly, by clicking the "parent_id" column on the home page. The idea is to present cumulative data for one table along with its indexes.
This way of monitoring came to my mind when I saw a database with a huge amount of indexes, where the tables were only 1/3 or 1/4 of the total size. The problem was that all the rest was ... mostly useless indexes! A quick sanity-check query for this situation is sketched below.
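Outside the tool itself, a rough table-versus-index footprint check can be done straight against dba_segments; a minimal sketch, with the owner name as a placeholder:

-- rough table vs. index footprint per schema; 'TOOLS' is just an example owner
SELECT segment_type,
       ROUND (SUM (bytes) / 1024 / 1024) AS size_mb
  FROM dba_segments
 WHERE owner = 'TOOLS'
 GROUP BY segment_type
 ORDER BY size_mb DESC;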

History

This tab has 5 pages which are called automatically from other pages (as links). Their main purpose is to show a brief history of:
  1. segment
  2. partition
  3. owner
  4. tablespace
  5. segment type (table, index, lob segment etc)
Just to mention that the "Db Share %" and "Db Rec Share %" values are percentages against the whole database at that moment, which might tell you "who is the boss in the database".

The first and main page is segment history. Segments are taken from the dba_segments view, with the RECYCLE BIN part excluded.
The important parts are where a segment's size (or number of records) grows or falls between two chosen periods. Another important point is that the REC/MB value should stay roughly constant. If it does not, the segment is most likely too big (it has allocated too much space and should be shrunk) or the segment was altered. Think about it; a generic shrink sketch follows.
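As a generic illustration (not part of the tool), reclaiming space from such an oversized heap table usually goes along these lines; the table name is a placeholder and the usual caveats about row movement apply:

-- generic example of shrinking an oversized table segment; MY_BIG_TABLE is a placeholder
ALTER TABLE my_big_table ENABLE ROW MOVEMENT;
ALTER TABLE my_big_table SHRINK SPACE CASCADE;   -- CASCADE also shrinks dependent indexes
ALTER TABLE my_big_table DISABLE ROW MOVEMENT;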

The second page is a similar statistic, but based on one partition:
Keep in mind that partitioned tables are placed under segment history, while their partitions are here.

The third page on this tab is history based on owners (schemas):
While on the Home tab owners are shown in aggregate, here you can see history data for each owner.

The fourth page is history based on tablespace:
Tablespaces are a vital part when you start to tune performance. Their content and growth can tell you a lot about a database. A plain dictionary query for the current picture is sketched below.
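Independent of the collected history, the current allocation per tablespace can always be cross-checked against the dictionary; a minimal sketch:

-- current segment allocation per tablespace, straight from the dictionary
SELECT tablespace_name,
       ROUND (SUM (bytes) / 1024 / 1024) AS used_mb,
       COUNT (*)                         AS segments
  FROM dba_segments
 GROUP BY tablespace_name
 ORDER BY used_mb DESC;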

The last page is history based on segment type:
Although the table segment type should always be the main part of this statistic, it is interesting when other segment types start to "play a role".

Top N segments

Beside pure history analysis, the ability to compare two periods is vital. By comparing them, a proactive DBA can see and prevent many incoming problems. PERIOD1 must always be newer than PERIOD2, and this is ensured with cascading LOVs. So always choose PERIOD1 first.

The first page is based on pure segment size, shown in MB.
Every investigation should start from this point. Big segments (table based) usually suffer many full table scans (FTS), so measuring their size is vital to understanding the data in the database.

The second page is based on size growth in MB.
Growth is a rather different value than size. It is common that bigger objects grow more than small ones, but when this changes, the DBA should look into what has happened. Growth is almost as dangerous as shrinkage.

The third page is based on growth, but now as a percentage value.
This is a very interesting statistic, because small segments may grow and you might not notice them until it is too late, since their size keeps them far from the top places. So here, regardless of segment size, only the growth percentage matters. Here is a typical example of an uncontrolled segment size explosion.
Keep in mind that only segments bigger than 10 MB are included in this report. Smaller segments will never be shown here.

The fourth page is based on the number of records:
Although record growth is almost always linear to size growth, sometimes they diverge.

The fifth page is based, similarly to "size", on record growth:
Again, growth is rather different than the number of records. Needless to say how important it is to monitor record growth for any table in the DB.

The sixth page is based, similarly to "size", on record growth percentage:
Keep in mind that only segments which have more than 10,000 records are shown in this report.

The seventh page is based on the total statistic. Here periods are unimportant; it is simply top N by size:

and top N by records ever, regardless of the chosen period:
It is very interesting that Top N ever might not show the same records as querying Top N size with the first and last period as a filter!

Top N tablespace

This tab has three pages that show statistics based on Top N tablespaces according to size, growth and growth %. It is interesting to see graphical relations on the tablespace level as well.


Sometimes finding a huge problem in a tablespace can lead you to the segment which is "odd".

Difference

This tab has only one page, but because of the way it shows data I decided to put it on a separate tab to make it more prominent. The main idea is to show all new and dropped segments between two periods.
The idea came to me when I had to check whether developers had "missed" recreating some index in the last CR, or to see what is new in the database. With this page I can see that at once. Because I need this report based on the last change, PERIOD1 and PERIOD2 are by default filled with automatic values: the last snapshot and the one before.
For finding the last period, use the db_size_pkg.get_last_period function (briefly described in the package source); a usage sketch follows.
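As an illustration only, a call from SQL could look like this, assuming the function takes no arguments and returns the period key in YYYYMMDD form (the exact signature is described only in the package source):

-- hypothetical call; check the package source for the real signature
SELECT db_size_pkg.get_last_period AS last_period
  FROM dual;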

Live demo

And going back to my starting topic, here are graphical interpretations of the two examples mentioned in the starting post, which were the direct cause of what you are looking at now:
case nr 1:

case nr 2:

The end

And this is where I am now on this subject. I still have several ideas that should be implemented:
  1. Add prediction for all parts. This is a highly desirable feature and first on my "to do" list when I find time.
  2. Add support for statistics history based on table sets (logical group of tables + indexes).
  3. Add support for INDEX PARTITION; currently the snapshot does not do this, but works only at the segment level for the whole partitioned index.
  4. Use the "rank" function instead of "rownum" for determining the top N parts. This is more precise.
When I make some upgrades, I'll post the solution here on my blog.

However, if you find a bug or something interesting, or have developed your own extension on the db_size table (something new), please let me know; I'd like to include it here.

The end II

Just after this year's HROUG 2011 finished, I had some time to fix and rebuild the whole thing for Apex 4.1. Beside the new look and feel, table set history support has been added, as well as some minor fixes and GUI improvements. So here are the three parts (package + Apex app), which you should place over the existing ones (steps 8, 9, 10):
  1. DB_SIZE_PKG package source 1.61 version
  2. DB_SIZE_PKG package body 1.61 version
  3. Apex application source Apex 4.1

The end III

To make this utility more friendly, I have moved the create_snapshot function (which requires more privileges) from db_size_pkg to db_size_admin_pkg. Here is that new package. Now you can place db_size_pkg in any non-privileged Apex schema where it is used only for reporting. Still, create_snapshot is left in db_size_pkg as well, to stay compatible with the previous version.

Apex cloud version

I have managed to find time to upload the Apex application to the Oracle Apex cloud. A version with reduced data can be tested here.
Cheers!
Here is the missing part of common_pkg:
CREATE OR REPLACE PACKAGE "COMMON_PKG" AS
/* $Header: TOOLS.common_pkg.pks 1.00 03/24/2009 14:54 damirv $ */
/*--------------------------------------------------------------------------------------------------------------------

           Copyright(C) 2011-2014 Vadas savjetovanje d.o.o.


 NAME    : COMMON_PKG
 PURPOSE : Package has all important procs that do not depend on any data or
           other package!

 Date    : 12.02.2005.
 Author  : Damir Vadas

 Remarks : Changing anything in this package requires a full recompile, which is
           hard to achieve in real production!

           Here shouldn't exist any function that is depending on any table
           or other custom package so package must be able to recompile with
           no data and custom functions/packages!

   Changes (DD.MM.YYYY, Name, CR/TR#):
            18.02.2010 Damir Vadas
                       Changed NOT_OK         0 => -1
                               NOTHING_TO_DO -1 =>  0
            12.04.2010 Damir vadas
                       Added "NO_G_APP_ID_DEFINED := -5;"
            21.04.2010 Damir VAdas
                       Added exception NO_G_APP_ID_DEFINED_EXCEPTION
            29.03.2011 Damir Vadas
                       Added hr_date_format values and set_session_date procedures (3 pieces)
            08.04.2011 Damir Vadas
                       Changed WHEN_OTHERS_ERROR 100=> 20998, UNHANDLED_ERROR NUMBER -99999 => -20999;
                       "set_session_date_format" changed to "set_session_date_fmt"
            12.04.2011 Damir Vadas
                       added const_max_number to make max value for sequence number
            05.05.2011 Damir Vadas
                       "Neobradena"->"Neobradjena", "pronaden"->"pronadjen"
            10.05.2011 Damir Vadas
                       Added number format part procs "set_session_num_fmt"
-------------------------------------------------------------------------------------------------------------------- */
  const_max_number CONSTANT NUMBER := 999999999999999999999999999;

  long_msg_t VARCHAR2(4000 CHAR);
  unhandled_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska!';
  unhandled_error_msg_eng long_msg_t%TYPE DEFAULT 'Unhandled error!';

  others_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska! '||CHR(13)||'Error code:' || TO_CHAR(SQLCODE)|| ': ' ||SQLERRM;
  html_others_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska! 
Error code:' || TO_CHAR(SQLCODE)|| ': ' ||SQLERRM;

  no_data_found_msg          long_msg_t%TYPE DEFAULT 'Podatak nije pronadjen.';
  no_data_found_msg_en       long_msg_t%TYPE DEFAULT 'No data found.';
  bad_input_data_msg         long_msg_t%TYPE DEFAULT 'Nepravilan ulaz podataka.';
  bad_input_data_msg_en      long_msg_t%TYPE DEFAULT 'Invalid input data.';
  too_many_data_found_msg    long_msg_t%TYPE DEFAULT 'Pronadjeno previse podataka.';
  too_many_data_found_msg_en long_msg_t%TYPE DEFAULT 'To many data found.';

  try_later_user    VARCHAR2 (64 CHAR) DEFAULT 'Molim pokusajte ponovno kasnije!';
  try_later_user_en VARCHAR2 (64 CHAR) DEFAULT 'Please try again later!';

  others_error_msg_user_init    long_msg_t%TYPE DEFAULT 'Zahtjev se ne moze procesirati. ' || try_later_user;
  others_error_msg_user_init_en long_msg_t%TYPE DEFAULT 'Request cannot be applied. '|| try_later_user_en;
  others_error_msg_user         long_msg_t%TYPE DEFAULT 'Sustav nije u stanju obraditi Vas zahtjev. ' || try_later_user;
  others_error_msg_user_en      long_msg_t%TYPE DEFAULT 'System cannot apply Your request. '|| try_later_user_en;
  resource_busy_msg             long_msg_t%TYPE DEFAULT 'Slog je zakljucan. '|| try_later_user;
  resource_busy_msg_en          long_msg_t%TYPE DEFAULT 'Record is locked. '|| try_later_user_en;

  hr_date_format_long  VARCHAR2(21 CHAR) := 'DD.MM.YYYY HH24:MI:SS';
  hr_date_format_short VARCHAR2(10 CHAR) := 'DD.MM.YYYY';

  -- problem "andcompany" znaka prilikom uvlacenja u PLSQL-u
  H_AND_CO      CHAR(1)  := CHR(38);               -- html sign &
  H_NBSP        CHAR(6)  := H_AND_CO || 'nbsp;';   -- html sign & + nbsp
  H_SMALL_IDENT CHAR(48) := H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP;
  H_BIG_IDENT   CHAR(96) := H_SMALL_IDENT||H_SMALL_IDENT;

  log_file_handle UTL_FILE.FILE_TYPE;
  log_file_dir    VARCHAR2(256 CHAR) := '/oracle/dir';
  log_file_name   VARCHAR2(256 CHAR) := 'OracleDebug.log';
  maxlinesize     NUMBER := 32767;

  -- -20000 to -20999 are custom error numbers !
  OK                NUMBER  := 1;
  NOTHING_TO_DO     NUMBER  := 0;
  NOT_OK            NUMBER  := -1;
  NOTHING_SELECTED  NUMBER  := -2;
  BUSY_RES          NUMBER  := -3;
  INVALID_INPUT     NUMBER  := -98;
  WHEN_OTHERS_ERROR INTEGER := -20998;
  UNHANDLED_ERROR   NUMBER  := -20999;

  g_company_mail VARCHAR2(255 CHAR) := 'teb-informatika@teb-informatika.hr';

  /* 5 min = sysdate + 5/(60*24) */
  cn_small_lock NUMBER DEFAULT 0.00347222;
  /* 1 day */
  cn_big_lock   NUMBER DEFAULT 1;

  cn_MY_FORCED_EXCEPTION CONSTANT NUMBER := -20203;
  NO_G_APP_ID_DEFINED    CONSTANT NUMBER := -20202;

  NO_G_APP_ID_DEFINED_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(NO_G_APP_ID_DEFINED_EXCEPTION, -20202);
  MY_FORCED_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_FORCED_EXCEPTION, -20203);
  MY_TOO_MANY_ROWS_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_TOO_MANY_ROWS_EXCEPTION, -20204);
  MY_NO_ROWS_FOUND_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_NO_ROWS_FOUND_EXCEPTION, -20205);
  PRIMARY_KEY_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(PRIMARY_KEY_EXCEPTION, -0001);
  small_char_buffer EXCEPTION;
  PRAGMA EXCEPTION_INIT(small_char_buffer, -06502);
  bad_collection_query EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_collection_query, -20104);
  busy_resource EXCEPTION;
  PRAGMA EXCEPTION_INIT(busy_resource, -00054);
  bad_db_link EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_db_link, -00352);
  bad_number EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_number, -06502);

  --------------------------------------------------------------------------------------
  -- date format part
  PROCEDURE set_session_date_fmt (p_date_format IN VARCHAR2);
  PROCEDURE set_session_date_fmt_hr_long;
  PROCEDURE set_session_date_fmt_hr_short;
  --------------------------------------------------------------------------------------
  -- number format part
  PROCEDURE set_session_num_fmt (p_number_format IN VARCHAR2);
  PROCEDURE set_session_num_fmt_hr;
  PROCEDURE set_session_num_fmt_en;
  --------------------------------------------------------------------------------------
END;
/

CREATE OR REPLACE PACKAGE BODY "COMMON_PKG" AS
  --------------------------------------------------------------------------------------
  PROCEDURE set_session_date_fmt (p_date_format IN VARCHAR2)
  IS
  BEGIN
    EXECUTE IMMEDIATE ('alter session set nls_date_format=''' || p_date_format || '''');
  END set_session_date_fmt;

  PROCEDURE set_session_date_fmt_hr_long
  IS
  BEGIN
    set_session_date_fmt (hr_date_format_long);
  END set_session_date_fmt_hr_long;

  PROCEDURE set_session_date_fmt_hr_short
  IS
  BEGIN
    set_session_date_fmt (hr_date_format_short);
  END set_session_date_fmt_hr_short;
  --------------------------------------------------------------------------------------
  PROCEDURE set_session_num_fmt (p_number_format IN VARCHAR2)
  IS
  BEGIN
    EXECUTE IMMEDIATE ('alter session set NLS_NUMERIC_CHARACTERS=''' || p_number_format || '''');
  END set_session_num_fmt;

  PROCEDURE set_session_num_fmt_hr
  IS
  BEGIN
    set_session_num_fmt (',.');
  END set_session_num_fmt_hr;

  PROCEDURE set_session_num_fmt_en
  IS
  BEGIN
    set_session_num_fmt ('.,');
  END set_session_num_fmt_en;
  --------------------------------------------------------------------------------------
END COMMON_PKG;
/

25 comments :

  1. Very nice work-appreciated!

  2. Now I realize that

    common_pkg.set_session_num_fmt_hr is not defined (it is in a separate package not included in any of the files).
    So here is a replacement for this small code part:

    PROCEDURE set_session_num_fmt (p_number_format IN VARCHAR2)
    IS
    BEGIN
    EXECUTE IMMEDIATE
    ('alter session set NLS_NUMERIC_CHARACTERS=''' || p_number_format || '''');
    END set_session_num_fmt;

    PROCEDURE set_session_num_fmt_hr
    IS
    BEGIN
    set_session_num_fmt (',.');
    END set_session_num_fmt_hr;

    or, in short, you can place only this line (instead of calling the procedure in the package):
    EXECUTE IMMEDIATE
    ('alter session set NLS_NUMERIC_CHARACTERS=''' || ',.' || '''');


    Rg,
    Damir

  3. Hi,

    I'm getting the below error while creating the 1.6.1 version of the pkg body:

    DBA@SQL> @DB_SIZE_PKG_161.plb

    Package created.


    Warning: Package Body created with compilation errors.

    DBA@SQL> sho err
    Errors for PACKAGE BODY "DB_SIZE_PKG":

    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    2268/5 PL/SQL: Statement ignored
    2268/14 PLS-00306: wrong number or types of arguments in call to
    'DB_SIZE_SIZE_GROW_T'

    2281/5 PL/SQL: Statement ignored
    2281/14 PLS-00306: wrong number or types of arguments in call to
    'DB_SIZE_SIZE_GROW_T'


    Thanks

  4. @Anonymous,
    you probably didn't run step 1. in Installation paragraph.
    Rg,
    Damir

  5. Hi,

    What should I use in place of http://server:port/apex/f?p=1002:1 ? I've installed this package on Solaris 10 Sparc for Oracle 11.2 DB. And want to access it from Windows client.

    Thanks

  6. @anonymous,
    If you have imported the app under a different id, then use that one.
    If you reuse mine, then use this one.
    The URL is for the mod_plsql gateway. If you use a standalone server, then your URL is different.

  7. Hi Damir,

    Sorry, but I could not understand what exactly you meant by importing the app under a different id. I want to know what I should use for the port. I tried 8080 & 80, but neither works. However, I've compiled your code exactly as given. I've compiled this package on Solaris 10 Sparc for an Oracle 11.2 DB, and want to access it from a Windows client.

    Thanks

  8. Hi,

    How will I actually start/call the application from the browser?

    Br

  9. The graphical part is an Apex application.
    So you have to install Oracle Apex and then import this app into an Apex workspace.

  10. Hi Damir

    Excellent post.
    I have installed the package and other stuff on the db server, except APEX, because I don't use Oracle APEX. Can you guide me on how to execute the package to get the report?

    Thank you
    Haris

  11. @Haris Ali
    I do not see your post so I'll reply here.

    Look at my previous post on this topic (the initial one), "http://damir-vadas.blogspot.com/2010/02/monitor-database-size.html", and read The implementation chapter more carefully.

    However, I do advise that after that you implement the code you read here, in the Part II topic.

    Hope it is clear now.

  12. Hi Damir,

    I have an EBS database and a separate APEX database as a reporting database. I use database links to connect from APEX to the EBS/ERP database. If I install the objects in the APEX database and have database links to the other EBS/ERP databases, are there any additional steps I need to perform?

    Babu

    Replies
    1. Apex should be 4.1 or higher.
      The data repository may be in your Apex database (if you manually transfer it from the EBS database) or in an EBS database schema with enough privileges to perform the snapshot.
      If it is in the EBS database, just grant select on the db_size table (through the db link).

    2. I think I could rearrange the packages to make this easier to use.
      Please allow me 7 days and I'll post a new version which will cover your needs fully.
      rg
      Damir

  13. Hi Damir, any update on your previous reply?

    Damir Vadas, January 25, 2013 at 7:02 AM:
    I think I could rearrange the packages to make this easier to use.
    Please allow me 7 days and I'll post a new version which will cover your needs fully.
    rg
    Damir

  14. common_pkg.set_session_num_fmt_hr is not defined (it is in a separate package not included in any of the files).
    So here is a replacement for this small code part:

    Where should this be created? I mean, in which schema?
    I have created it under the apex workspace schema but I am still getting the same error.

    Replies
    1. Oops ... seems to be my error.

      Will fix here in 14 days ... now on vacation...

    2. Hi Damir,

      Can you please answer where common_pkg.set_session_num_fmt_hr should be created? We are still nowhere on this.

    3. Hi
      Sorry for the delay; I just forgot that I needed to place something here.
      So the code for "common_pkg.set_session_num_fmt_hr" has been added at the end of the main post. Hope this is OK now.
      Brg
      Damir

    4. Thanks, I am able to proceed now.
      I have a small query; I will be really grateful if you can help me with that.
      I have implemented the application as a test instance; except for the apex collections, everything else seems to work. I have taken snapshots for 3 consecutive days. Let me know if I need to do anything more to get the apex collections pages to work (history, tablespace history etc.). Top N by size GROW is showing "no data found". Wondering why.

  15. Hi Damir,

    Many thanks for the application. I was able to implement it successfully, filling in some of the gaps in the documentation provided in this blog.
    I would like to streamline the installation process and present it on my blog, referencing your blog, so that others can also make use of the application without missing any steps in the implementation process. For this I want to request your approval, as you are the owner of this application. Let me know if you are OK with this.

    Replies
    1. No problem; as long as you place my links on your blog, it is OK to comment on or reference it.
      Brg
      Damir

  16. Hi Damir,

    Currently we already have snapshots running every hour. Can we make use of them instead of scheduling db_size_pkg.create_snapshot on a daily basis?
    If we make use of the existing hourly snapshots, do we need to make any changes to your code? Please let us know. As we cannot see the code (it is wrapped), we are unable to estimate this ourselves.
    Thanks
    vamsi

    Replies
    1. The snapshot is intended to run once a day, because the pk field is in "yyyymmdd" format only.

