
Monday, October 31, 2011

Client side solution for Oracle/Windows service/resource checking

Most client-side machines run Windows - this is a fact. In classic client-server environments (and in those that strictly aren't), 99% of the problems lie on the client side. Detecting those problems as quickly as possible is therefore a constant challenge for any IT organization and a necessary foundation for any help desk.

Based on that premise, I recently got the task of designing a client-side tool that would check several services/resources and so narrow down the detection of potential errors on the client side. You know how it goes: when a user says "Database is down!" or "I cannot log on to the database", the real problem is probably far from what was reported, and what was reported is only a consequence of the core problem. From the very beginning, such a tool had to satisfy several conditions:
  1. Windows compatible with XP, Vista and Windows 7 on x86/x64 platforms
  2. A nice and clear GUI for the average user
  3. No installation
  4. Runnable by an ordinary Windows user (not an Administrator)
  5. Independent of regional settings
  6. Able to perform all the required checks
My first thought was to develop a Delphi application, where I have a lot of experience and which looked like the correct solution for the listed conditions. And just before I started, my chief came to me with the idea to make something "WEB based"...

Although the idea seemed ridiculous at first, his insistence on this kind of solution pushed me in a totally different direction, one I disliked at first only because of my lack of experience with it. Very soon, though (as many times before, after reading some material found through Google), the idea of developing a Microsoft HTML Application (HTA) started to look very original and handy for this purpose.
An HTA is a special kind of browser-based application that runs only in MS Internet Explorer, without IIS or any other web server behind it. Its embedded ActiveX support perfectly fits the need for deeper system operations, far beyond the reach of any ordinary web application.
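For readers who have never met this format, here is a minimal HTA skeleton (my own sketch; the window title, names and the empty test routine are placeholders, not parts of the actual tool):

<html>
<head>
  <title>Client side checker</title>
  <hta:application id="checker" applicationname="ClientSideChecker"
       border="thick" scroll="no" singleinstance="yes"/>
  <script language="VBScript">
    Sub RunTests()
      ' the real service/resource checks described below would go here
      MsgBox "All tests would run here."
    End Sub
  </script>
</head>
<body>
  <input type="button" value="Start All test" onclick="RunTests()"/>
</body>
</html>

Save it with the .hta extension and double-click it: it runs through mshta.exe with no installation, which is exactly what conditions 3 and 4 ask for.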

One file support

Because the HTA approach fit our needs perfectly, I first placed the pictures in three additional files (presentations of the "?", "OK" and "Bad" results) and linked them from the HTA application. But everything had to be packed into a single HTA file, without any external files. When I got stuck, I received amazing support from our programming gurus, Tomislav Bacinger and Joso Mrša. They encoded the picture files (MIME Base64) into three separate variables, declared directly in the HTA file. At run time these values were decoded and shown as real pictures. Very fast, very original and very flexible!! Bravo!
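To show the idea, one way to inline a Base64 picture (supported in IE8 and later) is a data URI; the string below is a real 1x1 transparent GIF, standing in here only for the actual icons, and it is not the gurus' exact runtime-decoding mechanism:

<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" />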

Unfortunately, my chief, encouraged by our cool and fast progress, gave us another condition: the HTA tool must be compatible with IE6-IE9 browsers. And IE6/IE7 cannot decode such values without external decoders. Because an external decoder was not acceptable, I was stuck again.

Then our gurus came into play again and defined the pictures as pure div styles - a solution which may be nothing to them, but was the Universe for me! And this worked even in IE6/IE7. Perfect: now I was able to finish my task and concentrate on pure VBScript coding.
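A minimal sketch of the div-style idea (my own illustration, not the gurus' exact styles): the "OK" and "Bad" markers can be drawn with plain styled divs that even IE6 renders:

<style>
  .ok  { width:16px; height:16px; background-color:green; color:white; text-align:center; font-weight:bold; }
  .bad { width:16px; height:16px; background-color:red;   color:white; text-align:center; font-weight:bold; }
</style>
<div class="ok">V</div>
<div class="bad">X</div>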

What can this tool do?

Although this is called a "client side tool", it can be run on any kind of Windows machine. The tool can:
  1. Check the presence and connectivity of the PC network card
  2. Check connectivity to www.google.com
  3. Check pure ping to one file server (in my case the server where some forms were placed on an ordinary Windows share)
  4. Check whether the file share is available (on the mentioned file server)
  5. Check tnsping/tnsping80 availability (some clients were using Forms 6 with tnsping80, and some had the tnsping utility)
  6. Check whether the application server is available and running (on the load-balanced path, and on the two single nodes separately)
  7. Check database availability.
    • For that I created one Oracle user, TEB_TEST, in all three databases.
    • The whole statement for that was:
      grant create session to teb_test identified by teb_test;
    • The main part is to perform select 'OK' from dual; on all databases: HDB, HLDB and TRX.
  8. Keep in mind that I show both ways to test availability (OraOLEDB as well as Microsoft ODBC for Oracle), but if you have just one type of connection, e.g. Forms 6, which uses TNS name entries, then you should use only the "Microsoft ODBC for Oracle" connection. Look in the source to see the differences and reduce it to your needs; a sketch of such a check follows after this list.
  9. Check whether the primary or backup route is used. This was really important because when something on the client side is slow, the cause is usually the backup connection, which is not as fast as the primary network. So this tool can answer that question as well.
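As an illustration of items 7 and 8, here is a minimal VBScript sketch of such a database check. The TEB_TEST credentials and the select 'OK' from dual test come from the description above; the exact connection strings are my assumption, not a copy of the tool's source:

On Error Resume Next
Dim conn, rs
Set conn = CreateObject("ADODB.Connection")
' "Microsoft ODBC for Oracle" variant; HDB is one of the TNS aliases mentioned above
conn.Open "Driver={Microsoft ODBC for Oracle};Server=HDB;Uid=teb_test;Pwd=teb_test;"
' the OraOLEDB variant would be:
' conn.Open "Provider=OraOLEDB.Oracle;Data Source=HDB;User Id=teb_test;Password=teb_test;"
If Err.Number = 0 Then
  Set rs = conn.Execute("select 'OK' from dual")
  If Not rs.EOF Then MsgBox "HDB: " & rs.Fields(0).Value  ' shows "OK"
  conn.Close
Else
  MsgBox "HDB check failed: " & Err.Description
End If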
As you can see, almost every kind of check is covered, and the whole source code is in front of you, ready to be customized to your needs. When you start the tool, you should see something like:
When the user presses the "Start All test" button, the tests run one after another. The untested question-mark picture (the initial value) is replaced with a green check (for a passed test) or a red X (for an error). When the tests finish, you get a result like:
In this picture the file share is not available, so the user can easily tell what the problem is, and technical help can be quick and accurate.

Warning

If you run the tool over the network, you'll probably hit a warning like this:
This is the security warning for 'Access data sources across domains'. It can be solved by setting the following registry key on the client side:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1 

Create/change the DWORD value 1406 there and set it to "0".
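If you prefer to apply the fix from a script, the same value can be written with one VBScript line (RegWrite is part of Windows Script Host; note the explicit "REG_DWORD" type, otherwise the value would be written as a string):

CreateObject("WScript.Shell").RegWrite _
  "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1\1406", 0, "REG_DWORD"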

The code

The source code of this client-side tool is here, with the instruction to rename the file from ".htax" to ".hta"; the original extension is blocked by many security mechanisms when downloading. After that, rename/replace your values and add/remove the checks you need. And that is all!

The End

Although I'm not a fan of MS-only solutions, the ease of development, maintenance and implementation is really something that in this case beats all other solutions.

Finally, this tool is totally free as long as the application title is not changed. I think that is not too much to ask.
:-)

If you find this tool useful, or find some bugs or possible improvements, please let me know - I'd be glad to hear it.

Until then cheers!

Monday, October 17, 2011

Monitor database size (part II)


In my previous post on the same subject, Monitor database size, I showed a way to monitor database size. In that post, beside the core methods, all results are retrieved through pure sqlplus calls.

At this year's 16th annual HROUG (Croatian Oracle Users Group) conference, I had a chance to present this solution publicly. As a small extension of the previous post, an Apex visual interface has been developed. With the new Apex approach, monitoring and analyzing database size is much easier, because problems are found and shown through a graphical interface (as opposed to the sqlplus-only solution so far). This doesn't mean the old "sqlplus" interface is obsolete; quite the opposite, it has been enhanced with all the methods the Apex interface has.

Before you continue reading, I encourage you to read that post first and then come back here, because some of the information is explained there in detail and is shown here only as a reference, without much explanation.

Installation

Since some new methods have been implemented (e.g. owner history, table set size), some changes had to be made to the previous objects, and some new objects added. For those who have implemented the Database Size Stat solution before, the good news is that the table itself has not changed (only some indexes were added), so the upgrade to this new Apex-based solution can be done on existing data.

User and required grants

The user (in my case TOOLS) must have some roles and privileges to be able to collect and show the data (the exact statements are spelled out after this list):
  • grant connect, resource
  • grant select on dba_segments
  • grant select on dba_indexes
  • grant select on dba_tab_partitions
  • grant select on dba_ind_partitions
  • grant select on dba_lob_partitions
  • grant execute on dbms_sql
  • grant execute on dbms_utility
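Spelled out as statements (run as a user with the sysdba privilege; TOOLS is the example owner from above):

grant connect, resource to tools;
grant select on sys.dba_segments to tools;
grant select on sys.dba_indexes to tools;
grant select on sys.dba_tab_partitions to tools;
grant select on sys.dba_ind_partitions to tools;
grant select on sys.dba_lob_partitions to tools;
grant execute on sys.dbms_sql to tools;
grant execute on sys.dbms_utility to tools;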

Other objects

Here are the other objects that must be placed in the Apex schema owner:
  1. required types
  2. db_size table sequence
  3. db_size table (if you already have the table running, there is no need to run this script)
  4. db_size table indexes
  5. db_size_gtt global temporary table (if you already have the table running, there is no need to run this script)
  6. HIST_ONE_SEGMENT_VW view
  7. HIST_ONE_TBLSPC_VW view
  8. DB_SIZE_PKG package source
  9. DB_SIZE_PKG package body
  10. Apex application source
For those who want to use the Apex packaged application feature, the whole installation is in one Apex export file (Unix format).
However, because everything mentioned in "User and required grants" needs to be done as a user with the sysdba privilege, do that before importing this file. If you make the grants afterwards, recompile the db_size_pkg package.

Do not forget to set up the db size stat data collector (explained in the previous post under point 5) if you wish to start collecting data.

About application

For all who are not too familiar with Apex, just a small note. The Apex export was done from workspace no. 1002 (which can be changed) and main page no. 1. In such a case the app URL looks like:
http://server:port/apex/f?p=1002:1
where server and port are defined by your Apex configuration. The application interface is divided into 5 main tabs:
  1. Home
  2. History
  3. Top N segments
  4. Top N tablespace
  5. Difference
Many of the resulting columns are links to detailed explanations of the chosen value. In this way, quick data drilling is enabled on all levels.

Home tab

This tab has 4 pages which show some general data. Some of them depend on the chosen period (defined in the format YYYYMMDD) or the chosen data (table set page).
Db size stat's main page (home page) is the page where you choose data from all collected periods. This is the whole view of the db_size table, filtered by periods:
Almost every column of this report is a link which leads you to a detailed explanation of the chosen value.

The second page on the Home tab is the Owners page, which shows cumulative data for the chosen period, based on owner statistics:

The third page is the general db size page, with total amounts over the collected periods:
Keep in mind that on every other page where graphs are shown, blue represents size and orange represents growth values, contrary to this one example where orange represents records.

The fourth page is the so-called "table set" page, which you reach indirectly - by clicking the "parent_id" column on the home page. The idea is to present cumulative data for one table along with its indexes.
This way of monitoring data came to my mind when I saw a database with a huge amount of indexes, where the tables were only 1/3 or 1/4 of the total size. The problem was that all the rest was ... mostly useless indexes!

History

This tab has 5 pages which are called automatically from other pages (as links). Their main purpose is to show a brief history of:
  1. segment
  2. partition
  3. owner
  4. tablespace
  5. segment type (table, index, lob segment etc.)
Just to mention that the "Db Share %" and "Db Rec Share %" values are percentages against the whole database at that moment, which can tell you "who is the boss in the database".

The first and main page is segment history. Segments are represented by the dba_segments view, with the RECYCLE BIN part excluded.
The important parts are where a segment's size (or number of records) grows/falls between two chosen periods. Another important point is that the REC/MB values should always stay the same. If they don't, the segment is usually too big (it has allocated too much space and should be shrunk) or the segment was altered. Think about it.
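For reference, shrinking such a segment is done with standard Oracle commands (MY_TABLE is just a placeholder name):

alter table my_table enable row movement;
alter table my_table shrink space;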

The second page is a similar statistic, but based on one partition:
Keep in mind that partitioned tables are placed under segment history, while their partitions are here.

The third page on this tab is history based on owners (schemas):
While on the Home tab owners are shown generally, here you can see the history data for each owner.

The fourth page is history based on tablespace:
Tablespaces are vital parts when you start to tune performance. Their content and growth can tell you a lot about a database.

The last page is history based on segment type:
Although the table segment type should always be the main part of this statistic, it is interesting when other segment types "get a role" to play.

Top N segments

Beside pure history analysis, the ability to compare two periods was vital. By comparing them, a proactive DBA can see and prevent many incoming problems. PERIOD1 must always be newer than PERIOD2, and this is ensured with cascading LOVs. So always choose PERIOD1 first.

The first page is based on pure segment size, shown in MB.
Every investigation should start from this point, because big segments (table based) usually suffer many full table scans (FTS), so measuring their size is vital to understanding the data in the database.

The second page is based on size growth in MB.
Growth is a rather different value than size. It is common for bigger objects to grow more than small ones, but when this changes, the DBA should look to see what has happened. Growth is almost as dangerous as shrinking.

The third page is based on growth, but now as a percentage value.
This is a very interesting statistic, because small segments may rise and you might not notice them until it is too late, since their size keeps them far from the first places. So here, regardless of the segment size, only the growth percentage is important. Here is a typical example of an uncontrolled segment size explosion.
Keep in mind that only segments bigger than 10 MB are calculated in this report. Smaller segments will never be shown here.

The fourth page is based on the number of records:
Although record growth is almost always linear to size growth, sometimes it diverges.

The fifth page is based, similarly to "size", on record growth:
Again, growth is a rather different thing than the number of records. I don't have to mention how important it is to monitor record growth for any table in the DB.

The sixth page is based, similarly to "size", on record percentage growth:
Keep in mind that only segments with more than 10,000 records are shown in this report.

The seventh page is based on total statistics. Here the periods are unimportant; it is simply the top N by size:

and the top N by records, ever, regardless of the chosen period:
It is very interesting that "Top N ever" might not show the same records as a Top N size query filtered by the first and last periods!

Top N tablespace

This tab has three pages that show statistics for the Top N tablespaces by: size, growth and growth %. It is interesting to see the graphical relations on the tablespace level as well.


Sometimes finding huge problems in a tablespace can lead you to the segment which is "odd".

Difference

This tab has only one page, but because of the way it shows data I decided to put it on a separate tab, to make it more prominent. The main idea is to show all new and dropped segments between two periods.
The idea came to me when I had to monitor whether developers had "missed" recreating some index in the last CR, or what is new in the database. With this page I can see that at once. Because I need this report based on the last change, PERIOD1 and PERIOD2 are by default filled with automatic values: the last snapshot and the one before it.
To find the last period, use the db_size_pkg.get_last_period function (briefly described in the package source).
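For example (my assumption: a no-argument signature that returns the latest period; check the package source for the exact form):

select db_size_pkg.get_last_period from dual;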

Live demo

And to turn back to my starting topic, here are graphical interpretations of the two examples mentioned in the starting post, which were the direct cause of what you are looking at now:
case nr 1:

case nr 2:

The end

And this is where I am now on this subject. I still have several ideas that should be implemented:
  1. Add prediction for all parts. This is a highly desirable feature and the first item on my "to do" list when I find time.
  2. Add support for statistic history based on table sets (logical groups of tables+indexes).
  3. Add support for INDEX PARTITION; currently the snapshot does not cover this, only the segment level for the whole partitioned index.
  4. Use the "rank" function instead of "rownum" for determining the top N parts. This is more precise (see the sketch below).
When I make some upgrades, I'll post them here on my blog.

However, if you find a bug, or something interesting, or have developed your own extension of the db_size table (something new), please let me know - I'd like to include it here.

The end II

Just after this year's HROUG 2011 finished, I had some time to fix and rebuild the whole thing for Apex 4.1. Beside the new look and feel, tableset history support has been added, as well as some minor fixes and improvements in the GUI. So here are the three parts (package + Apex app) which you should place over the existing ones (steps 8, 9, 10):
  1. DB_SIZE_PKG package source 1.61 version
  2. DB_SIZE_PKG package body 1.61 version
  3. Apex application source Apex 4.1

The end III

To make this utility more friendly, I have moved the create_snapshot function (which requires more privileges) from db_size_pkg to db_size_admin_pkg. Here is that new package: Now you can place db_size_pkg in any non-privileged Apex schema, where it is used only for reporting. create_snapshot is still left in db_size_pkg as well, to remain compatible with the previous version.

Apex cloud version

I have managed to find time to upload the Apex application to the Oracle Apex cloud. A version with reduced data can be tested here.
Cheers!
Here is the missing part of common_pkg.
CREATE OR REPLACE PACKAGE "COMMON_PKG" AS
/* $Header: TOOLS.common_pkg.pks 1.00 03/24/2009 14:54 damirv $ */
/*--------------------------------------------------------------------------------------------------------------------

           Copyright(C) 2011-2014 Vadas savjetovanje d.o.o.


 NAME    : COMMON_PKG
 PURPOSE : Package has all important procs that do not depend on any data or
           other package!

 Date    : 12.02.2005.
 Author  : Damir Vadas

 Remarks : Changing anything in this package requires a full recompile, which is
           hard to pull off in real production!

           No function here should depend on any table or any other custom
           package, so the package must be able to recompile with no data and
           no custom functions/packages!

   Changes (DD.MM.YYYY, Name, CR/TR#):
            18.02.2010 Damir Vadas
                       Changed NOT_OK         0 => -1
                               NOTHING_TO_DO -1 =>  0
            12.04.2010 Damir Vadas
                       Added "NO_G_APP_ID_DEFINED := -5;"
            21.04.2011 Damir Vadas
                       Added exception NO_G_APP_ID_DEFINED_EXCEPTION
            29.03.2011 Damir Vadas
                       Added hr_date_format values and set_session_date procedures (3 pieces)
            08.04.2011 Damir Vadas
                       Changed WHEN_OTHERS_ERROR 100=> 20998, UNHANDLED_ERROR NUMBER -99999 => -20999;
                       "set_session_date_format" changed to "set_session_date_fmt"
            12.04.2011 Damir Vadas
                       added const_max_number to make max value for sequence number
            05.05.2011 Damir Vadas
                       "Neobradena"->"Neobradjena", "pronaden"->"pronadjen"
            10.05.2011 Damir Vadas
                       Added number format part procs "set_session_num_fmt"
-------------------------------------------------------------------------------------------------------------------- */
  const_max_number CONSTANT NUMBER := 999999999999999999999999999;

  long_msg_t VARCHAR2(4000 CHAR);
  unhandled_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska!';
  unhandled_error_msg_eng long_msg_t%TYPE DEFAULT 'Unhandled error!';

  others_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska! '||CHR(13)||'Error code:' || TO_CHAR(SQLCODE)|| ': ' ||SQLERRM;
  html_others_error_msg long_msg_t%TYPE DEFAULT 'Neobradjena greska!
Error code:' || TO_CHAR(SQLCODE)|| ': ' ||SQLERRM;

  no_data_found_msg          long_msg_t%TYPE DEFAULT 'Podatak nije pronadjen.';
  no_data_found_msg_en       long_msg_t%TYPE DEFAULT 'No data found.';
  bad_input_data_msg         long_msg_t%TYPE DEFAULT 'Nepravilan ulaz podataka.';
  bad_input_data_msg_en      long_msg_t%TYPE DEFAULT 'Invalid input data.';
  too_many_data_found_msg    long_msg_t%TYPE DEFAULT 'Pronadjeno previse podataka.';
  too_many_data_found_msg_en long_msg_t%TYPE DEFAULT 'To many data found.';

  try_later_user    VARCHAR2 (64 CHAR) DEFAULT 'Molim pokusajte ponovno kasnije!';
  try_later_user_en VARCHAR2 (64 CHAR) DEFAULT 'Please try again later!';

  others_error_msg_user_init    long_msg_t%TYPE DEFAULT 'Zahtjev se ne moze procesirati. ' || try_later_user;
  others_error_msg_user_init_en long_msg_t%TYPE DEFAULT 'Request cannot be applied. '|| try_later_user_en;
  others_error_msg_user         long_msg_t%TYPE DEFAULT 'Sustav nije u stanju obraditi Vas zahtjev. ' || try_later_user;
  others_error_msg_user_en      long_msg_t%TYPE DEFAULT 'System cannot apply Your request. '|| try_later_user_en;
  resource_busy_msg             long_msg_t%TYPE DEFAULT 'Slog je zakljucan. '|| try_later_user;
  resource_busy_msg_en          long_msg_t%TYPE DEFAULT 'Record is locked. '|| try_later_user_en;

  hr_date_format_long  VARCHAR2(21 CHAR) := 'DD.MM.YYYY HH24:MI:SS';
  hr_date_format_short VARCHAR2(10 CHAR) := 'DD.MM.YYYY';

  -- problem with the ampersand character when pasting into PL/SQL
  H_AND_CO      CHAR(1)  := CHR(38); -- html sign &
  H_NBSP        CHAR(6)  := H_AND_CO || 'nbsp;'; -- html sign & + nbsp
  H_SMALL_IDENT CHAR(48) := H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP||H_NBSP;
  H_BIG_IDENT   CHAR(96) := H_SMALL_IDENT||H_SMALL_IDENT;

  log_file_handle UTL_FILE.FILE_TYPE;
  log_file_dir    VARCHAR2(256 CHAR) := '/oracle/dir';
  log_file_name   VARCHAR2(256 CHAR) := 'OracleDebug.log';
  maxlinesize     NUMBER := 32767;

  -- -20000 to -20999 are custom error numbers!
  OK                NUMBER  := 1;
  NOTHING_TO_DO     NUMBER  := 0;
  NOT_OK            NUMBER  := -1;
  NOTHING_SELECTED  NUMBER  := -2;
  BUSY_RES          NUMBER  := -3;
  INVALID_INPUT     NUMBER  := -98;
  WHEN_OTHERS_ERROR INTEGER := -20998;
  UNHANDLED_ERROR   NUMBER  := -20999;

  g_company_mail VARCHAR2(255 CHAR) := 'teb-informatika@teb-informatika.hr';

  /* 5 min = sysdate + 5/(60*24) */
  cn_small_lock NUMBER DEFAULT 0.00347222;
  /* 1 day */
  cn_big_lock   NUMBER DEFAULT 1;

  cn_MY_FORCED_EXCEPTION CONSTANT NUMBER := -20203;
  NO_G_APP_ID_DEFINED    CONSTANT NUMBER := -20202;

  NO_G_APP_ID_DEFINED_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(NO_G_APP_ID_DEFINED_EXCEPTION, -20202);
  MY_FORCED_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_FORCED_EXCEPTION, -20203);
  MY_TOO_MANY_ROWS_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_TOO_MANY_ROWS_EXCEPTION, -20204);
  MY_NO_ROWS_FOUND_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(MY_NO_ROWS_FOUND_EXCEPTION, -20205);
  PRIMARY_KEY_EXCEPTION EXCEPTION;
  PRAGMA EXCEPTION_INIT(PRIMARY_KEY_EXCEPTION, -0001);
  small_char_buffer EXCEPTION;
  PRAGMA EXCEPTION_INIT(small_char_buffer, -06502);
  bad_collection_query EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_collection_query, -20104);
  busy_resource EXCEPTION;
  PRAGMA EXCEPTION_INIT(busy_resource, -00054);
  bad_db_link EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_db_link, -00352);
  bad_number EXCEPTION;
  PRAGMA EXCEPTION_INIT(bad_number, -06502);

  --------------------------------------------------------------------------------------
  -- date format part
  PROCEDURE set_session_date_fmt (p_date_format IN VARCHAR2);
  PROCEDURE set_session_date_fmt_hr_long;
  PROCEDURE set_session_date_fmt_hr_short;

  --------------------------------------------------------------------------------------
  -- number format part
  PROCEDURE set_session_num_fmt (p_number_format IN VARCHAR2);
  PROCEDURE set_session_num_fmt_hr;
  PROCEDURE set_session_num_fmt_en;

END;
/

CREATE OR REPLACE PACKAGE BODY "COMMON_PKG" AS
  --------------------------------------------------------------------------------------
  PROCEDURE set_session_date_fmt (p_date_format IN VARCHAR2) IS
  BEGIN
    EXECUTE IMMEDIATE ('alter session set nls_date_format=''' || p_date_format || '''');
  END set_session_date_fmt;

  PROCEDURE set_session_date_fmt_hr_long IS
  BEGIN
    set_session_date_fmt (hr_date_format_long);
  END set_session_date_fmt_hr_long;

  PROCEDURE set_session_date_fmt_hr_short IS
  BEGIN
    set_session_date_fmt (hr_date_format_short);
  END set_session_date_fmt_hr_short;

  --------------------------------------------------------------------------------------
  PROCEDURE set_session_num_fmt (p_number_format IN VARCHAR2) IS
  BEGIN
    EXECUTE IMMEDIATE ('alter session set NLS_NUMERIC_CHARACTERS=''' || p_number_format || '''');
  END set_session_num_fmt;

  PROCEDURE set_session_num_fmt_hr IS
  BEGIN
    set_session_num_fmt (',.');
  END set_session_num_fmt_hr;

  PROCEDURE set_session_num_fmt_en IS
  BEGIN
    set_session_num_fmt ('.,');
  END set_session_num_fmt_en;
  --------------------------------------------------------------------------------------
END COMMON_PKG;
/

Tuesday, October 11, 2011

Trailing zeroes in OBI (Office apps)

When working with OBI (Oracle Business Intelligence), the ability to export to many different formats is one of the most useful features. The idea of saving data to PDF, Excel, Word or CSV, all without a single line of code, made this application famous. However, there can be problems. In this topic I'd like to explain one problem which may cause some frustration - the problem of trailing zeroes when exporting to MS applications.

The problem

Let's suppose that we have a column with a fixed number of decimal places (in this case 2 decimals):
The accented part is the one that will show the problem - a value with trailing zeroes.

When you want to export data from an Answers report to Excel, you do that through the link placed at the bottom of the page.

The result is a classical Excel file. In this case, after opening it, the monitored value looks like:
So, regardless of the formatting in the Answers report, as you can see, the trailing zeroes are lost!

Cause

You might think that the problem is in OBI and its functionality - but it is not! To prove this, here are steps that show it:
  1. Open Notepad and write 1,25000 (1.25000 with different Regional Settings)
  2. Select that value and copy it to the Windows clipboard
  3. Open a new Excel worksheet
  4. Paste the value from the clipboard - you get 1,25 (1.25); the zeroes are gone
The same can be tested if you export to a PDF file (where BI Publisher is involved - no MS influence): the trailing zeroes are there!

The solution

The solution was found in the From HTML to Excel blog post. Translated to OBI, it is performed by changing the CSS style in the problematic column's properties:
The crucial part is
mso-number-format:"\@"
which is written under the "Use Custom CSS Style" option.
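The trick works because this kind of Excel export is really HTML under the hood, and Excel honors the mso-number-format style; "\@" means "treat the cell as text". Conceptually, the exported cell ends up like this (a sketch, not the literal OBI output):

<td style='mso-number-format:"\@"'>1,25000</td>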
If you run the export now, the zeroes are preserved in the Excel worksheet:
This workaround has been successfully tested with an Office 2010 client and OBI 10.1.3.4 (Build 080726.1900) on the server side.

The End

But when you open the same Excel file with an Excel 2003 or 2007 client, you'll notice problems in columns where decimal and integer values are mixed. In such a column, the integer values in the mentioned Excel versions will be shown as date values in the format "mon-yy" (e.g. Jan-11).

For that problem I do not have a solution at the moment.

Cheers!

Saturday, October 8, 2011

ORA-20104: create_collection_from_query

Apex collections are structures placed directly in memory, so they are the solution if you want to work with data in the fastest way in Apex. In all other Oracle apps, global temporary tables are one of the most used similar features.

The problem

If you work with Apex and use a collection which is the base for some other object (e.g. a view) placed in a schema other than WWV_FLOW_xxx (where collections reside), you'll probably hit this problem. Here is the reproducing part.
In my case there was a pipelined function which queries data in the TOOLS schema. This function is used for creating a collection from a query. Later, the Apex collection is used to create a view which is again in the TOOLS schema.
The pipelined function is defined inside the package as:
FUNCTION grow_hist_one_owner_pr (p_owner IN VARCHAR2
           ) RETURN DB_SIZE_HIST_TBLSPC_SET_T PIPELINED;
View is defined as:
CREATE OR REPLACE FORCE VIEW TOOLS.HIST_ONE_SEGMENT_VW (
   OWNER,
   SEGMENT_NAME,
   SEGMENT_TYPE,
   PERIOD,
   DB_SHARE,
   DB_REC_SHARE,
   SIZE_MB,
   TREND_MB,
   TREND_SIZE_PERC,
   CUMUL_SIZE_PERC,
   RECORDS,
   TREND_REC,
   TREND_REC_PERC,
   CUMUL_REC_PERC,
   REC_IN_MB
)
AS
   SELECT C001 OWNER,
          C002 SEGMENT_NAME,
          C003 SEGMENT_TYPE,
          C004 PERIOD,
          TO_NUMBER (C005) DB_SHARE,
          TO_NUMBER (C006) DB_REC_SHARE,
          TO_NUMBER (C007) SIZE_MB,
          TO_NUMBER (C008) TREND_MB,
          TO_NUMBER (C009) TREND_SIZE_PERC,
          TO_NUMBER (C010) CUMUL_SIZE_PERC,
          TO_NUMBER (C011) RECORDS,
          TO_NUMBER (C012) TREND_REC,
          TO_NUMBER (C013) TREND_REC_PERC,
          TO_NUMBER (C014) CUMUL_REC_PERC,
          DECODE (TO_NUMBER (C007),
                  0, 0,
                  TO_NUMBER (C011) / TO_NUMBER (C007))
             REC_IN_MB
     FROM apex_collections
    WHERE collection_name = 'HIST_ONE_SEGMENT_COLL';
In an Apex page process (Before Header), you have code like:
IF APEX_COLLECTION.COLLECTION_EXISTS (p_collection_name=>'HIST_ONE_SEGMENT_COLL') THEN
  APEX_COLLECTION.DELETE_COLLECTION (p_collection_name=>'HIST_ONE_SEGMENT_COLL');
END IF;

APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY (
      p_collection_name => 'HIST_ONE_SEGMENT_COLL',
      p_query => 'SELECT * FROM TABLE( db_size_pkg.grow_hist_one_segment_pr )'
      );
whose purpose is to create the Apex collection and fill it with data through the CREATE_COLLECTION_FROM_QUERY method.
Later, the HIST_ONE_SEGMENT_VW view reads its data from the Apex collection, and it is used to define an ordinary Apex report as:
select * from HIST_ONE_SEGMENT_VW 

So, when you first open the Apex page that uses this view, you'll get an error like:
ORA-20104: create_collection_from_query Error: ORA-20104: create_collection_from_query ExecErr:ORA-01007: variable not in select list
Even though there are many explanations of why and how this happens, they all focus on security problems when you deal with objects in one schema and collections in another. My intention is not to cover the cause (frankly, I do not have enough public information for that) but to show how to work around it.

The solution

The solution is very easy: force the same action to execute twice and silence the exceptions. Here is the code.
-- drop the collection unconditionally; ignore any error (including "does not exist")
BEGIN
  APEX_COLLECTION.DELETE_COLLECTION (p_collection_name=>'HIST_ONE_SEGMENT_COLL');
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;

-- first attempt; it may fail with ORA-20104, in which case the same call is repeated
BEGIN
  APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY (
        p_collection_name => 'HIST_ONE_SEGMENT_COLL',
        p_query => 'SELECT * FROM ... '
        );
EXCEPTION
  WHEN OTHERS THEN
    -- second attempt; in practice this one succeeds
    BEGIN
      APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY (
            p_collection_name => 'HIST_ONE_SEGMENT_COLL',
            p_query => 'SELECT * FROM ... '
            );
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
END;

The end

I do not want to dwell on the fact that global temporary tables are not usable in Apex, so collections are the only option left; rather, I want to encourage everyone to use collections in Apex even more, because they rock!

Cheers!
