Damir Vadas, Oracle as I learned
This blog has been created just to express some of my thoughts publicly and to share something with the beautiful Oracle community. Let me share some knowledge ... Damir Vadas

Rebuild index candidates (2022-11-17)
A recent reminder from the Oracle online course "Understanding Explain Plans & Index Utilization" by Dan Hotka, together with problems I have faced in real professional life, prompted me to write down some thoughts about indexes and their usage in Oracle databases.
<br />
As we all know, some developers/DBAs have the bad habit of "tuning" SQLs by rebuilding indexes, just because someone said it is good to rebuild indexes. But how do you know when an index really needs a rebuild?
<br />
<b>First things first: Do not rebuild Oracle indexes! … Unless you have to.</b>
If you are even considering rebuilding indexes in an automated manner, <b>please stop now</b>, and first spend some time reading some of the many things <a href="https://richardfoote.wordpress.com/">Richard Foote</a> has to say on his highly recommended blog; then consider my proposal for such situations.
<br />
This is mainly because an online rebuild uses a lot of resources and is far from a "normal" DDL operation in the database, so always run it with precautions and test it on NON-production databases first.
<br />
<h1>The solution</h1>
Unlike some other solutions found around the Internet, my solution exposes parameters that let you narrow the output to the interesting part. It also returns all the needed information, such as current index size and estimated savings.
<br />
The script relies entirely on database statistics, <b>so bad index stats -> bad script results</b>.
<br />
My 32 kB script, named <span class="sql_text">reb_ind_cand.sql</span>, is posted <a href="https://drive.google.com/file/d/1CDcUecgZbye3QahgBMoZjme1xV5P0zbC/view?usp=sharing">here</a>.
<br />
Run the script with 4 parameters:
<pre class="brush: sql; gutter: false;">
sta reb_ind_cand [OWNER] (INDEX_NAME) [I/IP/ALL] [PERC_EST_SAVING]
</pre>
where parameters in [] take fixed values and the parameter in () is matched as a LIKE expression.
The script result looks like this:
<br/>
<pre class="brush: sql; gutter: false;">
SQL> @reb_ind_cand.sql XXXXX % ALL 5
REPORT_DATE
-------------------
2022-11-17T06-38-04
ORACLE_VERSION
--------------
19
PDB: SOME_DB
OWNER.INDEX_NAME SAVING % CURRENT SIZE SAV. STAT% CURRENT SIZE (STAT) ESTIMATED SIZE (STAT)
------------------------------------------------- ---------- -------------------- ---------- -------------------- ---------------------
XXXXX.AH_AUDIT_IX2 0.0 % 2.0 MB -41.4 % 1.4 MB 2.0 MB
XXXXX.DWH_WAIT_AH_LOCALDATE_IX2 22.7 % 22.0 MB 16.4 % 20.3 MB 17.0 MB
XXXXX.DWH_WAIT_AH_NAME_IX1 38.9 % 72.0 MB 32.2 % 64.9 MB 44.0 MB
XXXXX.NK_AHG_NAME 94.1 % 34.0 MB 50.3 % 4.0 MB 2.0 MB
XXXXX.NK_AH_ALIAS 6.1 % 6,271.8 MB 4.9 % 6,188.3 MB 5,888.0 MB
XXXXX.NK_AH_ARCHIVE 61.2 % 9,562.4 MB 57.8 % 8,802.5 MB 3,712.0 MB
XXXXX.NK_AH_ARCHIVE1 55.5 % 5,901.8 MB 50.0 % 5,251.9 MB 2,624.0 MB
XXXXX.NK_AH_ARCHIVE2 52.3 % 5,497.6 MB 47.7 % 5,019.9 MB 2,624.0 MB
XXXXX.NK_AH_ATYP 0.8 % 5,936.4 MB -1.8 % 5,782.4 MB 5,888.0 MB
XXXXX.NK_AH_CMTFLAG 5.9 % 2,787.2 MB 2.1 % 2,681.3 MB 2,624.0 MB
XXXXX.NK_AH_DATE1 7.0 % 4,130.3 MB 1.6 % 3,901.5 MB 3,840.0 MB
XXXXX.NK_AH_DATE2 6.9 % 4,123.0 MB 3.8 % 3,992.5 MB 3,840.0 MB
XXXXX.NK_AH_DATE4_1 63.2 % 8,700.9 MB 58.9 % 7,786.3 MB 3,200.0 MB
XXXXX.NK_AH_DATE4_2 61.7 % 8,348.9 MB 60.6 % 8,131.3 MB 3,200.0 MB
XXXXX.NK_AH_DELFLAG 59.1 % 6,103.3 MB 54.3 % 5,459.6 MB 2,496.0 MB
XXXXX.NK_AH_EXTCOND 2.0 % 7,901.2 MB -0.7 % 7,690.0 MB 7,744.0 MB
XXXXX.NK_AH_IDNR_BEARB 2.4 % 5,376.0 MB -0.8 % 5,208.5 MB 5,248.0 MB
XXXXX.NK_AH_NAME 4.0 % 8,463.9 MB 3.1 % 8,392.1 MB 8,128.0 MB
XXXXX.NK_AH_NAME2 2.3 % 8,315.5 MB -4.9 % 7,748.4 MB 8,128.0 MB
XXXXX.NK_AH_NAME3 0.0 % 8,899.0 MB -0.9 % 8,812.5 MB 8,896.0 MB
XXXXX.NK_AH_OH_IDNR 8.5 % 3,077.8 MB 3.7 % 2,923.5 MB 2,816.0 MB
XXXXX.NK_AH_PARENTACT 12.4 % 2,994.3 MB 10.3 % 2,926.5 MB 2,624.0 MB
XXXXX.NK_AH_PARENTHIR 50.7 % 5,586.9 MB 46.2 % 5,114.0 MB 2,752.0 MB
XXXXX.NK_AH_PARENTPRC 56.1 % 5,541.7 MB 50.4 % 4,903.8 MB 2,432.0 MB
XXXXX.NK_AH_QUEUE 3.8 % 5,120.0 MB 0.6 % 4,958.9 MB 4,928.0 MB
XXXXX.NK_AH_REFNR 62.2 % 6,089.9 MB 61.1 % 5,930.0 MB 2,304.0 MB
XXXXX.NK_AH_REORG 2.0 % 5,486.8 MB -0.9 % 5,328.0 MB 5,376.0 MB
XXXXX.NK_AH_RUN_EVNTID 50.0 % 7,169.4 MB 43.6 % 6,353.5 MB 3,584.0 MB
XXXXX.NK_AH_TIMESTAMP4 6.9 % 3,504.9 MB 3.8 % 3,394.3 MB 3,264.0 MB
XXXXX.NK_AH_TOPNR 59.9 % 5,742.4 MB 58.3 % 5,526.3 MB 2,304.0 MB
XXXXX.NK_AH_USR 51.2 % 6,298.5 MB 45.8 % 5,663.3 MB 3,072.0 MB
XXXXX.NK_AJPP_OBJECT 46.1 % 5,940.6 MB 45.5 % 5,876.9 MB 3,200.0 MB
XXXXX.NK_AJPP_REFNR 52.5 % 2,691.9 MB 50.8 % 2,602.3 MB 1,280.0 MB
XXXXX.NK_AJPP_TASKIDNR 60.8 % 3,264.3 MB 56.9 % 2,966.9 MB 1,280.0 MB
XXXXX.NK_EH_CHILDNR 90.9 % 33.0 MB 89.8 % 29.5 MB 3.0 MB
XXXXX.NK_EH_EVENTID 97.7 % 88.0 MB 97.3 % 73.4 MB 2.0 MB
XXXXX.NK_EH_HOST 81.8 % 11.0 MB 72.3 % 7.2 MB 2.0 MB
XXXXX.NK_EH_NAME 66.7 % 24.0 MB 52.6 % 16.9 MB 8.0 MB
XXXXX.NK_EH_NEXTCHECK 92.9 % 28.0 MB 90.9 % 22.1 MB 2.0 MB
XXXXX.NK_EH_OH_IDNR 78.6 % 14.0 MB 76.2 % 12.6 MB 3.0 MB
XXXXX.NK_EH_OTYPE 96.9 % 64.0 MB 96.5 % 56.4 MB 2.0 MB
XXXXX.NK_EH_PARENTACT 94.6 % 37.0 MB 93.6 % 31.2 MB 2.0 MB
XXXXX.NK_EH_PARENTHIR 92.9 % 28.0 MB 91.7 % 24.1 MB 2.0 MB
XXXXX.NK_EH_PARENTPRC 92.9 % 28.0 MB 92.4 % 26.4 MB 2.0 MB
XXXXX.NK_EH_QUEUE 88.5 % 26.0 MB 87.4 % 23.9 MB 3.0 MB
XXXXX.NK_EH_REFNR 50.0 % 4.0 MB 26.6 % 2.7 MB 2.0 MB
XXXXX.NK_EH_STARTTYPE 81.8 % 11.0 MB 77.4 % 8.8 MB 2.0 MB
XXXXX.NK_EH_STATUS 77.8 % 9.0 MB 70.5 % 6.8 MB 2.0 MB
XXXXX.NK_EH_TOPNR 87.5 % 16.0 MB 85.7 % 13.9 MB 2.0 MB
XXXXX.NK_EJPP_OBJECT 86.2 % 29.0 MB 85.5 % 27.6 MB 4.0 MB
XXXXX.NK_EJPP_OTYPE 96.5 % 57.0 MB 96.3 % 53.5 MB 2.0 MB
XXXXX.NK_EJPP_REFNR 33.3 % 3.0 MB 8.2 % 2.2 MB 2.0 MB
XXXXX.NK_EJPP_STATUS 66.7 % 6.0 MB 56.2 % 4.6 MB 2.0 MB
XXXXX.NK_EJPP_TASKIDNR 95.8 % 48.0 MB 95.7 % 46.7 MB 2.0 MB
XXXXX.NK_EJPP_TASKIDNR2 92.6 % 27.0 MB 91.7 % 24.1 MB 2.0 MB
XXXXX.NK_EJPP_TIMEOK 50.0 % 4.0 MB 21.7 % 2.6 MB 2.0 MB
XXXXX.NK_EY_SYNCNAME 88.2 % 17.0 MB 87.2 % 15.7 MB 2.0 MB
XXXXX.NK_EY_SYNCNAME2 92.3 % 26.0 MB 91.7 % 24.1 MB 2.0 MB
XXXXX.NK_FE_FH_IDNR 12.5 % 8.0 MB 2.3 % 7.2 MB 7.0 MB
XXXXX.NK_JPP_ALIAS 40.0 % 5.0 MB 29.7 % 4.3 MB 3.0 MB
XXXXX.NK_JPP_OBJECT 40.0 % 10.0 MB 31.6 % 8.8 MB 6.0 MB
XXXXX.NK_JPP_PARENTALIAS -275.0 % 4.0 MB -401.3 % 3.0 MB 15.0 MB
XXXXX.NK_MELD_LOEKZ 89.3 % 4,354.1 MB 88.9 % 4,198.0 MB 464.0 MB
XXXXX.NK_MELD_REORG 77.0 % 3,264.6 MB 75.1 % 3,022.0 MB 752.0 MB
XXXXX.NK_MELD_SOURCE 47.3 % 1,093.7 MB 36.3 % 904.1 MB 576.0 MB
XXXXX.NK_MELD_TIMESTAMP 78.9 % 3,265.2 MB 76.8 % 2,970.4 MB 688.0 MB
XXXXX.NK_MELD_USR 44.0 % 1,157.1 MB 34.0 % 981.8 MB 648.0 MB
XXXXX.NK_OFS_F 20.0 % 5.0 MB 4.1 % 4.2 MB 4.0 MB
XXXXX.NK_OFS_MODDATE 14.3 % 7.0 MB 0.9 % 6.1 MB 6.0 MB
XXXXX.NK_OFS_O 20.0 % 5.0 MB -3.4 % 3.9 MB 4.0 MB
XXXXX.NK_OH_CRDATE 25.0 % 8.0 MB 17.4 % 7.3 MB 6.0 MB
XXXXX.NK_OH_DELFLAG 33.3 % 12.0 MB 30.5 % 11.5 MB 8.0 MB
XXXXX.NK_OH_MODDATE 76.7 % 30.0 MB 76.0 % 29.2 MB 7.0 MB
XXXXX.NK_OH_MODDATE2 87.8 % 49.0 MB 87.6 % 48.3 MB 6.0 MB
XXXXX.NK_OH_MRTEXECUTE 14.3 % 7.0 MB 0.3 % 6.0 MB 6.0 MB
XXXXX.NK_OH_OPENSESS 0.0 % 6.0 MB -19.6 % 5.0 MB 6.0 MB
XXXXX.NK_OH_OTYPE 24.0 % 25.0 MB 21.5 % 24.2 MB 19.0 MB
XXXXX.NK_OH_OTYPE2 0.0 % 8.0 MB -4.1 % 7.7 MB 8.0 MB
XXXXX.NK_OH_REFIDNR 70.8 % 24.0 MB 42.3 % 12.1 MB 7.0 MB
XXXXX.NK_OH_REORG 72.2 % 36.0 MB 71.4 % 34.9 MB 10.0 MB
XXXXX.NK_OH_SEARCH 26.9 % 26.0 MB 24.5 % 25.2 MB 19.0 MB
XXXXX.NK_OH_SEARCH2 28.6 % 21.0 MB 25.0 % 20.0 MB 15.0 MB
XXXXX.NK_OH_SEARCH3 82.8 % 58.0 MB 82.2 % 56.0 MB 10.0 MB
XXXXX.NK_OH_VERSIONINGID -200.0 % 2.0 MB -333.9 % 1.4 MB 6.0 MB
XXXXX.NK_OT_SEARCH_NC -130.0 % 80.0 MB -154.6 % 72.3 MB 184.0 MB
XXXXX.NK_OVW_VVALUE 77.8 % 864.0 MB 43.7 % 340.9 MB 192.0 MB
XXXXX.NK_OY_SYNCNAME 50.0 % 4.0 MB 37.6 % 3.2 MB 2.0 MB
XXXXX.NK_RH_AH_IDNR 75.9 % 4,775.0 MB 75.5 % 4,700.4 MB 1,152.0 MB
XXXXX.NK_RH_ARCHIVE1 66.8 % 4,242.6 MB 62.7 % 3,773.6 MB 1,408.0 MB
XXXXX.NK_RH_ARCHIVE2 73.0 % 5,924.0 MB 72.2 % 5,758.2 MB 1,600.0 MB
XXXXX.NK_RH_DELFLAG 68.6 % 4,079.4 MB 64.3 % 3,585.2 MB 1,280.0 MB
XXXXX.NK_RH_TIMESTAMP4 76.5 % 4,912.4 MB 76.0 % 4,809.4 MB 1,152.0 MB
XXXXX.NK_TEMP1_KEY1 0.0 % 0.0 MB 0.0 % 0.0 MB 0.0 MB
XXXXX.NK_TEMP1_KEY3 0.0 % 0.0 MB 0.0 % 0.0 MB 0.0 MB
XXXXX.NK_XAO_GETCHLOG 44.4 % 72.0 MB 43.2 % 70.5 MB 40.0 MB
XXXXX.NK_XAO_REORG 79.8 % 104.0 MB 77.1 % 91.8 MB 21.0 MB
XXXXX.PK_ABLOB 69.0 % 1,008.0 MB 46.3 % 581.4 MB 312.0 MB
XXXXX.PK_ACV 25.0 % 44.0 MB 14.8 % 38.8 MB 33.0 MB
XXXXX.PK_AFC 94.3 % 88.0 MB 66.5 % 14.9 MB 5.0 MB
XXXXX.PK_AH 55.4 % 5,164.0 MB 52.2 % 4,816.9 MB 2,304.0 MB
XXXXX.PK_AHG 95.1 % 41.0 MB 45.3 % 3.7 MB 2.0 MB
XXXXX.PK_AHGH 95.7 % 46.0 MB 51.2 % 4.1 MB 2.0 MB
XXXXX.PK_AH_HIST_10_AH_IDNR 10.0 % 1,280.0 MB -1.2 % 1,138.6 MB 1,152.0 MB
XXXXX.PK_AH_HIST_1_AH_IDNR 11.4 % 280.0 MB 5.7 % 262.9 MB 248.0 MB
XXXXX.PK_AH_HIST_50_AH_IDNR 32.5 % 80.0 MB 29.0 % 76.0 MB 54.0 MB
XXXXX.PK_AH_HIST_900_AH_IDNR 25.0 % 4.0 MB 11.5 % 3.4 MB 3.0 MB
XXXXX.PK_AH_HIST_ADX_AH_IDNR 7.5 % 320.0 MB 3.2 % 305.7 MB 296.0 MB
XXXXX.PK_AH_HIST_MBBUP_AH_IDNR -14.3 % 7.0 MB -36.9 % 5.8 MB 8.0 MB
XXXXX.PK_AH_HIST_MB_AH_IDNR 28.6 % 112.0 MB 17.8 % 97.3 MB 80.0 MB
XXXXX.PK_AJPFV 44.1 % 272.0 MB 34.0 % 230.2 MB 152.0 MB
XXXXX.PK_AJPOV 73.1 % 4,283.1 MB 45.3 % 2,106.3 MB 1,152.0 MB
XXXXX.PK_AJPP 52.4 % 2,690.9 MB 51.2 % 2,623.2 MB 1,280.0 MB
XXXXX.PK_AJPPA 52.4 % 2,823.0 MB 49.5 % 2,661.0 MB 1,344.0 MB
XXXXX.PK_AJPPC 39.1 % 46.0 MB 36.7 % 44.2 MB 28.0 MB
XXXXX.PK_AJPPF 56.7 % 240.0 MB 52.4 % 218.6 MB 104.0 MB
XXXXX.PK_AJPPO 72.0 % 3,885.4 MB 40.0 % 1,812.6 MB 1,088.0 MB
XXXXX.PK_AJPPV 85.0 % 3,781.4 MB 42.3 % 983.9 MB 568.0 MB
XXXXX.PK_APD 60.0 % 10.0 MB 51.5 % 8.3 MB 4.0 MB
XXXXX.PK_APPF 90.0 % 50.0 MB 58.0 % 11.9 MB 5.0 MB
XXXXX.PK_APUD 76.5 % 4,365.8 MB 45.4 % 1,874.5 MB 1,024.0 MB
XXXXX.PK_AV 0.3 % 8,988.3 MB -3.1 % 8,688.9 MB 8,960.0 MB
XXXXX.PK_BH 27.3 % 264.0 MB 20.5 % 241.6 MB 192.0 MB
XXXXX.PK_BT 36.8 % 304.0 MB 30.4 % 275.7 MB 192.0 MB
XXXXX.PK_DWH_WAIT 10.5 % 19.0 MB 4.7 % 17.8 MB 17.0 MB
XXXXX.PK_EH 98.2 % 112.0 MB 97.9 % 93.3 MB 2.0 MB
XXXXX.PK_EJ 97.2 % 36.0 MB 96.5 % 28.6 MB 1.0 MB
XXXXX.PK_EJPOV 90.0 % 40.0 MB 67.0 % 12.1 MB 4.0 MB
XXXXX.PK_EJPP 92.3 % 26.0 MB 91.4 % 23.2 MB 2.0 MB
XXXXX.PK_EJPPA 88.9 % 27.0 MB 87.7 % 24.3 MB 3.0 MB
XXXXX.PK_EJPPO 88.6 % 35.0 MB 68.4 % 12.7 MB 4.0 MB
XXXXX.PK_EJPPV 89.5 % 38.0 MB 60.3 % 10.1 MB 4.0 MB
XXXXX.PK_EPUD 94.4 % 72.0 MB 87.8 % 32.9 MB 4.0 MB
XXXXX.PK_ESTP 95.2 % 21.0 MB 93.9 % 16.4 MB 1.0 MB
XXXXX.PK_EV 93.8 % 192.0 MB 92.7 % 165.2 MB 12.0 MB
XXXXX.PK_EY 94.1 % 17.0 MB 93.8 % 16.2 MB 1.0 MB
XXXXX.PK_FE 12.5 % 8.0 MB -3.2 % 6.8 MB 7.0 MB
XXXXX.PK_FIFO 50.0 % 2.0 MB 4.5 % 1.0 MB 1.0 MB
XXXXX.PK_JPOV 42.9 % 7.0 MB 28.7 % 5.6 MB 4.0 MB
XXXXX.PK_JPP 25.0 % 4.0 MB 13.3 % 3.5 MB 3.0 MB
XXXXX.PK_JPPA 20.0 % 5.0 MB 3.2 % 4.1 MB 4.0 MB
XXXXX.PK_JPPO 42.9 % 7.0 MB 22.0 % 5.1 MB 4.0 MB
XXXXX.PK_JPPV 40.0 % 5.0 MB 16.3 % 3.6 MB 3.0 MB
XXXXX.PK_MELD 87.9 % 3,845.5 MB 87.4 % 3,675.7 MB 464.0 MB
XXXXX.PK_MQMEM 97.7 % 44.0 MB 82.9 % 5.8 MB 1.0 MB
XXXXX.PK_MSGTX 0.0 % 2.0 MB -82.9 % 1.1 MB 2.0 MB
XXXXX.PK_ODOC 50.0 % 2.0 MB 26.0 % 1.4 MB 1.0 MB
XXXXX.PK_OFS 0.0 % 4.0 MB -27.4 % 3.1 MB 4.0 MB
XXXXX.PK_OH 0.0 % 6.0 MB -4.5 % 5.7 MB 6.0 MB
XXXXX.PK_OKD 33.3 % 9.0 MB 24.1 % 7.9 MB 6.0 MB
XXXXX.PK_OT 11.1 % 27.0 MB 5.4 % 25.4 MB 24.0 MB
XXXXX.PK_OVD 0.0 % 2.0 MB -41.4 % 1.4 MB 2.0 MB
XXXXX.PK_OVW 69.3 % 600.0 MB 33.3 % 275.7 MB 184.0 MB
XXXXX.PK_OX 25.0 % 8.0 MB 13.5 % 6.9 MB 6.0 MB
XXXXX.PK_OY 50.0 % 2.0 MB 5.9 % 1.1 MB 1.0 MB
XXXXX.PK_RH 77.2 % 4,352.6 MB 76.8 % 4,275.4 MB 992.0 MB
XXXXX.PK_RT -4.6 % 11,200.0 MB -7.3 % 10,914.4 MB 11,712.0 MB
XXXXX.PK_XAO 77.5 % 80.0 MB 76.2 % 75.5 MB 18.0 MB
XXXXX.UK_OH_NAME 28.6 % 21.0 MB 25.5 % 20.1 MB 15.0 MB
XXXXX.UK_OKB_KEY 50.0 % 2.0 MB 30.1 % 1.4 MB 1.0 MB
XXXXX.UK_RH_TYP 66.5 % 3,625.2 MB 64.0 % 3,374.8 MB 1,216.0 MB
Elapsed: 00:00:21.56
reb_ind_cand [OWNER] (INDEX_NAME) [I/IP/ALL] [PERC_EST_SAVING]
reb_ind_cand HR % ALL 10
SQL>
</pre>
<h2>Final words</h2>
Hope this helps someone, especially young and inexperienced developers/DBAs. Anyone interested might also want to read my previous <a href="https://damir-vadas.blogspot.com/2010/11/how-to-see-index-usage-without-alter.html">blog post</a> from <b>2010</b>.
<br />
<br />
Cheers!
<div class="blogger-post-footer">Feed URL</div>

One Click Clone - fully automated EBS 11.5.10.2 cloning (DB and APPS side) (2022-08-27)
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH7SWJ0LaqB0fp9bFiu1AbCx9xdTuElAOYMS9ugGyQxCfJr6BMGdTbDktNK1DIL7OtdV7usbyEERGxBun7ozObyP50eJgCgL1gcs6VU62acZuiR8MPSNK35F5yNy4syv6uuD00w8AHzsw/s1600-h/dolly.jpg"><br />
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH7SWJ0LaqB0fp9bFiu1AbCx9xdTuElAOYMS9ugGyQxCfJr6BMGdTbDktNK1DIL7OtdV7usbyEERGxBun7ozObyP50eJgCgL1gcs6VU62acZuiR8MPSNK35F5yNy4syv6uuD00w8AHzsw/s320/dolly.jpg" style="float: left; margin-right: 6px; margin-top: 6px;" /></a>Although EBS 11.5.10 is considered the major and most stable of all 11.x releases, its official cloning procedure ("Cloning Oracle Applications Release 11i with Rapid Clone") still requires shutting down the source database instance. Oracle EBS 12.x will not be covered here, as I have no experience with it; according to what is written, there is obvious progress in its cloning part, but shutting down the source database instance is still required. Another official way of cloning is the visual approach through OAM (Dashboard | Site Map | Maintenance | Cloning group). <br />
Both types of cloning ("visual" or "completely by hand") have the following restrictions:<br />
<ul><li>It cannot be automated and requires constant human interaction</li>
<li>The visual type of cloning still requires additional manual work in a shell (running Linux commands)<br />
</li>
<li>The source instance must be shut down. Because in most cases this is production, I consider this a big minus for any real usage</li>
<li>There are other smaller inconveniences, not visible to most users, which show up on bigger Oracle EBS installations.</li>
</ul>My story starts when I was an APPS DBA on the biggest EBS project in Croatia, managing 18 EBS instances (mostly developer ones). Cloning them was an everyday job, performed mostly during the night (PROD had to be shut down). Let me tell you that at that time a clone took at least 5-8 hours (no RMAN usage, just a plain datafile copy over the network). After several months, totally drained and with no motivation, I started to dig into how to automate this, or leave the job ASAP. The last piece fell into place after I read a magazine article about the Linux expect package, a non-standard package that enables automated input for interactive scripts.<br />
The rest is history, which I'd like to summarize here in a few words...<br />
<ul><li>"One Click Clone" (referred to below as "OCC") gives a non-APPS DBA or an ordinary system administrator a way to automate Oracle EBS cloning without any special knowledge</li>
<li>The OCC method does not require additional hardware or exotic configuration, and it does not depend on any particular hardware platform (for example, expensive snapshot-capable storage systems)</li>
<li>The procedure has been tested in a Linux environment and all the scripts are written for the bash shell. <br />
With minor modifications to the scripts, I believe it would be possible to do the same for a Windows environment (though Windows as an OS for Oracle EBS is something I warmly advise avoiding).<br />
</li>
<li>AFAIK, this method has not been published anywhere else (according to the data available on the Internet and Metalink). The idea itself came after roughly 80 manually performed clones and the desire to automate the process.</li>
<li>Other positive aspects of the OCC method are the following:<br />
</li>
<ul><li>Cloning runs fully automatically (from crontab) while the whole source system (DB and APPS side) stays online, with no significant overhead on the source servers. Overhead on the destination servers (APPS and DB side) is significant and affects their performance during cloning. This is primarily disk overhead, as a clone involves a database with 100+ GB of datafiles and roughly 600,000 files on the APPS side</li>
</ul><li>Only a small amount of work remains for even more automation, and I am now very close to completing the installation process from zero. This means an out-of-the-box installation, from an empty server with just the OS to an up-and-running EBS instance! <br />
That the concept is almost finished can be seen from the procedure f_prepare_db_server, which prepares all settings on an empty server and is the precursor to the function f_db_create_oracle_home, which should create a new $ORACLE_HOME. <br />
</li>
<li>If I ever get a chance to go back to an APPS DBA job, I will finish that part to the very end (right now I have no such environment and work a different job; such is life in Croatia!). I can promise this because it is really quite a rudimentary procedure, one I have done several times manually but never had time to put on paper ... 8-(<br />
</li>
</ul><br />
<h1>Used literature</h1>For cloning and working with Oracle EBS in general, I would point to the official Oracle documentation, with special emphasis on the following topics:<br />
<h2>Oracle RMAN</h2><a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=73912.1">Note Id:73912.1</a> RMAN: Creating a Duplicate Database<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=388431.1">Note Id:388431.1</a> Creating a Duplicate Database on a New Host<br />
<h2>Oracle EBS general</h2><a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=285267.1">Note Id:285267.1</a> Oracle E-Business Suite 11i and Database FAQ<br />
<h2>Oracle EBS cloning</h2><a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216664.1">Note Id:216664.1</a> FAQ: Cloning Oracle Applications Release 11i<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1">Note Id:230672.1</a> Cloning Oracle Applications Release 11i with Rapid Clone<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=364565.1">Note Id:364565.1</a> Troubleshooting RapidClone issues with Oracle Applications 11i<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=398619.1">Note Id:398619.1</a> Clone Oracle Applications 11i using Oracle Application Manager (OAM Clone)<br />
<h2>Oracle EBS Autoconfig</h2><a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=218089.1">Note Id:218089.1</a> Autoconfig FAQ<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=315674.1">Note Id:315674.1</a> How To Verify if Autoconfig is Enabled on 11.5.x<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=165195.1">Note Id:165195.1</a> Using AutoConfig to Manage System Configurations with Oracle Applications 11i<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=341322.1">Note Id:341322.1</a> How to change the hostname of an Applications Tier using AutoConfig<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=270519.1">Note Id:270519.1</a> Customizing an AutoConfig Environment<br />
<h2>Oracle EBS System Administration</h2><a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=189487.1">Note Id:189487.1</a> System Administration FAQ's<br />
<a href="https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=457166.1">Note:457166.1</a> FNDCPASS Utility New Feature: Enhance Security With Non-Reversible Hash Password
<h1>Customers' reactions</h1><br />
I have successfully implemented the solution on several sites and have <a href="https://drive.google.com/file/d/1hJ3CHmhfnxGLiUvAZssnmSfDpQPAAqHm/view?usp=sharing">one written feedback form</a>.
<br />
So you can ask those customers directly about their impressions.
<h1>How to proceed</h1>
For all details and steps to implement this solution on your site, please contact the author by email: <b>damir dot vadas (monkey) gmail.com</b>

Telnet replacement on Windows OS (2021-09-27)
Many times I encounter situations where a client has problems with an Oracle connection. The first thing to check is that no firewall between the client machine and Oracle is blocking the port.
<br />
For that, professionals use the telnet application (available on any Linux OS), which is not installed by default on many Windows client computers.
<br />
In such situations there is a very handy PowerShell cmdlet that is already available (tnc is an alias for Test-NetConnection):
<pre class="brush: bash; gutter: false;">
PS U:\> tnc -ComputerName some_server.some_domain -Port 2329
ComputerName : some_server.some_domain
RemoteAddress : 10.89.248.10
RemotePort : 2329
InterfaceAlias : Ethernet 2
SourceAddress : 10.29.16.79
TcpTestSucceeded : True
PS U:\>
</pre>
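On a Linux box that happens to lack telnet too, a comparable quick check can be done with bash's built-in /dev/tcp pseudo-device. This is only a sketch of the idea; the host and port below are placeholders, and the 3-second timeout is an arbitrary choice:

```shell
#!/usr/bin/env bash
# Poor man's port check: test whether a TCP port answers, using bash's
# built-in /dev/tcp pseudo-device (no telnet or nc required).
check_port() {
  local host=$1 port=$2
  # The inner bash exits non-zero if the TCP connect fails;
  # timeout prevents hanging on silently filtered ports.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

check_port 127.0.0.1 1521   # 1521 is the default Oracle listener port
```

Note that /dev/tcp is a bash feature, not a real device, which is why the connect attempt is wrapped in an explicit bash -c call.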
<br>
Hope this helps someone.
<br>
<br>
Cheers!

Login storm on Oracle? (2021-03-18)
Recently a customer complained about problems with connections to one Oracle database and asked whether we could check if there had been a "listener storm". I will not explain what that is or how it impacts a system, as you can google it almost anywhere.
<br />
I will focus on how to efficiently check this claim. As the listener log in this case was 7.7 GB, it was very hard to run anything unoptimized against it. Here is what I did.
First, check by day (if you do not know the exact time); listener.log is the default file name (change it to your particular listener's log name if needed).
<br />
So I moved my focus to 14-MAR-2021, when the problems seem to have happened (according to the customer's claim), and looked at logins by hour (summary):
<pre class="brush: bash; gutter: false;">
fgrep "14-MAR-2021" listener.log | fgrep "establish" | awk '{ print $1 " " $2 }' | awk -F: '{ print $1 }' | sort | uniq -c
1392 14-MAR-2021 00
1376 14-MAR-2021 01
1340 14-MAR-2021 02
1388 14-MAR-2021 03
1411 14-MAR-2021 04
1394 14-MAR-2021 05
1817 14-MAR-2021 06
1503 14-MAR-2021 07
1580 14-MAR-2021 08
1653 14-MAR-2021 09
1742 14-MAR-2021 10 <---will take this hour for further analyze
1769 14-MAR-2021 11
1648 14-MAR-2021 12
1643 14-MAR-2021 13
1677 14-MAR-2021 14
1675 14-MAR-2021 15
1681 14-MAR-2021 16
1750 14-MAR-2021 17
1700 14-MAR-2021 18
1875 14-MAR-2021 19
3336 14-MAR-2021 20
2478 14-MAR-2021 21
1512 14-MAR-2021 22
1474 14-MAR-2021 23
</pre>
<br />
Then check listener.log for any errors in that period. In my case there were none, so I moved my focus to 14-MAR-2021 at 10 o'clock. Here is the chosen hour broken down by minute:
<pre class="brush: bash; gutter: false;">
fgrep "14-MAR-2021 10:" listener.log | fgrep "establish" | awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2 }' | sort | uniq -c
29 14-MAR-2021 10:00
38 14-MAR-2021 10:01
28 14-MAR-2021 10:02
28 14-MAR-2021 10:03
28 14-MAR-2021 10:04
28 14-MAR-2021 10:05
36 14-MAR-2021 10:06
35 14-MAR-2021 10:07
26 14-MAR-2021 10:08
22 14-MAR-2021 10:09
32 14-MAR-2021 10:10
43 14-MAR-2021 10:11
23 14-MAR-2021 10:12
33 14-MAR-2021 10:13
39 14-MAR-2021 10:14 <---will take this minute for further analyze
25 14-MAR-2021 10:15
26 14-MAR-2021 10:16
34 14-MAR-2021 10:17
29 14-MAR-2021 10:18
19 14-MAR-2021 10:19
22 14-MAR-2021 10:20
50 14-MAR-2021 10:21
30 14-MAR-2021 10:22
23 14-MAR-2021 10:23
24 14-MAR-2021 10:24
28 14-MAR-2021 10:25
31 14-MAR-2021 10:26
24 14-MAR-2021 10:27
32 14-MAR-2021 10:28
20 14-MAR-2021 10:29
29 14-MAR-2021 10:30
37 14-MAR-2021 10:31
31 14-MAR-2021 10:32
24 14-MAR-2021 10:33
18 14-MAR-2021 10:34
23 14-MAR-2021 10:35
37 14-MAR-2021 10:36
33 14-MAR-2021 10:37
35 14-MAR-2021 10:38
24 14-MAR-2021 10:39
32 14-MAR-2021 10:40
40 14-MAR-2021 10:41
26 14-MAR-2021 10:42
30 14-MAR-2021 10:43
32 14-MAR-2021 10:44
30 14-MAR-2021 10:45
21 14-MAR-2021 10:46
27 14-MAR-2021 10:47
27 14-MAR-2021 10:48
21 14-MAR-2021 10:49
32 14-MAR-2021 10:50
45 14-MAR-2021 10:51
27 14-MAR-2021 10:52
27 14-MAR-2021 10:53
22 14-MAR-2021 10:54
26 14-MAR-2021 10:55
33 14-MAR-2021 10:56
17 14-MAR-2021 10:57
27 14-MAR-2021 10:58
24 14-MAR-2021 10:59
----
1742
</pre>
<br />
A similar result, restricted to a single minute, can be obtained with:
<pre class="brush: bash; gutter: false;">
fgrep "14-MAR-2021 10:14:" listener.log | fgrep "establish" | awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2 }' | sort | uniq -c
39 14-MAR-2021 10:14
</pre>
<br />
And finally, the most important view: logins by second, as this really shows whether any problematic thresholds were hit:
<pre class="brush: bash; gutter: false;">
fgrep "14-MAR-2021 10:14:" listener.log | fgrep "establish" | awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2":"$3 }' | sort | uniq -c
3 14-MAR-2021 10:14:01
1 14-MAR-2021 10:14:04
1 14-MAR-2021 10:14:07
7 14-MAR-2021 10:14:14
1 14-MAR-2021 10:14:16
1 14-MAR-2021 10:14:20
1 14-MAR-2021 10:14:27
1 14-MAR-2021 10:14:29
1 14-MAR-2021 10:14:34
1 14-MAR-2021 10:14:43
1 14-MAR-2021 10:14:44
1 14-MAR-2021 10:14:47
2 14-MAR-2021 10:14:50
1 14-MAR-2021 10:14:55
1 14-MAR-2021 10:14:56
7 14-MAR-2021 10:14:57
8 14-MAR-2021 10:14:59
----
39
</pre>
<br />
So there was no login storm!
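If you want to replay this counting pipeline without a multi-gigabyte listener.log, here is a self-contained sketch. The log lines are fabricated and only mimic the timestamp layout and the "establish" keyword of a real listener log:

```shell
#!/usr/bin/env bash
# Fabricated listener.log excerpt: three connect lines, one other event.
cat > /tmp/listener_sample.log <<'EOF'
14-MAR-2021 10:14:01 * (CONNECT_DATA=(SID=SOME_DB)) * establish * SOME_DB * 0
14-MAR-2021 10:14:01 * (CONNECT_DATA=(SID=SOME_DB)) * establish * SOME_DB * 0
14-MAR-2021 10:14:59 * (CONNECT_DATA=(SID=SOME_DB)) * establish * SOME_DB * 0
14-MAR-2021 10:14:59 * service_update * SOME_DB * 0
EOF

# Same per-second grouping as above: only "establish" lines are counted,
# so this reports 2 connects at 10:14:01 and 1 at 10:14:59.
fgrep "14-MAR-2021 10:14:" /tmp/listener_sample.log | fgrep "establish" \
  | awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2 ":" $3 }' \
  | sort | uniq -c
```

The service_update line is deliberately excluded by the second fgrep, which is exactly why the real pipeline filters on "establish" before counting.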
<br />
<br />
Hope this helps someone.
<br />
<br />
Cheers!

Purge OS .aud file with SQL*Plus (2020-12-13)
Deleting .aud files may be a problem when you cannot reach the server directly from the OS side (because of security or, sometimes, firewall restrictions). <div><br />
However, deleting them all without any additional checking, which would be the easiest solution, was not something I wanted. Additionally, I had to leave the database settings in the same state they were in before purging the files, which required additional programming as well. This is why I introduced a purge period (defined as a timestamp variable) and some additional code, explained in detail in comments inside the script, so that it runs on any environment I could find.
<br /><br />
So I wrote a small script called <a href="https://sites.google.com/site/vadasovi/icon_holders-1/purge_os_aud.sql">purge_os_aud.sql</a>, which can delete .aud files at the OS level directly from SQL*Plus.
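For completeness: when you do have OS access, the usual counterpart is a plain find command. This is only a generic sketch, not part of the script above; the directory is an example and the 7-day retention is an arbitrary choice:

```shell
#!/usr/bin/env bash
# OS-side alternative: delete .aud files older than 7 days.
# AUDIT_DIR is an example path; point it at your audit_file_dest.
AUDIT_DIR=${AUDIT_DIR:-/orabase/product/12cR2/db_1/rdbms/audit}
if [ -d "$AUDIT_DIR" ]; then
  # -maxdepth 1 keeps the cleanup confined to the audit directory itself.
  find "$AUDIT_DIR" -maxdepth 1 -type f -name "*.aud" -mtime +7 -print -delete
fi
```

The SQL*Plus approach below remains the only option when the OS side is unreachable.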
<h2 style="text-align: left;">Live example</h2>
<b>Situation on OS at the beginning</b>:
<pre class="brush: text; gutter: false;">@some_server:/orabase/product/12cR2/db_1/rdbms/audit > ls -ltr
total 30
-rw-r----- 1 oracle dba 787 Dec 9 00:03 SOME_DB_ora_29590_20201209000317028931143795.aud
-rw-r----- 1 oracle dba 814 Dec 9 00:03 SOME_DB_ora_29767_20201209000328231292143795.aud
-rw-r----- 1 oracle dba 812 Dec 9 00:03 SOME_DB_ora_29590_20201209000328069648143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:03 SOME_DB_ora_29845_20201209000334147381143795.aud
-rw-r----- 1 oracle dba 812 Dec 9 00:04 SOME_DB_ora_299_20201209000427939461143795.aud
-rw-r----- 1 oracle dba 812 Dec 9 00:04 SOME_DB_ora_294_20201209000427414852143795.aud
-rw-r----- 1 oracle dba 812 Dec 9 00:04 SOME_DB_ora_844_20201209000459038354143795.aud
-rw-r----- 1 oracle dba 814 Dec 9 00:05 SOME_DB_ora_1117_20201209000507772692143795.aud
-rw-r----- 1 oracle dba 814 Dec 9 00:05 SOME_DB_ora_1121_20201209000508072602143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:24 SOME_DB_ora_11275_20201209002414644280143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:24 SOME_DB_ora_11290_20201209002415858573143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:24 SOME_DB_ora_11285_20201209002415146598143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:27 SOME_DB_ora_13301_20201209002753537037143795.aud
-rw-r----- 1 oracle dba 816 Dec 9 00:27 SOME_DB_ora_13305_20201209002753756343143795.aud
-rw-r----- 1 oracle dba 881 Dec 9 05:18 SOME_DB_ora_24140_20201209051822299229143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:30 SOME_DB_ora_1003_20201209053043613964143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:31 SOME_DB_ora_1237_20201209053118186852143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:33 SOME_DB_ora_2144_20201209053328373759143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:38 SOME_DB_ora_4381_20201209053814441437143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:40 SOME_DB_ora_5934_20201209054017131313143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:43 SOME_DB_ora_7378_20201209054357866854143795.aud
-rw-r----- 1 oracle dba 881 Dec 9 05:50 SOME_DB_ora_11079_20201209055048991541143795.aud
-rw-r----- 1 oracle dba 882 Dec 9 05:57 SOME_DB_ora_14130_20201209055707639335143795.aud
-rw-r----- 1 oracle dba 881 Dec 9 06:20 SOME_DB_ora_26917_20201209062009264057143795.aud
-rw-r----- 1 oracle dba 877 Dec 9 06:27 SOME_DB_ora_398_20201209062714864529143795.aud
@some_server:/orabase/product/12cR2/db_1/rdbms/audit >
</pre>
<b>Purging</b>
<br />
Now the task is pretty easy:
<pre class="brush: text; gutter: false;">
purge_os_aud.sql [before SYSTIMESTAMP]
</pre>
<br />
<pre class="brush: text; gutter: false;">07:13:58 SYS@SOME_DB>@purge_os_aud "to_timestamp ('09.12.2020 06:00:00.000000', 'dd.mm.yyyy hh24:mi:ss.FF')"
Purging OS audit files, CONTAINER_CURRENT, older than "to_timestamp ('09.12.2020 06:00:00.000000', 'dd.mm.yyyy hh24:mi:ss.FF')"
Please wait ...
OS AUDIT TRAIL temporary set to: 09.12.2020 06:00:00.000000000
AUDIT_TRAIL_OS filesystem on "/orabase/product/12cR2/db_1/rdbms/audit" cleared.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.70
07:14:06 SYS@SOME_DB>
</pre>
<b>AFTER execution</b>:
<pre class="brush: text; gutter: false;">@some_server:/orabase/product/12cR2/db_1/rdbms/audit > ls -ltr
total 30
-rw-r----- 1 oracle dba 816 Dec 9 00:24 SOME_DB_ora_11275_20201209002414644280143795.aud
-rw-r----- 1 oracle dba 879 Dec 9 05:40 SOME_DB_ora_5934_20201209054017131313143795.aud
-rw-r----- 1 oracle dba 882 Dec 9 05:57 SOME_DB_ora_14130_20201209055707639335143795.aud
-rw-r----- 1 oracle dba 881 Dec 9 06:20 SOME_DB_ora_26917_20201209062009264057143795.aud
-rw-r----- 1 oracle dba 877 Dec 9 06:27 SOME_DB_ora_398_20201209062714864529143795.aud
@some_server:/orabase/product/12cR2/db_1/rdbms/audit >
</pre>
<h2>Is something wrong?</h2>
As some of you may have noticed, some .aud files were still not deleted on the OS side.
<br />Why?
<br />IMHO, the reason is active sessions that are still running. You can check them with the next script:
<pre class="brush: sql; gutter: false;">SELECT v.con_id, v.name, v.open_mode, COUNT(u.event_timestamp) count
FROM cdb_unified_audit_trail u
FULL OUTER JOIN v$containers v ON u.con_id = v.con_id
GROUP BY v.con_id, v.name, v.open_mode
ORDER BY v.con_id;
</pre>
<h1>The End</h1>
Since 11g, Oracle has introduced some nice methods for such tasks. All the code shown here was tested against a 12.1 database, so it may not work on earlier releases.
<br />
<br />
Hope this helps someone.
<br />
<br />
Cheers!</div><div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0tag:blogger.com,1999:blog-2807785752181271137.post-5433908025679876602020-11-26T11:13:00.009+01:002021-03-03T07:21:12.842+01:00Check file space script for any UnixWe DBAs have problems when we need to check space on different Unix environments (HP-UX, AIX, Linux, Solaris, etc.).
Here is a script that covers all envs... and I use it with great success:
<pre class="brush: text; gutter: false;">
df -Pk | awk '{
if ( NR == 1 ) { next }
if ( NF == 6 ) { print }
if ( NF == 5 ) { next }
if ( NF == 1 ) {
getline record;
$0 = $0 record
print $0
}
}' | awk '
BEGIN {print "Filesystem Mount Point Total GB Avail GB Used GB Used"
print "--------------------------------------------- --------------------------------------------- ---------- ---------- ---------- -----"}
END {print ""}
/dev/ || /^[0-9a-zA-Z.]*:\// {
printf ("%-45.45s %-45s %10.2f %10.2f %10.2f %4.0f%%\n",$1,$6,$2/1024/1024,$4/1024/1024,$3/1024/1024,$5)
}'
</pre>
Result (from one Linux server) looks like:
<pre class="brush: text; gutter: false;">
Filesystem Mount Point Total GB Avail GB Used GB Used
--------------------------------------------- --------------------------------------------- ---------- ---------- ---------- -----
devtmpfs /dev 377.67 377.67 0.00 0%
tmpfs /dev/shm 377.68 377.61 0.08 1%
/dev/mapper/SYSTEM-ROOT / 1.91 1.69 0.10 6%
/dev/mapper/SYSTEM-USR /usr 3.81 1.88 1.71 48%
/dev/mapper/SYSTEM-SRV /srv 0.48 0.44 0.00 1%
/dev/sda2 /boot 0.48 0.21 0.23 52%
/dev/mapper/SYSTEM-HOME /home 0.95 0.88 0.01 1%
/dev/mapper/SYSTEM-HOMEU_USER /home/uisupp 1.91 1.78 0.01 1%
/dev/sda1 /boot/efi 0.24 0.23 0.01 4%
/dev/mapper/SYSTEM-OPT /opt 4.89 3.02 1.65 36%
/dev/mapper/SYSTEM-TMP /tmp 5.78 5.43 0.04 1%
/dev/mapper/SYSTEM-VAR /var 3.87 3.41 0.26 8%
/dev/mapper/SYSTEM-VARLIB /var/lib 0.95 0.52 0.37 42%
/dev/mapper/SYSTEM-VARLOG /var/log 3.81 3.21 0.39 11%
/dev/mapper/SYSTEM-VARCACHE /var/cache 4.86 4.05 0.57 13%
/dev/mapper/SYSTEM-ORABASE /orabase 39.25 9.43 27.80 75%
srvrv001.srvr.com:/srvr_db_new /var/backup 7168.00 7131.84 36.16 1%
/dev/mapper/SRVR10T-AUDIT /db/audit/SRVR10T 9.72 9.30 0.20 3%
/dev/mapper/SRVR10T-ADMIN /db/admin/SRVR10T 39.25 36.22 2.21 6%
/dev/mapper/SRVR10T-REDOA /db/redoa/SRVR10T 9.72 5.44 4.06 43%
/dev/mapper/SRVR10T-REDOB /db/redob/SRVR10T 9.72 5.44 4.06 43%
/dev/mapper/SRVR10T-FLASHARCH /db/flasharch/SRVR10T 98.31 96.23 0.06 1%
/dev/mapper/SRVR10T-DBF01 /db/dbf/SRVR10T 344.38 0.00 338.84 100%
123.45.67.890:/transfer /mnt/transfer 2918.40 2673.66 244.74 9%
</pre>
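One non-obvious part is the first awk stage: it re-joins <code>df</code> entries that wrap onto two lines when the device name is very long (the <code>NF == 1</code> branch). A minimal sketch with canned input (the device name and numbers are made up):

```shell
# Canned "df -Pk" output: a long device name forces the entry onto two
# lines, which is the NF == 1 case the script re-joins before printing.
joined=$(printf '%s\n' \
  'Filesystem 1024-blocks Used Available Capacity Mounted on' \
  '/dev/mapper/VERY-LONG-VOLUME-NAME' \
  '  1048576 524288 524288 50% /data' \
| awk '{
    if ( NR == 1 ) { next }        # skip the df header
    if ( NF == 6 ) { print }       # normal single-line entry
    if ( NF == 1 ) {               # wrapped entry: glue the next line on
      getline record
      $0 = $0 record
      print $0
    }
  }')
echo "$joined"   # prints the re-joined 6-field line
```

The second awk stage then sees only complete six-field records, so the formatting printf never has to deal with wrapped lines.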
<br />
Hope this helps.
<br />
<br />
Cheers!<div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0tag:blogger.com,1999:blog-2807785752181271137.post-72993791247704095282019-12-04T10:19:00.000+01:002019-12-04T12:00:04.956+01:00Truncate (or drop!) partition without disturbing global non partitioned indexes<h2>
The problem</h2>
A typical problem with partitioned tables is partition maintenance and its effect on global non-partitioned indexes.
In theory, if you truncate a partition while a global non-partitioned index exists on the table, that index becomes UNUSABLE and prevents normal functioning of the application.
And rebuilding an UNUSABLE global non-partitioned index can be a very costly operation (especially on DWH systems), while at the same time, if you do not use the ONLINE option, the rebuild takes locks that can also block normal application activity against the table.
The ONLINE option holds its locks only for very brief moments, but you must have free space of roughly double the index size in the tablespace, which can sometimes be impossible to achieve.
<br />
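For reference, the ONLINE rebuild mentioned above is a one-liner (<code>owner.index_name</code> is a placeholder):
<pre class="brush: sql;gutter: false; collapse: false;">
-- short locks, but needs roughly the index size in extra free space while running
ALTER INDEX owner.index_name REBUILD ONLINE;
</pre>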
But there is a workaround to prevent this.
<h2>
The demo</h2>
Here is a brief demo that explains everything with a practical case.
<br />
I will create one range-partitioned table with five partitions plus a MAXVALUE partition, where partitioning is based on a number column.
<pre class="brush: sql;gutter: false; collapse: false;">
DROP TABLE PURGE_PART_DEMO;
CREATE TABLE PURGE_PART_DEMO (
A_NUMBER NUMBER ,
A_NAME VARCHAR2(32)
)
SEGMENT CREATION IMMEDIATE
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
LOGGING
PARTITION BY RANGE (A_NUMBER)
(
PARTITION A_NUMBER_1 VALUES LESS THAN (1)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
),
PARTITION A_NUMBER_2 VALUES LESS THAN (2)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
),
PARTITION A_NUMBER_3 VALUES LESS THAN (3)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
),
PARTITION A_NUMBER_4 VALUES LESS THAN (4)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
),
PARTITION A_NUMBER_5 VALUES LESS THAN (5)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
),
PARTITION A_NUMBER_MAXVALUE VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE USERS
PCTUSED 40
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 32M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
)
PARALLEL 1;
</pre>
Block occupation of the empty table:
<pre class="brush: sql;gutter: false; collapse: false;">
select bytes/1024/1024 MB, blocks, segment_name, partition_name from dba_segments s where segment_name='PURGE_PART_DEMO' ORDER BY PARTITION_NAME;
MB BLOCKS SEGMENT_NAME PARTITION_NAME
---------- ---------- --------------------------------------------------------------------------------- ------------------------------
1 128 PURGE_PART_DEMO A_NUMBER_1
1 128 PURGE_PART_DEMO A_NUMBER_2
1 128 PURGE_PART_DEMO A_NUMBER_3
1 128 PURGE_PART_DEMO A_NUMBER_4
1 128 PURGE_PART_DEMO A_NUMBER_5
1 128 PURGE_PART_DEMO A_NUMBER_MAXVALUE
6 rows selected.
</pre>
Insert 30,000 records into each partition:
<pre class="brush: sql;gutter: false; collapse: false;">
begin
for i in 1..30000 loop
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>0, high=>1) ,dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_1
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>1, high=>2) ,dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_2
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>2, high=>3) ,dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_3
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>3, high=>4) ,dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_4
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>4, high=>5) ,dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_5
insert into PURGE_PART_DEMO (a_number,a_name) values (dbms_random.value (low=>5, high=>99),dbms_random.string(opt=>'u', len =>32)); --part A_NUMBER_MAXVALUE
end loop;
commit;
end;
/
</pre>
Checking segment occupation again:
<pre class="brush: sql;gutter: false; collapse: false;">
select bytes/1024/1024 MB, blocks, segment_name, partition_name from dba_segments s where segment_name='PURGE_PART_DEMO' ORDER BY PARTITION_NAME;
MB BLOCKS SEGMENT_NAME PARTITION_NAME
---------- ---------- --------------------------------------------------------------------------------- ------------------------------
33 4224 PURGE_PART_DEMO A_NUMBER_1
33 4224 PURGE_PART_DEMO A_NUMBER_2
33 4224 PURGE_PART_DEMO A_NUMBER_3
33 4224 PURGE_PART_DEMO A_NUMBER_4
33 4224 PURGE_PART_DEMO A_NUMBER_5
33 4224 PURGE_PART_DEMO A_NUMBER_MAXVALUE
6 rows selected.
</pre>
Gather statistics on the table to have a full data view per partition:
<pre class="brush: sql;gutter: false; collapse: false;">
BEGIN
SYS.DBMS_STATS.GATHER_TABLE_STATS (
OwnName => 'Q383211'
,TabName => 'PURGE_PART_DEMO'
,Estimate_Percent => 100
,Block_sample => TRUE
,Method_Opt => 'FOR ALL COLUMNS SIZE 1'
,Degree => NULL
,Cascade => DBMS_STATS.AUTO_CASCADE
,No_Invalidate => DBMS_STATS.AUTO_INVALIDATE
,Force => FALSE);
END;
/
</pre>
<pre class="brush: sql;gutter: false; collapse: false;">
col HIGH_VALUE for A8
SELECT PARTITION_POSITION, PARTITION_NAME,
HIGH_VALUE, TABLESPACE_NAME, NUM_ROWS, LAST_ANALYZED
,AVG_SPACE, BLOCKS, EMPTY_BLOCKS
FROM sys.user_TAB_PARTITIONS
WHERE TABLE_NAME = 'PURGE_PART_DEMO'
ORDER BY PARTITION_POSITION;
PARTITION_POSITION PARTITION_NAME HIGH_VAL TABLESPACE_NAME NUM_ROWS LAST_ANA AVG_SPACE BLOCKS EMPTY_BLOCKS
------------------ ------------------------------ -------- ------------------------------ ---------- -------- ---------- ---------- ------------
1 A_NUMBER_1 1 USERS 30000 04.12.19 0 247 0
2 A_NUMBER_2 2 USERS 30000 04.12.19 0 252 0
3 A_NUMBER_3 3 USERS 30000 04.12.19 0 252 0
4 A_NUMBER_4 4 USERS 30000 04.12.19 0 252 0
5 A_NUMBER_5 5 USERS 30000 04.12.19 0 252 0
6 A_NUMBER_MAXVALUE MAXVALUE USERS 30000 04.12.19 0 252 0
6 rows selected.
</pre>
Create two indexes, one local and one global non-partitioned:
<pre class="brush: sql;gutter: false; collapse: false;">
CREATE INDEX PURGE_PART_DEMO_IX1 on PURGE_PART_DEMO (a_number) local; --partitioned one!
CREATE INDEX PURGE_PART_DEMO_IX2 on PURGE_PART_DEMO (a_name) ; --global non partitioned index
</pre>
Let me gather statistics again, with histograms on the indexed columns (not strictly needed for this case):
<pre class="brush: sql;gutter: false; collapse: false;">
BEGIN
SYS.DBMS_STATS.GATHER_TABLE_STATS (
OwnName => user
,TabName => 'PURGE_PART_DEMO'
,Estimate_Percent => 100
,Block_sample => TRUE
,Method_Opt => 'FOR ALL INDEXED COLUMNS SIZE 254'
,Degree => NULL
,Cascade => DBMS_STATS.AUTO_CASCADE
,No_Invalidate => DBMS_STATS.AUTO_INVALIDATE
,Force => FALSE);
END;
/
</pre>
The histograms look like:
<pre class="brush: sql;gutter: false; collapse: false;">
OWNER TABLE_NAME COLUMN_NAME NUM_DISTINCT NUM_BUCKETS HISTOGRAM LAST_ANALYZED SAMPLE_SIZE DENSITY
------------ -------------------- ------------------------------ ------------ ----------- --------------- ------------------- ----------- ----------
Q383211 PURGE_PART_DEMO A_NAME* 180000 254 H BALANCED 04.12.2019 08:20:44 180000 5,5556E-06
A_NUMBER* 180000 254 H BALANCED 04.12.2019 08:20:44 180000 5,5556E-06
* indexed column
** uq indexed column
</pre>
Check index status:
<pre class="brush: sql;gutter: false; collapse: false;">
select index_name, status from all_indexes where owner=(SELECT sys_context('USERENV', 'AUTHENTICATED_IDENTITY') FROM dual) and index_name in ('PURGE_PART_DEMO_IX1','PURGE_PART_DEMO_IX2');
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A --partition index ALWAYS VALID
PURGE_PART_DEMO_IX2 VALID --global non partitioned index (THIS ONE COULD MAKE A PROBLEMS)
</pre>
Now let me show you an ordinary partition truncate (the wrong way!):
<pre class="brush: sql;gutter: false; collapse: false;">
ALTER TABLE PURGE_PART_DEMO TRUNCATE PARTITION (A_NUMBER_3) drop storage;
</pre>
Let us check indexes:
<pre class="brush: sql;gutter: false; collapse: false;">
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A
PURGE_PART_DEMO_IX2 UNUSABLE
</pre>
Obviously, the global non-partitioned index became UNUSABLE, which is the expected Oracle behavior.
<h2>
The solution</h2>
First, let us rebuild the index back to a normal state for the next part of the demonstration.
<pre class="brush: sql;gutter: false; collapse: false;">
alter index PURGE_PART_DEMO_IX2 REBUILD;
Index altered.
</pre>
Check indexes status again:
<pre class="brush: sql;gutter: false; collapse: false;">
select index_name, status from all_indexes where owner=(SELECT sys_context('USERENV', 'AUTHENTICATED_IDENTITY') FROM dual) and index_name in ('PURGE_PART_DEMO_IX1','PURGE_PART_DEMO_IX2');
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A
PURGE_PART_DEMO_IX2 VALID
</pre>
And here is the solution, with a slightly different approach... the problem can be fully avoided!
The idea is to first delete all records in the partitions we want to truncate, and only then truncate those partitions in the classic way.
This approach will not put global non-partitioned indexes into an UNUSABLE state!
<pre class="brush: sql;gutter: false; collapse: false;">
DELETE FROM PURGE_PART_DEMO partition (A_NUMBER_2) ;
30000 rows deleted.
commit;
Commit complete.
</pre>
Check indexes status again:
<pre class="brush: sql;gutter: false; collapse: false;">
select index_name, status from all_indexes where owner=(SELECT sys_context('USERENV', 'AUTHENTICATED_IDENTITY') FROM dual) and index_name in ('PURGE_PART_DEMO_IX1','PURGE_PART_DEMO_IX2');
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A
PURGE_PART_DEMO_IX2 VALID
</pre>
And now we truncate that partition while it holds no records... <b>Oracle is smart enough to understand that it should not touch the global non-partitioned index, so it remains VALID</b>!
<pre class="brush: sql;gutter: false; collapse: false;">
ALTER TABLE PURGE_PART_DEMO TRUNCATE PARTITION (A_NUMBER_2) drop storage;
Table truncated.
</pre>
Check indexes status again:
<pre class="brush: sql;gutter: false; collapse: false;">
select index_name, status from all_indexes where owner=(SELECT sys_context('USERENV', 'AUTHENTICATED_IDENTITY') FROM dual) and index_name in ('PURGE_PART_DEMO_IX1','PURGE_PART_DEMO_IX2');
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A
PURGE_PART_DEMO_IX2 VALID
</pre>
The only important thing is to set the INITIAL extent of each partition to just 1 MB, so that when you truncate it, the segment falls back to its minimum size.
<pre class="brush: sql;gutter: false; collapse: false;">
select bytes/1024/1024 MB, blocks, segment_name, partition_name from dba_segments s where segment_name='PURGE_PART_DEMO' ORDER BY PARTITION_NAME;
MB BLOCKS SEGMENT_NAME PARTITION_NAME
---------- ---------- --------------------------------------------------------------------------------- ------------------------------
33 4224 PURGE_PART_DEMO A_NUMBER_1
1 128 PURGE_PART_DEMO A_NUMBER_2
1 128 PURGE_PART_DEMO A_NUMBER_3
33 4224 PURGE_PART_DEMO A_NUMBER_4
33 4224 PURGE_PART_DEMO A_NUMBER_5
33 4224 PURGE_PART_DEMO A_NUMBER_MAXVALUE
6 rows selected.
</pre>
<h2>
The final</h2>
It is very important to set the storage INITIAL parameter of each partition to 1M, so you really use the smallest possible amount of space once a partition is truncated.
That way, later, when you get a time frame for full downtime, you can drop the partitions that were previously truncated.
<br />
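The delete-then-truncate steps shown above can be wrapped into one reusable procedure. This is only a sketch (the procedure name and the use of DBMS_ASSERT are mine, and there is no error handling):
<pre class="brush: sql;gutter: false; collapse: false;">
CREATE OR REPLACE PROCEDURE purge_partition (
  p_table     IN VARCHAR2,
  p_partition IN VARCHAR2)
IS
BEGIN
  -- 1) empty the partition through DML, which maintains global indexes
  EXECUTE IMMEDIATE
    'DELETE FROM ' || DBMS_ASSERT.simple_sql_name (p_table) ||
    ' PARTITION (' || DBMS_ASSERT.simple_sql_name (p_partition) || ')';
  COMMIT;
  -- 2) truncate the now empty partition; global indexes stay VALID
  EXECUTE IMMEDIATE
    'ALTER TABLE ' || DBMS_ASSERT.simple_sql_name (p_table) ||
    ' TRUNCATE PARTITION (' || DBMS_ASSERT.simple_sql_name (p_partition) || ') DROP STORAGE';
END purge_partition;
/
</pre>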
But wait a minute: can we drop the empty partitions now without disturbing the indexes?
Let us try!
<br />
<pre class="brush: sql;gutter: false; collapse: false;">
alter table PURGE_PART_DEMO drop partition A_NUMBER_2;
Table altered.
alter table PURGE_PART_DEMO drop partition A_NUMBER_3;
Table altered.
select bytes/1024/1024 MB, blocks, segment_name, partition_name from dba_segments s where segment_name='PURGE_PART_DEMO' ORDER BY PARTITION_NAME;
MB BLOCKS SEGMENT_NAME PARTITION_NAME
---------- ---------- --------------------------------------------------------------------------------- ------------------------------
33 4224 PURGE_PART_DEMO A_NUMBER_1
33 4224 PURGE_PART_DEMO A_NUMBER_4
33 4224 PURGE_PART_DEMO A_NUMBER_5
33 4224 PURGE_PART_DEMO A_NUMBER_MAXVALUE
</pre>
<pre class="brush: sql;gutter: false; collapse: false;">
select index_name, status from all_indexes where owner=(SELECT sys_context('USERENV', 'AUTHENTICATED_IDENTITY') FROM dual) and index_name in ('PURGE_PART_DEMO_IX1','PURGE_PART_DEMO_IX2');
INDEX_NAME STATUS
------------------------------ --------
PURGE_PART_DEMO_IX1 N/A
PURGE_PART_DEMO_IX2 VALID
</pre>
<b>Seems yeeess!</b>
<br />
Everything shown here was successfully tested on Oracle 11.1 and later, against both range-partitioned and subpartitioned tables.
<br />
<br />
Hope this helps someone.
<br />
<br />
Cheers!<div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0tag:blogger.com,1999:blog-2807785752181271137.post-51494087480752481532018-12-04T20:37:00.000+01:002019-04-29T12:27:20.101+02:00Change many passwords in Toad connection file (outside Toad)<h2>
The problem</h2>
Many users who use Toad intensively face the following situation: after a regular change of some Oracle user's password, they have to repeat the same change manually, one by one, in every matching entry of the Toad connection dialog. If you have 50-100 databases with 5-10 connections (schemas) on each, this is a hell of a job. So when we have to change passwords, maintaining them in Toad is practically impossible without tedious manual work.
<br />
<h2>
The solution</h2>
In Toad 13.1, the people from Quest decided to allow end users to change this manually. Here are the pretty straightforward steps.
Find the file:
<pre class="brush: text; gutter: false; collapse: false;">%APPDATA%\Quest Software\Toad for Oracle\12.12\User Files\Connections.xml
</pre>
"12.12" was my Toad version, so the actual path may vary in your case.
<br />
Open file in <a href="https://notepad-plus-plus.org/download">Notepad++</a> editor.
<br />
Copy one password value that is known to be correct for the user you want to change (it does not matter from which database; only the user is important).
<br />
Then open the Replace dialog in Notepad++ and check "Regular expression". Put this into the search field:
<pre class="brush: text; gutter: false; collapse: false;">(?s-i)(<User>\s*<!\[CDATA\[YourUsername\]\]>\s*</user>.*?<Password>\s*)(\S+)(\s*</Password>)
</pre>
where "YourUsername" should be replaced with the user whose password you actually want to change in all occurrences.
Into the replace field put:
<pre class="brush: text; gutter: false; collapse: false;">$1(new_password)$3
</pre>
where "new_password" is the value you previously copied to the clipboard.
<br />
Choose "Replace All" and that is all.
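If you prefer to script this instead of using Notepad++, the same substitution can be expressed in a few lines of Python. This is only a sketch: the username and password value are placeholders, and it assumes the XML layout matches the pattern above (with a case-sensitive <code>&lt;User&gt;</code>/<code>&lt;Password&gt;</code> spelling):

```python
import re

# Placeholders -- substitute your own values.
USERNAME = "SCOTT"
NEW_PASSWORD = "EncryptedValueCopiedFromAWorkingEntry"

# Same idea as the Notepad++ search pattern above, expressed in Python.
pattern = re.compile(
    r"(<User>\s*<!\[CDATA\[" + re.escape(USERNAME) + r"\]\]>\s*</User>"
    r".*?<Password>\s*)(\S+)(\s*</Password>)",
    re.DOTALL,
)

def update_passwords(xml_text: str) -> str:
    """Replace every password belonging to USERNAME; leave the rest alone."""
    return pattern.sub(r"\g<1>" + NEW_PASSWORD + r"\g<3>", xml_text)

# Typical use (path as in the post; the version directory may differ):
# text = open(connections_xml_path, encoding="utf-8").read()
# open(connections_xml_path, "w", encoding="utf-8").write(update_passwords(text))
```

As with the manual edit, make a backup copy of Connections.xml first and run this while Toad is closed.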
<h2>
The end</h2>
This is not my solution; it came from Michael Staszewski, on the <a href="https://forums.toadworld.com/t/change-pwd-on-many-connections-automatically/33415">Toad forum</a>.
<br />
<b>Once again, thank you Michael</b>
<br />
<br />
Hope this helps someone.
<br />
<br />
Cheers!
<div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0tag:blogger.com,1999:blog-2807785752181271137.post-10632367870004905052017-11-28T18:01:00.000+01:002017-11-28T18:01:45.488+01:00SYSAUX and purging big objects (segments) manuallyAnd this one was published recently: <a class="internal-link view-post blog" data-postid="18198" href="https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/11/15/sysaux-and-purging-big-objects-segments-manually"> SYSAUX and purging big objects (segments) manually</a><br />
Cheers!<div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0tag:blogger.com,1999:blog-2807785752181271137.post-78186224582647175292017-11-28T17:59:00.003+01:002017-11-28T18:20:59.923+01:00Writing as a toad maven-Custom Bind Values in SQL OptimizerSince last year, I have been writing for Toad as their maven.
So my new posts will mostly be <a href="https://www.toadworld.com/members/damir.vadas_5f00_250/blogs" target="_blank">there</a>.<br />
Here is my first topic: <a href="https://www.toadworld.com/products/toad-for-oracle/b/soo/archive/2017/08/25/custom-bind-values-in-sql-optimizer" target="_blank">Custom Bind Values in SQL Optimizer</a>.<br />
Cheers! <div class="blogger-post-footer">Feed URL</div>Damir Vadashttp://www.blogger.com/profile/15963017378937428976noreply@blogger.com0