
Monday, June 14, 2010

Audit DDL operations in database


DDL operations present a real danger to any database.

Catastrophic commands like "drop trigger" or "drop table" are noticed immediately if they complete successfully. Other operations, such as adding, altering, or dropping indexes, or altering tables, are not easy to spot right after they happen. Worst of all, their consequences are not obvious until they come into play, for instance by degrading database performance so much that the whole system suffers.

Because DDL is usually performed through scripts in which many commands are executed sequentially, if you do not have strict error checking through the SQL*Plus WHENEVER SQLERROR directive, some errors may go uncaught and stay hidden, with little chance of ever being noticed.
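For reference, a guard like the following at the top of a DDL script (a minimal sketch, not part of the original post) aborts the run on the first failed statement:
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
WHENEVER OSERROR  EXIT FAILURE ROLLBACK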

If, besides the DBA, there are many other users with rights to perform DDL operations, it is more than welcome to have a history of such activity. This is the best way to understand quickly what went wrong.

For such cases, DDL monitoring is the right approach.

The solution

The solution is to implement a DDL monitoring tool that records all successful DDL operations in the database. It consists of several parts:
  1. the AUDIT_DDL_LOG table, which holds information about the processed DDL
  2. the AUDIT_DDL_LOG_ARCH table, which holds the archived DDL information
  3. a database trigger, which fills the AUDIT_DDL_LOG table
  4. the ARCHIVE_AUDIT_DDL_LOG procedure, which moves records from AUDIT_DDL_LOG to the AUDIT_DDL_LOG_ARCH table
  5. a database job, which runs the ARCHIVE_AUDIT_DDL_LOG procedure

AUDIT_DDL_LOG and AUDIT_DDL_LOG_ARCH tables

Table structure is:
CREATE SEQUENCE AUDIT_DDL_LOG_SEQ
  START WITH 1
  MAXVALUE 999999999999999999999999999
  MINVALUE 1
  CYCLE
  CACHE 100
  ;

CREATE TABLE AUDIT_DDL_LOG (
  DDL_DATE     DATE,
  USER_NAME    VARCHAR2(30 BYTE),
  SESSION_ID   NUMBER(8),
  SID          NUMBER(8),
  DDL_TYPE     VARCHAR2(30 BYTE),
  OBJECT_TYPE  VARCHAR2(18 BYTE),
  OWNER        VARCHAR2(30 BYTE),
  OBJECT_NAME  VARCHAR2(128 BYTE),
  ID           INTEGER
)
TABLESPACE TOOLS
PCTUSED    0
PCTFREE    0
;

CREATE UNIQUE INDEX AUDIT_DDL_LOG_UQX ON AUDIT_DDL_LOG
  (ID)
LOGGING
TABLESPACE TOOLS
PCTFREE    0
INITRANS   2
MAXTRANS   255
;


ALTER TABLE AUDIT_DDL_LOG ADD (
  CONSTRAINT AUDIT_DDL_LOG_PK
  PRIMARY KEY (ID)
  USING INDEX AUDIT_DDL_LOG_UQX)
;

-- GRANT SELECT ON AUDIT_DDL_LOG TO POWER_USER;
By granting SELECT to the POWER_USER role (currently commented out), the DBA allows any user who has that role to look into the table for content that interests him or her. Because POWER_USER is granted to developers, DBAs, and other non "end user" groups, such a user can check on their own the result of the scripts run under their session.
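For example, a user holding the role could check the DDL recorded for his or her own session with a query along these lines (a sketch only; it assumes the table lives in the TOOLS schema, as in the trigger below):
SELECT ddl_date, ddl_type, object_type, owner, object_name
  FROM tools.audit_ddl_log
 WHERE session_id = SYS_CONTEXT('USERENV','SESSIONID')
 ORDER BY ddl_date;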

Because AUDIT_DDL_LOG is supposed to hold only current data, with too many DDLs it might suffer from a size problem.
This is why I move old records to the AUDIT_DDL_LOG_ARCH table on a regular basis.
CREATE TABLE AUDIT_DDL_LOG_ARCH(
  DDL_DATE     DATE,
  USER_NAME    VARCHAR2(30 BYTE),
  SESSION_ID   NUMBER(8),
  SID          NUMBER(8),
  DDL_TYPE     VARCHAR2(30 BYTE),
  OBJECT_TYPE  VARCHAR2(18 BYTE),
  OWNER        VARCHAR2(30 BYTE),
  OBJECT_NAME  VARCHAR2(128 BYTE),
  ID           INTEGER
)
TABLESPACE TOOLS
PCTUSED    0
PCTFREE    0
COMPRESS 
;
How often records are deleted from AUDIT_DDL_LOG_ARCH (very old DDL is really not important) is entirely a customer-based policy and has to be defined as needed.
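For example, a purge that keeps one year of archived history could look like this (the 12-month retention is only an illustration, not part of the original setup):
DELETE FROM audit_ddl_log_arch
 WHERE ddl_date < ADD_MONTHS(SYSDATE, -12);
COMMIT;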

Database trigger

Because this trigger fires when a DDL statement completes (AFTER DDL), its primary task is to fill the AUDIT_DDL_LOG table with information about the execution. If a DDL statement fails, the trigger does not fire.
The trigger is intentionally placed in the TOOLS schema (not in SYS!) because SYS objects are not carried over when you export/import the database. However, this always requires additional security settings for the owner.
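A minimal sketch of the extra grants the TOOLS owner typically needs for this (run as SYS; adapt to your own security policy):
GRANT CREATE TRIGGER              TO tools;
GRANT ADMINISTER DATABASE TRIGGER TO tools;   -- required for ON DATABASE triggers
GRANT SELECT ON sys.v_$session    TO tools;   -- direct grant, since roles are disabled in definer's rights code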
CREATE OR REPLACE TRIGGER TOOLS.audit_ddl_trigger
/* $Header: audit_ddl_trigger.tgb 1.00 10/01/2007 14:54 damirv $ */

/*-------------------------------------------------------------------------------------------------------------------- 
 NAME    : audit_ddl_trigger
 PURPOSE : Log any DDL statement in "audit_ddl_log" table

 Date    : 01.10.2007.
 Author  : Damir Vadas, damir.vadas@gmail.com

 Remarks : 

 Changes (DD.MM.YYYY, Name, CR/TR#):
-------------------------------------------------------------------------------------------------------------------- */
AFTER DDL
ON DATABASE
DECLARE
  s_sess audit_ddl_log.session_id%TYPE;
  s_sid audit_ddl_log.sid%TYPE;
  s_seq INTEGER;
BEGIN
  -- skip temporary/noise objects whose DDL is not interesting (Oracle eBS 11.5.10.x)
  IF (ora_dict_obj_name != 'ALRT_PATH1' AND
      ora_dict_obj_name NOT LIKE 'EUL4%' AND
      ora_dict_obj_name NOT LIKE 'GL_INTERFACE%' AND
      ora_dict_obj_name NOT LIKE 'GL_POSTING%' AND
      ora_dict_obj_name NOT LIKE 'QUEST%' AND
      ora_dict_obj_name NOT LIKE 'SYS_TEMP%'  AND
      ora_dict_obj_name NOT LIKE 'WF_LOCAL%' AND
      ora_dict_obj_name NOT LIKE 'WF_UR_%' AND
      ora_dict_obj_name NOT LIKE 'ORA_TEMP%'
      ) THEN
    -- identify the current session and take the next log ID
    s_sess := sys_context('USERENV','SESSIONID');
    select sid into s_sid from v$session where audsid = s_sess;
    select AUDIT_DDL_LOG_SEQ.nextval into s_seq from dual;
    insert into audit_ddl_log (
      ddl_date,
      user_name,
      session_id,
      sid,
      ddl_type,
      object_type,
      owner,
      object_name,
      id
    )
    VALUES (
      sysdate,
      ora_login_user,
      s_sess,
      s_sid,
      ora_sysevent,
      ora_dict_obj_type,
      ora_dict_obj_owner,
      ora_dict_obj_name,
      s_seq
    );
  END IF;
EXCEPTION
  WHEN others THEN
    null;  -- never let a logging failure break the original DDL
END;
/
In this database trigger I have excluded several objects used in the Oracle eBS 11.5.10.x environment (temporary objects) whose logging was not important to me.

Procedure ARCHIVE_AUDIT_DDL_LOG

In my case I always wanted to keep at least 14 days of data in the main AUDIT_DDL_LOG table. You can adjust this with the p_retention parameter. Keep in mind that with p_retention=0 all data is moved to the AUDIT_DDL_LOG_ARCH table.
CREATE OR REPLACE PROCEDURE archive_audit_ddl_log (p_retention in number default 14) IS
/*---------------------------------------------------------------------------------- 
 NAME    : archive_audit_ddl_log.prc
 PURPOSE : Move records from audit_ddl_log to audit_ddl_log_arch table

 Date    : 01.10.2007.
 Author  : Damir Vadas, damir.vadas@gmail.com

 Remarks : p_retention defines how many days of data stay in the audit_ddl_log table

 Changes (DD.MM.YYYY, Name, CR/TR#):
-----------------------------------------------------------------------------------*/
  l_date date;
BEGIN
  l_date := sysdate - p_retention;
  -- move everything older than the retention window to the archive table
  insert into audit_ddl_log_arch
    select * from audit_ddl_log where ddl_date < l_date;
  delete from audit_ddl_log where DDL_DATE < l_date;
  commit;
EXCEPTION
  WHEN OTHERS THEN
    rollback;
END archive_audit_ddl_log;
/
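The procedure can also be run manually from SQL*Plus, for example:
EXEC archive_audit_ddl_log       -- keep the default 14 days
EXEC archive_audit_ddl_log(30)   -- keep 30 days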

Database job

The best way is to place the archive_audit_ddl_log procedure in a job that fires once a day. In this case the job fires every day at 1:01 AM.
DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
  ( job       => X 
   ,what      => 'archive_audit_ddl_log;'
   ,next_date => to_date('02.10.2007 01:01:00','dd.mm.yyyy hh24:mi:ss')
   ,interval  => 'TRUNC(SYSDATE+1) + 61/1440'
   ,no_parse  => FALSE
  );
  SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
  COMMIT;
END;
/
Under no circumstances should the owner of this job be SYS; there is no reason for that. In my case I use the TOOLS schema for many such tasks.
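On 10g and later databases the same schedule can also be expressed with DBMS_SCHEDULER; here is a minimal sketch under the TOOLS schema (the job name and settings are only an example, not part of the original setup):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'ARCHIVE_AUDIT_DDL_LOG_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN archive_audit_ddl_log; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=1',
    enabled         => TRUE,
    comments        => 'Move old rows from AUDIT_DDL_LOG to AUDIT_DDL_LOG_ARCH'
  );
END;
/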

The result

Here is part of the AUDIT_DDL_LOG table content that results from our trigger (the original screenshot was lost in the blog migration):
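A query such as the following can be used to inspect the most recent entries (the column list and ordering are my choice):
SELECT ddl_date, user_name, ddl_type, object_type, owner, object_name
  FROM tools.audit_ddl_log
 ORDER BY ddl_date DESC;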

The end

As you can see, this monitoring tool is pretty straightforward to implement as well as to use. From my experience, something like this is worth implementing on any production database.

If you want to monitor unsuccessful DDL statements (those that did not complete because of some kind of error), then one way to achieve this is:
  1. Add another table whose records are filled by a similar trigger that fires BEFORE DDL
  2. The difference (SQL MINUS) between those two tables shows the unsuccessful DDL operations (see the sketch after this list)
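A minimal sketch of that comparison, assuming a hypothetical AUDIT_DDL_LOG_PRE table with the same columns, filled by the BEFORE DDL trigger:
-- DDL captured before execution but never logged after it = failed statements.
-- ID and DDL_DATE are excluded because they differ between the two tables.
SELECT user_name, ddl_type, object_type, owner, object_name
  FROM audit_ddl_log_pre
MINUS
SELECT user_name, ddl_type, object_type, owner, object_name
  FROM audit_ddl_log;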
Cheers!

5 comments :

  1. I love it! Will this show the Oracle Login (user), or also the OS User (LAN ID) so in the case of shared ID's, we know more specifically who to yell at? =)

  2. Hi!
    What will be logged is defined by the AUDIT_DDL_LOG columns. In my example the Oracle user name is covered by the USER_NAME column. Unfortunately there is no data for the OS user.
    But this can be fixed as follows:
    1) Add a column for the os_user_name value:
    alter table AUDIT_DDL_LOG add (OS_USER_NAME VARCHAR2(64));
    2) Change "TOOLS.audit_ddl_trigger" by replacing the previous code with the following (look for the "NEW" lines):
    AFTER DDL
    ON DATABASE
    DECLARE
      s_sess audit_ddl_log.session_id%TYPE;
      s_sid  audit_ddl_log.sid%TYPE;
      s_seq  INTEGER;
      s_os_user_name VARCHAR2(64); /* NEW */
    BEGIN
      IF (ora_dict_obj_name != 'ALRT_PATH1' AND
          ora_dict_obj_name NOT LIKE 'EUL4%' AND
          ora_dict_obj_name NOT LIKE 'GL_INTERFACE%' AND
          ora_dict_obj_name NOT LIKE 'GL_POSTING%' AND
          ora_dict_obj_name NOT LIKE 'QUEST%' AND
          ora_dict_obj_name NOT LIKE 'SYS_TEMP%' AND
          ora_dict_obj_name NOT LIKE 'WF_LOCAL%' AND
          ora_dict_obj_name NOT LIKE 'WF_UR_%' AND
          ora_dict_obj_name NOT LIKE 'ORA_TEMP%'
          ) THEN
        s_sess := sys_context('USERENV','SESSIONID');
        select sid into s_sid from v$session where audsid = s_sess;
        select AUDIT_DDL_LOG_SEQ.nextval into s_seq from dual;
        /* NEW */
        SELECT substr(NVL(s.osuser,'???'),1,64) INTO s_os_user_name
          FROM gv$session s
         WHERE s.inst_id = 1 AND
               s.sid = s_sid;

        insert into audit_ddl_log (
          ddl_date,
          user_name,
          session_id,
          sid,
          ddl_type,
          object_type,
          owner,
          object_name,
          id,
          os_user_name
        )
        VALUES (
          sysdate,
          ora_login_user,
          s_sess,
          s_sid,
          ora_sysevent,
          ora_dict_obj_type,
          ora_dict_obj_owner,
          ora_dict_obj_name,
          s_seq,
          s_os_user_name
        );
      END IF;
    EXCEPTION
      WHEN others THEN
        null;
    END;
    /

    And that's it!

    Cheers!

  3. And notice that this code works for a non-RAC instance (as you can see, inst_id is hardcoded as 1 in my new part of the code).

    For RAC you have to handle "inst_id" properly in all queries involved in the trigger.
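    A minimal RAC-aware sketch of the OS-user lookup (only an illustration, reusing the variables from the previous comment):
    SELECT substr(NVL(s.osuser,'???'),1,64) INTO s_os_user_name
      FROM gv$session s
     WHERE s.sid = s_sid
       AND s.inst_id = SYS_CONTEXT('USERENV','INSTANCE');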

    Cheers!

  4. Perfect, so you've leveraged another dba view to pull OS Name based on the known session ID. Very useful and elegant.

  5. Eugene,

    Correct.

    The gv$session view is open for adding any other column that you like... it holds all the important information about any Oracle session.
    Rg,
    Damir

