- sqlldr (in direct mode)
- CTAS (CREATE TABLE AS ...)
- ALTER TABLE statements (move/add/split/merge partitions)
- CREATE INDEX
- INSERT /*+APPEND*/ ...
- ALTER INDEX statements (rebuild/split partitions); a short example of such operations follows this list
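As an illustration, here is a minimal sketch of two such operations (the sales table and its copy are hypothetical names; NOLOGGING has an effect only when the database is not running in FORCE LOGGING mode, described below):
SQL> CREATE TABLE sales_copy NOLOGGING AS SELECT * FROM sales;
SQL> INSERT /*+ APPEND */ INTO sales_copy SELECT * FROM sales;
SQL> COMMIT;
The direct-path insert writes blocks above the high-water mark and, against a NOLOGGING table, generates almost no redo for the data itself.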
While Data Warehousing applications usually take advantage of NOLOGGING operations, 24x7 OLTP databases can benefit as well, but with some caution.
In particular, most 24x7 applications still require maintenance windows to perform various database activities as fast as possible. However, each maintenance window usually incurs at least a small outage to the business, so expeditious and well-planned changes are a necessity. The DML and data definition language (DDL) NOLOGGING features can dramatically reduce the time a DBA needs to perform the maintenance and substantially reduce the redo generated. If meticulous planning and care are given to addressing the recoverability issues, a DBA should give serious consideration to the NOLOGGING feature.
The performance gain and redo reduction benefits of NOLOGGING operations come at a sacrifice in recoverability, which can be quite dangerous if you need to roll forward through that time period during a database recovery.
For example, if you invoke a sequence of direct-path inserts, the affected data cannot be recovered using Oracle's media recovery mechanisms because no redo was generated for it. Following any NOLOGGING operation, the data file(s) impacted by it need to be backed up to ensure that the changes persist through a media failure.
When using NOLOGGING operations there is one simple rule: backup before and after! You must take a backup both before and after all NOLOGGING operations. Keep in mind as well that it is not possible to roll forward through a point in time at which a NOLOGGING operation took place.
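As a hedged sketch, you can spot which datafiles have been touched by NOLOGGING (unrecoverable) operations since their last backup, and therefore need a fresh one, with RMAN's REPORT UNRECOVERABLE or with the UNRECOVERABLE_TIME column of V$DATAFILE:
RMAN> REPORT UNRECOVERABLE;
SQL> SELECT file#, unrecoverable_change#, unrecoverable_time
       FROM v$datafile
      WHERE unrecoverable_time IS NOT NULL;
Any datafile reported there contains changes that redo cannot reproduce, so back it up immediately.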
Put simply: if your backup was taken during a period in which a NOLOGGING action occurred, that backup will suffer from logical corruption (a "soft" kind of error), which leads directly to restore problems in the future, with more or less impact on your data!
FORCE LOGGING mode
From my previous EBS experience, the whole Oracle E-Business Suite 11.x and 12.x massively uses GTTs (Global Temporary Tables), whose DDL includes the NOLOGGING option. This is done, as mentioned before, for the performance gain, so be aware that most of your backups would probably contain bad blocks if you use an EBS instance installed out of the box with no parameters changed!
However, in some situations NOLOGGING is simply not acceptable behavior. Besides the backup problem, full logging is a conditio sine qua non for Oracle Streams and Data Guard (standby databases), where NOLOGGING operations are not allowed. For those situations Oracle provides the database-level FORCE LOGGING mode, which ensures that ALL operations are logged in the redo at the topmost level, regardless of the underlying SQL statement.
To check the current value of FORCE_LOGGING, perform:
SQL> SELECT force_logging FROM v$database;
FORC
----
NO
To enable force logging, perform:
SQL> ALTER DATABASE force logging;
Database altered.
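Keep in mind that FORCE LOGGING can also be set per tablespace, and both levels can be reverted; a small sketch (the users tablespace is just an example):
SQL> ALTER TABLESPACE users FORCE LOGGING;
SQL> ALTER DATABASE no force logging;
When database-level FORCE LOGGING is enabled it takes precedence, and NOLOGGING clauses on permanent objects are effectively ignored (temporary segments are never logged either way).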
To understand the full complexity of the NOLOGGING problem in E-Business Suite, follow Metalink note 216211.1, "Nologging In The E-Business Suite", which gives you the complete "how to" steps you must perform after a complete database recovery if the EBS instance was backed up with FORCE_LOGGING disabled.
The most frightening sentence comes at the very start of the mentioned Metalink note:
"We have made every effort to identify where these steps are required, and will update this note when we identify new uses of nologging."
It is really a nightmare for every DBA when his production database depends on updates to some document, even when that document comes from Oracle itself.
Checking RMAN backups
If during the restore/recover process you get either of the following errors:
ORA-1578: ORACLE data block corrupted (file # %s, block # %s)
ORA-26040: Data block was loaded using the NOLOGGING option
then your backup is corrupted!
Normally, RMAN checks only for physically corrupt blocks with every backup it takes and every image copy it makes. This is a common misunderstanding among a lot of DBAs: RMAN doesn't detect logical corruption by default! We have to tell it to do so explicitly, by using the option CHECK LOGICAL. Here is an example:
rman target /
run {
BACKUP VALIDATE CHECK LOGICAL DATABASE;
}
Corruption information is recorded in the V$DATABASE_BLOCK_CORRUPTION view. Ensure that the MAXCORRUPT parameter (which sets the total number of physical and logical corruptions permitted in a file) is set to a large enough value.
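A small sketch of both points: querying the view after a validation run, and raising MAXCORRUPT for a datafile so the check does not stop on the first corrupt block (datafile 1 and the limit of 10 are arbitrary example values):
SQL> SELECT file#, block#, blocks, corruption_type
       FROM v$database_block_corruption;
RMAN> run {
SET MAXCORRUPT FOR DATAFILE 1 TO 10;
BACKUP VALIDATE CHECK LOGICAL DATABASE;
}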
Checking backups with dbverify
The primary tool for checking for corruption in an Oracle database is DBVERIFY. It can be used to perform a physical data structure integrity check on data files whether the database is online or offline.
dbv file=data01.dbf logfile=verify.log blocksize=8192 feedback=100
Combining CHECK LOGICAL and the dbverify utility protects the DBA from unexpected backup results. And for the end, the mother of all backup thoughts: test your backups live from time to time!
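One hedged way to perform such a live test without touching the running database is RMAN's RESTORE ... VALIDATE, which reads the backup pieces and verifies that a restore would succeed, without actually restoring a single file:
rman target /
run {
RESTORE DATABASE VALIDATE;
}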