A database can generate a large volume of redo even though the database itself is not that large. Redo generation is driven by the volume of data change, not by the size of the database. To illustrate this, assume that I issue the following command:

UPDATE scott.emp SET ename = UPPER(ename);

Now suppose I issue this command 1,000 times (for some really odd reason). The data in the database has not changed, yet the amount of redo generated can be large.
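You can watch this happen for your own session. As a sketch, assuming you have SELECT privileges on the V$ views, run the following query before and after the repeated updates; the difference in the value is the redo attributable to your statements:

```sql
-- Redo generated so far by the current session, in bytes.
-- Run once before and once after the workload; subtract the values.
SELECT n.name, s.value
FROM   v$mystat   s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';
```

Even an UPDATE that sets a column to its existing value still produces redo, which is why the redo volume can dwarf the net data change.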
The question to answer, then, is what large changes are happening in your instance. Fortunately, you can mine your archived redo logs to see whether one or two tables are undergoing far more change than the rest. Chapter 9 of the Administrator's Guide is titled Using LogMiner to Analyze Redo Logs. This document contains the instructions you'll need to mine your archived redo logs and get a picture of the changes occurring in your database.
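As a rough sketch of what such a mining session looks like (the archived log path below is a placeholder; substitute one of your own files from V$ARCHIVED_LOG, and note that DICT_FROM_ONLINE_CATALOG requires the source database to be open):

```sql
-- Register one archived redo log and start a LogMiner session.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/log_1234.arc',   -- placeholder path
    options     => DBMS_LOGMNR.NEW);
  -- Resolve object names from the online catalog (no dictionary file needed)
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Count redo records per segment to see which tables change the most.
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
FROM   v$logmnr_contents
GROUP  BY seg_owner, seg_name, operation
ORDER  BY redo_records DESC;

-- Release LogMiner resources when done.
EXEC DBMS_LOGMNR.END_LOGMNR;
```

A table that dominates this count is your most likely source of heavy redo generation.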