I would like to process 3 million records faster. The records are customers, each with anywhere from 50 to 120 transactions. All transactions for a customer are processed from earliest to latest to produce a final value: the value from the previous row is merged with a component of the current row to give the new value. I have a script that works fine for a few records, but on the full load it fails with ORA-00470 (LGWR process terminated with error), which forces the database to shut down. How can I best rewrite the script?

The first step numbers the transactions for each customer, using the ID to fill the reference and to get the transaction count per customer. The second step computes the difference in months between one transaction and the next, which is used as a factor against the value in the next row for the same customer. One customer's transactions can span 10 years, and only month and year are recorded, hence the need to get the gap from the immediate month to the next one that has a transaction. It determines each value by looking at the preceding transaction of the same customer to derive the value for the current row.
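For reference, this is roughly the numbering and month-gap logic I am describing, sketched with Oracle analytic functions (the table and column names here are placeholders, not my real schema):

```sql
-- Hypothetical table: customer_transactions(customer_id, txn_month, txn_value)
-- where txn_month is a DATE truncated to the first of the month.
SELECT customer_id,
       txn_month,
       txn_value,
       -- step 1: number each customer's transactions earliest to latest
       ROW_NUMBER() OVER (PARTITION BY customer_id
                          ORDER BY txn_month)                    AS txn_seq,
       COUNT(*)     OVER (PARTITION BY customer_id)              AS txn_count,
       -- step 2: gap in months from the preceding transaction of the
       -- same customer (NULL for the first transaction)
       MONTHS_BETWEEN(
         txn_month,
         LAG(txn_month) OVER (PARTITION BY customer_id
                              ORDER BY txn_month))               AS months_gap
FROM   customer_transactions
ORDER  BY customer_id, txn_month;
```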
Based on your source code, you are using anonymous PL/SQL blocks to process the 3 million transactions, with a cursor that updates the same table it selects from, and a COMMIT after every row. Try defining the cursor as `SELECT <col> FROM STATEMENTS_POSTINGS_EMPLOYERS FOR UPDATE OF <column_name>`, naming the column you are going to update in that block.
A few additional pointers:
- Do not commit on every record; commit after, say, every 500 to 1,000 records.
- Use a synonym rather than hard-coding a schema name in your code; a schema prefix makes the code non-portable to a different schema. (This second point has nothing to do with the error you are facing, though.)
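Putting the cursor suggestion together, the shape of the block would be something like the sketch below. The column names (`txn_value`, `customer_id`, `txn_month`) and the merge logic are placeholders standing in for your real ones; only the table name comes from your code.

```sql
DECLARE
  CURSOR c_txn IS
    SELECT txn_value                        -- placeholder for the column you recalculate
    FROM   statements_postings_employers
    ORDER  BY customer_id, txn_month        -- placeholder ordering: per customer, oldest first
    FOR UPDATE OF txn_value;                -- lock the rows as the cursor reads them
BEGIN
  FOR r IN c_txn LOOP
    UPDATE statements_postings_employers
    SET    txn_value = r.txn_value          -- replace with your real merge calculation
    WHERE  CURRENT OF c_txn;                -- update exactly the fetched row
  END LOOP;
  COMMIT;                                   -- single commit after the loop
END;
/
```

One caveat when combining this with batched commits: a COMMIT releases the locks taken by the `FOR UPDATE` cursor, and the next fetch from that open cursor will raise ORA-01002 (fetch out of sequence). If you want to commit every 500 to 1,000 rows, drive the updates by ROWID from a plain cursor instead of `WHERE CURRENT OF`, or re-open the cursor after each committed batch.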