I am updating 80% of the rows in a 30-column table (six VARCHAR2 columns and a BLOB). The table has the standard 10% PCTFREE. I am getting fair throughput, but I have been tasked with making it faster. So far, I haven't been able to improve on my original design. I've looked at which indexes are affected, used PreparedStatements, batched the work (4,000 updates at a time, each update affecting 0-N rows), turned auto-commit off and switched to the thin driver. Can you list other areas that I should explore?
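For reference, the setup described above (PreparedStatement, batches of 4,000, auto-commit off) typically looks like the following JDBC sketch. The table and column names (ORDERS, STATUS, ORDER_KEY) are hypothetical, not taken from the question.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

/** Sketch of the batched-update pattern described in the question.
 *  Table and column names are illustrative only. */
public class BatchUpdater {
    static final int BATCH_SIZE = 4000;

    /** Number of executeBatch() calls needed for a given workload (ceiling division). */
    public static int batchCount(int totalStatements, int batchSize) {
        return (totalStatements + batchSize - 1) / batchSize;
    }

    /** Runs the updates in batches of BATCH_SIZE with auto-commit off,
     *  committing once at the end. */
    public static void applyUpdates(Connection conn, List<String> keys) throws SQLException {
        conn.setAutoCommit(false);
        String sql = "UPDATE orders SET status = 'PROCESSED' WHERE order_key = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int pending = 0;
            for (String key : keys) {
                ps.setString(1, key);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch();  // send 4,000 updates in one round trip
                    pending = 0;
                }
            }
            if (pending > 0) ps.executeBatch();  // flush the final partial batch
        }
        conn.commit();
    }
}
```

Preparing the statement once and reusing it across all batches is what keeps each statement from being parsed more than once.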
Are you updating one row at a time? If so, your biggest payoff will probably come from modifying the SQL to update multiple rows per statement. Otherwise, the steps you've taken are all worthwhile. Make sure each statement is parsed only once (using PreparedStatements should take care of that) and that the access path to the rows being updated is efficient (for an update touching 80% of the table, you probably don't want to be using an index). Have you traced the updates to see where the elapsed time goes? If not, that should be your next step: gather statistics on elapsed time, CPU time and wait time, and proceed from there.
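To make those two suggestions concrete: if the single-row updates share a driving condition, they can often be collapsed into one statement, and SQL trace gives you the elapsed/CPU/wait breakdown. The table and column names below are hypothetical.

```sql
-- One multi-row statement instead of N single-row updates (names are illustrative)
UPDATE orders o
   SET o.status = 'PROCESSED'
 WHERE o.order_key IN (SELECT s.order_key FROM staging s);

-- Trace the session so elapsed time is broken into CPU time and wait events
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the update workload, then format the trace file with tkprof
```

Level 8 of event 10046 includes wait events in the trace; tkprof summarizes the resulting file so you can see exactly where the time is spent.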