I am updating 80% of the rows in a 30-column table (six VARCHAR columns and a BLOB). I have the standard 10% PCTFREE set up. I am getting fair throughput, but have been tasked with making it faster. So far, I haven't been able to improve on my original design. I've looked at which indexes are affected, used PreparedStatements, batched the work (4,000 updates at a time, each update affecting 0-N rows), set auto-commit off and switched to the thin driver. Can you list other areas I should explore?
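The setup described above — prepared statements, batches of 4,000 updates, auto-commit off — can be sketched as follows. The original is JDBC-based; this sketch uses Python's stdlib sqlite3 module purely to illustrate the pattern, and the table and column names are made up:

```python
import sqlite3

# An in-memory database stands in for the real table; all names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, note TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, "old", "n") for i in range(10_000)])
conn.commit()

BATCH_SIZE = 4000  # mirrors the 4,000-updates-per-batch figure from the question

# Prepared-statement style: one parameterized UPDATE, bound repeatedly.
updates = [("new", i) for i in range(8_000)]  # ~80% of the rows, as in the question
for start in range(0, len(updates), BATCH_SIZE):
    batch = updates[start:start + BATCH_SIZE]
    # executemany re-binds the same statement for each row rather than
    # re-parsing the SQL (the JDBC analogue is addBatch/executeBatch).
    conn.executemany("UPDATE orders SET status = ? WHERE id = ?", batch)
    conn.commit()  # auto-commit is off; commit once per batch, not per row

changed = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'new'").fetchone()[0]
print(changed)  # 8000
```

The key cost savings are one parse for the whole run and one commit per batch instead of one per row.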
Are you updating one row at a time? If so, your biggest payoff would probably come from modifying the SQL to update multiple rows at once. Otherwise, the things you've done are all worthwhile. Make sure that you're not parsing each statement more than once (using PreparedStatements should take care of that) and that your access path to the rows being updated is efficient — in this case, since you're touching 80% of the table, you probably don't want to be using an index. Have you traced the updates to see the breakdown of elapsed time? If not, that should be your next step: gather statistics on elapsed time, CPU time and wait time, and proceed from there.
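The main suggestion above — update many rows per statement instead of one at a time — looks like this in outline. Again sketched with Python's stdlib sqlite3 and made-up names, though the same SQL shape applies under JDBC:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "old") for i in range(1000)])
conn.commit()

# Row-at-a-time (slow): one statement execution per row, e.g.
#   for i in ids:
#       conn.execute("UPDATE orders SET status = 'new' WHERE id = ?", (i,))

# Set-based (fast): one statement updates every qualifying row in a single
# pass, so the engine scans the table once instead of probing it per row.
cur = conn.execute("UPDATE orders SET status = 'new' WHERE id < 800")
print(cur.rowcount)  # 800
conn.commit()
```

For a predicate that touches most of the table, the set-based form also lets the optimizer choose a full scan rather than repeated index lookups, which matches the advice about avoiding an index here.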