I have a table with 1.5 million records that needs to have a column updated based on a correlated subselect.
update updtbl a
   set updcol = (select rtnval
                   from srctbl s
                  where s.qcol1 = a.qcol1
                    and s.qcol2 = a.qcol2
                    and s.qcol3 = a.qcol3
                    and s.qcol4 = a.qcol4);
All of the srctbl columns referenced, including rtnval, are part of a single index, and the qcol columns are the leading columns of that index.
This update is currently sitting at four hours and counting. The database server is on a Linux box. I have a lot more updates such as this to perform on this database. What can I do to improve this performance?
- Is the updcol column on the updtbl table indexed? If so, I would first drop or disable that index, then run the update, and rebuild the index after the update completes.
- Have you run an Explain Plan on the entire update statement to verify that it is using the index on srctbl in the subquery? If not, do so to see which access path the optimizer has chosen. If it is not using an index, it is doing a full table scan of srctbl for every row in updtbl. Also run an Explain Plan on the subquery by itself. How long does the subquery take when run on its own, and if it is using the index, what type of index access is it performing? If the subquery alone is slow, start by tuning it first.
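The steps above can be sketched roughly as follows, using the table and column names from the question; the index name updtbl_updcol_idx is an assumption, as is the availability of DBMS_XPLAN (Oracle 9i and later). On releases before 10g you may also need ALTER SESSION SET skip_unusable_indexes = TRUE before updating through an unusable index.

```sql
-- 1. Check the access path of the correlated subquery on its own
--    (bind variables stand in for the correlated columns):
explain plan for
  select rtnval
    from srctbl s
   where s.qcol1 = :b1
     and s.qcol2 = :b2
     and s.qcol3 = :b3
     and s.qcol4 = :b4;

select * from table(dbms_xplan.display);

-- 2. If updcol itself is indexed, disable that index before the mass
--    update so Oracle does not maintain it row by row
--    (updtbl_updcol_idx is a hypothetical index name):
alter index updtbl_updcol_idx unusable;

-- 3. Run the update, then commit:
update updtbl a
   set updcol = (select rtnval
                   from srctbl s
                  where s.qcol1 = a.qcol1
                    and s.qcol2 = a.qcol2
                    and s.qcol3 = a.qcol3
                    and s.qcol4 = a.qcol4);
commit;

-- 4. Rebuild the disabled index once the update completes:
alter index updtbl_updcol_idx rebuild;
```

If the plan in step 1 shows a full table scan of srctbl, gathering statistics or adjusting the index so that the qcol predicates can use an index range scan is the first thing to fix.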
Related Q&A from Phillip Bracken