The following is the ninth part of a 12-part series on Oracle10g CBO internals and SQL tuning optimization. Each tip is excerpted from the not-yet-released Rampant TechPress book, "Oracle SQL and index internals," by Kimberly Floss. Check back to the main series page for upcoming installments.
Revisit when necessary
If you are a database developer, you may think your job is over once you have delivered the SQL code into production. And in some shops, you might be right. Oftentimes, a database administrator then takes over and serves as the appointed sentry to monitor and identify SQL code that may be bottlenecking a system. But do not be surprised one day if your once well-running SQL is delivered back to you with the mandate to fix it again. Why might this happen? It is not uncommon for fast code to become sluggish in these circumstances:
- Data volume changes. Code that ran well with objects having small data volumes may turn into something that crawls along at a snail's pace when those same objects are a little heftier in size.
- Missing indexes. Occasionally, a DBA will remove an index on a table that is causing slowdowns during INSERT, UPDATE, and DELETE operations. Unfortunately for you, that index may have been critical to your query success and you will have to come to a compromise with the DBA about what to do.
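If you suspect a supporting index has been dropped, a quick sanity check against the data dictionary shows which indexes, and which indexed columns, still exist on the table your query depends on. This is an illustrative sketch, not from the book; MY_TABLE is a placeholder name.

```sql
-- List the indexes, and the columns they cover, for a given table.
-- MY_TABLE is a placeholder; substitute the table your query depends on.
SELECT INDEX_NAME,
       COLUMN_NAME,
       COLUMN_POSITION
FROM   USER_IND_COLUMNS
WHERE  TABLE_NAME = 'MY_TABLE'
ORDER  BY INDEX_NAME, COLUMN_POSITION;
```

If the index your WHERE clause relied on no longer appears here, you have found your negotiating point with the DBA.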
If you are a DBA, you may wonder how to quickly locate poorly running SQL in a production system. If the database in question is Oracle, you can use the following code to find bad SQL running in an existing database. It combs through the Oracle shared pool and ranks SQL by one of the standard rules of thumb — disk reads per execution:
```sql
SELECT A.SQL_TEXT,
       B.USERNAME,
       ROUND(A.DISK_READS /
             DECODE(A.EXECUTIONS, 0, 1, A.EXECUTIONS), 2) READS_PER_EXEC,
       A.DISK_READS,
       A.BUFFER_GETS,
       A.PARSE_CALLS,
       A.SORTS,
       A.EXECUTIONS,
       A.ROWS_PROCESSED,
       A.FIRST_LOAD_TIME,
       A.SHARABLE_MEM,
       A.PERSISTENT_MEM,
       A.RUNTIME_MEM
FROM   SYS.V_$SQLAREA A,
       SYS.ALL_USERS  B
WHERE  A.PARSING_USER_ID = B.USER_ID
ORDER  BY 3 DESC;
```
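Disk reads per execution is only one yardstick. A close variant of the same query — a sketch, not taken from the book — ranks shared-pool statements by logical I/O (buffer gets) per execution instead, and trims the output to the worst offenders:

```sql
-- Variant of the query above: rank shared-pool SQL by buffer gets
-- (logical I/O) per execution rather than disk reads per execution.
-- The ROWNUM cutoff of 20 is arbitrary; adjust to taste.
SELECT *
FROM  (SELECT A.SQL_TEXT,
              B.USERNAME,
              ROUND(A.BUFFER_GETS /
                    DECODE(A.EXECUTIONS, 0, 1, A.EXECUTIONS), 2) GETS_PER_EXEC,
              A.BUFFER_GETS,
              A.EXECUTIONS
       FROM   SYS.V_$SQLAREA A,
              SYS.ALL_USERS  B
       WHERE  A.PARSING_USER_ID = B.USER_ID
       ORDER  BY 3 DESC)
WHERE ROWNUM <= 20;
```

Statements that top both rankings — heavy on disk reads and on buffer gets — are usually the best candidates for tuning first.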
If Jeff is still out there writing SQL queries, I am sure his coding skills are much better today than they were back in 1993. To make sure you do not repeat some of his mistakes, use the simple SQL checklist below. While certainly not exhaustive, it serves as a reminder of some basic steps to follow when writing and troubleshooting database code:
- Validate the SQL
- Update and understand key object statistics
- EXPLAIN and understand the access path
- Correct obvious flaws
- Rewrite using various techniques and combinations
- Benchmark the rewrites
- Choose the best option, normally based on elapsed time and I/O consumption
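For the statistics step in the checklist, Oracle's supplied DBMS_STATS package is the standard tool. A minimal sketch — the SCOTT schema and EMP table are placeholder names — looks like this:

```sql
-- Refresh optimizer statistics for one table.
-- SCOTT and EMP are placeholders; CASCADE => TRUE also gathers
-- statistics on the table's indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME => 'SCOTT',
    TABNAME => 'EMP',
    CASCADE => TRUE);
END;
/
```

With current statistics in place, the cost-based optimizer's access-path choices (and your EXPLAIN output) become meaningful; stale statistics are one of the most common reasons a once-fast query suddenly crawls.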
Hunting down expensive SQL can itself be costly. STATSPACK can identify the most expensive statements, but at its higher snapshot levels the STATSPACK package is a real resource hog and may well cause additional problems of its own. The most expensive SQL on the system is better identified directly from v$sqlarea, as in the query shown earlier, or with a third-party tool such as those from Quest Software (TOAD, SQL*Lab, SQL*Navigator).
Certainly, the next step is to run an explain plan on any poorly performing statement, to ensure that the access path is correct. Then you can move on to fixing the SQL statement.
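As a concrete illustration of that step — the inner SELECT here is a placeholder for whatever statement is misbehaving — you can generate and display a plan with EXPLAIN PLAN and the DBMS_XPLAN package:

```sql
-- Capture the optimizer's plan for a statement into the plan table,
-- then format it. The SELECT is a placeholder; substitute the slow query.
EXPLAIN PLAN FOR
  SELECT * FROM EMP WHERE DEPTNO = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The formatted output shows the join order, access methods (full scan versus index access), and estimated cardinalities, which is where most obvious flaws — the missing index, the unintended full table scan — first show themselves.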
About the author
Kimberly Floss is one of the most respected Oracle database administrators in the U.S. and is president of the International Oracle Users Group (IOUG). With more than a decade of experience, Kimberly specializes in Oracle performance tuning and is a respected expert in SQL tuning techniques. She is an active member of the Chicago Oracle Users Group and the Midwest Oracle Users Group, in addition to the IOUG. Kimberly has over 15 years of experience in the information technology industry, with specific focus on relational database technology, including Oracle, DB2, Microsoft SQL Server, and Sybase. She holds a bachelor of science degree in computer information systems from Purdue University, specializing in systems analysis and design, and an MBA with an emphasis in management information systems from Loyola University.