
Tune SQL with SQL*Plus autotrace

Performance tuning tip #3 from "30 tips in 30 minutes," brought to you by the IOUG.




Once you identify slow-running SQL with Statspack (or other utilities such as tracing plus tkprof, Oracle Enterprise Manager's Top Sessions or a vendor monitoring tool), you can use SQL*Plus autotrace to examine the execution plan. For example, consider this actual query that was found to be expensive and time-consuming.

SELECT count(*)
  FROM report r, bank u
 WHERE r.type = :u1
   AND r.bankid = :u2
   AND r.userbankaccountid = u.userbankaccountid
   AND r.statusid = 3
ORDER BY LastModified;

This SQL uses two bind variables to return a count from a join of two tables. It was run in SQL*Plus with autotrace enabled, which produced the execution plan and statistics shown below.
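For reference, turning on autotrace for a session looks roughly like this (a minimal sketch, not taken from the original tip; it assumes the PLAN_TABLE and the PLUSTRACE role have already been installed, and the bind values are made up for illustration):

REM One-time setup by a DBA: @?/rdbms/admin/utlxplan.sql and @?/sqlplus/admin/plustrce.sql,
REM then GRANT PLUSTRACE to the tuning user.
SET AUTOTRACE ON                 -- report rows, execution plan and statistics
VARIABLE u1 NUMBER
VARIABLE u2 NUMBER
EXEC :u1 := 1                    -- sample report type (assumed value)
EXEC :u2 := 1001                 -- sample bank id (assumed value)
-- now run the SELECT shown above

The autotrace execution plan for the query looked like this: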

Execution plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (AGGREGATE) 
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'BANK'
   4    2       TABLE ACCESS (BY INDEX ROWID) OF 'REPORT'
   5    4         AND-EQUAL
   6    5           INDEX (RANGE SCAN) OF 'REPORT_BANKID'
   7    5           INDEX (RANGE SCAN) OF 'BANKACCOUNTID'

This query reads each row of the BANK table and, for each of those rows, performs two index reads on the REPORT table plus a table read. The problem is that the BANK table has over 350,000 rows. Therefore, to get a count of the rows for this query, Oracle has to make three or more reads multiplied by 350,000, which is well over a million block visits! The execution statistics for this query look like this:

Statistics
---------------------------------------------------------
          0  recursive calls
          4  db block gets
    2124587  consistent gets
         36  physical reads
          0  redo size
        187  bytes sent via SQL*Net to client
        316  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

The bad news here is the consistent gets figure of over two million. All these reads take CPU time, because all but 36 of them came from memory. Why so many? Because Oracle keeps reading the same REPORT index and table blocks over and over. The bottom line for the user is the elapsed time the query takes to return its result. By looking at a trace file for this query, you can see the elapsed time in this output:

call     count       cpu    elapsed       disk      query 
------- ------  -------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0
Execute      5      0.00       0.01          0          0
Fetch        5    526.09     548.96        667   10538601
------- ------  -------- ---------- ---------- ----------
total       10    526.09     548.97        667   10538601

Bottom line: each time the query runs, it takes about 100 seconds to complete. During the unprocessed-reports function, this query is called five times, so this one query alone takes almost 10 minutes to answer the count five times.
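The trace output above comes from SQL trace formatted with tkprof. A minimal way to produce a similar report is roughly the following (a sketch; the trace-file name and directory vary by instance, so the path is only a placeholder):

ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the slow statement(s) here, then:
ALTER SESSION SET SQL_TRACE = FALSE;

REM From the operating system, format the trace file written to USER_DUMP_DEST:
REM   tkprof <udump_dir>/<instance>_ora_<spid>.trc report.prf sys=no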

After one index was added and the BANK and REPORT tables were analyzed, the optimizer was able to choose a different execution path. The index added for this query was on the USERBANKACCOUNTID column of the BANK table.
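Expressed as SQL, that change was roughly the following (a sketch; the index name is an assumption, and on later releases DBMS_STATS is the preferred way to gather statistics):

CREATE INDEX bank_userbankaccountid ON bank (userbankaccountid);

ANALYZE TABLE bank   COMPUTE STATISTICS;
ANALYZE TABLE report COMPUTE STATISTICS;

-- or, with the supplied package:
-- EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BANK',   cascade => TRUE)
-- EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'REPORT', cascade => TRUE)

With fresh statistics and the new index in place, autotrace shows a different plan: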

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=85 Card=1 Bytes=20)
   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS (Cost=85 Card=4435 Bytes=88700)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'REPORT'
   4    3         AND-EQUAL
   5    4           INDEX (RANGE SCAN) OF 'REPORT_TYPE' (NON-UNIQUE)
   6    4           INDEX (RANGE SCAN) OF 'REPORT_BANKID'
   7    2       INDEX (RANGE SCAN) OF 'BANKACCOUNTID' 

This query still looks up the REPORT table through the two indexes as before, but the join to the BANK table now occurs through the new index. The cardinality of the indexed column is high (it has many distinct values), so little of the index has to be read to find the table rows needed for the count; the check sketched below confirms that. The good news shows up in the execution statistics.
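As a hypothetical sanity check on that cardinality claim (not part of the original tip), you could compare the number of distinct values in the join column to the total row count:

SELECT COUNT(*)                            AS total_rows,
       COUNT(DISTINCT userbankaccountid)   AS distinct_accounts
  FROM bank;

The closer distinct_accounts is to total_rows, the more selective an index on that column will be.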

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        130  consistent gets
          0  physical reads
          0  redo size
        187  bytes sent via SQL*Net to client
        316  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Only 130 consistent gets are required to resolve this query, so the database does much less work to yield the same answer. Consequently, the times for this query improve as well, as shown in the trace output.

call     count       cpu    elapsed       disk      query
------- ------  -------- ---------- ---------- ---------- 
Parse        3      0.00       0.00          0          0 
Execute      3      0.00       0.00          0          0 
Fetch        6      0.02       0.02          0        771 
------- ------  -------- ---------- ---------- ---------- 
total       12      0.02       0.02          0        771 

The improvement is dramatic. During this execution of the report function, the query was executed three times, and the elapsed time for those three executions was two hundredths of a second. This query improvement and others like it explain how this report function can run in 15 minutes one day and in 15 seconds the next.



About the author: Kenny Smith has been working with Oracle technology on HP servers for over a decade. He specializes in Oracle database architecture, database administration and development. He has presented at numerous Oracle conferences on two continents. He has published many articles describing Oracle solutions and has co-authored "Oracle backup and recovery 101" from Oracle Press.

IOUG: Become a member of the IOUG to access the paper referenced here and a repository of technical content created for Oracle users by Oracle users.
