Q

Problems with statement performance

DELETE grey_list
 WHERE  notes = :b1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  16297   3191.25    4786.15   18122153   18350422         12           6
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    16298   3191.25    4786.15   18122153   18350422         12           6

Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer goal: CHOOSE
Parsing user id: 19 (ODMSAPP) (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  DELETE STATEMENT   GOAL: CHOOSE
      0   DELETE OF 'GREY_LIST'
      0    INDEX   GOAL: ANALYZED (FAST FULL SCAN) OF 'PK_GREY_LIST' (UNIQUE)

Questions:
1. Why does it need to execute 16297 times to delete the six unwanted rows, even though it uses an index?
2. Why are the physical reads so high?
First, you're using a bind variable, and the CBO (cost-based optimizer) often cannot generate accurate cost figures for LIKE or range predicates against bind variables. This may result in the appropriate index not being chosen.
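
As a quick sanity check, you can look at the plan the optimizer picks when it is given an actual value instead of :b1. This is just a sketch: the literal is a placeholder for one of your real NOTES values, and DBMS_XPLAN assumes Oracle 9i or later (on older releases, query PLAN_TABLE directly or run utlxpls.sql).

EXPLAIN PLAN FOR
  DELETE FROM grey_list
   WHERE notes = 'SOME_ACTUAL_VALUE';  -- placeholder literal

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);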

Second, the access method was an index fast full scan. An index FFS scans all of the blocks in the index, and the data it returns is not sorted. Is the index being used (PK_GREY_LIST) a composite index? What columns make up that index? Perhaps the NOTES column is just one of the columns in that index and not the leading column, which would explain the use of the index FFS instead of a more direct and efficient access method such as an index range scan or index unique scan. An index FFS is chosen when the statistics indicate that it will be cheaper than a full table scan followed by a sort; apparently the optimizer decided that reading all of the information from the index and not sorting it was more efficient than a full table scan or any other method available to it. Because an index FFS still has to read every block in the index, it can prove very inefficient when only a handful of rows are needed, as you have seen. So here, I'd suggest you check the actual index (PK_GREY_LIST) and make sure it is appropriate for this statement.
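
To see exactly which columns make up PK_GREY_LIST (and any other indexes on GREY_LIST), you can ask the data dictionary. The CREATE INDEX after it is only a sketch of what a direct index on NOTES might look like; the index name is made up, so confirm such an index actually suits your data before adding it.

SELECT index_name, column_name, column_position
  FROM user_ind_columns
 WHERE table_name = 'GREY_LIST'
 ORDER BY index_name, column_position;

-- hypothetical index that would allow a range scan on NOTES
CREATE INDEX grey_list_notes_idx ON grey_list (notes);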

Third, do you have statistics collected on the GREY_LIST table? Statistics that are missing or out of date can cause the optimizer to choose inefficient plans.
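
If they are missing or stale, something along these lines will gather them. This is a sketch using DBMS_STATS; the owner name is taken from the trace, and CASCADE => TRUE gathers statistics on the table's indexes as well.

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'ODMSAPP',    -- schema owner shown in the trace
    tabname => 'GREY_LIST',
    cascade => TRUE);        -- include the table's indexes
END;
/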

Fourth, you asked why it needed to execute 16K times. The execute count in the trace is the number of times the statement itself was run (the recursive depth of 1 suggests it is being issued repeatedly from PL/SQL or application code), and because each of those executions performed an index FFS, all of the index blocks had to be read each time to determine which rows to delete. That is also why the disk and buffer reads are so high.

I'd suggest that you 1) verify that you have a suitable index on the NOTES column that can be used directly, and 2) make sure you have valid, up-to-date statistics collected. Then try testing the delete with a hint to force a specific index, or try it without a bind variable, to see whether the optimizer will choose a more appropriate access path; a sketch of both tests follows.
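
For example, assuming an index covering NOTES exists (such as the hypothetical GREY_LIST_NOTES_IDX sketched above), the two tests might look like this; the index name and the literal are placeholders to substitute with your own.

-- force the use of a specific index
DELETE /*+ INDEX(grey_list grey_list_notes_idx) */
  FROM grey_list
 WHERE notes = :b1;

-- the same delete with a literal instead of the bind variable
DELETE FROM grey_list
 WHERE notes = 'SOME_ACTUAL_VALUE';  -- placeholder literal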

This was first published in December 2003
