
Are left outer joins messing up the optimizer?

I'm on Oracle 9i Release 2 on XP. I am having problems understanding exactly what the optimizer is trying to do. I have a small query joining three tables. Indexes were created for the primary keys. They look something like:

table a (aid number, other data columns);

table b (aid number, version number, other data columns);

table c (aid number, version number);
In reality, table c is a temporary table, but the problem persists even if I create it as a "real" table. There is no index on table c; it only has about 12 rows, and we use it to drive the query.
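For reference, a driving table like c might be declared as a global temporary table along these lines (a sketch only; the column types are assumed from table b's definition):

```sql
-- Hypothetical declaration of the driving temp table;
-- column names and types are taken from the question's schema.
CREATE GLOBAL TEMPORARY TABLE c (
  aid     NUMBER,
  version NUMBER
) ON COMMIT PRESERVE ROWS;
```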

Table a has 25,000 rows, table b has in excess of 25,000 rows, and table c has about 12 rows. The query is:

SELECT A.aid, A.otherdatacolumns
  FROM A, B, C
 WHERE A.AId = B.AId
   AND B.AId = C.AId
   AND B.Version = C.Version;
We run this with autotrace on, and we get a good query. We gather statistics for the tables, run the query again, and full table scans appear for one or both of tables a and b. If we put a RULE hint in the SELECT statement, we get a good query again. This query is part of a larger group of queries that I am trying to port from Sybase to Oracle, with left outer joins, etc., so any help you could give here would be appreciated. I also suspect from other work I have been doing that the optimizer can't cope with LEFT OUTER JOIN syntax and much prefers the Oracle (+) notation: I have a query where, if I use LEFT OUTER JOIN, I get a bad result from the optimizer, but if I use the (+) syntax, I get a good plan. Any ideas?
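For concreteness, here is a sketch of the two outer-join spellings and the hint being compared (the table and column names come from the question's schema; the select list is illustrative):

```sql
-- ANSI syntax, which the questioner finds the 9i optimizer handles poorly:
SELECT a.aid, b.version
  FROM a LEFT OUTER JOIN b ON a.aid = b.aid;

-- Equivalent pre-ANSI Oracle (+) notation:
SELECT a.aid, b.version
  FROM a, b
 WHERE a.aid = b.aid (+);

-- Forcing the rule-based optimizer for a single statement:
SELECT /*+ RULE */ a.aid, b.version
  FROM a, b
 WHERE a.aid = b.aid (+);
```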

Am I missing something? Is there something disabled that I should enable?

The reason you see differences in the execution plan is that after you collect statistics, the cost-based optimizer can use them to generate what it believes to be an "optimal" plan. You comment that before analyzing the tables, you get a "good query". Then after analyzing the tables, you see full table scans. Then if you put in a RULE hint (which bypasses the cost-based optimizer and thus the statistics you collected), you get a "good query" again. When you say "good query", do you mean that you consider a "bad query" to be one with a full table scan? Have you timed the executions of the query each way? Do you see a big difference in the time to execute for the "good query" vs. the "bad query"?
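One way to time the two scenarios is from SQL*Plus (a sketch; DBMS_STATS is one of several ways to gather statistics, and the table names come from the question):

```sql
-- Show the plan and execution statistics without fetching rows,
-- and report elapsed time per statement.
SET AUTOTRACE TRACEONLY
SET TIMING ON

-- Run the query once without statistics, then gather statistics
-- (including index statistics via cascade) and run it again:
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'A', cascade => TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'B', cascade => TRUE);
```

Comparing the elapsed times, not just the plans, tells you whether the full table scans are actually hurting.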

What I want to point out is that full table scans don't always equate to "bad". If the optimizer is choosing a FTS over using an index, it thinks it has chosen the optimal plan for your query. Now, if you are seeing a difference in response time under each scenario, here are a couple of things to look at that affect how the optimizer makes decisions.

There are a few init.ora parameters that you may want to look at modifying, which will affect the choices the optimizer makes. They are:

  1. optimizer_index_caching: you can change this from its default of 0 to something higher, let's say 50. This will cause the optimizer to assume that it will find 50% of index blocks in the cache.
  2. optimizer_index_cost_adj: you can change this from its default of 100 to something lower, let's say 50. This lets you tune optimizer behavior for access path selection to be more or less index friendly, that is, to make the optimizer more or less prone to selecting an index access path over a full table scan. The default makes FTS and index use equal.
  3. optimizer_mode: set this to FIRST_ROWS. That way, the optimizer will attempt to select a plan that will minimize response time.
If you are seeing response time issues with the use of the FTS, then look at changing these parameters to encourage the optimizer to weight index usage more heavily. If you don't see a difference in response times between FTS and index use, then why worry? The optimizer is doing its thing!
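These parameters can be tried at the session level before committing them to init.ora; the values shown are the ones suggested above:

```sql
-- Session-level trial of the suggested optimizer settings.
ALTER SESSION SET optimizer_index_caching = 50;
ALTER SESSION SET optimizer_index_cost_adj = 50;
ALTER SESSION SET optimizer_mode = FIRST_ROWS;
```

Re-run the query with autotrace on after each change to see which setting moves the plan.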

Hope that helps!
