
What is the DUAL table costing you?


Introduction

The DUAL table is one of the first things we learn as Oracle professionals. It is created by default as a single row table, understood by the optimizer and available to everyone for selection. We typically use this table to select the name of the current user, ping the database or to generate the next sequence number for a surrogate key. Chances are, few of us consider the impact of querying this table on our applications. This article will explore the cost of querying the DUAL table and offer some less expensive options to optimize application processing.

Measuring the cost of DUAL

One way to measure the cost of DUAL is to look at logical I/O, which in turn drives CPU consumption. Oracle reports logical I/O as 'consistent gets.' A consistent get occurs when Oracle reads a block from the buffer cache in a version consistent with a given point in time, or SCN.

The SQL*Plus autotrace command will help us display the consistent gets for an executed statement. Note that the SQL*Plus array size has no effect on this statistic for single-row fetches.

Cost prior to Oracle 10g

Prior to 10g, querying the DUAL table is relatively expensive. The execution plan below shows the optimizer's access path as a full scan of the DUAL table, and the execution statistics show that the query costs a minimum of three consistent gets.

set autotrace traceonly
SELECT user FROM dual;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'DUAL'

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        209  bytes sent via SQL*Net to client
        233  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

You can create your own less expensive DUAL as an index-organized table (IOT). In the execution plan below, the optimizer's access path is an INDEX (FULL SCAN) of the BETTER_DUAL table's primary key, and the execution statistics show the number of consistent gets has dropped from three to one.

create table better_dual
   (dummy   varchar2(1) null
   ,constraint better_dual_pk primary key (dummy)
   )
   organization index;
   
insert into better_dual values ('X');
commit;

execute dbms_stats.gather_table_stats ('SA','BETTER_DUAL');
-- or
analyze table better_dual compute statistics;

set autotrace traceonly
SELECT user FROM better_dual;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1)
   1    0   INDEX (FULL SCAN) OF 'BETTER_DUAL_PK' (UNIQUE) (Cost=1 Card=1)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        206  bytes sent via SQL*Net to client
        234  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

If you have SYSDBA authority, you can reduce consistent gets to zero by selecting from the in-memory X$DUAL fixed table instead of the DUAL table.

connect sys/<sys_password> as sysdba;
create view best_dual as select * from x$dual;
grant select on best_dual to public;

connect <my_schema>/<my_password>;
create synonym DUAL for sys.best_dual;

(Reference: http://www.dba-oracle.com/t_dual_table_cache_caching.htm, Don Burleson)

Cost in Oracle 10g

In Oracle 10g the overhead of performing logical I/O for our query has been eliminated. The access path from the optimizer's execution plan shows a new fast dual operation. SQL statement execution statistics show consistent gets are zero.

Note: A 'select * from dual' statement still behaves as it did prior to 10g and requires three consistent gets, because it references the DUMMY column and must read the table block. Changing your code to 'select 1 from dual' allows the FAST DUAL operation and eliminates the consistent gets.
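You can confirm this on 10g by comparing the two forms under autotrace; per the figures above, the first still performs three consistent gets while the second performs none:

```sql
set autotrace traceonly statistics

-- References the DUMMY column, so the DUAL block must still be read
-- (3 consistent gets, as prior to 10g).
SELECT * FROM dual;

-- No column from DUAL is referenced, so the FAST DUAL operation
-- applies and no consistent gets are performed.
SELECT 1 FROM dual;
```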

set autotrace traceonly
SELECT user FROM dual;

Execution Plan
----------------------------------------------------------
Plan hash value: 1388734953

-----------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        331  bytes sent via SQL*Net to client
        377  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Key generation

Let's say we have an application which uses sequences to generate primary keys. This seems to work fine for low-volume OLTP processing. However, batch performance, which adds millions of rows to the database, is poor. Our task is to measure the cost of key generation and introduce a more efficient way to create unique keys for our batch processing.

We'll compare two methods of generating numbers, using elapsed time as the common measure. One method generates keys with a sequence by querying the DUAL table; the other generates them directly in PL/SQL code. The Oracle dbms_profiler package will help us measure the time it takes each procedure to generate one million keys.

Generation using DUAL

This method uses the NEXTVAL pseudocolumn on a sequence object to generate the next number in the sequence. Each use of the sequence requires one consistent get. Note that caching sequence numbers will eliminate the "recursive calls" and "db block gets" for cached numbers.

create sequence  my_seq
    start with   1000000000
    increment by 1
    minvalue     1000000000
    maxvalue     5000000000
    nocycle
    nocache
    order;   

set autotrace traceonly
SELECT my_seq.NEXTVAL FROM dual;

Execution Plan
----------------------------------------------------------
Plan hash value: 3108674772

---------------------------------------------------------------------------
| Id  | Operation        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT |        |     1 |       |     2   (0)| 00:00:01 |
|   1 |  SEQUENCE        | MY_SEQ |       |       |            |          |
|   2 |   FAST DUAL      |        |     1 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         15  recursive calls
          3  db block gets
          1  consistent gets
          0  physical reads
        644  redo size
        217  bytes sent via SQL*Net to client
        235  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

The following procedure generates 1 million numbers using a sequence object.

create or replace procedure up_my_seq
as
l_number number := 0;
begin
  for i in 1..1000000 loop
    select my_seq.nextval into l_number from dual;
  end loop;   
end; 
/
execute dbms_profiler.start_profiler(run_comment=>'Sequence');
execute up_my_seq;  
execute dbms_profiler.stop_profiler; 

Generation using PL/SQL code

The following procedure generates 1 million numbers directly in PL/SQL code.

create or replace procedure up_my_code
as
l_number number := 0;
begin
  for i in 1..1000000 loop
    l_number := l_number + 1;
  end loop;   
end; 
/
execute dbms_profiler.start_profiler(run_comment=>'Code');
execute up_my_code;  
execute dbms_profiler.stop_profiler;  

Performance statistics

Querying the dbms_profiler tables, we see some remarkable statistics. For one million keys, the procedure using the sequence took roughly 133 times longer than the PL/SQL code (289.12 versus 2.17 seconds). The performance difference will be even greater on resource-constrained systems.

select ppu.unit_name                                                        "PROCEDURE"
      ,lpad(substr(ppd.line#,1,6),6)                                        "LINE#"              
      ,lpad(substr(ppd.total_occur,1,7),7)                                  "OCCUR"              
      ,lpad(substr(round(ppd.total_time/1000000000,2),1,7),7)               "SECONDS"            
      ,substr(src.text,1,100)                                               "SQL"     
     from plsql_profiler_runs  ppr                                                                  
         ,plsql_profiler_units ppu                                                                  
         ,plsql_profiler_data  ppd                                                                  
         ,dba_source          src                                                                   
     where ppr.runid          = ppu.runid                                                              
       and ppu.runid          = ppd.runid                                                              
       and ppu.unit_number    = ppd.unit_number                                                        
       and ppu.unit_owner     = src.owner                                                              
       and ppu.unit_name      = src.name                                                               
       and ppd.line#          = src.line                                                               
     order by ppd.runid, ppd.unit_number,line#;   

PROCEDURE  LINE#  OCCUR   SECONDS SQL
---------- ------ ------- ------- --------------------------------------------------
UP_MY_SEQ       1       0       0 procedure up_my_seq
UP_MY_SEQ       3       1       0 l_number number := 0;
UP_MY_SEQ       5 1000001    2.25   for i in 1..1000000 loop
UP_MY_SEQ       6 1000000  289.12     select my_seq.nextval into l_number from dual;
UP_MY_SEQ       8       1       0 end;

UP_MY_CODE      1       0       0 procedure up_my_code
UP_MY_CODE      3       1       0 l_number number := 0;
UP_MY_CODE      5 1000001    2.15   for i in 1..1000000 loop
UP_MY_CODE      6 1000000    2.17     l_number := l_number + 1;
UP_MY_CODE      8       1       0 end;

Optimizing key generation

Now that we've seen a less expensive batch key generation solution, we need to figure out how it can coexist with OLTP sequences. One way is to start the sequence at a number greater than one: setting the sequence "minvalue" to 1,000,000,000 effectively reserves 1 through 999,999,999 for batch processing. Another way is to start the batch numbering higher than the sequence "maxvalue". In either case, the batch job should begin by querying the database to determine its starting key value. This technique allows concurrent batch and OLTP number generation while guaranteeing unique surrogate keys.
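As a sketch of the second approach (batch numbering above the sequence "maxvalue"), the batch job might seed its counter from the highest batch key already in the table and then increment in PL/SQL. The table and column names (MY_TABLE, ID) are hypothetical, the 5,000,000,000 floor matches the MY_SEQ maxvalue above, and a single batch run at a time is assumed; concurrent batch jobs would need their own range coordination.

```sql
create or replace procedure up_batch_load
as
  l_key  number;
begin
  -- Seed from the highest batch key already used. Batch keys live
  -- above the OLTP sequence maxvalue of 5,000,000,000, so OLTP and
  -- batch ranges can never collide.
  select greatest(nvl(max(id), 5000000000), 5000000000)
    into l_key
    from my_table
   where id > 5000000000;

  for i in 1..1000000 loop
    l_key := l_key + 1;   -- generated in code: no consistent gets
    -- insert into my_table (id /*, ... */) values (l_key /*, ... */);
  end loop;
end;
/
```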

Conclusion

We've seen that taking the Oracle DUAL table for granted may be costing you more than you think. Whether you're on Oracle 9i or 10g, rethink your use of the DUAL table and optimize processing by considering some of the alternatives introduced in this article.

About the author

Jeff McCormick is a senior data architect at a financial services company and executive director of the Connecticut Oracle User Group. Jeff has worked in IT for almost 20 years as a system, storage and database architect/administrator and has over 15 years of experience with DB2, Sybase, SQL Server, MySQL and Oracle relational database technology. He holds several certifications including Oracle Database 10g Administrator Certified Professional, Microsoft Certified Product (SQL Server) and Certified Sybase Professional Database Administrator. Jeff has performed extensive work in the area of high availability and disaster recovery, speaking and authoring several papers on availability architecture.
