
Oracle disk I/O tuning: Automated storage management, part 2

The following is part of a series on the different aspects of disk I/O performance and optimization for Oracle databases. Each tip is excerpted from the not-yet-released Rampant TechPress book, "Oracle disk I/O tuning," by Mike Ault. Check back to the main series page for upcoming installments.

Mike Ault

Mike Ault is a senior Oracle consultant with Burleson Consulting and one of the leading names in Oracle technology.


Asynchronous I/O

When a process reads or writes using the normal synchronous read() or write() system calls, it must wait until the physical I/O completes. Only once the result of the operation, success or failure, is known does the process continue with its task. During this time the process is blocked while it waits for the system call to return. This is synchronous, or blocking, I/O.

The preferred alternative is asynchronous, or non-blocking, I/O. If the process instead uses the asynchronous aio_read() or aio_write() system calls, the call returns immediately once the I/O request has been passed down to the hardware or queued in the operating system, typically before the physical I/O operation has even begun. The process can continue executing and collect the results of the I/O operation later, once they are available.

Asynchronous I/O enables write-intensive processes such as Oracle's DBWn to make full use of the I/O bandwidth of the hardware by queuing I/O requests to distinct devices in quick succession, so that they can be processed largely in parallel. It also allows processes performing compute-intensive operations, such as sorts, to prefetch data from disk before it is required, so that I/O and computation proceed in parallel.

The performance of asynchronous I/O depends largely on whether kernelized or threaded asynchronous I/O is used.

  • For kernelized asynchronous I/O, the kernel allocates an asynchronous I/O request data structure and calls an entry point in the device driver to set up the asynchronous I/O request. The device driver then queues the physical I/O operation and returns control to the calling process. When the physical I/O operation has completed, the hardware generates an interrupt to a CPU. The CPU switches into interrupt service context and calls the device driver's interrupt service routine to update the asynchronous I/O request data structure and, possibly, to signal the calling process with SIGIO.
  • The threaded implementation of asynchronous I/O uses the kernel's light-weight process functionality to simulate asynchronous I/O by performing multiple synchronous I/O requests in distinct threads. This achieves I/O parallelism at the expense of the additional CPU usage associated with thread creation and extra context switching. If threaded asynchronous I/O is used very intensively, these costs can add as much as 5% to system CPU usage. For this reason, kernelized asynchronous I/O is the preferred method.

Kernelized asynchronous I/O, popularly known as KIO, is only available if the underlying file system uses the Oracle Disk Manager (ODM) API, Veritas Quick I/O, or a similar product that routes I/O through a pseudo device driver that can serve as the locus for asynchronous I/O request completion. KIO is also available on raw partitions. Many operating systems additionally require special configuration of device files, device drivers and kernel parameters to enable and tune kernelized asynchronous I/O. Achieving asynchronous I/O this way is a decidedly complex configuration task.

The good news is that KIO is available automatically with ASM files.


Disk striping

In the layout of database files, we often see hot spots, where the same disk or set of blocks is accessed far more often than others. These hot spots build up long request queues and prevent the system from taking advantage of concurrent disk I/O.

Striping Oracle database files across multiple physical disk spindles improves the concurrency of access to hot spots, improves the transfer rate for large reads and writes, and spreads the I/O load evenly across all the disks in the stripe. This makes it possible for the full I/O bandwidth of the storage hardware to be brought to bear on any I/O workload. The trend toward fewer, larger-capacity disks makes it increasingly important to be able to use the full I/O bandwidth in this way.

When striping is implemented at the storage array or hardware level, Oracle database files inherently get the benefit of striping. Striping can also be implemented with an appropriate volume manager.

ASM provides generic volume-manager-style striping at the host level, which avoids the need for expensive third-party striping software.

About the author

Mike Ault is a senior Oracle consultant with Burleson Consulting and one of the leading names in Oracle technology. The author of more than 20 Oracle books and hundreds of articles in national publications, he holds five Oracle Masters Certificates and became the first popular Oracle author with his landmark book "Oracle7 administration and management." Mike also wrote several of the "Exam Cram" books and enjoys a reputation as a leading author and Oracle consultant.
