Oracle disk I/O tuning: SCSI tuning under AIX

The following is part of a series on the different aspects of disk I/O performance and optimization for Oracle databases. Each tip is excerpted from the not-yet-released Rampant TechPress book, "Oracle disk I/O tuning," by Mike Ault. Check back to the main series page for upcoming installments.

Mike Ault

Mike Ault is a senior Oracle consultant with Burleson Consulting and one of the leading names in Oracle technology.

SCSI tuning under AIX

AIX, like HP-UX, is a System V UNIX derivative (Solaris is based on SVR4). However, AIX has been heavily modified from the base System V release, while HP-UX remains much closer to the original. The SCSI interface in AIX has several parameters that can be adjusted. Generally speaking, IBM disks can be read by the interface and adjustments are made automatically; aftermarket drives (that is, drives not of IBM manufacture) may need to have the proper settings made manually. Let's look at the available settings in AIX.

Setting AIX SCSI-adapter and disk-device queue limits

The AIX operating system, like Linux, Windows, and HP-UX, has the ability to enforce limits on the number of I/O requests that can be outstanding from the SCSI adapter to a given SCSI bus or disk drive. These limits are intended to exploit the hardware's ability to handle multiple requests while ensuring that the seek-optimization algorithms in the device drivers are able to operate effectively.

For non-IBM devices, it is sometimes necessary to modify the AIX default queue-limit values to achieve the best performance, rather than accept defaults chosen to handle the worst possible case. Let's look at situations in which the defaults should be changed and the values IBM recommends.

AIX SCSI settings with a non-IBM disk drive

For IBM disk drives, the AIX default for the number of requests that can be outstanding at any given time is 3 (8 for SSA); there is no direct interface for changing this value. For non-IBM disk drives, the default hardware queue depth is a performance-killing 1. If your non-IBM drives can accept multiple commands (most modern drives can), it behooves you to change this setting so you get the maximum performance from those drives. For example, you can display the default characteristics of a non-IBM disk drive with the lsattr command:

# lsattr -D -c disk -s scsi -t osdisk

pvid          none   Physical volume identifier       False
clr_q         no     Device CLEARS its Queue on error True
q_err         yes    Use QERR bit                     True
q_type        simple Queuing TYPE                     True
queue_depth   3      Queue DEPTH                      True
reassign_to   120    REASSIGN time out value          True
rw_timeout    30     READ/WRITE time out value        True
start_timeout 60     START unit time out value        True

You should use the AIX-provided interface, SMIT, to change the disk parameters as required. SMIT allows fast-pathing to specific points in its interface from the command line; the fast path smitty chgdsk takes you to the disk section of SMIT. As an alternative, you can use the chdev command to change these parameters.
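
Scripting a check first can be handy. The snippet below is a generic-shell sketch that extracts queue_depth from lsattr-style output; the sample text merely stands in for a real device (on a live AIX system you would query the device directly with lsattr -El hdiskN -a queue_depth):

```shell
# Sample lsattr output, in the format shown above (not a real device).
lsattr_output='pvid          none   Physical volume identifier       False
q_type        simple Queuing TYPE                     True
queue_depth   3      Queue DEPth                      True'

# Pull out the queue_depth value (second column of the matching row).
depth=$(printf '%s\n' "$lsattr_output" | awk '$1 == "queue_depth" { print $2 }')
echo "current queue_depth: $depth"
```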

Let's look at an example using chdev. If your system contained a non-IBM SCSI disk drive hdisk7, the following command enables queuing for that device and sets its queue depth to 3:

# chdev -l hdisk7 -a q_type=simple -a queue_depth=3
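
If several non-IBM drives need the same change, the chdev call can be generated in a loop. This is a dry-run sketch with hypothetical disk names: it only prints the commands, so nothing is changed until you drop the echo stage (and on AIX the devices must not be in use, or use chdev -P and reboot):

```shell
# Print (do not run) the chdev commands for a list of hypothetical
# non-IBM drives; remove the echo stage to actually apply the change.
cmds=""
for dsk in hdisk7 hdisk9 hdisk10; do
    cmd="chdev -l $dsk -a q_type=simple -a queue_depth=3"
    echo "$cmd"
    cmds="$cmds$cmd
"
done
```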

That's fine for a single disk, but what about when you are dealing with an entire non-IBM disk array? Let's look at that next.

Setting SCSI parameters for a non-IBM disk array

Any disk array appears to AIX as a single, rather large, disk drive. A non-IBM disk array, like a non-IBM disk drive, will be of class disk, subclass scsi, type osdisk (short for "Other SCSI Disk Drive"). However, we know a disk array actually contains a number of physical disk drives, each of which can handle multiple requests; the queue depth for the array device therefore has to be set high enough to allow efficient use of all of the physical devices. For example, if hdisk8 were an eight-disk non-IBM disk array in which each disk supports a queue depth of 3, an appropriate change using chdev would be:

# chdev -l hdisk8 -a q_type=simple -a queue_depth=24
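
The queue_depth=24 above is just arithmetic: the number of member disks times the per-disk queue depth. A minimal sketch, with hypothetical values matching the example:

```shell
# Size an array device's queue depth as
#   (physical drives in the array) x (queue depth each drive supports)
disks=8        # member drives in the hypothetical array
per_disk=3     # queue depth each drive supports
array_depth=$((disks * per_disk))
echo "queue_depth for the array device: $array_depth"
```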

If the disk array is attached via a SCSI-2 Fast/Wide SCSI adapter bus, it may also be necessary to change the outstanding-request limit for that bus. Let's look at that next.

Changing AIX disk adapter outstanding-request limits

The AIX SCSI-2 Fast/Wide Adapter supports two SCSI buses: one for internal devices and one for external devices. There is a limit on the total number of outstanding requests that can be defined for each bus. The default value of that limit is 40 for each bus, and the maximum is 128. When an IBM disk array is attached to a SCSI-2 Fast/Wide Adapter bus, the outstanding-request limit for the bus is increased automatically to accommodate the queue depth of the disk array. For a non-IBM disk array, however, this change must be performed manually. For example, to use chdev to set the outstanding-request limit of adapter scsi2 to 80, you would use:

# chdev -l scsi2 -a num_cmd_elems=80

Note that if you are using the SCSI-2 High Performance Controller, the maximum number of queued requests is 30, and that limit cannot be changed. For that reason, you should ensure the sum of the queue depths of the devices attached to a SCSI-2 High Performance Controller does not exceed 30.
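
A quick way to verify this constraint is to total the configured queue depths and compare the sum against the fixed limit. The depths below are hypothetical sample values; on a live AIX system each would come from lsattr -El hdiskN -a queue_depth:

```shell
# Check that the sum of attached devices' queue depths stays within
# the SCSI-2 High Performance Controller's fixed limit of 30.
limit=30
total=0
for depth in 3 8 16; do     # hypothetical per-device queue depths
    total=$((total + depth))
done
if [ "$total" -gt "$limit" ]; then
    echo "WARNING: total queue depth $total exceeds controller limit $limit"
else
    echo "OK: total queue depth $total within controller limit $limit"
fi
```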

You should also note that the original RS/6000 SCSI adapter does not support queuing. It is inappropriate to attach a disk array device to such an adapter.

Controlling the number of system pbufs in AIX

In AIX the Logical Volume Manager (LVM) uses a construct called a "pbuf" to control a pending I/O to disk. In AIX Version 3, one pbuf is required for each page being read or written. In systems that do large amounts of sequential I/O, this can result in depletion of the pool of pbufs. The vmtune command can be used to increase the number of pbufs to compensate for this effect.

In AIX Version 4, only a single pbuf is used for each sequential I/O request, regardless of the number of pages involved. This greatly decreases the probability of running out of pbufs, and tuning pbufs in Version 4 is generally not advised.

In AIX Version 5, you no longer need to adjust this parameter.

About the author

Mike Ault is a senior Oracle consultant with Burleson Consulting and one of the leading names in Oracle technology. The author of more than 20 Oracle books and hundreds of articles in national publications, he holds five Oracle Masters Certificates and was the first popular Oracle author with his landmark book "Oracle7 administration and management." Mike also wrote several of the "Exam Cram" books.
