
Predictable Oracle applications tuning, part 1

Performance issues with Oracle apps stem from a variety of often unexpected causes. This tip highlights a few misconceptions about tuning and outlines a management approach.

This tip is brought to you by the International Oracle Users Group (IOUG), a user-driven organization that empowers Oracle database and development professionals by delivering the highest quality information, education, networking and advocacy. It is excerpted from the paper "Predictable Oracle applications tuning," by David Welch. Become a member of the IOUG and gain access to thousands more tips and techniques created for Oracle users by Oracle users.


We've been exposed to a good number of Oracle Applications / E-Business Suite installations with significant performance challenges. We have come to the conclusion that significant performance improvements can indeed be expected from such installations. To put it differently, it would be a rare installation where that is not the case.

Up for debate

Our bottom-up, end-to-end tuning approach for production system stacks has invariably yielded fruit sooner than we believe would have been possible by following extensive checklists. I'd like to bring up some issues for debate:

Most performance improvement opportunities are at the application level: This claim comes from a prominent Metalink note on performance tuning. It doesn't correlate statistically with our experience tuning system stacks.

Two-day average engagement: "The book" makes this claim. Our experience doesn't support it. I believe an Oracle Applications performance clinic ought to be a minimum twelve-day engagement. The entrance meeting consumes most of the first morning. The last two days are consumed with executive- and technical-level documentation of the findings, wins, and follow-on recommendations. To speak in dramatic terms: if an improvement isn't documented, it can't be repeated. If a problem isn't documented, it will probably recur. If a problem and its resolution aren't documented, mentoring gets difficult.

Extent fragmentation: This supposedly isn't a problem in OLTP systems. We've heard all the arguments about how OLTP transactions over seriously fragmented tables that are entirely keyed-unique shouldn't impact performance. Yet whenever we reorganize to eliminate the fragmentation, the performance improvements are dramatic. Oracle storage management improvements are making strides in minimizing the impact of fragmentation.

Tune disk I/O, since buffer I/O isn't a big deal: Two points here. First, disk I/O isn't actually 10,000 times more expensive than memory buffer I/O; the number is closer to 70. Second, even if your CPU seems to be absorbing heavy buffer I/O without a noticeable performance hit, it is definitely limiting your scalability. As time goes on, we are less inclined to ignore high memory buffer I/O while looking for performance opportunities.
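The arithmetic behind this point can be sketched in a few lines. The latency figures below are illustrative round-number assumptions (not measurements from the paper); only the roughly 70x multiplier comes from the text:

```python
# Illustrative sketch: why high buffer (logical) I/O still costs you,
# even when the buffer cache absorbs most physical reads.
# LOGICAL_READ_US is an assumed round figure, not a measurement.

LOGICAL_READ_US = 50.0       # assumed CPU cost of one buffer-cache read, in microseconds
PHYSICAL_MULTIPLIER = 70     # the article's estimate: disk I/O ~70x a logical read
PHYSICAL_READ_US = LOGICAL_READ_US * PHYSICAL_MULTIPLIER

def cpu_seconds(logical_reads_per_sec: float) -> float:
    """CPU time consumed per wall-clock second just servicing logical reads."""
    return logical_reads_per_sec * LOGICAL_READ_US / 1_000_000

# A workload doing 100,000 logical reads/sec burns ~5 CPU-seconds per second
# on buffer gets alone: invisible in response time, fatal to scalability.
print(cpu_seconds(100_000))
```

The point of the sketch is that logical reads are cheap individually but are pure CPU, so a high buffer-get rate silently consumes the headroom you would otherwise use to scale.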

OATablespace model and migration utility: A Metalink note published in October 2003 claims, "This new model ... introduces run-time performance gains." The idea is to consolidate the 100-plus Oracle Applications tablespaces down to a count in the teens. Potential storage savings? Maybe. More operationally efficient? It depends. We haven't gotten our hands on this utility yet, but we have yet to understand, even at the whiteboard level, how tablespace consolidation holds the promise of performance improvements.

Defragmenting your client PC: A lot is made of this in "the book." That might have been true when the book was written and the fat "smart client" was in play. But now that the Oracle Applications client is a thinner client (as soon as Oracle does away with JInitiator, we'll call the browser interface a thin client), don't look for performance improvements from defragmenting PC hard drives.

Load the module patch: Oracle Support probably suggests this for performance problems more often than it should. The problem is that patching frequently introduces instability. If careful attention isn't given to patch dependencies, you can find yourself forced into loading entire family packs you had no intention of loading, with the resulting impact on your system stack's stability.

Project management

Project management is critical. Oracle Applications performance engagements are as political as they are technical. Someone has to be calling the shots, and functional areas must be prioritized. If need be, prioritize by having business units calculate the financial cost of their however-many-second screen delays, multiplied by the number of affected users and their revenue per minute.

One of the costs of success in an Apps performance clinic is that there's documentation to do. Another is that technical and business users come out of the woodwork hoping to divert tuning resources to their area. User expectations have to be managed, because not all areas will produce equally dramatic results. There has to be a gatekeeper and a process for prioritizing, and at times even filtering, access to the performance team. On the one hand, users frequently have ideas and hunches that lead to underlying performance causes; on the other, unmanaged interaction with users can strangle your progress. Success also produces the welcome problem of exposing the next layer of performance problems.
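The delay-cost prioritization suggested above amounts to a short calculation. All the input figures in this sketch are hypothetical placeholders a business unit would supply:

```python
def delay_cost_per_day(delay_seconds: float,
                       screens_per_user_per_day: int,
                       user_count: int,
                       revenue_per_user_minute: float) -> float:
    """Estimate daily revenue exposure from a per-screen delay.

    Inputs come from the business unit; the formula simply multiplies
    total delay time by users and their revenue per minute.
    """
    wasted_minutes = delay_seconds / 60 * screens_per_user_per_day * user_count
    return wasted_minutes * revenue_per_user_minute

# Hypothetical example: a 5-second delay on each of 200 screens/day,
# 300 users, and $2.00 of revenue per user-minute.
print(delay_cost_per_day(5, 200, 300, 2.00))
```

Even rough placeholder numbers like these give the gatekeeper an objective basis for ranking one business unit's screen delay against another's.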

What your users can't tell you

The bottom-up approach with one customer revealed a single homegrown package consuming a full 25% of I/O resources. At another customer, a single query was responsible for 4.3 terabytes of buffer I/O per week. Tuning reduced the buffer overhead to 0.06% of its original impact. That problem was draining CPU, and in an environment where CPU expansion was under serious consideration. Nobody knew; the system stack was absorbing it.
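Scaling the figures in that example shows how dramatic the win was. This sketch assumes the 0.06% figure applies directly to the weekly volume; both inputs are the article's own numbers:

```python
# Sketch of the buffer-I/O reduction cited above, using only the
# article's figures: 4.3 TB/week before tuning, and a tuned footprint
# of 0.06% of the original impact.

before_tb = 4.3               # weekly buffer I/O before tuning
remaining_fraction = 0.0006   # 0.06% of original impact

after_gb = before_tb * 1024 * remaining_fraction
print(f"After tuning: about {after_gb:.1f} GB/week")
```

Under that assumption, a single tuned query drops from terabytes to a few gigabytes of buffer I/O per week, which is the kind of CPU relief that can defer a hardware purchase.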

One of the best-kept secrets about performance tuning is found in the Oracle Tuning Guide; as a team, we've been quoting it for years. For beta or production performance problems, you start at the bottom of the system stack and diagnose upward. Unfortunately, too often performance diagnostics focus on just the four layers in the middle of the system stack:

  • Logical database structure
  • Database operations
  • Access paths (SQL)
  • Memory allocation
But frequently we find significant performance problems underneath Oracle at these levels:
  • I/O and physical database structure
  • Resource contention
  • Underlying platform(s)

Part 2 covers the v$sqlarea view, Statspack, index counts, CPU and user involvement in applications tuning.
