This tip is an excerpt from the paper "Keep downtime short on 11i migration" and is brought to you by the International Oracle Users Group (IOUG). Become a member of the IOUG to access the paper referenced here and a repository of technical content created for Oracle users by Oracle users.
Evaluating which tasks can be done before or after critical downtime
The simplest approach to upgrading Oracle Applications is to follow the Upgrade Manual exactly. Since the manual lacks instructions for many possible upgrade optimizations, the most important task is to go through the whole manual and your upgrade plan and check which tasks could be accomplished before or after the required Applications downtime. A good example of such an activity is applying file system patches during the test upgrade and reusing the patched file system for the later production upgrade.
It is not possible to evaluate all the upgrade tasks in this document, but here is a short list of examples of moving tasks (or parts of tasks) before or after the upgrade process:
- Use Oracle's new tool, The Upgrade Manual Script (TUMS), to report upgrade tasks that don't have to be run in your environment (see MetaLink note 230538.1).
- Gather table and index statistics in the source system before upgrading. Although 10.7 does not support the cost-based optimizer (CBO), we can still analyze all needed tables while keeping the init.ora parameter optimizer_mode=rule; the rule-based optimizer (RBO) will then be used even though statistics exist. This analysis should be done after data purging or larger updates, so the statistics are up to date.
- For example, in Order Management, Shipping Execution and Advanced Pricing you can use the Upgrade Bifurcation feature to upgrade only active transactions during the AutoUpgrade process; inactive transactions can be upgraded once the new system is already in use.
- Importing, processing and purging data in all kinds of interfaces are standard Applications upgrade steps, but naturally, these tasks can and should be done before taking down the system. If there is a continuous feed of transactions into the interface tables, process the majority of records during uptime and the remaining few after the system has been taken down for the upgrade.
- A few smaller tasks, like installing the online help, could be done after the system has gone live, but there are counterarguments: users who have just started using the new system are likely to need the help system the most.
- If you need to add any new languages to the new system, it is reasonable to do so afterwards, when the new system is already in use and proven to be working. Otherwise you need to apply patches for the extra languages during the upgrade, lengthening the downtime. If converting the character set to UTF-8, do it after Applications has been successfully upgraded to 11i.
- Any data issues found during test cycles should be fixed in production as well, so that they are resolved before the production upgrade starts.
- Purge or archive old and unnecessary data before the upgrade. The less data there is to upgrade, the faster the upgrade.
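To illustrate the statistics point above, here is a minimal sketch of gathering statistics in a 10.7 source system during uptime while the RBO stays in effect. The table name is only an example; in practice every table needed by the new release would be analyzed, and this fragment obviously has to run against the actual Oracle instance:

```sql
-- Run in the 10.7 source system before the upgrade downtime.
-- With optimizer_mode=rule in init.ora, the RBO is still used even
-- though statistics now exist, so this is safe for the running system.
ANALYZE TABLE gl.gl_je_lines COMPUTE STATISTICS;

-- Repeat for the other tables (or use ESTIMATE STATISTICS for very
-- large tables to keep the analyze window short).
```

Doing this during uptime removes the table-analysis step from the critical downtime window.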
Also, not all customizations and interfaces are required in the first minutes after going live with the new system. It might be reasonable to schedule less important and less used customizations to be implemented later, after going live successfully with the core functionality. On the other hand, if an automated system is used for applying customizations, the whole process usually won't take much time (it is easy to create a script that copies all the custom forms/reports into place, creates triggers, etc.).
It is always a good idea to check for national holidays and plan Applications downtime over a weekend or holiday if possible, keeping the business impact of non-availability to a minimum. Of course, this doesn't shorten the hours spent on migration, nor does it help much in an international installation or a 24x7x365 scenario, but it is definitely worth considering in most upgrades. For example, even though manufacturing may have to work 24x7, the financial department doesn't work during national holidays, so the actual price of the downtime will be lower.
Database migration to Applications-certified versions
Many Applications 10.7 sites are still running on desupported Oracle Server 7.3 or 8.0 databases, so the first major task during an upgrade is to move all the data to a new, certified release of the database server. Currently, the choices range from version 8.1.7 to 9.2.
There are several ways to migrate data to the new database version, but the most important question regarding upgrade downtime is whether to do the database upgrade prior to the critical downtime period or within it. Both approaches have their pluses and minuses; there is again a tradeoff between downtime and preparation time: the shorter you want the first to be, the more you will have to spend on the second.
Upgrading the database prior to Applications
Upgrading the Oracle database before Oracle Applications can cut several hours of downtime from the production migration, especially if the export/import upgrade path is chosen. Even the mig utility can take several hours to recreate the dictionary and other objects, so if the price of downtime is high, we need to spread the database upgrade's downtime to a week or two before the Applications critical downtime. This mostly makes sense when weekend or nighttime downtime is cheaper than daytime downtime, so splitting the downtime into multiple parts pays off. Upgrading the database prior to Applications has other advantages as well, such as the ability to analyze tables and indexes during uptime, in a working system.
One more important issue arises if you are upgrading an Apps version lower than 11.5.3 and want to use database version 9i: Oracle has certified its Applications on 9i only starting from version 11.5.3.
A quote from Metalink Certify section:
Oracle9i: There are no plans to certify Oracle9i on release 11.5.3 or lower, including Releases 11.0.3 & 10.7.
So, if you are planning to upgrade to database version 9i, you shouldn't separate the database and Applications upgrades, because versions 10.7 and 11.0 aren't certified with a 9i database, which would introduce additional problems and risks. If you are planning to run on 8.1.7, there is no problem, because both 10.7 and 11.0 run fine on 8i. Still, since Oracle9i Release 2 is now getting stable enough, upgrading to 9i R2 is recommended. Of course, it is possible to upgrade (or keep) the existing database on 8i, run the old Apps on 8i at first, perform the Apps upgrade on 8i, and switch to 9i later, when needed. This greatly increases complexity, but can eliminate or spread out several hours of critical downtime for upgrading the database and analyzing tables, while still keeping certified compatibility between Apps and the database.
Upgrading database with applications
This is probably the cheapest way of upgrading in terms of overall project effort. The old Applications database is migrated within the production upgrade downtime. Since all major changes, both in the database and in Applications, are done in one go, no double testing effort is needed, but the downtime is going to be longer. In some international 24x7 environments, this approach of having one larger downtime rather than two shorter ones can even be better, because no time of day or week makes much difference there.
Database upgrade with export/import method to a new database
Applications version 11i requires a minimum database block size of 8K, so in several cases the Applications database has to be recreated if the old one has a smaller block size. Reorganizing the database using exp/imp has other advantages as well, such as optimizing table storage usage and rebuilding indexes; it also lets us use locally managed tablespaces right from the start, without having to convert from dictionary-managed tablespaces.
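A minimal sketch of what pre-creating such a tablespace might look like in the new 8K-block database; the tablespace name, datafile path and sizes here are purely illustrative and would come from your own test-cycle measurements:

```sql
-- Hypothetical example: create a tablespace in the new database as
-- locally managed from the start, with uniform extents, so no later
-- conversion from dictionary-managed tablespaces is needed.
CREATE TABLESPACE apd
  DATAFILE '/u02/oradata/PROD/apd01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```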
But the major drawback is increased downtime: the whole database has to be exported to a dump file, then imported into the fresh one, all indexes have to be recreated, and so on. If you have decided, or are forced, to go with export/import, there is one trick that can help shorten the downtime caused by the import.
The main idea is to export and import the logical structure of the database to the new server even before the upgrade starts. Naturally, the new empty database has already been created in the new location, with all its datafiles sized accurately during the test cycles, controlfiles, redo logs, etc. Since upgrading with the exp/imp method involves exporting and importing the full database, we can transfer the logical structure (table definitions, packages, triggers, etc.) to the new database beforehand using the exp and imp option rows=n. This can shorten downtime considerably, since creating packages, table definitions and other objects can itself take a considerable amount of time. Since we transfer all the data (rows, and possibly LOBs) afterwards, the target system will still end up consistent and up to date. The Applications configuration should be frozen after exporting the logical structure: no DDL changes to the database should be allowed. To verify whether any object definitions have changed in the source database after the export, we can use the dba_objects view's last_ddl_time column. The following query shows all objects changed by DDL in the last 24 hours:
select owner, object_name, object_type from dba_objects where last_ddl_time > sysdate - 1;
In rare cases, such as with some customizations, objects are dropped and recreated regularly, so they will show up as recently modified. It is usually enough to verify that the object's definition in the new database still matches the one in the source environment. Also, you might want to make any planned DDL changes to the database (such as the TUMS patch) before the export, to avoid confusion when comparing schemas afterwards.
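The structure-first transfer described above can be sketched with the classic exp/imp utilities roughly as follows. The connect strings and file names are placeholders only, and in a real run the export would be taken with the Applications configuration already frozen:

```shell
# 1. Before the downtime: export the logical structure only (no rows).
exp system/manager full=y rows=n compress=n file=struct.dmp log=exp_struct.log

# 2. Still before the downtime: import the structure into the
#    pre-created empty target database.
imp system/manager full=y rows=n ignore=y file=struct.dmp log=imp_struct.log

# 3. During the downtime: export the data and import it into the
#    already-existing objects (ignore=y skips "object exists" errors).
exp system/manager full=y file=full.dmp log=exp_full.log
imp system/manager full=y ignore=y file=full.dmp log=imp_full.log
```

Only step 3 then contributes to the critical downtime window, which is the whole point of the trick.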