This tip is brought to you by the International Oracle Users Group (IOUG), a user-driven organization that empowers Oracle database and development professionals by delivering the highest quality information, education, networking and advocacy. It is an excerpt from the paper "Keep downtime short on 11i migration." Become a member of the IOUG and gain access to thousands more tips and techniques created for Oracle users by Oracle users.
Case study of an upgrade
Here's a simple case study of upgrading Apps 10.7 SC on Sequent Dynix to 11.5.7 on Solaris. Only the most relevant details are brought out here. The source database is about 50 GB, and the modules in use are AP, AR, GL and FA. All Applications tiers are on the same node.
Architecture at a general level
Two Sun servers: TEST for testing, development and validation; PROD for production.
- First test upgrade on TEST
- Testing and customization development started on the first upgraded Apps
- Second test upgrade on PROD
- Applied developed customizations to PROD (reports, scripts in file system, custom tables, triggers, concurrent job definitions)
- Third test upgrade on PROD, reusing the originally prepared software stack (file system customizations already in place; scripts for creating custom tables and triggers were run; new concurrent job definitions were automatically created using FNDLOAD and .lct files)
- Basic system and unit testing on the third test upgrade, plus stress testing
- Production upgrade on PROD (file system customizations already in place, scripts for tables/triggers, FNDLOAD for concurrent definitions)
- After going live, cloned PROD back to TEST to have the most recent test environment.
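The FNDLOAD step in the list above can be sketched as a small shell script. The APPS credentials, the program name XX_MY_REPORT and the application short name XXCUST are hypothetical placeholders; afcpprog.lct is the standard loader control file for concurrent programs. The helpers only print the commands (a dry run) rather than executing them.

```shell
#!/bin/sh
# Dry-run sketch of migrating concurrent program definitions with FNDLOAD.
# XXCUST and XX_MY_REPORT are hypothetical placeholders.

APPS_CRED="apps/apps"                          # assumed APPS login
LCT='$FND_TOP/patch/115/import/afcpprog.lct'   # loader control file

# Print the DOWNLOAD command (run on the source instance): extracts the
# definition into a portable .ldt file.
fndload_download() {
  echo "FNDLOAD $APPS_CRED 0 Y DOWNLOAD $LCT xx_my_report.ldt PROGRAM" \
       "APPLICATION_SHORT_NAME=XXCUST CONCURRENT_PROGRAM_NAME=XX_MY_REPORT"
}

# Print the UPLOAD command (run on the upgraded target instance).
fndload_upload() {
  echo "FNDLOAD $APPS_CRED 0 Y UPLOAD $LCT xx_my_report.ldt"
}

fndload_download
fndload_upload
```

Because the .ldt file is plain text, it can be checked into version control alongside the other customization scripts, which is what makes the repeated test upgrades repeatable.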
Disk backup and configuration
The production server had 4 x 72 GB disks, divided into two striped RAID0 volumes with about 130 GB of usable space each (after OS and swap), with each disk set on its own FC-AL controller.
| Stripe A | Stripe B |
| --- | --- |
| OS | Backups of applications software stack |
| Applications software stack | Backups of datafiles |
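On Solaris, stripes like these could be built with Solaris Volume Manager's metainit. The metadevice names (d10, d20) and disk slices below are hypothetical, and the helper only prints the command as a dry run instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of creating the two RAID0 stripes with Solaris Volume
# Manager. Metadevice names and disk slices are hypothetical examples.

make_stripe() {
  # $1 = metadevice name, $2/$3 = the two disk slices.
  # "1 2" means one stripe, two components wide; -i 64k sets the
  # interlace. Printed instead of executed.
  echo "metainit $1 1 2 $2 $3 -i 64k"
}

make_stripe d10 c1t0d0s0 c1t1d0s0   # Stripe A (first FC-AL controller)
make_stripe d20 c2t0d0s0 c2t1d0s0   # Stripe B (second controller)
```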
Two different striped volumes, with the redo logs on the low-activity volume, allowed us to reduce I/O bottlenecks. Also, when backing up the database (and file system), we copied from one set of disks to the other, which was very fast because the first set did only reads and the second only writes, and sequential access to the disks gives great throughput. Since we took backups only at certain milestones, we didn't implement any more complex backup solutions, such as mirror splitting and BCVs.
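A milestone backup of this kind amounts to a plain recursive copy from one stripe to the other. The paths below are hypothetical examples, and the database would have to be shut down for the copied datafiles to be consistent.

```shell
#!/bin/sh
# Sketch of a milestone backup: copy datafiles from one stripe to the
# other so one disk set only reads while the other only writes.

backup_datafiles() {
  # $1 = source directory, $2 = destination directory
  mkdir -p "$2"
  # -p preserves modes and timestamps; the database must be shut down
  # (offline backup) for the copied files to be consistent.
  cp -pr "$1"/. "$2"/
}

# Example usage (during the upgrade, the paths pointed at the stripes):
#   backup_datafiles /stripeB/oradata /stripeA/backup
```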
When the major part of the upgrade and patching was completed, the database files were copied from stripe B to A again to provide a reliable offline backup; then the database was put into archivelog mode and opened to complete the last post-upgrade tasks (custom upgrade-speeding parameters like _wait_for_sync were also removed). Note that the database was now opened from the offline copy on stripe A, leaving stripe B completely redundant. The files on stripe B were backed up, after which stripe B was resilvered with A into a RAID0+1 stripe-mirror disk set, to get real hardware redundancy. Once resilvering, the post-upgrade steps and basic testing were done, we were live.
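The parameter cleanup described above can be sketched as a small helper that strips upgrade-only settings such as _wait_for_sync from the init.ora; the file path is passed in, and the archivelog switch itself is shown only as a comment since it runs inside SQL*Plus.

```shell
#!/bin/sh
# Sketch of the post-upgrade cleanup: remove upgrade-only init.ora
# parameters such as _wait_for_sync before normal production use.

remove_upgrade_params() {
  # Delete any line setting _wait_for_sync from the given init.ora.
  sed '/^ *_wait_for_sync/d' "$1" > "$1.new" && mv "$1.new" "$1"
}

# The archivelog change would then be made in SQL*Plus, roughly:
#   SHUTDOWN IMMEDIATE
#   STARTUP MOUNT
#   ALTER DATABASE ARCHIVELOG;
#   ALTER DATABASE OPEN;
```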
Putting all datafiles, redo logs and even archive logs (before copying them to the backup server) on the same redundant volume was acceptable performance-wise because the upgraded system was fairly small and didn't have heavy I/O activity during normal production use. During the production upgrade, sacrificing redundancy and striping the disks separately shortened the upgrade time by several hours and significantly sped up backups during the upgrade as well.