
Keep downtime short on 11i migration: Applications database migration using the exp/imp method

This tip is an excerpt from the paper "Keep downtime short on 11i migration" and is brought to you by the International Oracle Users Group (IOUG). Become a member of the IOUG to access the paper referenced here and a repository of technical content created for Oracle users by Oracle users.


This tip continues from Keep downtime short on 11i migration: Evaluating which tasks can be done before and after.


Sequence of steps to perform a fast Applications database migration using the exp/imp method

In this example, we won't go into much detail on export and import, database links, or parallel processing; the focus is on the methodology of first transferring only the database structure to the new system during uptime, and then transferring only the data during downtime. Other exp/imp parameters (direct, buffer and log, for example) may be needed to tune the tools' performance and behavior. The data transfer process can have more stages, depending on the amount of data and, again, the allowable downtime.

1. Transfer source database logical structure to target database (can be done before migration, no downtime required)

  • Export only the database structure (table, index and constraint definitions, PL/SQL stored procedures, etc.) from the source database:
     exp sys/password file=structure.dmp rows=n indexes=y constraints=y full=y
    
  • Import the structure, without constraints and indexes, into the target database:
     imp sys/password file=structure.dmp constraints=n indexes=n full=y
    

Those operations won't need much tuning, because they can be performed during uptime.

2. Transfer only the data from the source database to the target (done after the source database is ready to be upgraded, downtime required)

  • To retain data consistency, we have to make sure that no one is changing the database anymore; that usually means shutting down Applications and any interfaces, and putting the database into restricted mode, as sketched below.
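
    A minimal sketch of putting the database into restricted mode (run as a DBA in SQL*Plus); a clean restart into restricted mode is the simplest route:
     shutdown immediate;
     startup restrict;

    Alternatively, on a running instance, new non-privileged logins can be blocked with:
     alter system enable restricted session;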

     

  • Export only table rows from source database:
     exp sys/password file=data.dmp rows=y constraints=n indexes=n full=y
    
  • Import only rows of all tables from export file:
     imp sys/password file=data.dmp constraints=n indexes=n full=y ignore=y
    
    Import is done without constraints and indexes to avoid temporary data inconsistency and index maintenance performance issues during data loading. Indexes and constraints are created after all data has been transferred successfully.
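
    As a quick sanity check before the load, you can verify that no indexes or enabled constraints exist yet in the target schemas (the APPS owner here is just an example):
     select count(*) from dba_indexes where owner = 'APPS';
     select count(*) from dba_constraints where owner = 'APPS' and status = 'ENABLED';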

    There are a few more export/import parameters that can increase the performance of the data transfer:

     

    Parameter     Comments
    direct        If set to Y, causes exp to bypass the conventional SQL mechanism and read
                  data straight from the datafiles, thus being much faster. Restrictions:
                  before 8.1.5, rows with LOB or object columns were not exported by direct
                  path and the conventional method had to be used instead; also, the client
                  character set (set by the NLS_LANG parameter) has to match the database
                  character set.
    recordlength  Set in bytes, specifies the amount of data written to or read from the
                  export file in one request. Maximum size is 65536 (64kB); the default is
                  operating system specific, generally 1kB. Normally 64kB is best.
    buffer        Applies only to conventional path export and import (direct=n). Specifies
                  the fetch buffer size in bytes, which determines the maximum number of rows
                  in an array fetched by exp, or the maximum insert array size for imp.
    commit        Default is N, meaning that a commit is only done at the end of each table
                  (or its partition). When importing huge tables with insufficient rollback
                  space, commit can be set to Y, but buffer should then be set quite high to
                  avoid committing too frequently.
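
    For example, a direct path export of the data combining these parameters might look like this (a sketch only; the values should be tuned for your system):
     exp sys/password file=data.dmp rows=y constraints=n indexes=n full=y direct=y recordlength=65536

    And a conventional path import with a large fetch buffer and frequent commits:
     imp sys/password file=data.dmp constraints=n indexes=n full=y ignore=y buffer=10485760 commit=y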

     

  • Alternatively, use parallel direct load insert over a database link to transfer huge tables
    Since it might not be feasible to export tables tens of gigabytes in size to disk and then import them into the new database from there, direct load insert can be used instead. Direct load insert over a database link can be much faster than exp/imp, since the data doesn't have to be stored in an intermediate file, but is transferred directly over the network instead.
     alter session enable parallel dml;
     alter table table_name nologging;
     insert /*+ APPEND PARALLEL(table_name,4) */ into table_name select * from table_name@dblink;
     commit;
     alter table table_name logging;
    
    Don't forget to set the table or tablespace back to logging mode afterwards. Starting from 9i, you can use the NOLOGGING hint right in the insert command; that way you don't have to manually alter the table or tablespace to the nologging state at all.
     insert /*+ APPEND NOLOGGING PARALLEL(table_name,4) */ into table_name select * from table_name@dblink;
    
    The PARALLEL hint above only states that the insert itself will be done using parallel execution. To have the data extracted from the source database using parallel query, we could add another PARALLEL hint to the select as well, but often it doesn't help because of the simple nature of a full table scan: usually disk I/O and the network are the bottlenecks.
     insert /*+ APPEND NOLOGGING PARALLEL(table_name,4) */ into table_name select /*+ PARALLEL(table_name,4) */ * from table_name@dblink;
    
    See Appendix C for a simple script for generating the insert and alter commands for the relevant tables; a sketch of the idea follows below.
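
    Appendix C itself is not reproduced in this excerpt, but as a hypothetical sketch, such a generator can be a SQL*Plus script that spools the commands for every table in the current schema (the database link name SRC is an assumption):
     set pagesize 0 linesize 200 feedback off
     spool load_tables.sql
     select 'alter table ' || table_name || ' nologging;' || chr(10) ||
            'insert /*+ APPEND PARALLEL(' || table_name || ',4) */ into ' || table_name ||
            ' select * from ' || table_name || '@SRC;' || chr(10) ||
            'commit;' || chr(10) ||
            'alter table ' || table_name || ' logging;'
       from user_tables
      order by table_name;
     spool off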

     

  • In SAN environments, one can even use the split mirror technique or BCVs (as on EMC systems) to open an exact copy of the database on a temporary server, to have more than one source environment to load data from. This technique can speed up data migration when moving from old, slow servers to new, faster ones. It requires a server with the same operating system and platform to be available temporarily during the production upgrade. The tables should be divided up by table or schema name, so that some are loaded from one server and some from the other.

    You can combine the above methods for transferring schema objects: for example, the exp/imp method for small tables and schemas, and direct load insert over a database link for big tables.

3. Create indexes and enable constraints (downtime required)

  • First approach: Use the previously exported structure file to serially create both indexes and constraints
     imp sys/password file=structure.dmp constraints=y indexes=y full=y ignore=y
    
    This approach has its problems: with import we can't specify a custom degree of parallelism for creating the indexes, nor can we specify the NOLOGGING attribute. Even if we use special undocumented parameters to disable redo writing to disk, the nologging attribute can still help improve performance by causing fewer checkpoints and using less CPU as well. Luckily, imp has long had a feature that allows us to extract the index creation scripts from the export file.

     

  • Second approach: Create indexes separately using imp's indexfile option, and create constraints with imp as usual

    Large index creation times can be considerably shortened by creating the indexes in parallel and with nologging. Since the import syntax itself doesn't allow us to specify how to create the indexes, we have to get the index creation scripts and modify them according to our needs. Imp's indexfile option comes in very handy here:
     imp sys/password file=structure.dmp constraints=y indexes=y full=y ignore=y show=y indexfile=indexes.sql
    
    The above example generates normal SQL DDL commands for both standalone and constraint index creation (for primary and unique constraints, some indexes are created implicitly). Note the parameter show=y, which specifies that no changes are made to the database; the export file is just scanned from start to end, and any index creation statements found there are written to the indexes.sql script specified with the indexfile option.

    Later on we can open the SQL script in an editor and, for example, replace LOGGING with NOLOGGING and add PARALLEL clauses to every large index creation command. We can also add alter session commands at the beginning of the script to modify parameters like sort_area_size and db_file_multiblock_read_count, which can significantly increase index creation speed (see the section about Database Tuning for more info). Once modified, the script has to be run using sqlplus:
     sqlplus sys/password @indexes.sql
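
    For illustration, an edited excerpt of indexes.sql might look like this (the parameter values, index and table names are hypothetical examples):
     alter session set sort_area_size = 104857600;
     alter session set db_file_multiblock_read_count = 128;
     create index some_schema.some_big_index on some_schema.some_big_table (col1, col2) parallel 4 nologging;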
    
    The indexfile contains the commands for both standalone indexes and the indexes required for primary key/unique constraints, but not the constraint definitions themselves. To create and enable the constraints, we have to run imp against the structure.dmp file once more:
     imp sys/password file=structure.dmp constraints=y indexes=n full=y ignore=y
    
    In the above command we set indexes=n because the indexes were already created in the previous step. And because all the needed indexes already exist, creating the constraints should complete quickly: no index building is needed for the unique/primary keys.

4. Verify that no object definitions have changed in the source database since the structure export (in parallel with step 2 or 3)

  • Query dba_segments as described above and transfer any new objects to the new database if appropriate (for example, use Enterprise Manager or Toad to extract the DDL of a single object). There shouldn't be many (or any) new objects, since the Apps configuration should have been frozen since the structure export. This task can usually be done in parallel with the data transfer or index creation; a sketch of such a comparison query follows below.
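
    As a minimal sketch, assuming a database link named SRC pointing at the source database, segments that exist on the source but not yet on the target can be listed with:
     select owner, segment_type, segment_name from dba_segments@SRC
     minus
     select owner, segment_type, segment_name from dba_segments;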

Note that there won't be any huge rollback or redo generation if you use direct load AND don't have any indexes enabled on the relevant tables. This is also one reason why primary/unique constraints should be disabled during data load, since they require indexes.


This was first published in July 2004
