
Need to speed up load of 2 GB database

We have a database that's about 2 GB. Our DBAs (I use that term lightly) have set up a dump-and-load of the database. The load takes about an hour and a half to run: it drops and recreates the user, then runs the import, which creates all objects and indexes and loads the data in a single pass. We thought it might be faster not to drop the user, but instead to drop the indexes, truncate the tables, run the import with indexes=N, and recreate the indexes after the import. The problem is that the data load still takes over an hour, and the index creation takes over an hour as well. Any help in speeding up the load would be appreciated. Arguments to the original import were:

commit=y recordlength=65535 buffer=100000000

Arguments to the new import are:

commit=y ignore=y indexes=n log=${LOG} constraints=n
I think he uses commit=y because he's worried that the rollback segments can't handle some of the larger tables.
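For reference, the revised process described above could be scripted roughly like this. This is only a sketch: the usernames, file names, and SQL scripts (APPUSER, /backups/app.dmp, drop_indexes.sql, and so on) are placeholders, not details from the actual environment.

```
#!/bin/sh
# Sketch of the revised load: keep the user, drop indexes, truncate,
# import data only, then rebuild indexes. All names are placeholders.

# 1. Drop the indexes and truncate the tables (instead of dropping the user)
sqlplus -s system/manager <<'EOF'
@drop_indexes.sql
@truncate_tables.sql
EOF

# 2. Import data only -- skip index and constraint creation
imp system/manager file=/backups/app.dmp fromuser=APPUSER touser=APPUSER \
    commit=y ignore=y indexes=n constraints=n log=/tmp/imp.log

# 3. Rebuild the indexes afterwards
sqlplus -s system/manager <<'EOF'
@recreate_indexes.sql
EOF
```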

Are existing triggers enabled during the load? If so, they could be hurting performance. If the data being loaded doesn't need triggers run against it, disable the triggers before the load and re-enable them afterward. Beyond that, there isn't much you can do to speed up Import itself (removing commit=y won't help very much). It's better to rethink the process: dump your data to a flat file and load it with SQL*Loader. That will be MUCH faster than Import, especially if you use direct path loading.
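A minimal sketch of that approach, combining the trigger advice with a SQL*Loader direct path load. All object names here (appuser, the orders table, orders.dat) are illustrative; the control file fields would have to match your actual table layout.

```
#!/bin/sh
# Sketch: disable triggers, load via SQL*Loader direct path, re-enable.
# Placeholder names throughout -- adapt to your schema.

# Disable triggers on the target table so they don't fire during the load
sqlplus -s appuser/password <<'EOF'
ALTER TABLE orders DISABLE ALL TRIGGERS;
EOF

# A minimal SQL*Loader control file for a comma-delimited flat file
cat > orders.ctl <<'EOF'
LOAD DATA
INFILE 'orders.dat'
INTO TABLE orders
FIELDS TERMINATED BY ','
(order_id, customer_id, order_date DATE "YYYY-MM-DD", amount)
EOF

# direct=true uses the direct path, writing formatted data blocks
# straight to the datafiles instead of going through SQL INSERTs
sqlldr userid=appuser/password control=orders.ctl direct=true log=orders.log

# Re-enable the triggers once the load is done
sqlplus -s appuser/password <<'EOF'
ALTER TABLE orders ENABLE ALL TRIGGERS;
EOF
```

Note that a direct path load leaves indexes to be maintained at the end of the load (or marked unusable if it fails), so the index-rebuild time still has to be paid either way; the win is in the data-loading phase itself.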

