select sum(nvl(nu_year_open_balance, 0) * nvl(b.nu_eval_rate, 0))
  from mst_item b, mst_month_summary a
 where b.vc_comp_code = :comp_code
   and a.vc_comp_code = b.vc_comp_code
   and a.vc_item_code = b.vc_item_code
   and nvl(a.nu_year_open_balance, 0) <> 0
   and b.vc_item_group = :item_group
   and dt_fin_start_date = :start_date
mst_item has 4 lakh (400,000) rows and mst_month_summary has 8 lakh (800,000) rows.
There is a primary key index on mst_item(vc_comp_code, vc_item_code), and on mst_month_summary the primary key is (vc_comp_code, vc_item_code, dt_fin_start_date, dt_fin_end_date).
Is there any way to optimize the performance of this query? I tried using a materialized view, but it gives an error.
Move the filter predicates onto the alias of the larger table so they are applied to mst_month_summary directly. Change

    b.vc_comp_code = :comp_code

to

    a.vc_comp_code = :comp_code

(the alias a references the mst_month_summary table, which has 800,000 rows; 1 lakh = 100,000, if I am not mistaken), and change

    b.vc_item_group = :item_group

to

    a.vc_item_group = :item_group
So your new WHERE condition should be as follows:
a.vc_comp_code = :comp_code
and a.vc_item_group = :item_group
and a.dt_fin_start_date = :start_date
and b.vc_comp_code = a.vc_comp_code
and b.vc_item_code = a.vc_item_code
and nvl(a.nu_year_open_balance, 0) <> 0
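Putting those predicates back into the full statement, the rewritten query would read something like the sketch below. Note the assumption here: mst_month_summary must actually contain a vc_item_group column for the moved predicate to compile; if it does not, leave that filter on b.

```sql
select sum(nvl(a.nu_year_open_balance, 0) * nvl(b.nu_eval_rate, 0))
  from mst_month_summary a,
       mst_item b
 where a.vc_comp_code      = :comp_code
   and a.vc_item_group     = :item_group   -- assumes this column exists on mst_month_summary
   and a.dt_fin_start_date = :start_date
   and nvl(a.nu_year_open_balance, 0) <> 0
   and b.vc_comp_code      = a.vc_comp_code
   and b.vc_item_code      = a.vc_item_code
```

The three leading-edge predicates on a line up with the first columns of the mst_month_summary primary key, which gives the optimizer a good shot at an index access path into the larger table.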
Let me know how it comes out. Also, please let me know which version of the database you are running.
P.S. Ensure that the tables are analyzed if you are using the cost-based optimizer. It is recommended that you use the DBMS_STATS.GATHER_SCHEMA_STATS procedure to gather statistics for the whole schema rather than analyzing only these two tables.
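For example, from SQL*Plus you could gather schema statistics with a call like the following sketch (substitute your own schema name for SCOTT):

```sql
begin
  dbms_stats.gather_schema_stats(
    ownname => 'SCOTT',   -- schema to analyze (assumption: replace with yours)
    cascade => true       -- also gather statistics on the indexes
  );
end;
/
```

With cascade => true, the index statistics are collected in the same pass, so the primary key indexes on both tables will have current statistics as well.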
This was first published in March 2004