Q

SQL Server overworked, or more memory necessary?

For an 80- to 160-dimension OLAP query over three or four tables, each with 50-100 million rows, the average run time on SQL Server is an hour. The query spends about 60% of its time in full table scans, there is heavy paging, I/O queue depths run at 5-6, and the many GROUP BY and ORDER BY clauses force a great deal of sorting, mostly on disk and some in memory. Would doubling the memory and striping across twice as many disks solve the problem, or is SQL Server simply too overworked in this case? CPU utilization is 100% with three users; there are seven or eight users in total, but the rest cannot get into the system. The current machine is a two-CPU 2.4 GHz Xeon with 4 GB of RAM. The database is 300 GB, the average table is 50 GB, and the largest table is 80 GB.

A

I would recommend taking a look at the data design before throwing more hardware at the problem. 80-160 dimensions in a single query seems excessive. Could it be that you have a one-size-fits-all data mart? I suggest getting your data architect/business analyst involved to gain an in-depth appreciation of how that query result set is actually used, then looking at the data architecture for areas where the design can be improved; the sketches below illustrate both the diagnosis and one possible design fix.
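
Before committing to more memory or disks, it is worth confirming where the hour actually goes. Here is a minimal T-SQL sketch; the table and column names (fact_sales, dim_geography, dim_product and their keys) are hypothetical stand-ins for the real schema:

```sql
-- Turn on per-statement I/O and timing figures (standard T-SQL options).
SET STATISTICS IO ON;    -- logical vs. physical reads per table
SET STATISTICS TIME ON;  -- CPU time vs. elapsed time per statement
GO

-- Hypothetical stand-in for the reported OLAP workload.
SELECT g.region,
       p.product_line,
       SUM(f.sale_amount) AS total_sales
FROM   fact_sales    AS f
JOIN   dim_geography AS g ON g.geo_id     = f.geo_id
JOIN   dim_product   AS p ON p.product_id = f.product_id
GROUP BY g.region, p.product_line
ORDER BY total_sales DESC;
GO
```

High physical-read counts, together with Table Scan and Sort operators in the plan (SET STATISTICS PROFILE ON returns the executed plan alongside the results), would confirm that the full scans and disk sorts described above, not the hardware alone, are what need fixing.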
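
If that analysis shows the reports aggregating the same few dimensions over and over, one common design improvement is to materialize a summary table at the grain the reports actually use, so queries read a small aggregate instead of scanning 50-100 million fact rows and sorting on disk per request. Again a hedged sketch; every object name here is hypothetical:

```sql
-- Build the aggregate once, then refresh it on the warehouse load schedule.
SELECT g.region,
       p.product_line,
       d.fiscal_month,
       SUM(f.sale_amount) AS total_sales,
       COUNT_BIG(*)       AS fact_rows
INTO   agg_sales_region_month
FROM   fact_sales    AS f
JOIN   dim_geography AS g ON g.geo_id     = f.geo_id
JOIN   dim_product   AS p ON p.product_id = f.product_id
JOIN   dim_date      AS d ON d.date_id    = f.date_id
GROUP BY g.region, p.product_line, d.fiscal_month;

-- Cluster on the columns the reports group and sort by, so that work
-- is paid once at load time rather than on every query.
CREATE CLUSTERED INDEX ix_agg_region_month
    ON agg_sales_region_month (region, product_line, fiscal_month);
```

An indexed view can give the same effect with automatic maintenance, at the cost of extra overhead on writes to the underlying fact table.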

This was first published in January 2004
