
Intel answers Project MegaGrid questions at LinuxWorld


BOSTON -- It's been more than two months since Oracle, Dell, EMC and Intel announced Project MegaGrid, a collaborative effort to develop a standard approach to building and deploying an enterprise grid computing infrastructure with components from each company. And Dr. Sunil Saxena, senior principal engineer with Intel's Software Solutions Group, says the project has yielded some promising results.

Speaking to a crowded room at the LinuxWorld Boston Conference and Exposition yesterday, Saxena explained that phase I testing for Project MegaGrid was done on a 32-node Intel Xeon processor-based Dell/Linux server infrastructure and an Intel Itanium processor-based Dell/Linux infrastructure, both running Oracle Database 10g and Oracle Real Application Clusters 10g.
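For readers unfamiliar with how applications see such a cluster: Oracle RAC presents the many Dell nodes as a single database service, and clients connect through a descriptor that load-balances sessions across nodes. What follows is a minimal sketch, not from the project itself, using the cx_Oracle Python driver with hypothetical host and service names:

    import cx_Oracle  # assumed driver; any Oracle client library behaves similarly

    # Hypothetical host and service names -- the article does not give them.
    dsn = """(DESCRIPTION=
      (ADDRESS_LIST=
        (LOAD_BALANCE=on)(FAILOVER=on)
        (ADDRESS=(PROTOCOL=TCP)(HOST=node1.megagrid.example)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=node2.megagrid.example)(PORT=1521)))
      (CONNECT_DATA=(SERVICE_NAME=megagrid)))"""

    # RAC spreads sessions across whichever cluster nodes are up.
    conn = cx_Oracle.connect("app_user", "app_password", dsn)
    cur = conn.cursor()
    cur.execute("SELECT instance_name FROM v$instance")
    print(cur.fetchone())  # shows which cluster instance served this session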

The systems were then benchmarked against a large SMP Unix server; both the Xeon- and Itanium-based infrastructures scaled a 1.2-terabyte Oracle Database 10g and exceeded 550,000 transactions per hour running a Telco application.

In this excerpt from yesterday's conference session, Saxena answers general questions from audience members about the lessons learned from Project MegaGrid and about how soon-to-be-released processor technologies from Intel fit into the larger grid computing picture.

When you said you scaled to 1.2 terabytes, was that user space or total disk space?
There were two things I said, actually. With Itanium, what I said was that we exceeded [550,000] transactions per hour with six of those servers. And we actually had to get the storage to go up to 1.2 terabytes, which was common all across. That is the amount of storage you need for achieving 550,000 transactions per hour.

How did you set up storage for the project?
On the storage side, we actually used multi-tiered storage: SAN as well as NAS. The NAS was used wherever we needed shared storage. For instance, the home directories and other things were all network-attached, so all servers could share that storage and we didn't have to duplicate the data for each server. The SAN was used for most of the database-type storage.

What have you learned so far from this project?
Overall, on the grid side of it, what we learned was that, essentially, if you put the machines in a cluster with the right storage and software, you can really achieve the performance [you need] much more cost-effectively.

Also, one of the other things we learned was that you can use different storage tiers, SAN or NAS, depending on how you need to organize your data.
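In practice, that organization reduces to a simple routing rule: files every node must see go to NAS mounts, while database files live on the SAN. A minimal sketch of that decision, with hypothetical mount points since the article does not describe the actual layout:

    # Hypothetical mount points in the spirit of the layout Saxena describes.
    NAS_PREFIXES = ("/home", "/shared")   # network-attached, visible to every node
    SAN_PREFIX = "/oradata"               # block storage for database files

    def storage_tier(path: str) -> str:
        """Decide which storage tier a given path should live on."""
        if path.startswith(NAS_PREFIXES):   # str.startswith accepts a tuple
            return "NAS (shared across all cluster nodes)"
        if path.startswith(SAN_PREFIX):
            return "SAN (database-type storage)"
        return "local disk"

    print(storage_tier("/home/oracle/scripts"))            # -> NAS
    print(storage_tier("/oradata/megagrid/system01.dbf"))  # -> SAN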

Much of the challenge we ran into was that the database software came with the clustering software and the management software along with it. The documentation around those pieces was very important for getting this working quickly and easily, because once we got it to run, we were able to automate it.
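The automation he alludes to typically wraps Oracle's own cluster tooling; srvctl is the control utility that ships with RAC, though the database name below is hypothetical. A minimal sketch:

    import subprocess

    def rac_status(db: str) -> str:
        # 'srvctl status database -d <name>' reports each RAC instance's state.
        result = subprocess.run(
            ["srvctl", "status", "database", "-d", db],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(rac_status("megagrid"))  # hypothetical database name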

FOR MORE INFORMATION

Oracle launches Linux Test Lab

Oracle takes grid mobile

Any other lessons learned?
You need to keep the pace very consistent when you're running in a grid type of environment, especially because when you do clustering you can almost achieve the [failover] capability pretty easily with software, so you can really manage your outages very quickly. Provisioning and distribution are really driven by the application software themselves. The [Telco] application provided some of that, as did the network storage software and the database software.

Intel plans to deliver two separate dual-core products and dual-core-enabled chipsets for its Pentium processor-class families in the second quarter, providing the ability to process four software "threads" simultaneously. How does this technology fit into the larger grid computing picture?
Intel is coming up with lots of things, including processor platforms and chipsets, to provide more capabilities in the hardware. And these really provide compelling value for grid solutions as well. So, if you look at multi-core, having more than one CPU [core] in one processor essentially means more performance for these types of [systems].

How did multi-core technology come about?
More than two years ago, in 2002, Intel started with hyper-threading technology. And the whole purpose there was, we were looking at the chip and said, 'Well, we don't use all the execution units all the time.' So allowing another thread to [execute] or use the same resources helps. We went through a lot of workloads trying to look at that, and we were surprised that you can get up to a 30% performance gain just by adding [another thread].

This year is the key year where we actually roll out dual core, or two full execution units in a single processor. From an application's point of view, it's like a dual-processor machine in one CPU itself. And this is going to give you SMP performance.
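To make the arithmetic concrete: two cores per package, each running two hardware threads, appear to the operating system as four logical CPUs. A small sketch, mine rather than Intel's, fans CPU-bound work across all of them with Python's standard library (multiprocessing is used so each task can genuinely occupy its own logical CPU):

    import os
    from multiprocessing import Pool

    def burn(n: int) -> int:
        # CPU-bound stand-in for one software "thread" of work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        workers = os.cpu_count() or 1   # 4 on a dual-core, hyper-threaded part
        with Pool(processes=workers) as pool:
            totals = pool.map(burn, [5_000_000] * workers)
        print(f"{workers} logical CPUs each ran one task in parallel")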
