When you said you scaled to 1.2 terabytes, was that user space or total disk space?
There were actually two things I said. With Itanium, what I said was that we reached 1.2 trillion transactions per hour with six of those servers. And we had to scale the storage up to 1.2 terabytes, which was common across the board. That is the amount of storage you need to achieve 550,000 transactions per second.

How did you set up storage for the project?
On the storage side, we used multi-tiered storage: both SAN and NAS. The NAS was used wherever we needed shared storage. For instance, the home directories and similar data were network-attached, so all servers could share that storage and we didn't have to duplicate the data for each server. SAN was used for most of the database storage.

What have you learned so far from this project?
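The shared-home-directory setup described above is typically an NFS export from the NAS filer mounted on every grid node. A minimal sketch, assuming a hypothetical NAS host `nas01` exporting `/export/home` (names and options are illustrative, not from the original project):

```shell
# On each grid node: mount the shared NAS export so every server
# sees the same home directories without duplicating the data.
sudo mount -t nfs nas01:/export/home /home

# To make the mount persistent across reboots, the equivalent
# /etc/fstab entry would be:
#   nas01:/export/home  /home  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells the boot sequence to defer the mount until the network is up, which matters on diskless or network-heavy grid nodes.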
Overall, on the grid side, what we learned was essentially that if you put the machines in a cluster with the right storage and software, you can achieve the performance [you need] much more cost-effectively.
Also, one of the other things we learned was that you can use different storage tiers, SAN or NAS, depending on how you need to organize your data.
Much of the challenge we ran into was that the database software came bundled with the clustering software and the management software. The documentation around that was very important for getting this working quickly and easily, because once we got it running, we were able to automate it.
You need to keep the pace very consistent when you're running in a grid environment, especially because with clustering you can achieve the payload capability fairly easily in software, so you can manage your outages very quickly. Provisioning and distribution are really driven by the application software itself. The [Telco] application provided some of that, as did the network storage software and the database software.

Intel plans to deliver two separate dual-core products and dual-core-enabled chipsets for its Pentium processor-class families in the second quarter, providing the ability to process four software "threads" simultaneously. How does this technology fit into the greater grid computing picture?
Intel is coming up with lots of things, including processor platforms and chipsets, to provide more capabilities in the hardware. And these provide compelling value for grid solutions as well. So if you look at multi-core, having more than one CPU core in a single processor essentially means more performance for these types of [systems.]

How did multi-core technology come about?
More than two years ago, in 2002, Intel started with hyper-threading technology. The whole purpose there was that we were looking at the chip and said, 'Well, we don't use all the execution units all the time.' So allowing another thread to use those same resources helps. We went through a lot of workloads looking at that, and we were surprised that you can get up to a 30% performance gain just by adding [another thread.]
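The gain described here comes from overlapping stalls: while one thread waits, another keeps the otherwise-idle execution resources busy. A minimal OS-level analogue of that overlap, sketched in Python with two threads whose waits run concurrently (the sleep is a stand-in for a stall; real hyper-threading gains are workload-dependent and typically far below 2x):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stalled_task(n):
    # Stand-in for work that stalls (I/O or memory waits); while one
    # thread is stalled, the other can make progress.
    time.sleep(0.2)
    return n

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(stalled_task, [1, 2]))
elapsed = time.time() - start
# Both 0.2 s waits overlap, so elapsed is close to 0.2 s, not 0.4 s.
```

The same reasoning, applied inside one core to independent instruction streams, is what let a second hardware thread recover up to the 30% cited above.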
This year is the key year where we actually roll out dual core, or two full execution units in a single processor. From an application's point of view, it's like a dual-processor machine in one CPU. And this is going to give you SMP performance.
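Because a dual-core part looks like a two-way SMP machine to software, parallel code written for multiprocessor systems benefits without changes. A minimal sketch fanning work out across two worker processes, which the OS can schedule onto the two cores in parallel (the `square` worker is purely illustrative):

```python
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # Two worker processes mirror the two execution units of a
    # dual-core CPU; the scheduler can run them truly in parallel.
    with Pool(processes=2) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]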