Server virtualization has become a proven technology for many situations and is rapidly becoming the de facto standard in data center design. But virtualizing database workloads? That's a whole new story.
Organizations considering virtualization technology must perform due diligence before committing to a specific product or strategy – especially if they’re thinking of virtualizing mission-critical database workloads. Don’t be fooled by past successes in virtualizing less important applications on Windows servers. Every virtualization project is different, and a database workload virtualization project carries its own unique set of risks and benefits.
Virtualization project failures can be avoided by taking time to vet the options and gain the knowledge needed to deploy the technology properly. There are several issues that must be addressed when planning to virtualize Oracle databases and applications, all revolving around two main subjects: performance and platforms.
Database performance in virtualized environments
One of the biggest concerns associated with virtualizing mission-critical applications is whether those applications will perform at an acceptable level.
In the past, virtualization technology often hurt performance, simply because the hardware was not optimized to deliver it. That situation can be disastrous when moving a mission-critical database or application to a virtualized platform. However, with some straightforward planning and testing, organizations can help ensure that hardware performance levels will meet the current and future needs of complex databases and applications.
Oracle Database, at least on the server side, can consume large amounts of memory and CPU cycles. With that in mind, it is important to determine what performance levels are provided under the current implementation and set out to improve on those numbers in a virtualized environment. Current information about things like memory and CPU usage can serve as a guide when organizations configure virtualized environments to offer adequate performance for mission-critical applications.
Server virtualization vendors offer performance rating and load calculation tools or formulas that can be used to size server demand for an application. If an adequately performing mission-critical application is consuming a measurable amount of resources on a physical server, calculations can be done to determine what the needs would be under a virtualized platform. One of the most important rules of thumb to remember is that virtualization does not decrease the amount of memory needed, and providing ample memory in a dedicated partition is often the key to virtualizing database workloads successfully.
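The memory rule of thumb above can be reduced to simple arithmetic. The sketch below is illustrative only: the growth headroom and hypervisor overhead factors are assumptions, not vendor-published figures, and should be replaced with numbers measured on your own physical server and taken from your hypervisor's sizing documentation.

```python
# Hypothetical sizing sketch: estimate the memory to dedicate to a
# database VM. The headroom and overhead factors are illustrative
# assumptions, not vendor-published numbers.

def vm_memory_gb(baseline_gb: float, growth_headroom: float = 0.25,
                 hypervisor_overhead: float = 0.10) -> float:
    """Memory for the VM: the measured physical baseline, plus headroom
    for growth, plus a margin for hypervisor overhead. Virtualization
    does not reduce the memory the database itself needs."""
    return baseline_gb * (1 + growth_headroom) * (1 + hypervisor_overhead)

# Example: a database currently using 32 GB on physical hardware.
print(round(vm_memory_gb(32.0), 1))  # 44.0
```

The point of the calculation is the direction of the adjustment: the virtualized figure should never come out lower than the physical baseline.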
CPU usage is another primary concern with enterprise applications. To provide the clock cycles that mission-critical applications need, IT staff must first establish a baseline of CPU utilization, including utilization peaks, and then use those measurements to calculate the expected load on the virtualized platform.
Organizations can increase performance of a given application by providing adequate RAM and using high-end processors that are optimized for virtualization. This is true even when deploying multiple virtual servers onto a single physical server. Those same calculations about current CPU usage and other baselines can be used to determine how many virtual servers can run on a physical server. The number of processor cores along with the virtual environment requirements will determine the feasible virtual density on a single physical server.
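The density calculation described above can be sketched in a few lines. Again, the figures are placeholders: the cores reserved for the hypervisor and the per-VM peak demand are assumptions you would replace with your own baseline measurements and your virtualization vendor's guidance.

```python
# Hypothetical density sketch: how many database VMs fit on one physical
# host, given measured peak CPU demand per VM. All figures are
# illustrative assumptions.

def max_vms(host_cores: int, hypervisor_reserve_cores: int,
            peak_vcpus_per_vm: float) -> int:
    """VMs that fit once cores reserved for the hypervisor are set
    aside. Sizing to peak (not average) demand avoids contention when
    all the databases are busy at once."""
    usable = host_cores - hypervisor_reserve_cores
    return int(usable // peak_vcpus_per_vm)

# Example: a 32-core host, 2 cores reserved for the hypervisor,
# database VMs peaking at 6 vCPUs each.
print(max_vms(32, 2, 6.0))  # 5
```

Sizing to peaks is conservative; some shops oversubscribe CPU for workloads whose peaks do not coincide, but for mission-critical databases the conservative figure is the safer starting point.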
It’s also important to keep in mind that performance upgrades tend to be easier in a virtual environment, because a virtual server can simply be moved to a more capable physical server.
Picking a server platform for virtualizing database workloads
Oracle databases and applications have obtained a level of platform independence rarely seen with other vendors’ software solutions. Oracle supports hardware platforms ranging from mainframes to x64/x86 systems to Sun Solaris implementations. That multiplatform support brings up an interesting question: When and how should an Oracle product be virtualized?
The simple answer is that platforms running Windows or Linux have the most to gain with virtualization, and there are a few simple reasons for that. First and foremost is the fact that mainframe environments have used virtualization technologies for decades, meaning that most mainframe-based Oracle solutions are already virtualized in one manner or another.
On the Unix front, multithreaded, multiprocessor applications have long been the norm, which greatly reduces the incremental benefit of modern server virtualization. Meanwhile, high-end Unix platforms have been optimized for virtualization for quite some time. That leaves x64/x86 Windows and Linux systems as the primary candidates for a modern server virtualization project for Oracle databases and applications.
In the x86 space, new CPU designs, falling price points and reduced energy use are all positives that can be maximized with virtualization. Some of those benefits, such as business continuity or load balancing, can only be realized fully by leveraging virtualization products. Does that mean mainframe or Unix systems should be abandoned in favor of x64/x86 systems? Probably not, but virtualized servers under Windows and Linux are usually less expensive to manage and support than legacy platforms, a fact that fuels many cross-platform migrations.
It all comes down to cost versus performance. If a new virtualized server environment can deliver improved performance at a reduced cost, then it makes sense to migrate from one platform to the other. But implementers need to consider both the tangible and intangible costs associated with virtualization, such as downtime, hardware upgrades, licensing fees, consulting costs, training and support.
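The cost-versus-performance trade-off above is ultimately a break-even question. The sketch below uses entirely hypothetical dollar figures; the tangible cost categories (hardware, licensing, consulting, training) come from the list above, and the savings figure is whatever reduction in management and support spend your own analysis projects.

```python
# Hypothetical break-even sketch for a migration decision. All dollar
# figures are placeholders -- substitute your own tangible migration
# costs and projected annual savings.

def breakeven_years(one_time_costs: float, annual_savings: float) -> float:
    """Years until cumulative savings cover the up-front migration
    cost (downtime, hardware, licensing, consulting, training)."""
    return one_time_costs / annual_savings

# Example: $120,000 in one-time migration costs against $48,000/year
# in reduced support and energy spend.
print(round(breakeven_years(120_000, 48_000), 1))  # 2.5
```

A break-even horizon longer than the planned life of the new platform is a sign the migration does not pay for itself, whatever the performance gains.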
With proper forethought, adequate planning and measurable expectations, the task of moving mission-critical Oracle applications to virtualized environments can be achieved with very little disruption or pain.
This was first published in March 2011