Takeaway: Virtual machines are becoming an increasingly hot topic in IT as a way to consolidate servers and reduce IT budgets. In case you’re not up to speed on virtual machines, this introduction will tell you what you need to know.
Virtualization is far from a new concept, but it has evolved significantly over the years. Originally developed in the 1960s as a way to share valuable mainframe resources, today's virtual machines are complete computers in their own right, running inside the context of another operating system with their own allocated RAM and disk space. The virtualization software that makes this possible carves these resources out of the host computer for the virtual systems' use. Other hardware, such as the actual processor, keyboard, mouse, etc., is shared with the host operating system and with all of the other virtual machines that might be running on that particular host.
Understand, however, that even though these devices are shared, the virtual machine's operating system (Windows, Linux, NetWare, etc.) still uses its own software drivers to enable them. It's not a situation where you're running something crippled or incomplete. Also, while a virtual machine runs inside the context of an operating system and shares the host's hardware, the virtual machine is completely isolated from the host.
The question "What are virtual machines?" is as important as "What aren't virtual machines?" Virtual machines are not emulated machines. For example, some software, such as Microsoft's Virtual PC product for the Macintosh platform, emulates PC hardware so that you can run Windows XP on your Mac OS X system. When you load Windows XP into a Virtual PC/Mac partition, Windows XP looks around and believes it's running in an Intel x86 environment, even though the underlying Mac hardware is a PowerPC processor.
The translation that has to take place for this "software-based x86 processor" requires overhead and, as a result, emulated hardware does not deliver the performance of the real thing. In contrast, a virtual machine (as opposed to an emulated machine) makes direct use of the host's hardware. It virtualizes the processor so that each guest appears to have the CPU all to itself when, in reality, that resource is shared across all of the virtual machines.
Why run a virtual machine when a real server will do? Let’s answer a more basic question first. Why do you run applications on individual servers rather than all on one big monster server? Here are a couple of reasons that applications are separated onto their own boxes:
- Application interaction: Some applications don't get along well when running on the same hardware under the same operating system.
- Manageability: Some administrators prefer to keep separate applications running on their own hardware for the simple reason that they might be easier to manage.
- Processing power: Applications such as Exchange or database applications often need their own hardware because the processing power required to drive these applications is much greater than that for normal apps.
- Security: A print server doesn’t need to be as secure as a database server handling credit card numbers. It might not make sense to run both of these apps in concert on the same hardware.
With the exception of processing power, all of these situations can be very well addressed by virtualization. And in some cases, the issues of application interaction, manageability, and security can even be improved.
Virtual servers address a number of situations and provide some excellent opportunities. These are just a few of the things virtual machines can help with:
- Server consolidation: Consolidate services that require fairly low horsepower onto a single hardware unit segregated into individual virtual machines. Benefits: less hardware to maintain, lower power requirements, and fewer network switch ports required.
- Quick server rollouts: Virtual machine software provides you with the ability to roll out new servers very quickly and easily. Need a new server up by lunch and don’t have a spare unit? No problem. Just add a virtual machine to one of your servers dedicated to this purpose. Even better, start with a prebuilt copy of the operating system to avoid that nagging installation time usually required to install an OS.
- Research and development: Tight budgets these days make the idea of having a fully equipped testing lab difficult to get by CFOs. With just one or two decent servers, you can use virtual machines to run eight or nine platforms for testing.
You might wonder if virtual machines are really useful for your environment. Let’s look at an example involving an organization with 250 employees. All employees work a 9-to-5 workday. They have a well-designed server room; the servers, plus all of the associated network and power backup hardware, take up a couple of racks. This organization has the following physical servers:
- File server
- Print server
- Exchange server
- Custom application server (low processing power required)
- “Database server” with just a bunch of Access databases
- DHCP server
- Primary DNS server
- Secondary DNS server
- Backup server (handles backups for all servers)
- Database server
- Primary Active Directory domain controller
- Secondary Active Directory domain controller
That’s a total of 12 servers, which is pretty reasonable for an organization with 250 employees. However, consider the fact that most of these processes don’t need a whole lot of horsepower. File serving and DHCP, for example, require very little in the way of processing power.
This organization might easily benefit from server consolidation. The best way to go about any kind of consolidation is to buy one or two fairly significant servers up front and migrate each low-need process to a virtual machine on that hardware. Here's an example of how this environment might be broken down:
- Server 1: Exchange
- Server 2: Database server
- Server 3: File server, print server, custom application server, primary DNS server, primary Active Directory domain controller
- Server 4: Access databases, DHCP server, secondary DNS server, secondary Active Directory domain controller
- Server 5: Backup server — and possibly used as backup in the event of a failure of one of the other servers
Servers 3 and 4 in this example run a multitude of virtual machines in order to support these applications.
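The consolidation plan above can be sketched as a simple mapping, just to verify that all 12 services are accounted for on the five remaining physical boxes (the server names here are hypothetical placeholders):

```python
# Hypothetical sketch of the consolidation plan: each remaining physical
# host is mapped to the services it will run, either natively or inside
# virtual machines (Servers 3 and 4 host the VMs).
consolidation_plan = {
    "server1": ["Exchange"],
    "server2": ["database server"],
    "server3": ["file server", "print server", "custom application server",
                "primary DNS", "primary AD domain controller"],
    "server4": ["Access databases", "DHCP", "secondary DNS",
                "secondary AD domain controller"],
    "server5": ["backup server"],
}

total_services = sum(len(services) for services in consolidation_plan.values())
print(f"{total_services} services on {len(consolidation_plan)} physical servers")
```

Twelve services, five boxes; a quick sanity check like this is also a handy way to document who lives where after a consolidation.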
This is just one example of how things could be broken down for this particular company, and it's easy to see how this approach might scale for larger organizations. In this example, the number of physical servers was reduced from 12 to five. That may not yield huge power and network savings, but it does provide real hardware and maintenance savings.
Beyond consolidation, virtual machines offer several other advantages. First, if there's a hardware failure, an administrator can quickly move an entire virtual machine to different hardware and just bring it up. No reconfiguration is required, as long as the new host server has adequate resources. You can even move virtual machines to servers from different vendors, with different RAID controllers and different service pack levels on the host. This portability is possible because the virtual machine runs entirely isolated from the host. Further, the virtual machine's disk system is contained in a single file on the host, meaning that moving it to a different host server really is as simple as copying a file.
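A minimal sketch of that "move a VM by copying a file" idea, using hypothetical paths and file names (the actual layout varies by product; a VMware guest, for instance, is a .vmx configuration file plus one or more .vmdk virtual disks):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for shared storage; in practice these would be directories on
# a SAN or file server visible to both hosts. All names are hypothetical.
store = Path(tempfile.mkdtemp())
src = store / "host-a" / "file-server"
src.mkdir(parents=True)
(src / "file-server.vmx").write_text("# guest configuration\n")
(src / "file-server.vmdk").write_text("")  # placeholder for the virtual disk

# Because the guest's disks and configuration are ordinary files,
# "moving" the server to another host is just a recursive copy. The VM
# is then registered and powered on using the new host's management tools.
dst = store / "host-b" / "file-server"
shutil.copytree(src, dst)
print(sorted(p.name for p in dst.iterdir()))
```

The copy itself is the easy part; the remaining step, registering and starting the guest, is done in the virtualization product's own console or command-line tools.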
Second, new machines can be easily provisioned. In fact, you can go so far as to initially create a base image for servers and then create all new machines from that image. If it sounds a lot like rolling out new desktops using Ghost, it is.
Third, if you need a server to test the latest operating system service pack, you can set up a virtual server fairly quickly and start testing. And, rather than having to wipe out the server and start from scratch in the event of a problem, just start from your baseline image!
No discussion of virtualization would be complete without talking about the buzzwords that are popular with IT managers these days: total cost of ownership (TCO), return on investment (ROI), and disaster recovery (DR).
A TCO comparison between virtual servers and physical servers usually results in the virtual server product making the grade. Initial implementation may be somewhat expensive, but with reduced electricity, switch port, and (possibly) human administrative costs, not to mention reduced server hardware maintenance costs (due to fewer servers) and the ability to quickly recover from failures, TCO figures are very likely to start tipping to the virtual server side.
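A back-of-the-envelope version of that TCO comparison looks like the sketch below. The dollar figures are entirely hypothetical; plug in your own hardware maintenance, power, switch port, and administrative costs:

```python
# Rough per-year cost model for a fleet of physical servers.
# All cost figures are hypothetical placeholders, not vendor numbers.
def annual_tco(servers, hw_maint=800, power=400, switch_port=50, admin=1200):
    """Yearly cost: per-server hardware maintenance, power, port, and admin."""
    return servers * (hw_maint + power + switch_port + admin)

physical_only = annual_tco(12)               # the original 12-server setup
virtualized = annual_tco(5) + 2000           # 5 hosts plus a yearly VM software cost

print(f"12 physical servers: ${physical_only:,}/yr")
print(f"5 hosts + VM software: ${virtualized:,}/yr")
```

Even with a license cost bolted on, fewer boxes usually means the virtualized column comes out ahead; the point of running your own numbers is to see by how much, and after what initial outlay.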
With a lowered TCO, at some point, the ROI figures will also start to tip in favor of virtual servers. I can’t really provide much more than that in this short article, since the ROI figures depend on each company and how it does business.
Disaster recovery? Yes, virtualization can be useful as part of a comprehensive DR plan. If most of your infrastructure is housed inside single files that are actually complete servers and can be run on any adequate hardware, your recovery time (even in a temporary environment) is reduced greatly.
Companies selling virtual machine software, such as VMware and, soon, Microsoft, tout impressive TCO savings. These are definitely best-case scenarios. While virtual machines can be great for some, notice my use of the words might and likely throughout this article. That choice of words is very deliberate. Not every organization will benefit from server virtualization, particularly those that run extremely lean to begin with and really push hardware to the limits.
Further, virtualization is still not always appropriate for resource-intensive systems such as database servers. In short, look at the vendor's TCO figures as a reason for considering the rollout, but not as the reason for the rollout. Do your own calculations based on how you operate.
Virtual machines have come a long way from the mainframe days and are now complete servers in their own right, with advanced management tools and the ability to save organizations both time and money. In the rest of this series, we'll look at the offerings from Microsoft and VMware.