The dictionary definition of “virtual” is being in essence but not in fact. That’s an interesting state when you are talking about computers, because for most folks, a web server or whatever mysterious gadget makes their email work is already “virtual”: they never see the server itself in action, only its result (a web page served up or an email delivered to their inbox). Virtualization is one of those “deep IT” concepts that the average person probably doesn’t pay much attention to. But here, conceptually, is what’s going on.
Back in the bad old days of computing, if you wanted to have a Microsoft Windows server installed and available for use, you would go out and buy yourself a physical piece of computing equipment, complete with hard drives, RAM, video card, motherboard, and the rest, and install the Windows operating system on it. If you needed another server (for, say, another application like email or file sharing), you would go out and buy a new piece of hardware and another license for Windows Server, and away you would go. This 1:1 ratio of hardware to server installations was fine until you had more than a few servers installed on your network and the AC couldn’t keep up with the heat output of your computer equipment.
So a bunch of very smart people sat down and asked how this could be handled better. I’m sure someone in the room said to just buy much more expensive hardware and run more applications on the same physical server. This was in fact the model at larger businesses back in the worse old days of mainframe computing (oh yeah, people still use mainframes today, they just keep them in the closet and don’t advertise this to their cool friends who have iPhones and play World of Warcraft online).
But the virtualization engineers weren’t satisfied with this solution. First off, what happens when the hardware that everything is running on fails? All of your eggs, being in one basket, are now toast until you fix the problem or bring up a copy of everything on another piece of equipment. Second, what happens when one of those pesky applications decides to have a memory leak and squeezes everybody else off the system? Same as number one above, though the fix is probably quicker because you can just bounce that ancient mainframe system (if you can find the monk in the Middle Ages monastery who actually knows where the power button is on the thing, that is). Third, mainframes are really pretty expensive, so not just any business is going to go out and buy one, which means that a fair amount of the market for server equipment was bypassed by the mainframe concept. And finally, mainframes aren’t cool anymore. No one wants to buy something new that isn’t also cool. Oh wait, I doubt the engineers sitting in the room having a brainstorming session would have invited the marketing department in for input this early on. But it is true – mainframes aren’t cool.
So, this room of very smart people came up with virtualization. Basically, a single piece of computing hardware (a “host” in the lingo) can be used to house multiple virtual instances (“virtual machines”) of complete Windows Server installations (and other operating systems, though Windows virtualization is probably driving the market today). On top of that, they came up with a way for these virtual machines to move between physical hosts without rebooting the virtual machines or even causing much of a performance impact for the users. Housing multiple complete virtual machines on a single host works because most Windows machines sit around waiting for something to happen pretty much all day – I mean, even with Microsoft Windows, how much does a file server really have to think about the files that it makes available on shares? How much does a domain controller have to think in order to check whether someone’s username and password are valid on the domain? Even in relatively large systems environments, there are a considerable number of physical servers that just aren’t doing all that much most of the time.
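If you like to think in code, here is a toy model of the idea: one physical host carrying several virtual machines, each reserving a slice of CPU and RAM. The names (Host, VirtualMachine, esx01) and the capacity numbers are all made up for illustration – no real hypervisor exposes exactly this – but the shape of the thing is right:

```python
# A toy model of virtualization: one physical host, many virtual machines.
# All names and numbers here are illustrative, not a real hypervisor API.

class VirtualMachine:
    def __init__(self, name, cpu_ghz, ram_gb):
        self.name = name
        self.cpu_ghz = cpu_ghz   # CPU slice the VM is allotted
        self.ram_gb = ram_gb     # RAM the VM is allotted

class Host:
    def __init__(self, name, cpu_ghz, ram_gb):
        self.name = name
        self.cpu_ghz = cpu_ghz   # total physical CPU capacity
        self.ram_gb = ram_gb     # total physical RAM
        self.vms = []

    def can_fit(self, vm):
        used_cpu = sum(v.cpu_ghz for v in self.vms)
        used_ram = sum(v.ram_gb for v in self.vms)
        return (used_cpu + vm.cpu_ghz <= self.cpu_ghz and
                used_ram + vm.ram_gb <= self.ram_gb)

    def add_vm(self, vm):
        if not self.can_fit(vm):
            raise RuntimeError(f"{self.name} cannot fit {vm.name}")
        self.vms.append(vm)

# One beefy host instead of three mostly idle physical boxes:
host = Host("esx01", cpu_ghz=24.0, ram_gb=64)
for vm in [VirtualMachine("file-server", 2.0, 8),
           VirtualMachine("domain-controller", 2.0, 8),
           VirtualMachine("mail-server", 4.0, 16)]:
    host.add_vm(vm)
print([v.name for v in host.vms])
```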
Virtualization provides a way to share physical CPU and memory across multiple virtual machines, so you get more utility out of each dollar you have to spend on physical server equipment. Some organizations are therefore able to buy fewer physical servers each year. Sorry Dell and HP – didn’t mean to rain on your bottom line, but most IT departments are trying to stretch their capital budgets further because of the recession. Fewer servers also means less HVAC and power to pay for, both of which have increased in cost as energy markets have been deregulated and prices have started to follow demand more closely. I guess BG&E and Pepco are also sad, but look, some of your residential customers still set the AC at 65 degrees, so just charge them three times as much and everyone is happier!
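The back-of-the-envelope math behind that savings is worth seeing once. A quick sketch, with the utilization numbers invented purely for illustration:

```python
import math

# Rough consolidation math; all numbers are made up for illustration.
physical_servers = 20
avg_utilization = 0.05      # most of these boxes idle ~95% of the time
target_utilization = 0.60   # load hosts to 60%, leaving headroom for spikes

work = physical_servers * avg_utilization             # ~1 server's worth of real work
hosts_needed = max(2, math.ceil(work / target_utilization))  # keep at least 2 for redundancy
print(f"{physical_servers} physical boxes collapse onto {hosts_needed} hosts")
```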
Most of the leading vendors also offer “High Availability,” which means the cluster watches itself: if a host fails, the supervising software restarts that host’s virtual machines on another available host in your cluster of hosts. For those IT people carrying BlackBerrys who have to go to the server room at 3 a.m. to reboot physical equipment, welcome to the 21st century.
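Conceptually – and this is only the gist, not any vendor’s actual algorithm – HA boils down to something like this:

```python
# A toy sketch of what "High Availability" does when a host dies.
# The gist only; real HA also handles quorum, heartbeats, restart priority, etc.

hosts = {
    "esx01": {"ram_gb": 64, "vms": {"file-server": 8, "mail-server": 16}},
    "esx02": {"ram_gb": 64, "vms": {"domain-controller": 8}},
    "esx03": {"ram_gb": 64, "vms": {}},
}

def free_ram(h):
    return hosts[h]["ram_gb"] - sum(hosts[h]["vms"].values())

def handle_host_failure(failed):
    """Restart the failed host's VMs on surviving hosts with free RAM."""
    orphans = hosts.pop(failed)["vms"]
    for vm, ram in orphans.items():
        # Simple heuristic: pick the surviving host with the most free RAM.
        target = max(hosts, key=free_ram)
        if ram > free_ram(target):
            print(f"no capacity left to restart {vm}!")
            continue
        hosts[target]["vms"][vm] = ram
        print(f"restarted {vm} on {target}")

handle_host_failure("esx01")   # no 3 a.m. drive to the server room
```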
In addition, VMware at least offers a way for virtual machines to automatically move between hosts when a particular host gets too many requests for CPU or RAM from the virtual machines running on it (VMware calls this the Distributed Resource Scheduler, or DRS). This functionality helps improve overall performance, which makes all the users happy and quiets the help desk (a little bit). OK, so the users call you about something else and the help desk is still not any quieter, but at least you can cross one complaint off the list for the moment.
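The balancing act can be caricatured in a few lines. This is a deliberately naive heuristic of my own, not VMware’s actual DRS logic, which weighs far more factors (migration cost, affinity rules, and so on):

```python
# Caricature of automatic load balancing across hosts, DRS-style.
# Naive heuristic: move one VM from the hottest host to the coolest.

hosts = {
    "esx01": {"vms": {"sql": 70, "web": 20}},   # numbers = % CPU demand
    "esx02": {"vms": {"dc": 5}},
}

def load(h):
    return sum(hosts[h]["vms"].values())

def rebalance(threshold=30):
    busy = max(hosts, key=load)
    idle = min(hosts, key=load)
    if load(busy) - load(idle) < threshold:
        return  # cluster is balanced enough; do nothing
    # Live-migrate the smallest VM off the busy host (least disruption).
    vm = min(hosts[busy]["vms"], key=hosts[busy]["vms"].get)
    hosts[idle]["vms"][vm] = hosts[busy]["vms"].pop(vm)
    print(f"migrated {vm}: {busy} -> {idle}")

rebalance()   # migrated web: esx01 -> esx02
```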
In sum, virtualization is a smart and efficient way to implement servers today. If you work in IT, I imagine you are very likely to come into contact with virtualization soon, if you have not already. We converted about two years ago and we aren’t looking back!