Introduction
Over the course of the past couple of months, the AnandTech IT team has been putting a lot of time into mapping out and clarifying the inner workings of several interesting forms of virtualization. The purpose of these articles is not primarily to cover "news" (however current the topic might be), but to build a knowledge base on the subject here at AnandTech, as we find that many people interested in learning more about virtualization are met with a large amount of misinformation. Secondly, we believe that better knowledge of the subject will empower the people in control of their company's IT infrastructure and help them make the right decisions for the job.
There's no denying that virtualization is changing companies' server rooms all over the world. It promotes innovation as well as the preservation of aged applications that would otherwise not survive a migration to modern hardware platforms. Virtualization is redefining the rules of what can and cannot be done in a server environment, adding a versatility that increases with every new release. We believe that when making big changes to any existing system, the more information available to the people given that task, the better.
But what about desktop users?
The above should make it clear why businesses are extremely interested in this technology, and why we have been digging so deep into its inner workings. However, we have also noticed an increased interest from the rest of our reader base, and have been flooded with questions about the how and why of all these different kinds of virtualization. Since we don't want to leave anyone out in the cold, and the in-depth articles may seem a bit daunting to someone looking for an introduction, here is another article in our "virtualization series". In it, we will guide our readers through the different technologies and their actual uses, along with some interesting tidbits for regular desktop users.
"New" Virtualization vs. "Old" Virtualization
The recent buzz around the word "virtualization" may give the impression that it is something relatively new. Nothing could be further from the truth, however: virtualization has been an integral part of server and personal computing almost from the very beginning. Using the single term "virtualization" for each of its countless branches and offshoot technologies ends up being quite confusing, so we'll try to shed some light on them.
How to Define Virtualization
To define it in a general sense, we could state that virtualization encompasses any technology - either software or hardware - that adds an extra layer of isolation or extra flexibility to a standard system. Typically, it increases the number of steps a job takes to complete, but the slowdown is made up for by increased simplicity or flexibility in the part of the system affected. To clarify: the overall system complexity increases, which in turn makes certain subsystems a lot easier to manipulate. In many cases, virtualization has been implemented to make a software developer's job a lot less aggravating.
Most modern-day software has become dependent on this, making use of virtual memory for vastly simplified memory management, virtual disks to allow for partitioning and RAID arrays, and sometimes even pre-installed "virtual machines" (think of Java and .NET) for better software portability. In a sense, the entire point of an operating system is to give software foolproof use of the computer's hardware, taking control of almost every bit of communication with the actual machinery in an attempt to reduce complexity and increase stability for the software itself.
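To make the "virtual machine" idea concrete, here is a minimal sketch using Python's own bytecode VM, which works on the same principle as the Java and .NET runtimes mentioned above: the program targets an instruction set defined by the interpreter rather than by the physical CPU.

```python
import dis

def add(a, b):
    """A trivial function; CPython compiles it to portable bytecode."""
    return a + b

# The disassembly below shows instructions for CPython's virtual machine,
# not for the CPU. The same bytecode runs unchanged on x86, ARM, and so
# on - the VM layer hides the real hardware, much like the JVM or the
# .NET CLR does for Java and C# programs.
dis.dis(add)
```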
So if this is the general gist behind virtualization (and we can tell you it has been around for almost 50 years), what is this recent surge in popularity all about?
Baby Steps Leading to World-Class Innovations
Many "little" problems have called for companies like VMware and Microsoft to develop software throughout the years. As technology progresses, several hardware types become defunct and are no longer manufactured or supported. This is true for all hardware classes, from server systems to those old yet glorious video game systems that are collecting dust in the attic. Even though a certain architecture is abandoned by its manufacturers, existing software may still be of great (or perhaps sentimental) value to its owners. For that reason alone, virtualization software is used to emulate the abandoned architecture on a completely different type of machine.
A fairly recent example of this (besides the obvious video game system emulators) is found integrated into Apple's OS X: Rosetta. Using a form of real-time binary translation, it makes applications written for the PowerPC architecture behave like x86 applications. This allows a large amount of software that would normally have to be recompiled to survive an otherwise impossible change in hardware platforms, at the cost of some performance.
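Rosetta's actual implementation is Apple's own, but the core idea of binary translation can be sketched in a handful of lines. The "guest" instruction set and translation table below are invented purely for illustration; a real translator decodes genuine PowerPC machine code, emits native x86 code, and caches the translated blocks.

```python
# Toy sketch of binary translation - not Rosetta's real design.
# A tiny, made-up "guest" program: load two values, add them.
GUEST_PROGRAM = [
    ("li", "r1", 5),             # load immediate: r1 = 5
    ("li", "r2", 7),             # r2 = 7
    ("add", "r0", "r1", "r2"),   # r0 = r1 + r2
]

def translate(instr):
    """Compile one guest instruction into a host-native function."""
    op = instr[0]
    if op == "li":
        _, reg, val = instr
        def host(regs):
            regs[reg] = val
        return host
    if op == "add":
        _, dst, a, b = instr
        def host(regs):
            regs[dst] = regs[a] + regs[b]
        return host
    raise ValueError(f"unknown guest opcode: {op}")

# Translate once, then run the cached host-native versions: the guest
# binary never executes directly, yet its behavior is preserved.
translated = [translate(i) for i in GUEST_PROGRAM]
registers = {}
for host_op in translated:
    host_op(registers)
print(registers)   # {'r1': 5, 'r2': 7, 'r0': 12}
```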
Hardware platforms have not been the only things to change, however; changes in both desktop and server operating systems might force a company to run older versions of an OS (or even a completely different one) to keep using software that suffers from compatibility issues. Likewise, developers need securely isolated environments in which to test their software without compromising their own systems.
The market met these demands with products like Microsoft's Virtual PC and VMware Workstation. Generally, these solutions offer no emulation of a defunct platform, but rather an isolated environment of the same architecture as the host system. Exceptions do exist, however: Virtual PC for Mac emulated the x86 architecture on a PowerPC CPU, allowing virtual machines to run Windows.
Putting together the results of these methods has led to a solution for a problem quietly growing in many a company's server room. While the development of faster and more reliable hardware kicked up a notch, a lot of the actual server software lagged behind, unable to make proper use of the enormous quantity of resources suddenly available to it. Companies were left with either irreplaceable but badly aging hardware, or brand new servers whose resources went largely unused.
A new question emerged: Would it be possible to consolidate multiple servers onto a single powerful hardware system? The industry's collective answer: "Yes it is, and ours is the best way to do it."
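To see why consolidation is so appealing, consider a hypothetical back-of-envelope calculation; the utilization figures below are invented for illustration, not measured data.

```python
# Hypothetical back-of-envelope consolidation math (figures invented).
# Five lightly loaded physical servers...
server_loads = [0.08, 0.12, 0.05, 0.10, 0.15]  # average CPU utilization

# ...consolidated as virtual machines onto a single host would still
# leave that host with plenty of headroom.
combined = sum(server_loads)
print(f"{len(server_loads)} servers -> one host at {combined:.0%} average load")
# Output: 5 servers -> one host at 50% average load
```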