Virtualisation part 1: OS

I had an interesting question from a business colleague of mine today – please spell out what types of virtualisation are available. It’s good to know that, even though virtualisation is fairly pervasive these days, people still ask what the reality is, because running virtualisation for production loads can have a big effect on systems management, resource utilisation and performance. And it’s a damn sight more complex – anyone who tells you otherwise is fooling themselves. Even Scott Lowe, whom many would consider one of the most knowledgeable people in the field, would profess to learning all the time.

The reason is that virt touches every single part of your IT infrastructure, from networking to storage to servers, and a host of vendors’ equipment works with your virt in a host of different ways. From a systems management point of view, this requires either very good communication between all IT members, or someone who knows everything – which is both dangerous ( a single point of responsibility and failure ) and rare. Virtualisation does not automatically mean you’ll suddenly have a whole bunch of spare boxes in the corner. In addition, the ( apparent ) ease of rolling out virt means that VMs can spread like bunnies, and suddenly resource utilisation and performance go out the window.

So it’s a careful balancing act that needs to be managed very well to get the most benefit from it. And although virt is now a household name, there are in fact very few people who understand it from top to bottom. Consider the delicate balance between servers, networking and storage, where one innocuous setting can bring performance to its knees. Storage virt in itself is a complex topic; fold that in with OS and networking virt, and things go up a notch to the point where it’s a bit of wizardry or art.

What to consider when using virt?

  • performance ( read I/O here ) requirements of different VMs across multiple host nodes
  • virtualise in the storage/network layer, or are the facilities provided by the VM solution enough
  • storage virt: in- or out-of-band meta-data controllers; host-, network- or storage-based
  • storage: FC, iSCSI or NFS? ALUA options from your vendor?
  • host nodes: fast CPU ( scale up ) or multi-core/socket ( scale out )
  • timing-sensitive applications
  • type of virtualisation: full, para, jails or kernel core ( a quick check of host CPU virt support is sketched just after this list )
  • SRM/virt management tools
  • networking: VM vendors’ options, bonding, multi-pathing or virt networking, jumbo frames, VLANs
  • backup: how do I provide for failure in a virtual environment?
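
As a starting point for the “type of virtualisation” question, here is a minimal, Linux-only sketch ( my own illustration, not part of any vendor’s toolkit ) that checks whether the host CPU advertises hardware virt extensions and whether the KVM device node is present. It only reads /proc/cpuinfo and looks for /dev/kvm, so treat it as a rough first check rather than a definitive capability test.

```python
#!/usr/bin/env python3
"""Quick host check before choosing para/native virt: does the CPU
expose hardware virtualisation extensions? Linux-only sketch."""

import os
import re


def cpu_virt_flags(path="/proc/cpuinfo"):
    """Return the set of virt-related CPU flags found ( vmx = Intel VT-x, svm = AMD-V )."""
    with open(path) as f:
        return set(re.findall(r"\b(vmx|svm)\b", f.read()))


def kvm_ready():
    """True if the kvm device node exists, i.e. the kvm modules are loaded."""
    return os.path.exists("/dev/kvm")


if __name__ == "__main__":
    flags = cpu_virt_flags()
    if not flags:
        print("No vmx/svm flag found - limited to full ( binary translation ) or OS-level virt")
    else:
        print("Hardware virt extensions present: " + ", ".join(sorted(flags)))
        print("KVM device available" if kvm_ready() else
              "CPU flag present but /dev/kvm missing - check firmware settings / kvm modules")
```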

Most of this is material for another day, but here I’d like to cover the VM types available and how they compare to each other. Let’s look at the four main types of virtualisation:

  • Full virt means that the VM application virtualises the whole infrastructure of an x86 ( or other ) machine. Also known as binary translation ( although not exactly the same thing ), it provides virtualisation of any OS that would normally run on the host platform, without extra assistance or modification. A downside is that this method carries a fairly high CPU overhead ( 15 – 25% ) as part of the translation process. Examples include VirtualBox, Parallels, Microsoft Virtual PC ( now included with Windows 7 ) and VMware Workstation/Server. Some now have support for CPU virt extensions which can speed things up a bit, especially for 64-bit environments.
  • Hybrid/para-virt/hardware-assisted virt provides a software interface to guest systems that is similar to full virt but not exactly the same, using features of the host CPU to speed up certain VM functions. The guest OS needs some modification, or drivers for I/O need to be supplied. Typically this will provide performance within 5 – 10% of raw hardware, although that depends on the guest OS setup. Xen and VMware ESX, for example, can provide both full and para-virt, although there is a performance penalty for running full virt under these ( as there is on other platforms ). Xen running unmodified guests ( full virt ) also requires CPU virt extensions. Microsoft Hyper-V, and IBM’s PowerVM to a degree, follow the same path.
  • Native virt is what I like to call kernel-core virt, as there is special support built into the host OS itself which gives the guest direct access to all the hardware that the host OS has access to. The advantage of this is a broad spectrum of hardware support as well as close-to-raw performance for guest VMs. One does require drivers for non-native guest OSs though. Examples of this are KVM ( fairly new in the Linux landscape ) and IBM P-series/PowerVM. KVM in particular is being touted for timing-sensitive apps due to its very good performance. As with para-virt, CPU virt extensions are required ( a minimal launch sketch follows this list ).
  • Operating system-level virt provides for isolated user-space instances of the host OS, also called jails, containers, VPSs or VEs. There is little to no overhead here as the guests use the normal host OS system calls, and no specific hardware support is required. A disadvantage of this method is that only copies of the host OS platform can be run – other OSs are not supported. Examples include FreeVPS, Linux-VServer, Solaris containers and OpenVZ. This is a favourite of ISP hosting companies for obvious reasons ( think 1000 containers on a single machine only running web servers ).
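
To make the native ( kernel-core ) case a little more concrete, here is the launch sketch referenced above: the same QEMU guest is started with KVM acceleration when /dev/kvm exists, and falls back to pure software emulation when it doesn’t. The disk image and ISO names are placeholders and the memory/CPU figures are arbitrary – this illustrates the distinction, it is not a tuned configuration.

```python
#!/usr/bin/env python3
"""Illustrative only: launching a guest under KVM ( kernel-core virt ) versus
plain QEMU software emulation when /dev/kvm is absent."""

import os
import subprocess

DISK = "guest-disk.qcow2"   # placeholder guest image
ISO = "install.iso"         # placeholder installer ISO


def launch_guest(memory_mb=1024, cpus=2):
    cmd = ["qemu-system-x86_64",
           "-m", str(memory_mb),
           "-smp", str(cpus),
           "-drive", "file={},format=qcow2".format(DISK),
           "-cdrom", ISO,
           "-boot", "d"]
    if os.path.exists("/dev/kvm"):
        # kernel-core ( native ) virt: near-raw performance via the host kernel
        cmd.insert(1, "-enable-kvm")
    else:
        # without the extensions QEMU drops back to software emulation,
        # i.e. the slower, fully emulated path described under full virt
        print("No /dev/kvm - running under software emulation")
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    launch_guest()
```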

In case you’re wondering what happened to VMware ESX/vSphere: ESX straddles the line between full and para-virt, while Xen straddles the line between para and kernel virt. VMware’s binary translator, for example, traps certain critical code instructions and emulates them in software. Binary translation can incur a large performance overhead compared to a virtual machine running on a natively virtualised architecture – ESX minimises this to an extent using the VMware Tools suite, which provides accelerated drivers for certain functions ( network, video ). Any OS can be emulated, but performance is only a given on those that are supported.

As virtualisation is implemented, it’s important to realise that the underlying server components are of far greater importance from a reliability and availability point of view than in non-virtualised systems. A single host server failure will bring down all VMs running on that system. There is no longer a one-to-one relationship between server hardware and the platforms running on it, so choose wisely when implementing virtualisation, and make use of the high availability solutions provided by the virt vendors if you need highly available systems.

That, in a nutshell, summarises the OS component of enterprise virtualisation. It’s easy to use virt off the cuff, however it takes a lot more effort, planning and design to get it working well in an enterprise arena.

If you are new to virt, take a look at something like Sun’s VirtualBox, which has a very low entry barrier and can give you a good idea of what to expect. If you are an experienced virt person, then you’re welcome to follow this series and either learn something or teach me something through the comments section.
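
If you’d rather poke at VirtualBox from the command line than the GUI, the following sketch drives its VBoxManage tool from Python to register and boot an empty test VM. The VM name, OS type and sizing are arbitrary choices of mine, and you would still need to attach a disk or ISO before it boots anything useful.

```python
#!/usr/bin/env python3
"""A small sketch of driving VirtualBox via its VBoxManage CLI - just enough
to get a feel for the tooling. The VM below is empty and won't boot an OS
until a disk or ISO is attached."""

import subprocess

VM_NAME = "virt-playground"   # hypothetical VM name


def vbox(*args):
    """Run a single VBoxManage command and fail loudly if it errors."""
    subprocess.run(["VBoxManage", *args], check=True)


if __name__ == "__main__":
    # register a new, empty VM definition
    vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
    # give it some memory and a couple of vCPUs
    vbox("modifyvm", VM_NAME, "--memory", "1024", "--cpus", "2")
    # boot it headless ( it won't get far without a disk or ISO attached )
    vbox("startvm", VM_NAME, "--type", "headless")
```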
