Tag Archives: virtualisation

Virtualisation part 4: oVirt

Red Hat’s virtualisation product, RHEV, was slightly hamstrung in v2 because the manager technology ( which came with the Qumranet acquisition ) only ran on Windows. That requirement put me off the product until now – RHEV 3 has been released with a JBoss-based management server and a whole lot of feature goodness. But it’s a paid-for product and not everyone can afford it. So the Open Source oVirt project is the next best thing.

oVirt comes in 2 parts:

  • ovirt-engine ( the manager )
  • ovirt-node ( the compute node )

These are currently based on Fedora 16 with an additional repo for the engine; with Fedora 17 it should be integrated into the distribution. oVirt currently supports FC, iSCSI and NFS as storage domains for VM images, while NFS is used for ISO and export domains. One requirement is that all hosts in an oVirt cluster use the same CPU family.

oVirt installation is quite straightforward. The first step is to install ovirt-engine ( a scripted sketch of these steps follows the list ):

  • install a machine with Fedora 16
  • add the ovirt repository
  • install the ovirt-engine packages
  • configure ovirt-engine
  • for the Spice-based console, install spice-xpi
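
The engine steps above boil down to a handful of commands. Here’s a rough Python sketch that scripts them – the package and command names ( ovirt-release, ovirt-engine, engine-setup, spice-xpi ) are my assumptions based on the Fedora 16-era packaging, so check them against the current oVirt docs before running anything.

  #!/usr/bin/env python
  # Rough sketch of the ovirt-engine install steps listed above.
  # Package/command names are assumptions - verify against the oVirt docs.
  import subprocess

  def run(cmd):
      print("+ " + " ".join(cmd))
      subprocess.check_call(cmd)

  run(["yum", "-y", "install", "ovirt-release"])  # add the oVirt repository
  run(["yum", "-y", "install", "ovirt-engine"])   # install the manager packages
  run(["engine-setup"])                           # configure ovirt-engine ( interactive )
  run(["yum", "-y", "install", "spice-xpi"])      # Spice browser plugin for consoles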

Next, install one or more ovirt-node machines using the oVirt Project-supplied ISO. Once you’ve configured your nodes with basic networking, you can add them to your ovirt-engine through the management console.

Also configure:

  • a storage domain for iSCSI/FC/NFS storage of your VM images
  • an NFS storage domain for your ISO install images

Then you can get down to the business of creating virtual machines. Happy virtualising!

Microsoft virtualisation changes

Microsoft has announced Dynamic Memory and RemoteFX, which directly affect their desktop virt platform. Dynamic Memory allows users to adjust the memory of a guest virtual machine on demand. IT administrators will thus be able to pool all the memory available on a physical host and dynamically distribute it to virtual machines running on that host as necessary. Based on changes in workload, VMs will be able to receive new memory allocations without a service interruption.

Microsoft RemoteFX, based on IP that Microsoft has continued to develop since acquiring Calista Technologies over two years ago, enables users of virtual desktops to receive a rich, 3-D, multimedia experience while accessing information remotely. It functions independently of any graphics stack and supports any screen content, including Windows Aero, full-motion video, Flash and Silverlight content, and 3D applications. Because it uses virtualised graphics resources, RemoteFX works on a wide array of target devices, which means it can be deployed over both thick and thin client hosts and a wide variety of network configurations.

Windows 7 XP mode no longer requires hardware virt

Microsoft will be removing the hardware virtualisation extensions requirement with the next update of XP mode. The updates are available here:

win 7 32-bit

win 7 64-bit

Intel’s mechanism is known as VT-x while AMD’s is called AMD-V.
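
On Linux you can quickly check whether a CPU advertises these extensions by looking at the flags in /proc/cpuinfo – “vmx” for Intel VT-x and “svm” for AMD-V. A minimal sketch:

  #!/usr/bin/env python
  # Look for hardware virt extensions in /proc/cpuinfo ( Linux only ):
  # "vmx" = Intel VT-x, "svm" = AMD-V.
  flags = set()
  with open("/proc/cpuinfo") as cpuinfo:
      for line in cpuinfo:
          if line.startswith("flags"):
              flags.update(line.split(":", 1)[1].split())

  if "vmx" in flags:
      print("Intel VT-x present")
  elif "svm" in flags:
      print("AMD-V present")
  else:
      print("no hardware virt extensions reported")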

There are pros and cons with this change:

  • con – hardware virt extensions allow a CPU to process certain virt instructions ( especially context changes ) much more quickly than CPUs without them; running XP mode on a CPU without these extensions is likely to drop performance, perhaps to a level that makes it unusable in certain cases
  • pro – Intel CPUs have had spotty support for VT-x across their ranges, leaving many people confused about whether they could or couldn’t run XP mode; this no longer matters ( unless you specifically want hardware virt )
  • pro – AMD CPUs from the Athlon 64 line onwards have generally had AMD-V support, so there has been far less confusion with these chips

While this might not make a big difference seeing as the usage of XP mode is very low, let’s wait and see what the reaction is.

Virtualisation part 3: VMware backup scenarios

Backups in a virtualisation environment take on a whole new meaning and are typically complex ( as opposed to the simple picture the vm vendors would like to portray ), because now you are dealing with shared SAN storage, vm images instead of files, very specific requirements around backup hardware and setup, 3rd party backup agents and multiple backup methods from VMware themselves. So much for simplifying your IT infrastructure. But it’s not all doom and gloom – I’ll try to break things down so VMware backup is not as dark as it appears to be.

VCB

VMware provides a backup/DR API in the form of VMware Consolidated Backup ( VCB ), which allows you to either do backups with the VMware service console tools or integrate with a 3rd party backup tool. Doing backups via the service console is not for the faint of heart, so most VMware users will go the 3rd party route. This is done by installing VCB and the 3rd party backup server on a machine ( vm or physical ), configuring VCB to talk to vCenter or ESX, and then setting up your backups via the 3rd party tool’s VMware agent/plugin.

VCB works by generating a snapshot of a vm and then storing it in a specified location on the VCB proxy machine. VCB proxy can run on a physical machine or in a vm. If running in a vm then you need to make sure you have sufficient space to store the vm images that are generated at backup time – take 3 to 4x the space of the largest vm you have as a rule of thumb.
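
To put that rule of thumb into numbers, a quick sketch ( the vm sizes are made up for illustration ):

  # Staging space for a vm-hosted VCB proxy: roughly 3-4x the largest vm.
  vm_sizes_gb = {"file-server": 250, "exchange": 400, "web-frontend": 60}  # hypothetical

  largest = max(vm_sizes_gb.values())
  print("largest vm: %d GB" % largest)
  print("reserve on the VCB proxy: %d-%d GB" % (3 * largest, 4 * largest))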

VCB Modes

There are 3 main modes of backup using the VCB proxy, depending on your hardware setup/design. Let’s take a look at each of these in turn along with their pros and cons.

SAN mode

ESX needs to have its VMFS stored on shared FC SAN storage or iSCSI shared disk. Backups are offloaded to a physical VCB proxy which is also connected to the shared storage. In this mode, the LUNs exposed to the ESX servers need to be exposed to the VCB proxy machine as well.

SCSI Hot-add mode

In SCSI Hot-Add mode, you set up one of your virtual machines as a VCB proxy and use it to back up other virtual machines residing on storage visible to the ESX Server that hosts the VCB proxy virtual machine. This mode eliminates the need for a dedicated physical machine for your VCB proxy and does not require you to expose SAN LUNs to the Windows VCB proxy.

In this mode, you can use Consolidated Backup to protect any virtual disks on any type of storage available to your ESX Server host, including NAS or local storage. The only exception is that it does not back up independent disks, physical compatibility RDMs or IDE disks ( this applies to ESX 4 and ESXi 4 ).

Consolidated Backup creates a snapshot of the virtual disk to be protected and hot adds the snapshot to the VCB proxy, allowing it to access virtual machine disk data. The VCB proxy reads the data through the I/O stack of the ESX host. If the ESX servers only have local disk then you need a VCB proxy on each ESX host.

LAN ( NBD ) mode

In this mode, Consolidated Backup uses an over-the-network protocol to access the virtual disk. The ESX Server host reads the data from the storage device and sends it across a network channel to the VCB proxy. A limitation is that vms should not be over 1TB in size.

You can also use a vm to host the VCB proxy in this mode; however, remember the issues relating to tape connectivity. A separate ‘backup’ network can be used between a physical VCB proxy and the ESX hosts to split normal and backup traffic.

Types of backups

There are also a number of ways of doing the backup:

  • image level, where one backs up the vm and all its associated files
  • file level ( via VCB or 3rd party agent ), where you back up the contents of the vm at the file level; this can be combined with full and incremental/differential backups ( note that file-level backups via VCB are only supported on Windows platforms )

In addition you can also load a backup agent in the vm and treat it as a normal backup client. Most likely you will use a combination of image-level VCB backup and agent-based file-level backup. Note that VCB cannot be used to back up clustered vms; you will need 3rd party cluster-aware backup tools for this scenario.

Tape connectivity/VTL

Tape can be connected in a variety of ways for the purposes of VMware backup, mostly depending on the storage mechanism you are using with VMware.

  • connect the tape device to a standalone VCB proxy using FC, SAS or SCSI – this gives you the flexibility of breaking backups out of your VMware space
  • connect the device to your FC fabric if you make use of shared storage for ESX VMFS
  • connect a tape device to an ESX host using pass-through SCSI

In addition, you can use a 2-stage backup solution with disk-to-disk ( d2d ) as the first stage and tape as the second. A VTL is quite useful here as it allows you to send more data flows to the backup server than you have physical tape drives. Tape drives are expensive and as such you may only have one or two. With a VTL, you can simulate as many drives as you’d like and send that many flows concurrently to the backup server. You need to be reasonable with this of course, especially if you are using Ethernet for your transport.

VMware Data Recovery

Data Recovery ( DR ) is a new product with vSphere 4 that does VCB-style backups via the vCenter console and provides de-duplication of data. There are some fairly severe restrictions with DR, so it’s possibly a tool that you will use in conjunction with VCB and a 3rd party tool.

DR is only a d2d tool, so you will need disk in your SAN ( you can also use NAS or iSCSI ) to store the backup data, and no tape devices are supported. It always de-dupes – you have no option in this regard. DR is an appliance that runs as a vm and provides what are essentially incremental-forever backups. You can have up to 8 concurrent backups in flight; however, you can only use two simultaneous backup destinations. VSS is supported on certain Windows platforms and application-level integrity is provided under certain conditions. Stores can be up to 1TB in size and you can use a maximum of 2 stores with each de-dupe appliance.
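
A quick back-of-the-envelope against those limits ( the post-de-dupe figure is made up for illustration ):

  # DR limits mentioned above: stores up to 1TB, max 2 stores per appliance.
  import math

  STORE_TB = 1
  STORES_PER_APPLIANCE = 2

  deduped_data_tb = 5  # hypothetical size of the de-duplicated backup set
  stores = int(math.ceil(deduped_data_tb / float(STORE_TB)))
  appliances = int(math.ceil(stores / float(STORES_PER_APPLIANCE)))
  print("stores needed: %d, DR appliances needed: %d" % (stores, appliances))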

Conclusion

Both VCB and DR provide for backups of running vms; however, they do so under different circumstances. Most often you will use VCB with a 3rd party tool, and perhaps DR when you need fast restore capability for vms. A 3rd party tool gives you the flexibility of backup/restore outside of your immediate VMware environment, while the VMware tools give you tightly integrated capabilities within VMware.

Virtualisation part 2: Storage

The first part of the series focussed on the OS layer of virtualisation. This second part will focus on storage in relation to server virtualisation.

Storage on its own is a minefield of standards, specifications, technologies, protocols and incompatibilities. Add to this the concept of virtualisation and you’re looking at an area that’s difficult to manage, let alone comprehend. The primary motivators when selecting storage are I/O performance, throughput and size vs. cost. But first you need to understand the available choices so that you can make an informed decision. Let’s take a brief tour around storage technologies and then we’ll discuss how these align with virtualisation.

Core storage types:

1. DAS ( Direct Attached Storage ) – typically comprises either internal server disk or an external storage array directly connected to the server via SCSI, SAS or Fibre. This type of storage provides a block-level protocol on top of which one installs standard RAID/logical drive/partition/file-system structures. Typical internal and external units will provide RAID along with drive sparing and automatic rebuild, hot-swap functions and good throughput. A 7-disk RAID using 1TB SATA disks will give you around 300MB/s of throughput and you can extrapolate performance from there. While Fibre is expensive, it provides the best interface throughput, although SAS and UW SCSI will be fine for entry-level deployments. Security in this scenario is determined by the host OS using the storage device. Some DAS external units include multiple host controllers and ports, allowing for link/controller fail-over and cluster setups.

2. NAS ( Network Attached Storage ) – designed to provide storage over a network using protocols like CIFS, NFS and iSCSI. The device itself will run some form of standard or proprietary file-system with back-end RAID ( eg. Netapp with WAFL and Linux-based devices with ext3/4, reiserfs, xfs and jfs ) which in turn is provisioned using the network protocol. CIFS is aligned closely with the Windows platform, however most other OS’s are able to connect to CIFS shares. NFS is mostly used in Unix environments and can provide good throughput if correctly tuned at both the OS and NAS levels. iSCSI takes block storage at the device level and shares it over a network, at which point it is seen as block-level storage by a client using an iSCSI initiator. A NAS device by default provides for very fine-grained file serving ie. you can carve up the physical disk using RAID and logical partitions, and then carve things up further by directory sharing. This is almost storage virtualisation in a way, as one can make efficient usage of available physical space. Growing shared network file-systems is also easy to do with most ( if not all ) NAS devices by adding either physical space to the logical drives ( and growing the share ) or just adding additional logical spaces to the shared area. Security is provided at both the physical and shared levels, providing granular although more complex capabilities. Performance can be aggregated by taking multiple GbE ports on the NAS device and creating bonds/teams ( OS/NAS level ) or aggregates ( network level – Etherchannel or 802.3ad ). Maximum throughput is often limited by the network capabilities unless you are using enterprise-level devices, which may have support for many network interfaces. Many enterprise-level devices offer additional features in respect of storage types with SAN Fibre connectivity ( using block-level protocols like FC ), however even entry-level systems offer features like iSCSI ( a block-level protocol encapsulated over IP networks ), replication ( rsync ), backup, AD integration and uPnP functions. Note that there is a wild difference in performance between consumer NAS devices and enterprise systems.

3. SAN ( Storage Area Network ) – many incorrectly assume they have a SAN when using Fibre/Fibre Channel-connected storage. I would go so far as to say this is basically DAS even if you are using an FC switch. SANs typically involve multiple FC devices ( storage subsystems and other devices like tape libraries ), FC switches and FC-attached clients all on a separate Fibre Channel network which incorporates zoning functions within the FC fabric to control the flow of data between endpoints. SANs deliver high performance using link aggregation, high core switching throughputs and I/O rates and the FC family of protocols which were designed with storage in mind. These are block level protocols providing block devices to end points. While the core is designed specifically around the networking component of FCP ( this is changing with virtualisation being pulled into the core ), the storage endpoints provide enterprise features like block-level mirroring and synchronisation, snapshots, remote replication and storage virtualisation ( using in and out-of-band solutions ).

4. Storage Virtualisation ( don’t confuse this with storage used within server virtualisation ) – is provided within all storage systems, ranging from simple RAID/logical drives to enterprise-level SV solutions built on SAN fundamentals. In the enterprise space, SV is taken to be a form of meta-data controller that manages one or more storage devices and the clients that access them. SV is provided in both in-band ( where a controller is placed in-line between the storage and the clients ) and out-of-band ( the controller is part of the switching infrastructure and not in-line ) forms. An example of in-band is FalconStor’s IPStor solution, which provides the whole gamut of storage virtualisation features including synchronous mirroring, de-duplication, CDP, replication and virtual carving of all back-end disk subsystems. IBM’s SVC follows this route as well. For out-of-band solutions, StoreAge’s SVM product provides all the above features without the performance limit that is common with in-band solutions. SVM can aggregate multiple storage back-ends in parallel to provide massively scalable solutions to end-point clients, especially for streaming requirements. Most storage subsystem vendors also provide limited support for storage virtualisation within their own storage subsystems.

A quick note here: the base disk for VMs will always come from the host server itself, which in turn gets its storage from the storage systems, but this doesn’t stop you from provisioning storage directly to a VM as non-boot disks. For example, you can run an iSCSI initiator inside a VM to get block-level disks from an iSCSI storage server that is separate from the storage systems used to provide base disks to the host servers, or you could use NFS/CIFS to connect to shares on a NAS server.

File and Block protocols

There are 2 types of protocols when it comes to presenting storage to clients.

File-level protocols ( used in NAS devices ) provide shared volumes via a network mechanism ( eg. CIFS or NFS ) which inherently allows simultaneous multi-client access. These protocols include file-locking features to enable simultaneous file access without corruption. CIFS is the more recent name for Microsoft’s SMB connection-orientated protocol, which has been in use since the days of the original IBM PC. It has always been a moving target with regards to specifications and, in its more recent SMB2 form, breaks compatibility with the former version. NFS has historically been used by Unix-based OS’s, has good connectionless support, can be used over UDP or TCP, and the later server versions ( 2, 3 and 4 ) remain backwards compatible. Performance is reasonable, with the option of bonding multiple 1GbE links together, or you can use 10GbE. Security is granular ( if a bit complex ) with share and file-level features, while the shared nature of file protocols makes this a very flexible mechanism. By masking the nature of the back-end file-system, NAS devices allow clients to concentrate on the files themselves without having to worry about file-system semantics and management.

Block-level protocols export block devices to clients that are seen as local disks. These protocols include FCP and iSCSI ( amongst others ). FCP is a storage-specific protocol designed with high-throughput, low-latency storage networking in mind. It runs over a fibre network ( in the core ) and has numerous features including zoning, multi-link aggregation with multi-path fail-over and core-based data movers. iSCSI encapsulates the SCSI storage protocol over IP, providing a flexible and cost-effective method of linking remote block devices to clients. Block protocols are by nature not shareable, hence the specific support for zoning in FCP fabrics and host-group LUN mappings by all/most storage devices. Windows systems in particular will overwrite or corrupt data on LUNs connected to more than one client unless there is specific support in the OS for this ( eg. cluster quorum disks ). One would use cluster-aware file-systems ( eg. GFS, Lustre or others ) to provide simultaneous block device access to more than one client. SANs do NOT provide for data sharing – they only provide physical device sharing, something which is commonly misunderstood.

The rule is if you need shared data access, use a file-level protocol. For one-to-one high-performance data device access, use block-level protocols.
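
To illustrate the shared-access side of that rule: on a file-level protocol, multiple clients can open the same file and coordinate through locking. The sketch below uses POSIX advisory locks the way a client on an NFS mount might – /mnt/nas/shared.log is a hypothetical path on a share.

  # Append to a file on a shared ( e.g. NFS-mounted ) volume under an
  # advisory lock so concurrent writers don't interleave records.
  import fcntl

  with open("/mnt/nas/shared.log", "a") as logfile:
      fcntl.lockf(logfile, fcntl.LOCK_EX)   # block until we hold the exclusive lock
      logfile.write("client A appended a record\n")
      logfile.flush()
      fcntl.lockf(logfile, fcntl.LOCK_UN)   # release for other clients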

The virtualisation section

When deploying virtualisation, one needs to take into consideration the number of virtual servers ( and VMs ) per storage device as well as the type of loads that will be running. For loads that require low latencies, it’s important to look at storage that provides high I/O rates as well as non-network storage protocols – this will typically be DAS FC units or SAN-type storage with multi-path fail-over and aggregated links at the virtual server. For more moderate and general loads, one can use SAS/SCSI-connected DAS, entry-level SAN or even iSCSI/NFS solutions ( higher latencies and lower performance ). You can also decide between SATA, SAS and Fibre HDDs within the storage subsystems, or use a combination of these for tiered-performance VMs.

Storage virtualisation solutions make these types of decisions a snap as they inherently implement the concept of storage performance tiering, making it simple to provision logical storage to a client from predefined performance pools ( eg. SATA disk for high-capacity, low performance requirement and FC disk for high-performance, lower capacity requirement ). Something as simple as individual HDD rotational speeds ( eg. 10K rpm vs 15K ) can also make a big difference in terms of latency and I/O rates. SV solutions also include some features specifically designed for server virtualisation like CDP of live VMs.

As a guideline, the following shows an increasing performance/cost list that you can tentatively base your decisions on:

  • NFS NAS
  • iSCSI NAS
  • accelerated iSCSI NAS
  • UW SCSI/SAS DAS
  • Fibre DAS
  • enterprise iSCSI NAS
  • FC SAN with SATA storage
  • FC SAN with SAS storage
  • FC SAN with Fibre storage
  • FC SAN with multiple virtualised storage subsystems

The availability/backup section

While backup and recovery is critical in any organisation, it is doubly so when running virtualised systems. The failure of a single virtual server host will bring down all the VMs running on it, resulting in far more impact than the failure of a non-virtualised system. The way to solve this is to make use of HA virtualisation solutions ( eg. VMware HA, clustered RHEL 5.4 with KVM and GFS, or XenServer HA ) – at the heart of all of these is centrally available storage a la SAN, multi-controller DAS or shared iSCSI/NFS.

VMware’s VMFS is a cluster-aware file-system that can span multiple host servers and can be provisioned from any block-level source ( and ESX now supports NFS datastores too ). Red Hat’s GFS, although not specifically designed with virtualisation in mind, provides similar features.

Most backup software vendors provide agents/plugins to back up VMware VM contents ( standard file-system agent ) or whole VMs, either standalone or via a VCB proxy. For KVM and Xen, you can make use of standard file-system solutions, snapshot solutions at the OS level ( eg. LVM ) or storage-level snapshots. To go one step further, you can combine this with synchronous mirroring at the storage level to provide remote active HA hosts or passive standby hosts.

It’s critical to back up both the VM and its contents using your chosen methods – VM level for catastrophic VM failure and VM contents ( file level ) for file restoration. In addition, if you are using your virtualisation vendor’s storage functions, make sure you follow the recommended capacity planning w.r.t. storage – running out of space on a host file-system will likely take your VMs and host system down.

Fragmentation

Remember that with virtualisation you now have file fragmentation at 2 levels – the VM file as well as inside the guest OS. This can cause havoc with performance, and if using Windows as a virt host it’s important to perform defragmentation regularly ( Diskeeper has a good on-line tool ). Unix file-system based host virt solutions tend to fare much better in this regard, but it’s still important to keep as much space open as possible on the VM file-system so that it does not become an issue.

Conclusion

This is a complex topic which I’ve barely touched on in this article however I hope I’ve given you some ideas of where to start. Try and keep it simple and logical, and most of all keep the data safe.

Virtualisation part 1: OS

I had an interesting question from a business colleague of mine today – please spell out what types of virtualisation are available. It’s good to know that, even though it’s fairly pervasive these days, people still ask what the reality is. Because running virtualisation for production loads can have a big effect on systems management, resource utilisation and performance. And it’s a damn sight more complex – anyone who tells you otherwise is fooling themselves. Even Scott Lowe, whom many would consider to be one of the most knowledgeable in the field, would profess to learning all the time.

The reason is that virt touches every single part of your IT infrastructure, from networking to storage to servers. And a host of vendors’ equipment works with your virt in a host of different ways. From a systems management point of view, this either requires very good communication between all IT members, or someone who knows everything, which is both dangerous ( single point of responsibility and failure ) and rare. Virtualisation does not automatically mean you’ll suddenly have a whole bunch of spare boxes in the corner. In addition, the ( apparent ) ease of rolling out virt means that VMs can now spread like bunnies, and suddenly resource utilisation and performance go out the window.

So it’s a careful balancing act that needs to be managed very well to get the most benefit from it. And although virt is now a household name, there are in fact very few people who understand it from top to bottom. Consider the delicate balance between servers, networking and storage, where one obscure setting can bring performance to its knees. Storage virt in itself is a complex topic; fold that in with OS and networking virt, and things go up a notch to where it’s a bit of wizardry or art.

What to consider when using virt?

  • performance ( read I/O here ) requirements of different VMs across multiple host nodes
  • virtualise in the storage/network layer, or are the facilities provided by the VM solution enough?
  • storage virt: in or out of band meta-data controllers; host, network or storage-based
  • storage: FC, iSCSI or NFS? ALUA options from your vendor?
  • host nodes: fast CPU ( scale up ) or multi-core/socket ( scale out )
  • timing sensitive applications
  • type of virtualisation: full, para, jails or kernel core
  • SRM/virt management tools
  • networking: VM vendors’ options, bonding, multi-path or virt networking, jumbo frames, VLANs
  • backup: how do I provide for failure in a virtual environment

Most of this is an article for another day but here I’d like to cover the VM types available and how they compare to each other. Let’s look at the 4 main types of virtualisation:

  • Full virt means that the VM application virtualises the whole infrastructure of an x86 ( or other ) machine. Also known as binary translation ( although not exactly the same ), it provides virtualisation of any OS that would normally run on the host platform, without extra assistance or modification. A downside is that this method has a fairly high CPU overhead ( 15 – 25% ) as part of the translation process. Examples include VirtualBox, Parallels, Microsoft Virtual PC ( now included with Windows 7 ) and VMware Workstation/Server. Some now have support for CPU virt extensions which can speed things up a bit, especially for 64-bit environments.
  • Hybrid/para-virt/hardware-assisted virt provides a software interface to guest systems that is similar to full virt but not exactly the same, using features of the host CPU to speed up certain VM functions. The guest OS needs some modification, or I/O drivers need to be supplied. Typically this will provide performance within 5 – 10% of raw hardware, although this depends on the guest OS setup. Xen and VMware ESX, for example, can provide both full and para-virt; however, there is a performance penalty for running full virt under these ( as there is on other platforms ), and Xen requires CPU virt extensions for full-virt ( unmodified ) guests. Microsoft Hyper-V, and IBM’s PowerVM to a degree, follow the same path.
  • Native virt is what I like to call kernel-core virt, as there is special support built into the host OS itself which gives the guest direct access to all the hardware that the host OS has access to. The advantage of this is a broad spectrum of hardware support as well as close-to-raw performance for guest vms. One does require drivers for non-native guest OS’s though. Examples of this are KVM ( fairly new in the Linux landscape ) and IBM P-series/PowerVM. KVM in particular is being touted for timing-sensitive apps due to its very good performance. KVM also requires CPU virt extensions.
  • Operating system-level virt provides isolated user-space instances of the host OS, also called jails, containers, VPSs or VEs. There is little to no overhead here as the guests use the host OS’s normal system calls and no specific hardware support is required. A disadvantage of this method is that only copies of the host OS platform can be run – other OS’s are not supported. Examples include FreeVPS, Linux VServer, Solaris containers and OpenVZ. This is a favourite of ISP hosting companies for obvious reasons ( think 1000 containers on a single machine only running web servers ).

In case you’re wondering what happened to VMware ESX/vSphere: ESX straddles the line between full and para-virt, while Xen straddles the line between para and kernel virt. VMware, for example, uses binary translation to trap certain critical code instructions and emulate them in software. Binary translation can incur a large performance overhead in comparison to a virtual machine running on a natively virtualised architecture – ESX minimises this to an extent with the VMware Tools suite, which provides accelerated drivers for certain functions ( network, video ). Any OS can be emulated, but performance is only a given on those supported.

As virtualisation is implemented, it’s important to realise that the underlying server components are of far greater importance from a reliability and availability point of view than in non-virtualised systems. A single host server failure will lead to the downing of all VMs running on that system. There is no longer a one-to-one relationship between server hardware and the platforms running on it, so choose wisely when implementing virtualisation and make use of the high availability solutions provided by the virt vendors if you need highly available systems.

That in a nutshell summarises the OS component of enterprise virtualisation. It’s easy to use virt off the cuff, however it takes a lot more effort, planning and design to get it working well in an enterprise arena.

If you are new to virt, take a look at something like Sun’s VirtualBox which has a very low entry barrier, and can give one a good idea of what to expect. If you are an experienced virt person, then you’re welcome to follow this series and either learn something or teach me something through the comments section.

VMware and virtualisation

It seems to me that there is still a large amount of confusion concerning virtualisation in general and VMware specifically. The problem stems from the fact that most users of virtualisation don’t understand the real benefits ( and pitfalls ) of this interesting technology – I’ll attempt to provide some insight here.

Benefits

  • Make better use of hardware resulting in better efficiencies – instead of having machines idling for most of the time running only one application, get them to work harder on average loads by having them run multiple applications ( which are still partitioned by the separate VMs )
  • Easily bring new machines online either via a normal install or VM imaging
  • Centralise install images using ISO files
  • Run more applications than you would normally have hardware for
  • Some virtualisation solutions have the ability to move a VM from one physical hardware platform to another while the VM is online ( hot migration ) – this allows you to take hardware in and out of service without affecting your applications ( for VMware this requires VirtualCenter with VMotion )
  • Most virtualisation solutions have a VM snapshot capability which provides a point-in-time backup ability
  • Virtualisation can bring some power savings to computing data centres as less physical hardware is typically used and the average load of power supplies is increased resulting in better power efficiencies

Disadvantages

  • Reliability of the host server becomes much more important as it’s no longer running just one system, but multiple
  • The ease of installing new VMs can result in overloading or mismatching of VM logical hardware
  • Due to requiring iSCSI or SAN-based storage for the central VM store, setting up virtualisation can initially be much more complex

And finally, some tips and tricks

  • If you are going to use iSCSI, make sure you set up a service console connection on the Ethernet card(s) being used for the iSCSI connection, otherwise you won’t be able to see the storage ( if you are using VMware ESX )
  • While iSCSI is fine for low end use, you should really be using an FC-based storage subsystem for the central VMFS store
  • Various virtualisation solutions have different architectures, eg. paravirtualisation, emulation, hardware level virtualisation – make sure you understand what you require and what you are getting