Virtualisation part 2: Storage

The first part of this series focused on the OS layer of virtualisation. This second part focuses on storage in relation to server virtualisation.

Storage on its own is a minefield of standards, specifications, technologies, protocols and incompatibilities. Add the concept of virtualisation and you’re looking at an area that’s difficult to manage, let alone comprehend. The primary motivators when selecting storage are I/O performance, throughput and size vs. cost, but first you need to understand the available choices so that you can make an informed decision. Let’s take a brief tour around storage technologies and then discuss how they align with virtualisation.

Core storage types:

1. DAS ( Direct Attached Storage ) – typically comprises either internal server disk or an external storage array connected directly to the server via SCSI, SAS or Fibre. This type of storage provides a block-level protocol on top of which one installs the standard RAID/logical drive/partition/file-system structures. Typical internal and external units provide RAID along with drive sparing and automatic rebuild, hot-swap functions and good throughput. A 7-disk RAID set using 1TB SATA disks will give you around 300MB/s of throughput, and you can extrapolate performance from there. While Fibre is expensive, it provides the best interface throughput, although SAS and UW SCSI will be fine for entry-level deployments. Security in this scenario is determined by the host OS using the storage device. Some external DAS units include multiple host controllers and ports, allowing for link/controller fail-over and cluster setups.
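
As a rough guide to that extrapolation, the sketch below estimates sequential throughput for a DAS RAID set. The per-drive rate and RAID overhead factor are assumptions for illustration, not measured figures.

```python
# Back-of-the-envelope DAS throughput estimate -- illustrative only; the
# per-drive rate and RAID overhead factor are assumptions, not benchmarks.

PER_DRIVE_MBPS = 60      # assumed sustained sequential rate of one 1TB SATA drive
RAID_OVERHEAD = 0.85     # assumed loss to parity calculation and controller overhead

def estimate_array_throughput(num_drives: int, parity_drives: int = 1) -> float:
    """Estimate sequential throughput (MB/s) of a RAID set on a DAS unit."""
    data_spindles = num_drives - parity_drives
    return data_spindles * PER_DRIVE_MBPS * RAID_OVERHEAD

# A 7-disk RAID5 set lands in the same ballpark as the ~300MB/s quoted above.
print(f"7-disk RAID5: ~{estimate_array_throughput(7):.0f} MB/s")
print(f"12-disk RAID5: ~{estimate_array_throughput(12):.0f} MB/s")
```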

2. NAS ( Network Attached Storage ) – designed to provide storage over a network using file-level protocols like CIFS and NFS ( most devices also offer block-level iSCSI ). The device itself runs some form of standard or proprietary file-system with back-end RAID ( eg. NetApp with WAFL, and Linux-based devices with ext3/4, reiserfs, xfs or jfs ) which in turn is provisioned over the network protocol. CIFS is closely aligned with the Windows platform, however most other OSs are able to connect to CIFS shares. NFS is mostly used in Unix environments and can provide good throughput if correctly tuned at both the OS and NAS levels. iSCSI takes block storage at the device level and shares it over a network, at which point it is seen as block-level storage by a client running an iSCSI initiator. A NAS device by default provides for very fine-grained file serving ie. you can carve up the physical disk using RAID and logical partitions, and then carve things up further by directory sharing. This is almost a form of storage virtualisation in itself, as one can make efficient use of the available physical space. Growing shared network file-systems is also easy with most ( if not all ) NAS devices, either by adding physical space to the logical drives ( and growing the share ) or by adding additional logical spaces to the shared area. Security is provided at both the physical and share levels, giving granular although more complex capabilities. Performance can be aggregated by taking multiple GbE ports on the NAS device and creating bonds/teams ( OS/NAS level ) or aggregates ( network level, ie. EtherChannel or 802.3ad ). Maximum throughput is often limited by the network unless you are using enterprise-level devices which may support many network interfaces. Many enterprise-level devices offer additional storage options such as SAN Fibre connectivity ( using block-level protocols like FC ), however even entry-level systems offer features like iSCSI ( a block-level protocol encapsulated over IP networks ), replication ( rsync ), backup, AD integration and UPnP functions. Note that there is a wide difference in performance between consumer NAS devices and enterprise systems.
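
To put the network ceiling in perspective, here is a small sketch of the best-case throughput through a bonded set of GbE links; the protocol-efficiency factor is an assumption rather than a benchmark.

```python
# Rough ceiling that the network places on NAS throughput -- a sketch; the
# protocol-efficiency factor is an assumption, not a measured value.

GBE_LINE_RATE_MBPS = 125      # 1GbE is roughly 125 MB/s of raw line rate
PROTOCOL_EFFICIENCY = 0.8     # assumed overhead for TCP/IP plus NFS/CIFS/iSCSI framing

def nas_network_ceiling(num_gbe_links: int) -> float:
    """Best-case aggregate MB/s through a bond/aggregate of GbE links."""
    return num_gbe_links * GBE_LINE_RATE_MBPS * PROTOCOL_EFFICIENCY

for links in (1, 2, 4):
    print(f"{links} x 1GbE: ~{nas_network_ceiling(links):.0f} MB/s aggregate")

# Note: with 802.3ad, a single client flow usually hashes onto one physical
# link, so per-client throughput may still be limited to one link's worth.
```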

3. SAN ( Storage Area Network ) – many incorrectly assume they have a SAN when using Fibre Channel-connected storage; I would go so far as to say this is basically DAS, even if you are using an FC switch. SANs typically involve multiple FC devices ( storage subsystems and other devices like tape libraries ), FC switches and FC-attached clients, all on a separate Fibre Channel network which incorporates zoning functions within the FC fabric to control the flow of data between endpoints. SANs deliver high performance using link aggregation, high core switching throughput and I/O rates, and the FC family of protocols, which was designed with storage in mind. These are block-level protocols providing block devices to endpoints. While the core is designed specifically around the networking component of FCP ( this is changing, with virtualisation being pulled into the core ), the storage endpoints provide enterprise features like block-level mirroring and synchronisation, snapshots, remote replication and storage virtualisation ( using in-band and out-of-band solutions ).
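
To make the zoning idea concrete, the conceptual sketch below models a fabric’s zones as sets of WWPNs. The zone names and WWPNs are made-up placeholders, and real zoning is of course configured on the FC switches, not in application code.

```python
# Conceptual model of FC zoning -- purely illustrative. Zone names and WWPNs
# below are hypothetical placeholders.

zones = {
    "zone_host01_array1": {"10:00:00:00:c9:aa:aa:01",   # hypothetical host HBA WWPN
                           "50:06:01:60:bb:bb:00:01"},  # hypothetical array port WWPN
    "zone_host02_array1": {"10:00:00:00:c9:aa:aa:02",
                           "50:06:01:60:bb:bb:00:01"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """Two endpoints may exchange data only if some zone contains both."""
    return any({wwpn_a, wwpn_b} <= members for members in zones.values())

print(can_communicate("10:00:00:00:c9:aa:aa:01", "50:06:01:60:bb:bb:00:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:aa:01", "10:00:00:00:c9:aa:aa:02"))  # False - hosts not zoned together
```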

4. Storage Virtualisation ( don’t confuse this with storage used within server virtualisation ) – is provided in some form within all storage systems, ranging from simple RAID/logical drives to enterprise-level SV solutions built on SAN fundamentals. In the enterprise space, SV is taken to be a form of meta-data controller that manages one or more storage devices and the clients that access them. SV comes in both in-band ( where the controller is placed in-line between the storage and the clients ) and out-of-band ( where the controller is part of the switching infrastructure and not in-line ) forms. An example of in-band is FalconStor’s IPStor solution, which provides the whole gamut of storage virtualisation features including synchronous mirroring, de-duplication, CDP, replication and virtual carving of all back-end disk subsystems; IBM’s SVC follows this route as well. For out-of-band solutions, StoreAge’s SVM product provides all the above features without the performance limit that is common with in-band solutions. SVM can aggregate multiple storage back-ends in parallel to provide massively scalable solutions to end-point clients, especially for streaming requirements. Most storage subsystem vendors also provide limited support for storage virtualisation within their own storage subsystems.
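
At its simplest, the meta-data an SV controller maintains is a map from virtual LUNs to extents on the back-end subsystems. The sketch below illustrates the idea only – the array names, LUN IDs and sizes are made up.

```python
# Conceptual sketch of a storage-virtualisation meta-data map: a virtual LUN
# is stitched together from extents on different back-end subsystems.
# Array names, LUN IDs and sizes are illustrative only.

# virtual LUN -> ordered list of (back-end array, physical LUN, size in GB)
virtual_luns = {
    "vm_datastore_01": [("sata_array_a", "lun_07", 500),
                        ("sata_array_b", "lun_03", 500)],
    "db_fast_01":      [("fc_array_a",   "lun_12", 200)],
}

def resolve(vlun: str, offset_gb: int):
    """Translate an offset within a virtual LUN to the back-end extent holding it."""
    for array, plun, size in virtual_luns[vlun]:
        if offset_gb < size:
            return array, plun, offset_gb
        offset_gb -= size
    raise ValueError("offset beyond end of virtual LUN")

print(resolve("vm_datastore_01", 750))   # -> ('sata_array_b', 'lun_03', 250)
```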

A quick note here: the base ( boot ) disk for a VM will always come from the host server itself, which in turn gets its storage from the storage systems. This doesn’t stop you from provisioning storage directly to a VM as a non-boot disk, eg. you can run an iSCSI initiator inside a VM to get block-level disks from an iSCSI storage server that is separate from the storage systems used to provide base disks to the host servers. Or you could use NFS/CIFS to connect to shares on a NAS server …
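
As a minimal sketch of that in-guest iSCSI approach, assuming a Linux guest with the open-iscsi tools installed; the portal address and target IQN below are placeholders to substitute with your own.

```python
# Minimal sketch: attach an iSCSI LUN from inside a Linux guest using the
# open-iscsi tools. The portal address and target IQN are placeholders.
import subprocess

PORTAL = "192.168.10.50"                           # hypothetical iSCSI storage server
TARGET = "iqn.2010-01.local.example:vm-data01"     # hypothetical target IQN

# Discover the targets offered by the portal, then log in to the one we want.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)

# The LUN now appears as an ordinary block device inside the VM (e.g. /dev/sdb,
# visible with lsblk or fdisk -l) and can be partitioned and formatted as usual.
```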

File and Block protocols

There are two types of protocol when it comes to presenting storage to clients.

File-level protocols ( used in NAS devices ) provide shared volumes via a network mechanism ( eg. CIFS or NFS ) which inherently allows simultaneous multi-client access. These protocols include file-locking features to enable simultaneous file access without corruption. CIFS is the more recent name for Microsoft’s SMB connection-orientated protocol, which has been in use since the days of the original IBM PC. It has always been a moving target with regards to specifications and, in its more recent SMB2 form, breaks compatibility with the earlier version. NFS has historically been used by Unix-based OSs, has good connectionless support, can be used over UDP or TCP, and provides backwards compatibility across its server versions ( 2, 3 and 4 ). Performance is reasonable, with the option of bonding multiple 1GbE links together or moving to 10GbE. Security is granular ( if a bit complex ) with share- and file-level features, while the shared nature of file protocols makes them a very flexible mechanism. By masking the nature of the back-end file-system, NAS devices allow clients to concentrate on the files themselves without having to worry about file-system semantics and management.
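
The file-locking behaviour is what makes simultaneous access safe. As a small sketch ( assuming a Linux/Unix client and a share already mounted at a hypothetical /mnt/nas_share ), each writer takes an exclusive advisory lock before appending:

```python
# Sketch of advisory file locking on a shared NFS/CIFS mount: each client
# takes an exclusive lock before appending, so concurrent writers do not
# interleave records. The mount point and file name are assumptions.
import fcntl

SHARED_FILE = "/mnt/nas_share/app.log"   # hypothetical file on a mounted share

def append_record(text: str) -> None:
    with open(SHARED_FILE, "a") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)    # block until we hold an exclusive lock
        try:
            f.write(text + "\n")
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)

append_record("client A was here")
```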

Block-level protocols export block devices that are seen by the client as local disks. These protocols include FCP and iSCSI ( amongst others ). FCP is a storage-specific protocol designed with high-throughput, low-latency storage networking in mind. It runs over a fibre network ( in the core ) and has numerous features including zoning, multi-link aggregation with multi-path fail-over, and core-based data movers. iSCSI encapsulates the SCSI storage protocol over IP, providing a flexible and cost-effective method of linking remote block devices to clients. Block protocols are by nature not shareable, hence the specific support for zoning in FCP fabrics and host-group LUN mappings in most storage devices. Windows systems in particular will overwrite or corrupt data on LUNs connected to more than one client unless there is specific support in the OS for this ( eg. cluster quorum disks ). To give more than one client simultaneous access to a block device you need a cluster-aware file-system ( eg. GFS, Lustre or others ). SANs do NOT provide for data sharing – they only provide physical device sharing, something which is commonly misunderstood.

The rule is: if you need shared data access, use a file-level protocol; for one-to-one, high-performance device access, use a block-level protocol.

The virtualisation section

When deploying virtualisation, one needs to take into consideration the number of virtual server hosts ( and VMs ) per storage device as well as the type of loads that will be running. For loads that require low latencies, it’s important to look at storage that provides high I/O rates as well as non-network storage protocols – this will typically be FC DAS units or SAN-type storage with multi-path fail-over and aggregated links at the virtual server. For more moderate and general loads, one can use SAS/SCSI-connected DAS, entry-level SAN or even iSCSI/NFS solutions ( higher latencies and lower performance ). You can also decide between SATA, SAS and Fibre HDDs within the storage subsystems, or use a combination of these for tiered-performance VMs.

Storage virtualisation solutions make these types of decisions a snap, as they inherently implement the concept of storage performance tiering, making it simple to provision logical storage to a client from predefined performance pools ( eg. SATA disk for high-capacity, low-performance requirements and FC disk for high-performance, lower-capacity requirements ). Something as simple as individual HDD rotational speed ( eg. 10K rpm vs 15K rpm ) can also make a big difference in terms of latency and I/O rates. SV solutions also include features specifically designed for server virtualisation, like CDP of live VMs.
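
The tiering decision itself boils down to a simple rule: pick the cheapest pool that still meets the workload’s latency requirement. The sketch below is illustrative only – the pool names and latency thresholds are assumptions, not vendor terminology.

```python
# Illustrative tier-selection rule: pick the cheapest pool that still meets a
# VM's latency requirement. Pool names and thresholds are assumptions.

POOLS = {
    "fc_15k":   {"max_latency_ms": 5,  "cost": "high",   "typical_use": "databases, latency-sensitive VMs"},
    "sas_10k":  {"max_latency_ms": 10, "cost": "medium", "typical_use": "general application VMs"},
    "sata_7k2": {"max_latency_ms": 20, "cost": "low",    "typical_use": "file servers, archive, test VMs"},
}

CHEAPEST_FIRST = ("sata_7k2", "sas_10k", "fc_15k")

def pick_pool(required_latency_ms: float) -> str:
    """Return the cheapest pool whose worst-case latency meets the requirement."""
    for name in CHEAPEST_FIRST:
        if POOLS[name]["max_latency_ms"] <= required_latency_ms:
            return name
    return "fc_15k"   # nothing cheaper qualifies, so fall back to the fastest tier

print(pick_pool(15))   # -> 'sas_10k'
print(pick_pool(4))    # -> 'fc_15k'
```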

As a guideline, the following is an increasing performance/cost list on which you can tentatively base your decisions:

  • NFS NAS
  • iSCSI NAS
  • accelerated iSCSI NAS
  • UW SCSI/SAS DAS
  • Fibre DAS
  • enterprise iSCSI NAS
  • FC SAN with SATA storage
  • FC SAN with SAS storage
  • FC SAN with Fibre storage
  • FC SAN with multiple virtualised storage subsystems

The availability/backup section

While backup and recovery are critical in any organisation, they are doubly so when running virtualised systems. The failure of a single virtual server host will bring down all the VMs running on it, resulting in far more issues than the failure of a non-virtualised system. The way to solve this is to make use of HA virtualisation solutions ( eg. VMware HA, clustered RHEL 5.4 with KVM and GFS, or XenServer HA ) – at the heart of all of these is centrally available storage à la SAN, multi-controller DAS or shared iSCSI/NFS.

VMware’s VMFS is a cluster-aware file-system that can span multiple host servers and be provisioned from any block-level source ( ESX hosts can now also use NFS datastores directly ). Red Hat’s GFS, although not specifically designed with virtualisation in mind, provides similar features.

Most backup software vendors provide agents/plugins to back up VMware VM contents ( via a standard file-system agent ), entire standalone VMs, or VMs via the VCB proxy. For KVM and Xen, you can make use of standard file-system backup solutions, snapshot solutions at the OS level ( eg. LVM ) or storage-level snapshots. To go one step further, you can combine this with synchronous mirroring at the storage level to provide remote active HA hosts or passive standby hosts.
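
As a hedged sketch of the LVM route for a KVM/Xen guest whose disk lives on a logical volume – the volume group, LV names, snapshot size and backup path below are placeholders, and ideally you would pause or quiesce the guest first for a consistent image:

```python
# Sketch of an OS-level snapshot backup for a KVM/Xen guest stored on LVM.
# VG/LV names, snapshot size and backup path are placeholders; pause or
# quiesce the guest beforehand for a consistent image.
import subprocess

VG, LV = "vg_vms", "vm_web01"
SNAP = f"{LV}_snap"

# Take a point-in-time snapshot, stream it to a backup file, then drop it.
subprocess.run(["lvcreate", "--snapshot", "--size", "1G",
                "--name", SNAP, f"/dev/{VG}/{LV}"], check=True)
try:
    with open(f"/backup/{LV}.img", "wb") as out:
        subprocess.run(["dd", f"if=/dev/{VG}/{SNAP}", "bs=4M"], stdout=out, check=True)
finally:
    subprocess.run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"], check=True)
```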

It’s critical to back up both the VM and its contents using your chosen methods – VM level for catastrophic VM failure, and VM contents ( file level ) for individual file restoration. In addition, if you are using your virtualisation vendor’s storage functions, make sure you follow the recommended capacity planning w.r.t. storage – running out of space on a host file-system will likely take your VMs and the host system down.

Fragmentation

Remember that with virtualisation you will have file fragmentation at two levels – the VM disk file on the host as well as the file-system inside the guest OS. This can play havoc with performance, and if using Windows as a virtualisation host it’s important to perform defragmentation regularly ( Diskeeper has a good on-line tool ). Unix file-system based host virtualisation solutions tend to fare much better in this regard, but it’s still important to keep as much free space as possible on the file-system holding the VM files so that fragmentation does not become an issue.

Conclusion

This is a complex topic which I’ve barely touched on in this article, however I hope I’ve given you some ideas of where to start. Try to keep it simple and logical, and most of all keep the data safe.
