Cloud storage is increasingly becoming a popular storage virtualisation solution to meet the continuing demand for more storage and computing capacity. However, many organisations lack good metrics for their storage performance requirements, so the Service Level Agreements (SLAs) they sign for storage cloud services can be somewhat disingenuous, warns Hermon Yonhannes, senior storage engineer at ControlCircle.
The first step to understanding the storage requirements of any organisation is to undertake a performance analysis or audit of those requirements, benchmarking I/O to identify any under-specification, throughput bottlenecks or resource contention.
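As a rough illustration of that kind of benchmarking, the sketch below (plain Python, with a hypothetical file name and block size) times sequential writes and reports throughput; a real audit would use a dedicated tool such as fio or Iometer against the actual storage path, and would cover random I/O and reads as well.

```python
import os
import time

def benchmark_write(path, total_mb=16, block_kb=256):
    """Write total_mb of data sequentially; return (MB/s, write ops/s)."""
    block = b"\0" * (block_kb * 1024)
    writes = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(writes):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed, writes / elapsed

mb_s, ops_s = benchmark_write("bench.tmp")
print(f"sequential write: {mb_s:.1f} MB/s, {ops_s:.0f} ops/s at 256 KB blocks")
```

Numbers gathered this way on the existing environment give you a baseline to hold the cloud provider's storage to.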
Until recently, a barrier existed to placing many of the most demanding applications within a virtualised infrastructure. This barrier has now been lifted with the advent of storage Service Level Assurance (SLA) functionality in cost-effective packaged SAN solutions.
Don’t make provisioning your NAS or SAN storage on the cloud a guessing game. Configure and size storage resources first for optimal I/O performance, then for storage capacity. This means taking into consideration the storage resources necessary to handle your volume of traffic as well as the total capacity. Of course, with your storage in the cloud, you can grow on demand and have the elasticity to expand non-disruptively.
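A back-of-envelope sketch of sizing for I/O first and capacity second; every figure below is a hypothetical rule of thumb from an imagined audit, not vendor data:

```python
import math

# Hypothetical figures from a storage audit
required_iops = 9000   # measured peak application IOPS
required_tb = 20       # usable capacity needed

# Rule-of-thumb per-spindle figures (assumptions, not vendor specs)
disk_iops = 180        # roughly one 15k rpm spindle
disk_tb = 0.6          # usable TB per spindle after RAID overhead

disks_for_performance = math.ceil(required_iops / disk_iops)   # 50
disks_for_capacity = math.ceil(required_tb / disk_tb)          # 34
disks_needed = max(disks_for_performance, disks_for_capacity)  # performance wins
print(disks_for_performance, disks_for_capacity, disks_needed)
```

Here the I/O requirement, not raw capacity, dictates the spindle count — sizing on capacity alone would leave the configuration 16 disks short at peak load.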
Understand how the storage is being used. Examining the throughput characterisation of the physical environment before virtualisation can help you predict what throughput each workload will generate in the virtual environment.
And by establishing your benchmarks, the I/O workload can be distributed across the right types of storage, from high-throughput Fibre Channel for high-performance, low-latency applications to SATA disks for less demanding workloads. Monitor your applications’ I/O throughput and capacity before moving to a virtualised production environment.
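That distribution rule can be as simple as a placement function; the thresholds below are illustrative assumptions, not recommendations:

```python
def choose_tier(latency_target_ms, peak_iops):
    """Toy placement rule: latency-sensitive or busy workloads go to
    Fibre Channel; the rest can live on cheaper SATA."""
    if latency_target_ms <= 5 or peak_iops > 1000:
        return "FC"
    return "SATA"

print(choose_tier(2, 3000))   # OLTP database: low latency, busy -> FC
print(choose_tier(20, 50))    # archive share: relaxed, quiet -> SATA
```

The benchmark data from the earlier audit supplies the latency and IOPS inputs per workload.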
Cloud storage brings many opportunities for enterprise and SOHO clients. Accurate storage sizing is not only essential to meet an enterprise’s performance and capacity demands but is also good for its bottom line, as the upfront investment only matches current needs. Then, as the environment grows, on-demand storage can be expanded to meet the growing requirements.
Storage is one of the four key resources within a virtualised environment, alongside CPU, memory and network, explains Warren Reid, EMEA marketing director at Dot Hill, whilst warning of the potential pitfalls.
Many companies, especially SMBs, may be tempted to use direct-attached storage (DAS) arrays to host storage virtualisation. While DAS may provide good performance, it should only be considered as a proof of concept ahead of implementing a true shared SAN. DAS presents a single point of failure and fails to fully abstract the hardware from the virtual OS.
Be sure to check that your chosen shared storage infrastructure is listed on the hardware compatibility matrix of your chosen virtual OS provider.
Companies should look carefully at external storage virtualisation solutions to ensure they do not fall victim to vendor lock-in. The most flexible solutions offering the biggest ROI will be those which support a heterogeneous storage environment.
With servers becoming ever more powerful, beware of hosting many storage I/O-intensive applications on one server, which may quickly saturate the available storage bandwidth. Make use of load-balancing techniques and policies, and consider which applications could make best use of today’s high-performance networks such as 8Gb FC and 10Gb iSCSI.
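A quick way to spot that saturation risk is to compare aggregate application demand against a link's usable bandwidth; the bandwidth figures and per-application throughputs below are rough assumptions for illustration:

```python
# Approximate usable bandwidth in MB/s (line rate minus protocol overhead)
LINKS = {"8Gb FC": 800, "10Gb iSCSI": 1100, "1Gb iSCSI": 110}

def utilisation(link, per_app_mb_s):
    """Ratio of aggregate application demand to the link's usable bandwidth."""
    return sum(per_app_mb_s) / LINKS[link]

apps = [220, 180, 310, 150]  # hypothetical peak MB/s of four consolidated apps
print(f"1Gb iSCSI: {utilisation('1Gb iSCSI', apps):.1f}x")  # hopelessly saturated
print(f"8Gb FC: {utilisation('8Gb FC', apps):.2f}x")        # still just over 1x
```

Even the 8Gb FC link is slightly oversubscribed at peak here, which is the cue to balance the load across a second path or move one of the VMs to another host.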
Having the right infrastructure to support server virtualisation helps businesses save time and effort. As a bonus, organisations experience improved service levels and greater availability of applications for everyone, says Kevin Epstein, VP of Marketing at Drobo.
Historically only large enterprises could afford SAN storage and virtualisation software with mobility and high availability (HA) features. Millions of smaller organisations know they need it but think they can’t afford it - until now.
Operational efficiency is harder to quantify, but no less valuable. When an IT manager can do more in the same amount of time, the business benefits. Especially the IT manager supporting everything IT at a small company! While previously storage technology was cost prohibitive, consumerisation is bringing new storage options for SMBs that support server virtualisation.
Server Virtualisation Technologies
In addition to free versions, all of the popular server virtualisation technologies now have low-cost options with sophisticated features for high availability. Built especially for small business IT, VMware vSphere Essentials 4.1 Kits are all-in-one solutions that combine virtualisation for up to three physical servers (up to two processors each) with centralised management capabilities.
Many of these capabilities are also available from other technology vendors. One example is the Advanced Edition of XenServer from Citrix, which has sophisticated features for HA in a licensing model with a low fixed cost per server for as many servers as you have. Microsoft, with Windows Server 2008 R2 with Hyper-V, also offers a server virtualisation solution that is very affordable for small business.
A shared SAN enables all virtual servers to see the same storage, allowing virtual machines (VMs) and applications to move across redundant infrastructure. This architecture opens up tremendous flexibility: IT managers can move applications without impacting user access or uptime, which is a life saver when it comes to maintenance. Move a VM and application, perform maintenance, and migrate back, without users even noticing. Without a SAN, VMs are limited to one physical server and direct-attached storage, and are subject to downtime for maintenance or extended downtime in the event of a failure.
SAN storage is a must if you want to maximise mobility and availability for VMs. Fibre Channel SANs have a high entry cost preventing companies with smaller configurations and budgets from participating. iSCSI can be lower in price, but many iSCSI arrays are still expensive by most measures. However, new storage devices are available today that can represent a saving of about 35%.
With all the benefits of virtualisation, many organisations have been flocking to virtualise their server estates. However, for many companies, there has been a lack of awareness around the implications that virtualisation will have on their infrastructure, warns Evan Unrue, Solutions Architect, Magirus.
When backing up by traditional methods, backups become more intensive per physical server, because each host now carries many virtual machines and there is another layer of abstraction to consider when sizing storage. So how do we combat these challenges, pre-emptively or retrospectively?
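One way to quantify that extra intensity is to work out the sustained rate the backup window demands per host; the data set size and window below are hypothetical:

```python
def backup_rate_mb_s(data_tb, window_hours):
    """Sustained MB/s needed to move data_tb within the backup window."""
    return data_tb * 1024 * 1024 / (window_hours * 3600)

# A host that once ran a single workload may now carry a dozen VMs:
rate = backup_rate_mb_s(6, 8)        # 6 TB of VM data, 8-hour window
print(f"{rate:.0f} MB/s sustained")  # from one physical server, every night
```

Around 218 MB/s sustained from a single host is a very different load profile from backing up one physical workload, and it lands on the same storage network the production VMs are using.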
It is important for storage and other technology vendors to integrate with virtualisation technologies as closely as possible. For example, they should address aspects such as mapping visibility between virtual machines and physical resources; management and provisioning of resources to the hypervisor; and optimisation of how resources are delivered. For instance, VMware has done this using its vStorage API set. It allows other vendors to plug into some of the inner workings of the hypervisor for certain offload tasks (using technologies like Storage I/O Control) and other tasks around data protection, management and storage network load balancing. One key criterion is to evaluate a storage vendor against how well they integrate with the virtualisation technology deployed. Doing so will generally reduce the effort involved in deploying and managing the SAN.
Also, consideration should be given to the fact that virtual servers still have the same performance requirements as in the physical world. One obvious peak is during the backup window. This load can be shifted off production servers: with VMware, for example, VADP (vStorage API for Data Protection) may be used to offload the resource spike to a non-production server.
When looking towards the production storage element, it is important to stress that although servers are now virtual, the application requirements for I/O remain the same. Any application best practices which should be applied in the physical server world also apply in the virtual world. This is to facilitate performance and availability requirements. Furthermore, it is worth exploring the technologies storage vendors are deploying to effectively utilise Solid State Disk (SSD), for example using SSD to extend cache or to tier storage volumes between SSD and magnetic disk. These methods are not quite “one size fits all” yet and there is still some consideration around how they are deployed. However, in most cases, the storage deployment for virtual infrastructure is less painful than by traditional means.
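The tiering idea can be sketched as a greedy placement by I/O density; the hotness threshold, volume names and figures below are invented for illustration, and real arrays make this decision automatically at sub-volume granularity:

```python
def tier_volumes(volumes, ssd_capacity_gb, hot_iops_per_gb=1.0):
    """Greedy sketch: place the hottest volumes (IOPS per GB) on SSD
    until the SSD tier fills; everything else stays on magnetic disk."""
    placement, free = {}, ssd_capacity_gb
    for name, size_gb, iops in sorted(volumes, key=lambda v: v[2] / v[1], reverse=True):
        if iops / size_gb > hot_iops_per_gb and size_gb <= free:
            placement[name] = "SSD"
            free -= size_gb
        else:
            placement[name] = "HDD"
    return placement

vols = [("db-log", 50, 2000), ("db-data", 400, 1200), ("fileshare", 800, 80)]
print(tier_volumes(vols, ssd_capacity_gb=500))
```

The small, hot database volumes soak up the expensive SSD capacity, while the large, quiet file share stays on cheap spindles — the economics the vendors' tiering features are chasing.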
I would also stress that implementing some degree of change control helps no end. Server sprawl is much more prominent in the virtualised environment due to ease of deployment. An unplanned virtual machine here and there can end up taking resource away from critical systems that need it most.
Ultimately, although the servers are sitting on a hypervisor, they are not an infinite resource. Always focus on the demands of your applications. Account for peaks of activity on resource intensive operations and maintain a healthy level of reporting/monitoring. This will enable you to react to bottlenecks within your infrastructure. Careful vendor selection is also important when looking at supporting the infrastructure; some storage vendors integrate with virtualisation vendors better than others.
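The reporting/monitoring loop can start as simply as checking each sample against thresholds; the metric names and limits below are illustrative and should be tuned to your own infrastructure:

```python
# Illustrative alert thresholds, not recommendations
THRESHOLDS = {"latency_ms": 20, "queue_depth": 32, "utilisation": 0.85}

def find_bottlenecks(sample):
    """Return the metrics in one monitoring sample that breach their thresholds."""
    return [name for name, limit in THRESHOLDS.items() if sample.get(name, 0) > limit]

sample = {"latency_ms": 34, "queue_depth": 12, "utilisation": 0.91}
print(find_bottlenecks(sample))  # ['latency_ms', 'utilisation']
```

Flagging latency and utilisation breaches as they happen is what lets you react to a bottleneck before the applications' users do.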
Tags: Storage Virtualization