HOT TOPIC – STORAGE VIRTUALISATION

Once companies have implemented server virtualisation, the pressure is on to virtualise the underlying storage infrastructure as well, so that optimum IT efficiency is achieved. Vendors offer their views on how to undertake this process.

 

Date: 8 Aug 2011

Essentials to consider when planning a storage virtualization strategy

“A virtualised storage environment should improve storage utilisation and efficiency, and reduce infrastructure and data management costs, in a practical, easy-to-use solution. To achieve these objectives, consideration needs to be given to a range of factors, from technical capability to reporting tools, remote access and disaster recovery provisioning. Support is as important as the technology in this environment.”
Niall Smith, Sales Director, UIT
What should a virtualised storage environment achieve?
· Improved storage utilisation.
· Improved storage efficiency.
· Reduced infrastructure costs.
· Reduced data management costs.
· Ease of use.
In order to meet these requirements, consideration needs to be given to:
· High-density storage, to reduce power consumption and the floor space required.
· Centralised storage management software, providing a common management policy for all storage units.
· A centralised reporting system providing universal reports for the entire virtualised storage installation.
· Clear reporting of faults, with good diagnostic tools to reduce repair time.
· Wide support for various operating systems – Windows, VMware, Linux, Mac OS, Solaris etc. – to accommodate current and future application environments.
· The ability to manage multiple classes of media within one virtualised system – SATA, SAS, SSDs. This allows hierarchical storage strategies to be implemented which provide higher performance at a lower cost.
· De-duplication, to reduce the size of the data to be stored and therefore the capacity required.
· Thin provisioning, which provides virtual capacity that can be made available as required; this can significantly reduce hardware and media costs and improve utilisation of physical media.
· AutoMAID, to reduce power consumption by spinning down HDDs when not required – particularly relevant for archived data.
· Remote replication and disaster recovery, to ensure data integrity in the event of a catastrophic breakdown.
· Compression, which may also be beneficial if the data is to be migrated to a remote site or to cloud storage.
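The de-duplication point in the list above can be illustrated with a minimal sketch: split incoming data into fixed-size blocks, fingerprint each block, and store each unique block only once. This is a simplified illustration, not any particular vendor's implementation – real products typically use variable-size chunking and handle hash collisions and metadata persistence.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed-size blocks; real systems often use variable chunking


def dedupe(data: bytes):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): store maps fingerprint -> block contents,
    recipe is the ordered list of fingerprints needed to rebuild the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # identical blocks are stored only once
        recipe.append(fp)
    return store, recipe


def restore(store, recipe):
    """Rebuild the original data from the block store and the recipe."""
    return b"".join(store[fp] for fp in recipe)


# Ten identical logical blocks dedupe down to one physical block.
data = b"A" * BLOCK_SIZE * 10
store, recipe = dedupe(data)
print(len(recipe), len(store))  # 10 1
```

The capacity saving is the gap between the logical block count (the recipe) and the unique block count (the store) – here a 10:1 reduction on perfectly repetitive data.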

 

 


Bob Quillin, senior director of product marketing, SolarWinds:

Organisations planning a storage management strategy need to run an assessment of existing and future resources and needs. Many organisations looking at virtualising business-critical applications, for example, need to understand and predict new storage requirements. Critical questions to ask in order to fully understand those requirements include: when will I run out of physical storage capacity? How many virtual machines (VMs) can I fit into my existing SAN? Where are potential storage I/O bottlenecks likely to be? And what storage can I reclaim now due to wasted space and VM sprawl?
As part of this capacity planning, an assessment needs to be carried out of both existing and planned resources to decide how VMs will be distributed for best performance and ease of management.
In order to avoid any potential bottlenecks, businesses should look at deploying storage and virtualisation management solutions. By implementing these solutions, administrators can not only discover and map existing VMs down to the physical storage resources being used (e.g. which VMs are running, where bottlenecks are, and how resources such as CPU, memory and storage are being used), but also carry out better analysis and forecasting. The ability to conduct historical forensics on one’s storage environment allows for more sophisticated capacity planning.
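The "when will I run out of capacity" question above reduces to simple arithmetic once a growth rate is known. A minimal sketch, assuming a single constant daily growth rate (real capacity-planning tools fit trends to historical samples instead):

```python
def days_until_full(capacity_gb: float, used_gb: float, daily_growth_gb: float) -> float:
    """Estimate the number of days until a datastore fills up.

    Assumes linear growth at a constant daily rate - an illustrative
    simplification; production tools forecast from historical trend data.
    """
    if daily_growth_gb <= 0:
        return float("inf")  # no growth: the datastore never fills
    free_gb = capacity_gb - used_gb
    return free_gb / daily_growth_gb


# A 10 TB SAN that is 70% full and growing by 20 GB/day has 150 days left.
print(round(days_until_full(10_000, 7_000, 20)))  # 150
```

The same free-space figure also answers the "how many more VMs fit" question: divide it by the average storage footprint per VM.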
Virtualisation has already helped IT organisations around the world reduce server costs through consolidation, but while it offers a number of benefits and efficiencies, there are potential pitfalls organisations should look to avoid.
Some of these pitfalls stem from a lack of visibility into allocated or shared storage. As VMs multiply, customers begin to see an “I/O blender” effect, where traditional, predictable I/O patterns are thrown out the window. With virtualisation, these new data streams blend together to create new I/O dynamics and hot spots. End-to-end visibility in these situations is critical – in particular, the ability to map from VM down to spindle and trace real VM problems straight to their physical storage sources. Isolating storage bottlenecks as the demand for VMs increases is an important new core competency for IT, as is updating your storage tiering by prioritising storage on a workload or virtualised-application basis.
Too often, administrators are solving these problems only as they arise. In order to avoid the potential pitfalls of any server virtualisation strategy, administrators need to be able to proactively monitor their IT infrastructure and diagnose the history of their virtualised environments. More importantly, proactive capacity forecasting will help anticipate virtualisation hot spots before they become operational problems.
Businesses should look at tools that provide real-time visibility and monitoring of virtualisation performance and proactive planning of storage capacity requirements. As the rate and breadth of virtualisation deployment across companies accelerates, it is crucial to have visibility from the virtualisation layer right down to the physical back end. Solving the operational challenges associated with virtualisation’s impact on storage infrastructure requires tools that enable visibility, troubleshooting and planning, in order to ensure optimised capacity and performance across the virtual enterprise.
 

 

 

Eric Burgener, VP Product Management at Virsto Software


Virtual computing, regardless of whether it's focused around virtual servers or virtual desktops, presents unexpected storage challenges, primarily because of the types of I/O patterns generated. If you have experience sizing storage configurations with physical servers, you should expect to see your storage perform more slowly in virtual environments by at least 30%, and in some cases by up to 50%. This has implications for the storage configuration needed to meet your performance requirements, and regardless of how you go about resolving it, it will mean significantly more cost. Make sure you test performance in your environment up front to accurately set storage budgets.

ON WHAT IS DIFFERENT ABOUT I/O PATTERNS IN VIRTUAL ENVIRONMENTS

Virtual computing environments generate significantly more random, more write-intensive I/O patterns than physical server environments. This is because instead of having a single application running on a dedicated server with dedicated storage space, you have multiple virtual machines, each with its own independent I/O stream, generating the workload that must be handled by a single instance of a hypervisor on a physical host. If you're familiar with how storage technologies operate, you know they tend to perform at their worst the more random and write-intensive an I/O workload is. This is why storage performs more slowly in these environments.
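This blending effect can be made concrete with a toy simulation: each VM writes sequentially within its own region of the shared array, but the hypervisor services the VMs in an arbitrary order, so the combined stream the storage sees jumps between regions. This is an illustrative sketch, not a model of any real hypervisor's scheduler.

```python
import random


def blended_offsets(n_vms: int = 4, writes_per_vm: int = 5,
                    region_size: int = 1000, seed: int = 0) -> list:
    """Interleave several per-VM sequential write streams.

    Each VM writes to consecutive offsets within its own disk region,
    but the service order across VMs is arbitrary, so the combined
    stream that reaches the array is no longer sequential.
    """
    rng = random.Random(seed)
    cursors = {vm: 0 for vm in range(n_vms)}  # next sequential offset per VM
    pending = [vm for vm in range(n_vms) for _ in range(writes_per_vm)]
    rng.shuffle(pending)  # arbitrary interleaving of the VMs' requests
    stream = []
    for vm in pending:
        stream.append(vm * region_size + cursors[vm])
        cursors[vm] += 1  # strictly sequential within each VM
    return stream


stream = blended_offsets()
# Each VM's own offsets are strictly increasing, yet the combined
# stream is not sorted - the "I/O blender" in miniature.
print(stream)
```

Filtering the output back out per VM recovers perfectly sequential runs, which is exactly the visibility problem described above: the randomness exists only in the blended view the array sees.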

ON HOW TO SOLVE THE PROBLEM

Virsto Software implements a software-based virtual storage layer that turns the extremely random I/O pattern into a 100% sequential pattern, allowing the underlying storage, regardless of what type it is (spinning disk, SSD, etc.), to perform at its best. We typically see performance improvements of up to 300% out of FC disks and up to 1,000% on high-capacity SATA disks. At roughly £1,700 per physical host, regardless of the number of virtual machines or desktops, this is a much more cost-effective way to address the problem than buying additional disk spindles, upgrading to a higher-performing class of storage, or adding SSD.
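The general technique behind such a layer is log-structured writing: random logical writes are appended sequentially to a log, and a mapping table records where each logical block currently lives. The sketch below illustrates that generic idea only – it is not Virsto's actual implementation, which would also need crash-safe metadata and garbage collection of stale log entries.

```python
class LogStructuredLayer:
    """Turn random logical block writes into sequential physical appends.

    A generic sketch of the log-structured technique: the physical medium
    is written strictly append-only, and a mapping table redirects each
    logical block number (LBN) to its latest location in the log.
    """

    def __init__(self):
        self.log = []      # physical medium: append-only, hence sequential
        self.mapping = {}  # logical block number -> index in the log

    def write(self, lbn: int, data: bytes) -> None:
        self.mapping[lbn] = len(self.log)  # remap LBN to the new location
        self.log.append(data)              # the physical write is sequential

    def read(self, lbn: int) -> bytes:
        return self.log[self.mapping[lbn]]


layer = LogStructuredLayer()
# Random logical writes (note block 9 is overwritten)...
for lbn, data in [(9, b"x"), (2, b"y"), (9, b"z"), (5, b"w")]:
    layer.write(lbn, data)
# ...land on the medium strictly in arrival order, and reads still
# see the latest version of each block.
print(layer.read(9))  # b'z'
```

The overwrite of block 9 shows the trade-off: the old copy at log position 0 is now dead space, which is why real log-structured systems pair this scheme with background garbage collection.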

 

 

Understanding storage virtualisation strategies
By Matt McPhail, global director of systems engineering, Scale Computing
Businesses of all shapes and sizes are rapidly adopting virtualisation and benefiting from reduced operating costs, greater productivity and increased efficiency. However, when choosing a storage virtualisation strategy, businesses need to understand that this demands a fundamental change and that they have to prepare accordingly. SMBs need to consider the impact of implementing a storage virtualisation strategy, including how the change will benefit the business, employees and workload.
Rolling out a virtualisation strategy will see businesses reduce time spent on routine IT administrative tasks, increase application availability and shorten disaster recovery time. This will significantly improve productivity, as it allows the organisation to focus more on its core responsibilities rather than having to deal with recurring IT maintenance. IT managers should leverage clustered storage virtualisation products to benefit further from the flexibility of server virtualisation. Therefore, when SMBs are implementing a storage virtualisation strategy, storage and virtualisation need to be considered together, as one cannot exist without the other.
Virtualisation strategies have potential pitfalls if they are not carried out and monitored correctly by IT managers. A common problem is the bottleneck issue – when a large amount of virtualisation data places severe strain on storage performance and capacity. If this problem arises, rather than following the traditional method of ripping out the server and replacing it with faster controller-based storage, businesses can eliminate the problem by using scale-out storage products from companies like Scale Computing.
Scale Computing’s Intelligent Clustered Operating System™ (ICOS) portfolio allows businesses to simply add nodes to their storage as and when they need to. There is no downtime or service disruption to the storage architecture, which preserves data access speeds and easily accommodates larger workloads, adapting to changing virtualisation resource requirements and evolving business needs.
IT leaders need to review the tools and best practices for building an effective virtualisation strategy, with a particular focus on the role storage has for this success. Using scale-out storage solutions will see businesses achieve greater efficiencies through their virtualisation strategy.
