
Optimising the WAN to deliver unified virtual storage

By Jeff Aaron, VP marketing, Silver Peak Systems.


Date: 21 May 2012

It has been argued that those who work in storage are from Mars while network managers are from Venus. Despite the interdependencies of the two functions, each often has its own language and metrics, which makes it difficult to communicate across the two silos. At the same time, data volumes continue to grow while backup distances increase, which makes the underlying network infrastructure more critical than ever to key virtual storage initiatives.

Poor WAN performance can lead to high disaster recovery costs and missed Recovery Point Objectives (RPOs). It can also make it difficult for remote users to access centralised storage easily. This threatens the success of virtual storage centralisation projects, and it can result in increased expenditure as organisations try to make up for limited replication throughput and poor connectivity by buying more WAN capacity or upgrading servers.

Virtualisation complicates matters even more. How do you move virtual machines between data centres when bandwidth is limited, WAN distances are long, and poor WAN quality results in limited data throughput?

WAN optimisation presents a possible solution. By manipulating IP packets to overcome common network challenges, such as limited bandwidth, high latency and packet loss, WAN optimisation maximises data throughput while minimising disaster recovery costs. As such, it is frequently bundled with SAN/NAS replication, remote backup and VM mobility as a key enabler for these solutions.
Meeting RPO targets

When replicating between two geographically dispersed locations, the achievable recovery point is governed by the amount of data that can be transferred over the WAN in a given period of time. If the WAN limits throughput, RPO suffers.
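This relationship is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the function name and all figures are illustrative assumptions rather than anything from the article:

```python
def replication_lag_hours(daily_change_gb: float, effective_mbps: float) -> float:
    """Hours needed to ship one day's worth of changed data over the link.

    If this exceeds 24, the link can never catch up: the replication
    backlog grows and the achievable recovery point drifts further
    behind every day.  All inputs are hypothetical planning figures.
    """
    bits_to_send = daily_change_gb * 8 * 1000**3   # GB -> bits
    seconds = bits_to_send / (effective_mbps * 1e6)
    return seconds / 3600

# 500 GB of daily change over a 45 Mbps effective link takes ~24.7 hours,
# so the link cannot meet any daily RPO; at 100 Mbps it takes ~11.1 hours.
```

The point of the exercise is that "effective" throughput, not the purchased circuit rate, is what decides whether an RPO is achievable.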

Three network elements that impact replication throughput are bandwidth, latency and packet loss. The relationship between the three is complicated, with some having a greater impact than others in any given network environment. Adding more bandwidth, for example, will not always make a difference to storage virtualisation projects if there is too much latency due to extremely long distances. Similarly, all the bandwidth in the world will not matter if packets are being dropped or delivered out of order due to congestion, as is often the case in MPLS and cloud environments.
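The interplay of latency and loss can be made concrete with the well-known Mathis approximation for steady-state TCP throughput; the constant and figures below are the standard textbook values, used here purely for illustration:

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation for a single TCP flow:
    throughput ~ (MSS / RTT) * (C / sqrt(loss)), with C ~ 1.22 for Reno-style TCP.
    """
    bits_per_rtt = mss_bytes * 8 / (rtt_ms / 1000.0)
    return bits_per_rtt * (1.22 / math.sqrt(loss_rate)) / 1e6

# A single flow over a 100 ms path with 0.1% loss tops out near 4.5 Mbps,
# regardless of how much raw bandwidth is purchased; cutting the RTT to
# 10 ms raises that ceiling roughly tenfold.
```

This is why simply buying a fatter pipe often fails to improve replication: the latency and loss terms cap the throughput of each flow first.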

All three of these challenges need to be addressed to optimise replication throughput across a WAN, which helps ensure successful storage virtualisation while meeting RPO targets. This can be achieved with three real-time optimisation techniques. Network Acceleration mitigates latency using various protocol acceleration techniques; Network Integrity fixes packet delivery issues and lets enterprises prioritise key traffic to ensure it is allocated the necessary resources; and Network Memory maximises bandwidth utilisation using compression and WAN deduplication. The result is that more data can be accessed and transferred between source and target locations in less time and across longer distances, making it easier for organisations to enjoy the benefits unified virtual storage has to offer.
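The deduplication idea behind techniques like Network Memory can be sketched in a few lines. This is a toy version under simplifying assumptions: it uses fixed-size chunks and an in-memory hash set, whereas production WAN optimisers use content-defined chunking and disk-backed stores synchronised between appliances:

```python
import hashlib

def deduplicate(data: bytes, seen: set, chunk_size: int = 4096) -> list:
    """Emit ('ref', digest) for chunks the far end has already stored,
    and ('data', chunk) otherwise.  Only new chunks cross the WAN."""
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            out.append(("ref", digest))      # cheap reference, not payload
        else:
            seen.add(digest)
            out.append(("data", chunk))      # first sighting: send in full
    return out
```

A repeated backup of largely unchanged data then costs only the hash references plus the handful of chunks that actually changed.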

By maximising data throughput over the WAN, more data can be protected in less time. Throughput is equally critical in the reverse direction, i.e. when data needs to be recovered from the target back to the source. Once again, if bandwidth, latency and packet loss are problematic, data throughput, and consequently the ability to access data on a virtualised server, will suffer.

Business continuity concerns
At the same time, it is increasingly common to see multiple data centres configured in a warm/warm or hot/hot arrangement. In the event one data centre goes down, the goal is to get users up and running on the secondary data centre as quickly as possible. However, when data centres are separated by long geographical distances, a sensible strategy for surviving catastrophic disasters, users end up traversing a WAN to access stored information on virtualised servers in secondary locations. If the network between the data centres is not performing well, failover is not seamless and can lead to delays in accessing required data. As a result, deploying WAN optimisation to enable initiatives such as storage virtualisation is becoming increasingly strategic to business continuity objectives.

In the past, enterprises were effectively limited in where they could back up data due to network constraints. More specifically, the common guidance was not to replicate over links with more than 80 milliseconds of latency, and to replicate only over expensive dedicated lines, where packet delivery issues are less common. This placed an enormous burden on many enterprises, often requiring them to deploy and manage expensive data centres with dedicated networks for storage.

By overcoming bandwidth, latency and packet loss challenges across the WAN, optimisation solutions ensure that remote users can access all of their applications and data with LAN-like performance, regardless of where the data centre is located. Deploying WAN optimisation then becomes a strategic investment, acting as an insurance policy for seamless business continuity.

Battling bandwidth
In many instances, bandwidth is extremely expensive or difficult to obtain. Many storage vendors have therefore incorporated compression and deduplication technology into their replication solutions to help alleviate this burden. When the network is dedicated to storage alone, these technologies work well. However, when storage traffic shares the WAN with other enterprise traffic, such as file, email, web and video, deduplication within the storage system alone is simply not enough; it also needs to take place within the network.

Ultimately, organisations need to ensure that their underlying network infrastructure is stable enough to cope with the extra burden of increased data flowing back and forth, and avoid jeopardising strategic storage virtualisation and centralisation projects. By deduplicating traffic at the IP network layer, non-storage traffic is optimised as well, yielding even greater bandwidth savings and making data backup on a converged network that much more affordable.
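A toy model of the combined effect, under the simplifying assumptions that repeats are exact-match chunks and that compression runs over whatever is left; the function name and sample data are illustrative, not vendor figures:

```python
import zlib

def fraction_saved(chunks: list) -> float:
    """Fraction of WAN bytes avoided by dropping exact repeat chunks and
    compressing the unique remainder -- a crude stand-in for network-layer
    deduplication plus compression on a converged link."""
    raw = sum(len(c) for c in chunks)
    unique = list(dict.fromkeys(chunks))           # keep first occurrence only
    sent = sum(len(zlib.compress(c)) for c in unique)
    return 1 - sent / raw

# Ten identical 4 KB chunks of repetitive web traffic shrink to a single
# compressed copy, saving well over 90% of the bytes on the wire.
```

The design point is that the savings apply to all traffic crossing the link, not just the streams the storage array happens to deduplicate.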

WAN optimisation not only maximises bandwidth utilisation, but also overcomes distance limitations by minimising the effects of latency and ensuring maximum network quality, so that dropped packets do not adversely impact application performance. This helps meet RPO targets, ensures seamless continuity in the event of a failure, and minimises ongoing disaster recovery costs. From traditional storage environments to unified virtual solutions that span multiple data centres, WAN optimisation is a strategic enabler for enterprise storage initiatives.



