Cloud consequences: Virtualised data centre testing challenges

Eddie Arrage, Manager, Market Development, Ixia.

 

Date: 30 Jan 2012

In recent years, IT organisations have moved their platforms and applications to “the cloud”. Enterprises sometimes maintain a centralised cloud computing site of their own, but more often they use the vast facilities of commercial cloud service providers.

What is the cloud? The details are still evolving, but for most enterprises the cloud is a set of services, data, resources and networks located “elsewhere”. This contrasts with the historical centralised data centre model – where enterprises purchased, configured, deployed and maintained their own servers, storage, networks and infrastructures.

The resources of the cloud, while owned and maintained by a “cloud service provider”, are often “borrowed” by the enterprise. There are three acknowledged types of service offerings:

· Software-as-a-Service – examples include Salesforce.com, Google Apps, SAP, Taleo, WebEx and Facebook. These are full-service applications accessed from anywhere on the Internet. These services are implemented through the use of distributed data centres.
· Platform-as-a-Service – examples include Windows Azure, Google AppEngine, Force.com, Heroku and Sun/Oracle. These are distributed development platforms used to create applications, web pages and services that run in cloud environments.
· Infrastructure-as-a-Service – as offered by VMware, Citrix, Dell, HP, IBM, Cisco, F5, Juniper and others. These companies offer the building blocks of cloud services that are available through a number of cloud hosting services such as Amazon’s Elastic Compute Cloud (EC2). They include a virtualisation layer, database, web and application servers, firewalls, server load balancers, WAN optimisers, routers and switches.

Why have major applications and web sites moved to the cloud? One of the biggest reasons is the widespread availability of broadband networks such as 10 Gigabit Ethernet (GE) that connect the enterprise with cloud providers’ sites. Broadband to the home has created an expectation of flawless delivery for bandwidth-hungry, high resolution content – which is better served by distributed cloud providers that own higher bandwidth connections and more storage. The use of a cloud-based infrastructure means there is no local infrastructure to purchase, manage, secure or upgrade. Rather than attempting to estimate peak and growth data centre usage, enterprises can adopt a pay-as-you-go structure, paying for only what they use.

Cloud elasticity, scalability and performance are perhaps the most compelling reasons to adopt a cloud strategy. Computing, storage and network resources can be deployed quickly and easily through cloud providers – allowing an enterprise’s internal applications and external web sites to adapt elastically to demand.

This elasticity also provides the means to scale to whatever size is required, to match performance requirements, and to ensure that customer SLAs are maintained and the end-user experience remains unaffected during peak utilisation.

The virtualised data centre, whether within the enterprise or located at a cloud service provider, must be properly provisioned in order to provide the necessary functions and performance of cloud-based applications. Testing of cloud services has some familiar aspects and some new challenges. Even though they will be used in a cloud environment, the basic components that populate the data centre need to be tested for functionality, performance and security. This is complemented with testing of the data centre and end-to-end services.

At the network interconnectivity infrastructure level, testing must validate:
· Routers
· Switches, including fibre channel forwarders
· Application delivery platforms
· Voice over IP (VoIP) gateways

At the server and storage infrastructure level, testing must validate:
· Data centre capacity
· Data centre networks
· Storage systems
· Converged network adapters

At the virtualisation level, testing must validate:
· Virtual hosts
· Video head ends
· VM instantiation and movement

At the security infrastructure level, testing must validate:
· Firewalls
· Intrusion Prevention Systems (IPS)
· VPN gateways
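
As a minimal illustration of the functional side of these checks, the sketch below probes a set of hypothetical infrastructure endpoints for basic TCP reachability and connection latency. The device names, addresses and ports are placeholders; validating routers, IPS devices or converged adapters in earnest requires purpose-built test equipment, so this only shows the shape of an automated pre-deployment check.

```python
import socket
import time

# Hypothetical management endpoints for devices under test (placeholder addresses).
DEVICES = {
    "core-router":  ("192.0.2.1", 22),
    "fc-forwarder": ("192.0.2.2", 22),
    "firewall":     ("192.0.2.3", 443),
    "vpn-gateway":  ("192.0.2.4", 443),
}

def probe(host, port, timeout=2.0):
    """Attempt a TCP connection and return (reachable, connect_time_seconds)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.perf_counter() - start
    except OSError:
        return False, None

if __name__ == "__main__":
    for name, (host, port) in DEVICES.items():
        ok, latency = probe(host, port)
        status = f"up, connect in {latency * 1000:.1f} ms" if ok else "unreachable"
        print(f"{name:15s} {host}:{port:<5d} {status}")
```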

Some cloud service providers offer access control and encryption services that enable the safe storage of sensitive company and customer data. Such services are often more secure than those available with local IT staff and facilities.

Although the infrastructure has moved off-site, there is no less need for pre-deployment testing. In fact, the cloud enables such testing without additional equipment or scheduled downtime – through the simple expedient of building a temporary, sample test environment. Such test setups can be perhaps 1/10th the size of the “normal” network and still provide meaningful quality measurements.
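
As a rough sketch of that scaled-down approach, assuming load scales approximately linearly with the size of the test environment (an assumption that should itself be verified), a 1/10th-scale setup would be driven at 1/10th of the expected peak load and the results extrapolated back up. The figures below are illustrative only.

```python
# Hypothetical production peak figures; the 0.1 scale factor mirrors the
# "1/10th the size" test environment described above.
SCALE = 0.1
peak_concurrent_users = 200_000
peak_requests_per_sec = 50_000

test_users = int(peak_concurrent_users * SCALE)
test_rps = int(peak_requests_per_sec * SCALE)
print(f"Drive the test setup with {test_users} users at {test_rps} req/s")

# Results measured at small scale are extrapolated back, with the caveat that
# shared components (load balancers, storage) rarely scale perfectly linearly.
measured_capacity_rps = 6_200            # example measurement, not real data
estimated_full_scale = measured_capacity_rps / SCALE
print(f"Extrapolated full-scale capacity: ~{estimated_full_scale:,.0f} req/s")
```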

Virtualisation is the key enabling technology for cloud computing, utilising many virtual machines on powerful host computers. Many virtualised hosts are assembled together into large data centres, along with standard networking components: routers, switches, server load balancers, networking appliances and storage area networks.

With this centralisation comes a loss of control for the IT organisation. There is no longer any ability to tune hosts, networks and storage; the cloud service provider does all this.

This, of course, means that network and application testing is more necessary than ever. Capacity, latency, throughput and end-user quality of experience must be measured in realistic scenarios, especially in cases where shared hosts are used by other applications. For the most part, quality guarantees are not offered by service providers and must be regularly tested by the application owner.
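
A minimal sketch of that kind of measurement, assuming a hypothetical application URL and using only the Python standard library, repeatedly times requests and reports latency percentiles and effective throughput; commercial test tools do the same at far greater scale and with full QoE scoring.

```python
import statistics
import time
import urllib.request

URL = "https://app.example.com/health"   # hypothetical endpoint under test
SAMPLES = 50

latencies = []
bytes_received = 0
start = time.perf_counter()
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as resp:
        bytes_received += len(resp.read())
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

latencies.sort()
p95 = latencies[int(0.95 * (len(latencies) - 1))]
print(f"median latency : {statistics.median(latencies) * 1000:.1f} ms")
print(f"95th percentile: {p95 * 1000:.1f} ms")
print(f"throughput     : {bytes_received * 8 / elapsed / 1e6:.2f} Mbit/s")
```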

Security testing, too, is a different beast in the cloud. Large numbers of clients share cloud computing facilities and must be isolated from each other. Further, cloud data centres represent a “big piggy bank”, where large numbers of hacker targets are co-located. This may mean that cloud-hosted applications must be made more secure than when they run in a private data centre.
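
One small, hedged example of the isolation aspect: from inside one tenant’s environment, attempt connections to addresses that belong to other tenants or to management networks and that should therefore be unreachable. The target list here is purely illustrative; a real assessment also covers hypervisor escape, storage isolation and side channels.

```python
import socket

# Illustrative addresses that SHOULD NOT be reachable from this tenant
# (other tenants' subnets, hypervisor management network, etc.).
FORBIDDEN_TARGETS = [
    ("10.200.1.10", 22),    # hypothetical neighbouring tenant host
    ("10.250.0.1", 443),    # hypothetical hypervisor management interface
]

violations = []
for host, port in FORBIDDEN_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=2):
            violations.append((host, port))    # connection succeeded: bad
    except OSError:
        pass                                   # unreachable, as expected

if violations:
    print("Isolation violations:", violations)
else:
    print("No forbidden targets reachable from this tenant")
```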

Network and application performance testing is often ignored or insufficiently supported. Skipping or minimising pre-deployment testing may appear to save money in the short run, but can be far more expensive in the long run.

Poor network and application performance can lead to expensive downtime – sometimes measured in millions of dollars per hour. Programming and configuration errors can open applications up to security breaches, which can be far more expensive still. These can result in loss of data, outright theft of information, legal exposure and customer churn.

The virtualised data centre is perhaps the most challenging context for test and measurement (T&M). Testing the data centre from the outside is usually ineffective, due to the large, variable number of applications running there. In order to measure the performance of network and application elements that run on virtual machines within powerful hosts, it is necessary to get inside the host itself. The latest T&M techniques call for VM-based software implementations of test ports, allowing quality of experience (QoE) measurements of any component of the virtualised data centre or of virtualised applications.
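
The idea of VM-based test ports can be sketched very simply: one VM runs a small UDP reflector, another sends timestamped probes to it and measures round-trip time and loss across the virtual switch. This is an illustrative stand-in, not any vendor’s implementation; the addresses, ports and probe counts are assumptions.

```python
import socket
import time

def reflector(bind_addr=("0.0.0.0", 9000)):
    """Run inside one VM: echo every probe straight back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    while True:
        data, peer = sock.recvfrom(2048)
        sock.sendto(data, peer)

def probe(target=("10.0.0.2", 9000), count=100):
    """Run inside another VM: measure RTT and loss across the virtual switch."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts, lost = [], 0
    for seq in range(count):
        sent = time.perf_counter()
        sock.sendto(str(seq).encode(), target)
        try:
            sock.recvfrom(2048)
            rtts.append(time.perf_counter() - sent)
        except socket.timeout:
            lost += 1
    if rtts:
        print(f"avg RTT {sum(rtts) / len(rtts) * 1000:.2f} ms, loss {lost}/{count}")

# Typical use: run reflector() in one VM and probe() in another.
```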

Common to both virtualised and wireless core networks is the need to deliver application traffic to end-users with a high QoE. QoE is an estimate of how users “feel” about their interaction and is measured in such terms as latency, jitter, throughput, MOS and MOS-V. QoE is a very real concern. When moving to virtualisation or into a wireless environment, information providers lose a good deal of control. They can no longer tune their computer systems to provide optimal performance just for their applications; they now must depend on the cloud, network and wireless service providers to provide the compute power, network tuning and optimisation needed.
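
To make those metrics concrete, the sketch below computes interarrival jitter in the style of RFC 3550 and maps a one-way delay and loss figure to an approximate MOS using a heavily simplified E-model; the loss term is a rough placeholder rather than the full ITU-T G.107 calculation, and the input numbers are illustrative only.

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550-style running jitter estimate from send/receive timestamps (seconds)."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        transit_prev = recv_times[i - 1] - send_times[i - 1]
        transit_curr = recv_times[i] - send_times[i]
        d = abs(transit_curr - transit_prev)
        jitter += (d - jitter) / 16.0
    return jitter

def mos_estimate(one_way_delay_ms, loss_fraction):
    """Approximate MOS via a simplified E-model (illustrative assumptions)."""
    delay_impairment = 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        delay_impairment += 0.11 * (one_way_delay_ms - 177.3)
    # Rough placeholder for the loss/equipment impairment term.
    loss_pct = loss_fraction * 100
    loss_impairment = 30.0 * loss_pct / (loss_pct + 15.0)
    r = 93.2 - delay_impairment - loss_impairment
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(f"Estimated MOS: {mos_estimate(one_way_delay_ms=80, loss_fraction=0.01):.2f}")
```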

Pre-deployment testing is the only means of ensuring sufficient QoE and capacity for all users of multiplay services. The most realistic means of testing such services, and the networks that deliver them, is large-scale stateful subscriber emulation. This type of testing simulates many thousands of users requesting data, voice and video services with high-performance, purpose-built hardware. Load balancers and other network devices require that such simulation be fully stateful; that is, it must walk through the handshakes and protocols associated with each interchange.
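
Purpose-built hardware is what does this at real scale, but the principle can be sketched in a few lines: each emulated subscriber completes a full TCP handshake and a real HTTP request/response exchange (i.e. it is stateful), and many subscribers run concurrently. The target host and subscriber count below are placeholders.

```python
import asyncio

TARGET_HOST = "app.example.com"   # hypothetical service under test
SUBSCRIBERS = 1000                # emulated concurrent subscribers (placeholder)

async def subscriber(session_id: int) -> bool:
    """Emulate one stateful subscriber: TCP handshake, HTTP exchange, teardown."""
    try:
        reader, writer = await asyncio.open_connection(TARGET_HOST, 80)
        writer.write(
            f"GET /?user={session_id} HTTP/1.1\r\n"
            f"Host: {TARGET_HOST}\r\nConnection: close\r\n\r\n".encode()
        )
        await writer.drain()
        await reader.read()          # consume the full response
        writer.close()
        await writer.wait_closed()
        return True
    except OSError:
        return False

async def main():
    results = await asyncio.gather(*(subscriber(i) for i in range(SUBSCRIBERS)))
    print(f"{sum(results)}/{SUBSCRIBERS} subscriber sessions completed")

if __name__ == "__main__":
    asyncio.run(main())
```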

Looking forward, the next-generation networks and cloud infrastructures that will satisfy broadband and smartphone users will require next-generation test and measurement architectures and techniques. Multiplay equipment requires highly intelligent Ethernet test ports operating at speeds ranging from 10 Mbps to 100 Gbps. Collections of test ports will emulate users in the millions, so as to exercise the largest services and networks with realistic traffic.
 
