Virtualisation – a critical solution demanding critical testing
The complexity of combined business IT systems has led to a proliferation of hardware installations. When adding a major new software system, the safest option has often been to install it on its own dedicated server rather than risk unforeseen resource contention or compatibility problems with existing software. Datacenters thus evolved with ever-increasing numbers of servers, many running at as little as five to ten percent capacity, contributing to the fact that IT power consumption already has a larger carbon footprint than the airline industry.
Meanwhile server performance has soared, and now multisocket, quad-core systems with 32 or more gigabytes of memory are the norm. With engines of such power available, it no longer makes sense to proliferate under-worked servers. There is a growing need for consolidation, and the development of new “virtualization” technologies allows multiple independent workloads to run without conflict on a smaller number of servers.
Virtualization can be seen as an integration of two opposing processes. On the one hand, a number of separate servers can be consolidated so that software sees them as a single virtual server – even if its operation is spread across several hardware units. On the other hand, a single server can be partitioned into a number of virtual servers, each behaving as an independent hardware unit, perhaps dedicated to specific software applications, yet bootable on demand at software speeds.
This means that IT departments are now able to control server sprawl across multiple datacenters, using a combination of multicore computing and virtualization software such as VMware Virtual Infrastructure combined with hardware support from Intel Virtualization Technology (IVT) and AMD Virtualization (AMD-V). Fewer servers are now required and server utilization levels have increased – allowing greater energy efficiency and lower power and cooling costs in addition to savings from reduced capital investment.
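The consolidation arithmetic behind these utilization gains can be sketched in a few lines. The figures below are illustrative assumptions, not taken from the article: legacy servers idling at around 8% utilization and a consolidated host capped at 65% to leave headroom.

```python
# Back-of-the-envelope consolidation estimate (hypothetical figures).
# If legacy servers idle at ~8% CPU utilization, one consolidated host
# can absorb several of them before hitting a safe utilization ceiling.

def consolidation_ratio(legacy_util: float, target_util: float) -> int:
    """How many lightly loaded servers one host can absorb,
    capping the host at target_util to leave headroom."""
    return int(target_util / legacy_util)

# 20 legacy servers at 8% average load, hosts kept below 65%:
ratio = consolidation_ratio(0.08, 0.65)   # 8 legacy workloads per host
hosts_needed = -(-20 // ratio)            # ceiling division: 3 hosts
print(ratio, hosts_needed)
```

Even with generous headroom, twenty under-worked machines collapse onto three hosts – the kind of ratio that drives the energy and capital savings described above.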
This virtualization process applies not just to the processing function, but also to storage. Massive data storage can be safely located in high security sites and replicated for added disaster proofing, while high speed networking maintains performance levels matching local storage.
The flexibility of virtualization – its ability not just to consolidate hardware but also to allow the processing power to be split into virtual servers as required – brings further advantages as it decouples the application environment from the constraints of the hosting hardware. Disaster-recovery becomes simpler, more reliable and cost effective, as systems work around individual hardware failure by automatically diverting the load to other virtual servers. Virtual desktop environments can use centralized servers and thin clients to support large numbers of users with standard PC configurations that help to lower both capital and operating costs. Virtualization allows development, test, and production environments to coexist on the same physical servers while performing as independent systems. Because virtualization safely decouples application deployment from server purchasing decisions, it also means that virtual servers can be created for new applications and scaled on demand to accommodate the evolving needs of the business.
In addition to these benefits within any organization, virtualization also offers a new business model for service providers. They can outsource customer applications and locate any number of these services within their regional data centers, each in its own well-defined and secured virtual partition.
A question of bandwidth
The typical I/O arrangement for a server has been to provide Gigabit Ethernet channels to the LAN plus Fibre Channel links to storage – the difference being that traditional Ethernet tolerates latency delays and some loss of frames that would be unacceptable for storage access, so dedicated lossless Fibre Channel links are provided. With the server running at six or more times the load, the I/O ports come under pressure. Adding further gigabit network interface cards is costly because every new card adds to server power consumption, cabling cost and complexity, and increases the number of access-layer switch ports that must be purchased and managed. In addition, organizations often have to buy larger, more costly servers just to accommodate the number of expansion slots needed to support increasing network and I/O bandwidth demands.
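The pressure on GbE ports is simple arithmetic. As a rough sketch with illustrative numbers (not figures from the article): a host consolidating six workloads that each pushed around 0.8 Gbit/s over dedicated GbE links now needs nearly 5 Gbit/s of LAN bandwidth alone.

```python
import math

# Illustrative aggregate-bandwidth arithmetic for a consolidated host.
workloads = 6
per_workload_gbps = 0.8
aggregate = workloads * per_workload_gbps     # ~4.8 Gbit/s of LAN traffic

gbe_ports = math.ceil(aggregate / 1.0)        # 5 separate GbE NIC ports
ten_gbe_ports = math.ceil(aggregate / 10.0)   # 1 port, with headroom to spare
print(gbe_ports, ten_gbe_ports)
```

Five GbE cards versus a single 10GbE port with room to grow: this is why consolidation pushes the datacenter toward 10GbE rather than ever more gigabit NICs.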
Datacenter I/O has evolved from GbE plus FC to 10GbE ports, and onward to racks of state-of-the-art blade servers connected via 10GbE to a high-speed switch. A large datacenter could have thousands of such servers, requiring a new generation of powerful low-latency, lossless switching devices typified by Cisco’s Nexus 5000, offering up to fifty-two 10GbE ports, or its massive 256-port Nexus 7000.
The testing imperative
Switches of this class pose new testing challenges. The first is scale: Spirent TestCenter addresses this with a rack system supporting large numbers of test cards, including the latest HyperMetrics CV2 and 8-port 10GbE cards, scaling up to 4.8 terabits per second in a single rack.
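That 4.8 terabit figure can be sanity-checked with simple port arithmetic. The card and port counts below are just arithmetic from the stated speeds, not Spirent chassis specifications:

```python
# Checking the scale figure: 4.8 Tbit/s of 10GbE test traffic
# implies 480 ports, i.e. 60 of the 8-port cards (illustrative
# arithmetic only, not actual chassis slot counts).

port_speed_gbps = 10
ports_per_card = 8

total_gbps = 4800                       # 4.8 terabits per second
ports = total_gbps // port_speed_gbps   # 480 x 10GbE test ports
cards = ports // ports_per_card         # 60 x 8-port cards
print(ports, cards)
```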
A second problem involves measuring such low levels of latency, where the very presence of test equipment introduces delays that must be compensated for. Manual compensation is time consuming and in some circumstances impossible, whereas the Spirent system applies the compensation automatically, adjusting for the interface technology and speed in use.
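The principle of automatic compensation can be sketched simply: the tester first characterizes its own loopback delay for each interface type, then subtracts that offset from every device measurement. The names and nanosecond figures below are hypothetical, not Spirent’s actual API or calibration data:

```python
# Minimal sketch of test-fixture latency compensation (hypothetical
# interface names and calibration values, not vendor data).

CALIBRATION_NS = {            # measured loopback delay per interface type
    "10GbE-SR": 150.0,
    "1GbE-T":   420.0,
}

def compensated_latency(raw_ns: float, interface: str) -> float:
    """Remove the test harness's own contribution from a raw reading."""
    return raw_ns - CALIBRATION_NS[interface]

# A raw reading of 3,650 ns over a 10GbE fibre interface:
print(compensated_latency(3650.0, "10GbE-SR"))   # 3500.0 ns through the DUT
```

Because the offset is keyed to the interface technology, the same subtraction adapts automatically as the test moves between port types and speeds.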
Latency compensation is just one example of the specific requirements for testing FCoE or DCE/CEE switching capability. Spirent TestCenter with HyperMetrics is used to test both, and was the system of choice when Network World last year put Cisco’s Nexus 7000 through its Clear Choice test. The test addressed six areas – availability, resiliency, performance, features, manageability and usability. It was the biggest test Network World or Cisco’s engineers had ever conducted, and is a sure sign of the way things are going.
Virtualization has largely driven the development of 10GbE, which will remain the technology of choice for the near future. The first 40GbE-enabled switches are predicted for 2010, providing even faster throughput between switches, and, to meet the growing appetite for backbone bandwidth, work is already under way on 100GbE switches for service providers. As speed goes up, cost comes down, with Intel’s ‘LAN on motherboard’ building 10GbE into the very foundation of the server.