Could Virtualization Run Aground?
Virtualization World 365
Most IT revolutions have their tipping point, where the market expands beyond brave (or reckless) early adopters, through mainstream acceptance, to suddenly becoming a universal "must have".
Virtualization has certainly passed beyond the first stage and, as a concept, is widely accepted – with nearly 40% of x86 workloads already running in virtual machines at the start of this year, and expected to grow to over 75% by 2015 according to Gartner (March 2011 data).
But the tipping point to mass acceptance would mean spreading from the deep waters of Fortune 500 companies and reaching into the broad shallows of the SME market. Are they ready for that?
Judging by the disturbing results of a recent survey by Infoblox, the answer is a definite "no". The good ship Virtualization is about to run aground…
The basic tools most organizations rely on are cheap and ubiquitous, but they can typically handle multiple domains and subnets only through coordination outside those systems, and such coordination is itself often based on manually maintained spreadsheets. Furthermore, in the many organizations that operate both Linux and Windows, some form of manual coordination between the two is also required.
If enterprises try to scale beyond phase one to support dynamic workloads and private cloud infrastructures, they will face unprecedented levels of change and complexity. In the virtual data center, administrators must be able to re-provision processing power at a stroke. If this change is not highly automated, the many steps involved in re-provisioning the network infrastructure and its core services, such as IP assignment, DNS and DHCP, will demand considerable time and manpower. These changes also include firewall, VLAN, QoS and policy settings, along with other changes to both physical and virtual network elements, to support the virtual data center.
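The network-side bookkeeping that must accompany every VM move can be sketched in a few lines. This is a minimal, hypothetical illustration of the IP-assignment, DNS and DHCP steps mentioned above; the `NetworkProvisioner` class and its in-memory dictionaries are assumptions for the sake of the sketch, not any vendor's API — real deployments would call out to an IPAM/DDI system instead.

```python
from ipaddress import IPv4Network

class NetworkProvisioner:
    """Hypothetical sketch: the core-service updates behind one VM re-provision."""

    def __init__(self, subnet: str):
        # Free-IP pool drawn from the subnet's host addresses
        self.free_ips = [str(ip) for ip in IPv4Network(subnet).hosts()]
        self.dns = {}   # hostname -> IP (stand-in for a DNS A record)
        self.dhcp = {}  # MAC -> IP  (stand-in for a DHCP reservation)

    def provision_vm(self, hostname: str, mac: str) -> str:
        ip = self.free_ips.pop(0)   # IP assignment
        self.dns[hostname] = ip     # DNS update
        self.dhcp[mac] = ip         # DHCP reservation
        return ip

    def decommission_vm(self, hostname: str, mac: str) -> None:
        ip = self.dns.pop(hostname)  # remove DNS record
        self.dhcp.pop(mac)           # release DHCP reservation
        self.free_ips.append(ip)     # return IP to the pool

net = NetworkProvisioner("10.0.0.0/28")
ip = net.provision_vm("web01", "00:16:3e:aa:bb:cc")
```

Even this toy version shows why manual spreadsheets break down: every provision and decommission touches three records that must stay consistent, and in a dynamic data center those events happen constantly.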
While this rate of change and complexity is accelerating, few IT departments are expanding to keep pace. As Zeus Kerravala, Senior Vice President and Distinguished Research Fellow at Yankee Group, explained:
“Throughout a virtual machine lifecycle, there are multiple events that require visibility and change; most server-centric functions for managing this are built into the hypervisors, but the flip side of the coin – causing tremendous complexity as enterprises try to scale their virtual deployments – are the associated network changes that are typically performed manually. Automated technology is the only way to overcome this hurdle.”
Nor is it just a question of managing change, for there is also the problem of keeping subsequent track of those changes, as Daniel Boyd, IT administrator at Berry College points out:
“Virtualization has been touted as the best thing since sliced bread in terms of cost savings and flexibility. We have realized part of that promise. But, there have been some visibility and management issues. It’s still daunting to know where everything is and it would be helpful to have a single application that shows us where all the resources are and which virtual machine is on what host with all the information in a single view, instead of having to check five different management applications to find the information I need.”
Berry College’s experience is far from unique, according to industry analyst Jim Frey of Enterprise Management Associates, who adds:
“There is little if any hope for manual processes to keep pace with the rate of change introduced by server virtualization and cloud services – the only reasonable answer is automation. In this case, network managers could benefit greatly from tighter automation and control around IP address management as an essential aspect of maintaining a highly functional, highly performing network.”
Yet, according to our survey, 40% of organizations have not even begun to address this problem, and those that have are still predominantly relying on the most basic software tools. Even without the added pressures of virtualization, the analysts’ statistics are disturbing:
· Two-thirds of all system performance issues are linked to network change, according to Gartner and IDC
Without extensive automation, this wasteful situation will continue, and virtualization will remain a niche technology, enjoyed only by the few large organizations that have already invested in flexible, automated systems delivering the necessary visibility, change management and compliance capabilities in both the physical and virtual environments.
What is the secret of their success? It is no secret that today’s sophisticated and yet simple to deploy solutions for network change and configuration management combine powerful automation together with clear visibility into the health, policy and compliance of the network. They can collect and analyze network infrastructure configurations, identify policy violations, show the impact of change on network health and remediate issues. In short, they enable the enterprise to automate core network infrastructure management and support highly dynamic networks, applications and initiatives – such as virtualization and cloud computing.
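The "collect configurations, identify policy violations" workflow described above can be illustrated with a small sketch. The policy names, rule predicates and config snippets below are illustrative assumptions, not any product's rule syntax; a real tool would pull running configs from devices and apply a much richer rule set.

```python
# Hypothetical sketch: audit collected device configs against simple policies.
# Each policy is a (name, predicate) pair; the predicate returns True if the
# config complies. These example rules are assumptions for illustration only.
POLICIES = [
    ("no-telnet",   lambda cfg: "transport input telnet" not in cfg),
    ("ssh-enabled", lambda cfg: "transport input ssh" in cfg),
]

def audit(configs: dict) -> dict:
    """Return, per device, the list of policy rules it violates."""
    return {
        device: [name for name, ok in POLICIES if not ok(cfg)]
        for device, cfg in configs.items()
    }

configs = {
    "core-sw1": "line vty 0 4\n transport input ssh",
    "edge-rt2": "line vty 0 4\n transport input telnet",
}
violations = audit(configs)
```

Running the audit flags `edge-rt2` on both rules while `core-sw1` passes, which is the essence of the automated compliance checking the article describes: violations surface from the collected configurations themselves rather than from someone reading them by hand.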
If that were not enough, they also provide immediate ROI: less reliance on specialist IT staff, far fewer repetitive manual tasks (saving time and reducing the risk of human error), and fewer of the operational and logistical delays that hinder business agility.