It's not surprising, as the cost of adding new capacity to the environment goes well beyond the cost of the hardware alone. Each server requires software licenses for the hypervisor, operating system, backup & recovery, systems management and so on. Beyond this there are other costs such as space, power & cooling, hardware and software maintenance, and the incremental staff needed to support the environment. Added together, these more than double the cost of the hardware investment over its lifetime. Now factor in that many organisations are scaling out the number of virtual machines by 20% or more per annum, and the costs really start to escalate.
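To make the compounding concrete, here is a hypothetical back-of-the-envelope model. All figures in it (growth rate, per-server hardware price, overhead multiplier) are illustrative assumptions, not real pricing data:

```python
# Hypothetical sketch of how capacity costs compound. The growth rate,
# hardware price and overhead multiplier below are assumed for illustration.

def projected_servers(current, annual_growth=0.20, years=3):
    """Servers needed after compounding annual VM growth."""
    return current * (1 + annual_growth) ** years

def lifetime_cost(hardware_cost, overhead_multiplier=2.0):
    """Total lifetime cost when licenses, power, cooling and support
    more than double the initial hardware spend."""
    return hardware_cost * (1 + overhead_multiplier)

servers = projected_servers(100)          # 100 * 1.2**3, roughly 172.8
cost = lifetime_cost(servers * 5000)      # assuming $5k hardware per server
print(round(servers, 1), round(cost))
```

Even with modest assumptions, three years of 20% growth turns 100 servers' worth of capacity into roughly 173, and the lifetime spend into triple the hardware figure.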
The challenge for the folks managing the virtual infrastructure is that they invariably lack tools able to analyse their existing workload demands & capacity, in concert with planned growth & changes, and determine precisely what they should do to deliver a reliable & efficient infrastructure now and in the future.
Many organisations have invested in performance monitoring and reporting tools for their virtual environment. Many of the available solutions gather & store hundreds of different performance metrics and analyse trends in each one to determine when a critical threshold will be exceeded, now or in the future, so the root cause can be determined as quickly as possible. Every quarter a different vendor claims to have invented a new extension to their algorithms that increases the sophistication or speed with which they can tell when something is wrong.
However, none of these solutions provide any intelligence to analyse the virtual environment and come up with an optimal set of Actions to prevent the danger, or, if you are already in the danger zone, to get back to a performant state. Some of these Actions might involve re-distributing workloads across the cluster or storage LUNs to prevent performance bottlenecks, adding physical resources, changing virtual machine configurations or horizontally scaling out application instances. With many performance & capacity management solutions this decision making is left to the most skilled and scarce people, and even then it is impossible for a human to consistently perform this analysis and come up with the right answers.
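A minimal sketch of the kind of analysis such a tool automates is a greedy re-placement of workloads: place the largest VM demands first, always onto the currently least-loaded host, to even out cluster utilisation. Real placement engines also weigh affinity rules, storage and network constraints; the loads and host count here are assumptions for illustration only:

```python
# Illustrative only: greedy workload re-distribution across a cluster.
# VM demands and host count are made-up figures, not from any real tool.
import heapq

def rebalance(vm_loads, n_hosts):
    """Assign VM demands to hosts, largest demand first,
    always onto the currently least-loaded host."""
    hosts = [(0.0, i, []) for i in range(n_hosts)]  # (load, host id, vms)
    heapq.heapify(hosts)
    for load in sorted(vm_loads, reverse=True):
        total, i, vms = heapq.heappop(hosts)        # least-loaded host
        heapq.heappush(hosts, (total + load, i, vms + [load]))
    return sorted(hosts)

for total, host, vms in rebalance([30, 10, 25, 15, 20, 5, 35], 3):
    print(f"host {host}: {total} -> {vms}")
```

Even this toy heuristic keeps the hosts within a few units of each other; doing the equivalent by hand across hundreds of VMs, consistently, is exactly what humans cannot do.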
So let's consider this in the context of the types of changes you might be planning to accommodate future demand while maintaining an efficient environment and, at the same time, quality of service for your virtualised workloads. Common scenarios that need to be accommodated include: adding new applications with different performance characteristics; changing the performance characteristics of applications to reflect increased or decreased use; changing the way virtual machines are distributed across clusters & storage; merging clusters; changing the underlying hardware; failing servers; changing storage & network constraints; changing application affinity; setting target utilisation levels to maintain efficiency & QoS; and the list goes on.
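The simplest form of such a what-if simulation is a headroom check: does total demand, after the planned changes, still fit within a target utilisation of the surviving capacity? The target level and all demand figures below are assumed for illustration:

```python
# Hypothetical what-if headroom check. Target utilisation and all
# capacity/demand figures are assumptions for illustration.

def headroom_ok(host_capacities, workloads, planned_changes,
                target_utilisation=0.75):
    """True if demand after planned changes stays within the target
    utilisation of the available host capacity."""
    demand = sum(workloads) + sum(planned_changes)
    return demand <= target_utilisation * sum(host_capacities)

# Four healthy hosts of 100 units each, 90 units of new demand planned:
print(headroom_ok([100] * 4, [50, 40, 35, 30], [90]))
# Same plan, but simulating the failure of one host:
print(headroom_ok([100] * 3, [50, 40, 35, 30], [90]))
```

The same plan that fits comfortably on four hosts breaches the target the moment a single host failure is simulated, which is precisely the kind of scenario the list above demands a tool evaluate for you.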
Without a tool that can simulate these changes accurately, work out how to optimise your resources and provide specific answers about what hardware will be required, you are playing a guessing game, and you will not be able to present a very convincing story to the people who hold the purse strings.
In summary, to build your credibility with the CFO, look for performance & capacity management solutions that provide analytics to optimise resource allocation decisions, maintain a healthy and efficient environment, and in unison support a broader set of simulation capabilities so you can precisely and confidently plan for your future investment needs.
Tags: Server Virtualization, Storage Virtualization, I/O Virtualization, Network Virtualization, Desktop Virtualization, VMware, Hyper-V, Citrix, RedHat