Big Data: focussing on the CORE issues
Virtualization World 365
There is no denying that businesses are under immense pressure to manage massive amounts of complex data. Information volumes are estimated to be growing by up to 80 percent year on year, and the biggest challenge is the dramatic increase in unstructured data from emerging sources – desktops and laptops, audio/video files, images, databases, social media and a variety of other data types that are prominent in an organisation but frequently managed in ‘silos’.
This unrelenting growth is a major force driving the ‘Big Data’ debate, which is further compounded by the universal adoption of virtualisation, the rapid shift to cloud-enabled services, the influx of mobile computing devices, demand for 24x7 operations and increasing consolidation.
Whilst Big Data brings plenty of good – new ways to create information that offers real business value – it also presents a fresh set of challenges for the IT department. Organisations are struggling to keep pace with more demanding recovery service levels and collapsing backup windows, which often leads to overloaded networks and a tendency to turn to more costly alternatives. The fundamental issue is that there simply isn't enough time, resource or budget to manage, protect, index and retain massive amounts of unstructured data. The negative side effects of Big Data – risk, complexity and cost – clearly need to be met head on if the positive benefits are to win out.
According to recent research, there is still a long way to go in managing this change effectively. A survey of 207 security and IT operations professionals by LogLogic found "significant" gaps between theory and practice across industries with regard to preparation for and management of big data and cloud environments, with more than a third of respondents saying they did not understand the concept of 'big data'.
In fact, just under half (49 per cent) said they were "somewhat" or "very" concerned about managing big data, whilst 38 per cent said they did not have a clear understanding of what big data is. A further 59 per cent said they lacked the tools required to manage data from their IT systems, resorting instead to separate, disparate systems and even spreadsheets.
Legacy solutions are not ‘fit for purpose’
Traditional solutions involve two stages for each protection operation: scan and collection. To perform backup, archive and file-analytics operations, each product must first scan the file system and then collect files or information from it. Synthetic full, de-duplication and VTL solutions have been introduced to try to ease repository problems, but a lack of integration means these solutions fall short in the longer term. On large file systems, the incremental scan alone can take more time than the actual data collection. Regularly scheduled full-protection operations then exceed backup windows and demand heavy network and server resources to manage the process. It's a vicious circle.
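The scan-then-collect pattern can be sketched as follows. This is a minimal illustration of the general approach, not any vendor's actual implementation; note how stage one must stat every file on every run, even when almost nothing has changed – which is why the scan can outlast the backup window on large file systems:

```python
import os


def scan(root, catalog):
    """Stage 1: walk the whole tree and compare every file's mtime
    against the backup catalog, even if nothing has changed."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            # Every file is stat'ed on every run -- on a file system
            # with millions of entries, this alone can take longer
            # than transferring the handful of files that changed.
            if catalog.get(path) != mtime:
                changed.append(path)
                catalog[path] = mtime
    return changed


def collect(paths):
    """Stage 2: read (and, in a real product, transfer) only the
    files the scan flagged as changed."""
    payload = {}
    for p in paths:
        with open(p, "rb") as f:
            payload[p] = f.read()
    return payload
```

A second `scan` over an unchanged tree still walks every file yet returns an empty change list – all cost, no data moved.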
Convergence is the way forward
The advantages here are immediately clear. Built-in intelligent data-collection classification helps to reduce scan times, which in turn allows companies to stay within incremental backup windows. A single pass of data collection for backup, archive and reporting also reduces server load and the number of operations. Integration, source-side de-duplication and synthetic full backups further reduce the network load, whilst a single index breaks down the silos of information.
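Source-side de-duplication, mentioned above, is the key to cutting network load: the client hashes each chunk of data locally and sends a chunk over the wire only if the server's index has never seen that hash. A minimal sketch of the idea (class and function names are hypothetical, and real products use far larger chunks and persistent indexes):

```python
import hashlib


class DedupStore:
    """Server-side chunk index; each unique chunk is stored once."""

    def __init__(self):
        self.chunks = {}  # hash -> chunk bytes

    def has(self, digest):
        return digest in self.chunks

    def put(self, digest, chunk):
        self.chunks[digest] = chunk


def backup(data, store, chunk_size=4):
    """Source-side de-dup: hash each chunk locally; transfer it only
    if the server has never seen that hash. Returns the ordered list
    of chunk hashes (the 'recipe') and how many chunks were sent."""
    sent = 0
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if not store.has(digest):
            store.put(digest, chunk)  # only new chunks cross the wire
            sent += 1
        recipe.append(digest)
    return recipe, sent


def restore(recipe, store):
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store.chunks[h] for h in recipe)
```

Backing up the same data a second time sends zero chunks, which is precisely why repeated full backups of largely static data stop saturating the network.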
Instead of moving the pain point, a converged solution such as Simpana will create a single process that has the potential to reduce the combined time typically required to back up, archive and report by more than 50 percent compared with traditional methods, and will deliver the simplified management tools required to affordably protect, manage and access data on systems that have become ‘too big’.
Whilst there are many ways to create Big Data, organisations that want to take control of the data mountain would be well advised to adopt a ‘Copy Once Re-use Extensively’ (CORE) strategy if they want to manage Big Data cost effectively in the long term. The premise of CORE is simple: capture the data once, then re-use that single copy for every downstream operation.
There is no doubt that many organisations are having to walk a fine line between over-collection of data, which brings companies higher review costs, and under-collection, which presents them with the risk of missing key information, perhaps located in one of the emerging data sources - a critical issue in today’s world of information-on-demand, regulation and compliance.
The idea that all data sources, even those at the ‘edge’ of the network, could be accounted for – without adding to the data mountain – is a major factor behind CommVault’s recommendation to move to converged backup, archive and protection. Easing e-Discovery burdens was cited as the number-one pressure point in the Forrester Research, Inc. ‘Global Message Archiving Online Survey’, ahead of lowering storage costs and boosting application performance. I believe that convergence is the best way to take the pain out of finding key information in the ‘Big Data’ haystack.
What companies should focus on is a single platform that enables those working with the information to intelligently manage and protect enormous amounts of data across multiple applications, hypervisors, operating systems and infrastructures from one console. A policy-driven approach to protecting, storing and recovering vast amounts of data, whilst automating administration, is the best way to maximise IT productivity and reduce overall support costs. Eliminating manual processes and seamlessly tiering data to physical, virtual and cloud storage decreases administration costs whilst increasing operational efficiency – enabling IT departments to ‘do more with less’.
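Policy-driven tiering of the kind described can be pictured as a small, declarative rule table: data items fall through to the cheapest tier their age qualifies them for, with no manual intervention. A minimal sketch under assumed tier names and thresholds (both are hypothetical, purely for illustration):

```python
from dataclasses import dataclass


@dataclass
class Policy:
    """One tiering rule: data at least `min_age_days` old moves to `tier`."""
    tier: str
    min_age_days: int


# Hypothetical policy table, evaluated coldest-tier-first.
POLICIES = [
    Policy("cloud-archive", 365),  # cold data goes off-site
    Policy("virtual-tape", 90),    # warm data sits on cheaper media
    Policy("primary-disk", 0),     # everything else stays local
]


def tier_for(age_days):
    """Return the storage tier a data item of the given age belongs on."""
    for policy in POLICIES:
        if age_days >= policy.min_age_days:
            return policy.tier
    return "primary-disk"
```

The point of the sketch is that the rules, not an administrator, decide where each item lives – changing a threshold re-tiers the estate without any manual process.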
A single data store would empower businesses to streamline data preservation and eliminate data redundancy during the review process, which is now considered one of the major causes of skyrocketing data management costs. The ability to more easily navigate, search and mine data could fundamentally mean that Big Data is finally viewed as an asset to the business, not a hindrance.