Big Data: focussing on the CORE issues

By Simon Gregory, Business Development Director, CommVault.

 

Date: 11 Jun 2012

There is no denying that businesses are under immense pressure to manage massive amounts of complex data. Information volumes are estimated to be growing at up to 80 per cent year on year, and the biggest associated challenge is the dramatic increase in unstructured data from emerging sources – desktops and laptops, audio/visual files, images, databases, social media and a variety of other data types that are prominent in an organisation but frequently managed in ‘silos’.

This unrelenting growth is a major force driving the ‘Big Data’ debate, which is further compounded by the universal adoption of virtualisation, the rapid shift to cloud-enabled services, the influx of mobile computing devices, demand for 24x7 operations and increasing consolidation.

Whilst Big Data brings with it a lot of good – new ways to create information that offers real business value – it also presents a new set of challenges for the IT department. Organisations struggle to keep pace with more demanding service levels for recovery and collapsing backup windows, which often leads to overloaded networks and a tendency to turn to more costly alternatives. A fundamental issue here appears to be that there simply isn't enough time, resource or budget to manage, protect, index and retain massive amounts of unstructured data. The negative side effects of Big Data – risk, complexity and cost – clearly need to be met head on if the positive benefits are to win out.

According to recent research, there is still a long way to go in managing this change effectively. A survey of 207 security and IT operations professionals by LogLogic found "significant" gaps between theory and practice across industries with regard to preparing for and managing big data and cloud environments, with more than a third of those questioned saying that they did not understand the concept of 'big data'.

In fact, just under half (49 per cent) said they were "somewhat" or "very" concerned about managing big data, whilst 38 per cent said they did not have a clear understanding of what big data is. A further 59 per cent said that they lacked the tools required to manage data from their IT systems, resorting instead to separate, disparate systems and even spreadsheets.

Legacy solutions are not ‘fit for purpose’
Unfortunately, legacy data management methods and tools simply aren't up to the task of managing or controlling the data explosion. Originally created to solve individual challenges, they have led to multiple products being deployed for backup, archive and analytics, resulting in complex administration and information silos, which in turn raise upgrade concerns and sharpen the debate around the cost of alternatives versus current maintenance. A lack of reporting across these platforms ultimately reduces data visibility across an organisation and impacts the ability to introduce effective archiving strategies.

Traditional solutions also have two stages for each protection operation: scan and collection. To perform backup, archive and file analytic operations, each product must scan the file system and then collect files or information from it. Synthetic full, de-duplication and VTL solutions may have been introduced to try to reduce repository problems, but a lack of integration capabilities causes these solutions to fall short in the longer term. On large file systems, incremental scan times can exceed the time spent actually collecting data. Regularly scheduled full protection operations then exceed backup windows and require heavy network and server resources to manage the process. It’s a vicious circle.
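The cost of those separate scan stages compounds quickly: with three unintegrated products handling backup, archive and analytics, every protection cycle walks the same file system three times. A minimal Python sketch makes the duplicated work concrete (the throwaway file tree and tool names are illustrative assumptions, not any vendor's implementation):

```python
import os
import pathlib
import tempfile

# Build a small throwaway "file system" to scan.
root = pathlib.Path(tempfile.mkdtemp())
for i in range(100):
    (root / f"file_{i}.dat").write_text("x" * i)

def scan(path):
    """One scan stage: walk the tree and return (path, size) metadata."""
    entries = []
    for dirpath, _dirs, files in os.walk(path):
        for name in files:
            p = os.path.join(dirpath, name)
            entries.append((p, os.path.getsize(p)))
    return entries

# Three siloed tools mean three identical scans per protection cycle.
file_visits = 0
for tool in ("backup", "archive", "analytics"):
    metadata = scan(root)      # each product repeats the same walk
    file_visits += len(metadata)

# A unified solution would scan once and share the metadata.
single_pass_visits = len(scan(root))
print(file_visits, single_pass_visits)  # 300 vs 100 file visits
```

On a real multi-terabyte file system the walk itself, not the I/O per file, is often the dominant cost, which is why repeating it per product hurts so much.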

Convergence is the way forward
CommVault believes that there is an alternative approach: adopt a unified data management strategy that collapses data collection operations into a single solution, enabling data to be copied, indexed and stored in an intelligent, virtual repository that provides an efficient and scalable foundation for e-Discovery, data mining and retention. Such an approach also enables data analytics and reporting to be performed from the index, helping to classify data and to implement archive policies that tier data to lower-cost media. This also serves to reduce the total cost of ownership.

The advantages here are immediately clear. Built-in intelligent data collection and classification help to reduce scan times, which in turn allows companies to maintain incremental backup windows. Single-pass data collection for backup, archive and reporting also helps to reduce server load and the number of operations. Integration, source-side de-duplication and synthetic full backups then further reduce the network load, whilst a single index immediately breaks down the silos of information.
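The single-pass idea can be sketched in a few lines of Python. This is a toy model, not CommVault's engine: content hashes stand in for real source-side de-duplication, the file list and the 90-day archive threshold are invented for illustration, and one loop drives backup, archive classification and reporting from a single shared index.

```python
import hashlib

# Toy dataset: (file name, contents, age in days). All values are
# illustrative assumptions for the sketch.
files = [
    ("report.doc", b"quarterly numbers", 10),
    ("report_copy.doc", b"quarterly numbers", 12),   # duplicate content
    ("old_scan.tif", b"legacy image data", 400),
    ("notes.txt", b"meeting notes", 3),
]

store = {}   # content-addressed store: hash -> data, each block kept once
index = []   # single index shared by backup, archive and reporting
ARCHIVE_AFTER_DAYS = 90  # assumed policy threshold

# One pass over the data drives all three operations at once.
for name, data, age in files:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:       # source-side de-duplication:
        store[digest] = data      # only new content crosses the network
    tier = "archive" if age > ARCHIVE_AFTER_DAYS else "backup"
    index.append({"name": name, "hash": digest, "age": age, "tier": tier})

# Reporting comes straight from the same index - no extra scan needed.
unique_bytes = sum(len(v) for v in store.values())
archived = [e["name"] for e in index if e["tier"] == "archive"]
print(len(store), unique_bytes, archived)
```

Note that the duplicate file still appears in the index (so it remains searchable and restorable) but its content is stored and transmitted only once, which is precisely the network-load reduction the paragraph above describes.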

Instead of moving the pain point, a converged solution, such as Simpana, creates a single process that has the potential to reduce the combined time typically required to back up, archive and report by more than 50 per cent compared with traditional methods, and delivers the simplified management tools required to affordably protect, manage and access data on systems that have become ‘too big’.

Whilst there are many ways to create Big Data, organisations that want to take control of the data mountain would be advised to consider adopting a ‘Copy Once Re-use Extensively’ (CORE) strategy if they want to manage Big Data cost effectively in the long term. The key benefits to CORE are simple:
· Process data once
· Store data once
· Retain data once
· Search data from one place
· Centralise policy management
· Automate tiering of data while maintaining hardware and storage flexibility
· Synchronise data deletion and automate space reclamation

There is no doubt that many organisations are having to walk a fine line between over-collection of data, which brings companies higher review costs, and under-collection, which presents them with the risk of missing key information, perhaps located in one of the emerging data sources - a critical issue in today’s world of information-on-demand, regulation and compliance.

The overall idea that all data sources, even those at the ‘edge’ of the network, could be accounted for – without adding to the data mountain – is a major factor behind CommVault’s recommendation to move to converged backup, archive and protection. Easing e-Discovery burdens was cited as the number one pressure point in the Forrester Research, Inc. ‘Global Message Archiving Online Survey’, above lowering storage costs and boosting application performance. I believe that convergence is absolutely the best way to take the pain out of finding key information in the ‘Big Data’ haystack.

What companies should be focused on achieving is the use of one platform that enables those working with the information to intelligently manage and protect enormous amounts of data across a number of applications, hypervisors, operating systems and infrastructures from a single console. A policy-driven approach to protecting, storing and recovering vast amounts of data, whilst automating administration, will always be the best way to maximise IT productivity and reduce overall support costs. Eliminating manual processes and seamlessly tiering data to physical, virtual and cloud storage helps to decrease administration costs whilst increasing operational efficiencies – enabling IT departments to ‘do more with less’.
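What "policy-driven tiering" amounts to in practice is a small set of rules, applied uniformly, that map data attributes such as age to a storage tier. A minimal sketch, assuming invented tier names and thresholds (real policies would also weigh access frequency, compliance class and cost):

```python
# Ordered tiering rules: (maximum age in days, tier). Names and
# thresholds are illustrative assumptions, not product defaults.
TIERING_POLICY = [
    (30,   "primary-disk"),         # recently active data
    (180,  "virtual-repository"),   # cooler data on cheaper media
    (None, "cloud-archive"),        # everything older falls through here
]

def tier_for(age_days):
    """Return the storage tier the policy assigns to data of a given age."""
    for limit, tier in TIERING_POLICY:
        if limit is None or age_days <= limit:
            return tier
    raise ValueError("policy must end with a catch-all rule")

# The same rules apply across every application and hypervisor,
# so no per-silo manual decisions are needed.
assignments = {age: tier_for(age) for age in (5, 45, 400)}
print(assignments)
```

Because the rules live in one place, changing a threshold retiers everything consistently – the automation the paragraph above argues for, as opposed to administrators moving data by hand per silo.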

A single data store would empower businesses to streamline data preservation and eliminate data redundancy during the review process which is now considered to be one of the major causes of skyrocketing data management costs. The ability to more easily navigate, search and mine data could fundamentally mean that Big Data is finally viewed as an asset to the business, not a hindrance.
