Data management and deriving a return on investment (ROI) from data have long been serious IT challenges. Whether you run a small home office or are part of an expansive enterprise, the issues remain the same: we are drowning in data. Not only are we drowning in it, we are also required to store and safeguard it for many reasons, ranging from security requirements and legal compliance, through disaster recovery in the event of system failure or compromise, down to simple access to the data, both for business use and for the ever more critical analysis that turns the information it contains into business benefit.
The many problems IT managers face regarding their data responsibilities include:
● Larger volumes of data, combined with 24 x 7 online business availability, shrinking the backup window to the point where more time is needed than is physically available;
● Business continuity that cannot afford downtime, and certainly not downtime resulting from IT infrastructure or data availability failure;
● Recovery time objectives (RTO) in these cases becoming critical;
● Similarly, recovery point objectives (RPO) to roll back to, and determining how much data loss can be afforded between the last recovery point and the moment of failure (a simple illustration follows this list);
● Slow data access becoming an ever more significant issue as data grows; and
● How to prepare now for more applications that will eventually be delivered in the Cloud.
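To make the RTO and RPO items above concrete, the following is a minimal illustration of how the two objectives translate into data-loss and downtime exposure. All timestamps, durations and variable names are assumptions chosen purely for the example, not measurements from any real environment.

```python
# Illustrative RPO/RTO exposure calculation; all figures are assumed.
from datetime import datetime, timedelta

last_recovery_point = datetime(2024, 5, 6, 2, 0)   # e.g. last nightly backup
failure_time = datetime(2024, 5, 6, 14, 30)        # moment of the failure
restore_duration = timedelta(hours=4)              # time taken to recover

data_at_risk = failure_time - last_recovery_point  # RPO exposure: work that may be lost
downtime = restore_duration                        # RTO exposure: length of the outage

rpo_target = timedelta(hours=1)
rto_target = timedelta(hours=1)

print(f"Data at risk: {data_at_risk} (target RPO: {rpo_target})")
print(f"Downtime:     {downtime} (target RTO: {rto_target})")
print("RPO met" if data_at_risk <= rpo_target else "RPO missed")
print("RTO met" if downtime <= rto_target else "RTO missed")
```

With only a nightly backup as the last recovery point, a midday failure exposes more than twelve hours of work, which is why recovery point and recovery time objectives drive the redesigns discussed below.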
On top of this, searching for solutions to these issues across multiple vendors' offerings that must then be configured together sees costs skyrocket.
A rethink is well overdue
This has led to a grand-scale rethink of the old-school approach to data, with the search on for solutions that keep data at our command at all times; the ultimate goal is essentially a zero-backup strategy that retains data integrity and fast access, even to long-term archived data. This new thinking encompasses array-based snapshots coupled with replication and elements of traditional backup/recovery software, alongside related technologies such as copy data virtualisation, copy data management and converged data management.
Nightly incremental backups and weekly full backups are fading fast as a single-tier data management strategy. They simply no longer make the cut: they remain cumbersome, time-consuming, costly, complex and under-performing, and the integrated systems themselves are subject to multiple points of failure. Primary storage, via its snapshot technology, now forms part of the new thinking in data-protection strategies.
This market space has several drivers
Test and development needs and data analytics are two areas of value being widely explored. However, both are costly to deploy successfully, performance remains a pain point in old-school solutions, and the complexity of multi-vendor product integration is equally painful. Many of these product sets require specialised skills to set up and manage, driving total cost of ownership (TCO) up where the opposite is the desired result. So the new thinking needs to address these issues broadly as well.
In a research paper on the state of data management, research group 451 Research says: “More than 35 percent of business IT users plan to address their backup and disaster-recovery infrastructure to alleviate pain points. This is a large segment that knows that they have been addressing a modern data centre with 20-year-old technology and that a fundamental change in solution thinking is required. Many of them have, or are introducing primary storage points into their solutions basket, and we are seeing a steady trend towards array-based snapshots as a fundamental mind shift in data protection strategies.”
Many more of them see the expanding role of backup data, from a mere insurance policy against failures and outages into a business asset used for test and development and data analytics, as a critical factor in their evaluation of next steps in data management.
Meanwhile, the use of traditional backup/recovery software alone is steadily decreasing as more than 50 percent of midsize and large enterprises are using array-based snapshots in conjunction with replication and, in many cases, elements of traditional backup/recovery software, such as cataloguing and indexing. This market segment is being driven by end users’ demands for deriving more business value from backup data.
Data, however, continues to rank high in the pain stakes. Each of the following points is a familiar tune: even the small office/home office (SOHO) will recognise at least three of the top five, while medium enterprises and upward struggle with all of them to a greater or lesser degree:
● Exceeding the backup/recovery window;
● Data growth;
● Managing backup hardware/ software;
● Tape management; and
● Defining a retention policy.
Twenty-two percent of participants in a recent survey say backup redesign is a storage project priority, and another 13 percent say redesigning their disaster-recovery (DR) procedures is a priority. Taken together, 35 percent of organisations plan to redesign their data-protection procedures and infrastructure, in most instances to alleviate the pain and cost of solutions that deliver little visible ROI and a continued high TCO.
The modern paradigm: Virtualisation and Cloud
Virtualisation and the inevitable use of cloud infrastructure are a reality for enterprises, and even for organisations well below that scale. Virtualisation has developed in large part to meet growing needs for data expansion and performance scalability, and as such has exacerbated the data-management and protection problems. Backup/recovery tools that specialise in protecting only virtualised environments have made their appearance, adding more applications, more complexity and more specialised skills requirements to manage that complexity. Suffice it to say the problems with existing backup-and-recovery methods are essentially the same in the virtualised data centre as in the non-virtualised one.
Technologies such as incremental backups, data de-duplication and compression address some of the problems, but not the important issue of recovery. De-duplicated data often has to be rehydrated before it can be restored, and this is time-consuming because data must be located and recovered in large segments rather than as granular, specific files before rehydration can even begin, which can negatively affect the ability to meet RTO requirements. A minimal sketch of this process follows.
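The sketch below illustrates chunk-based de-duplication and the rehydration step in generic terms; the chunk size, store layout and function names are assumptions made for the example, not any particular vendor's implementation.

```python
# Minimal sketch of chunk-based de-duplication and rehydration.
import hashlib

CHUNK_SIZE = 4096          # assumed fixed-size chunks
chunk_store = {}           # hash -> chunk bytes (the de-duplicated pool)

def backup(data: bytes) -> list[str]:
    """Split data into chunks, store each unique chunk once, and
    return the recipe (ordered list of chunk hashes) for this backup."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # duplicates stored only once
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list[str]) -> bytes:
    """Recovery reassembles every chunk in the recipe in order, even if
    only one file inside the backup is actually needed; this full
    reassembly is what stretches recovery times."""
    return b"".join(chunk_store[digest] for digest in recipe)

# Usage: back up twice with mostly unchanged data, then restore.
first = backup(b"A" * 10000 + b"tail-1")
second = backup(b"A" * 10000 + b"tail-2")   # shares most chunks with `first`
restored = rehydrate(second)
```

The space saving comes from storing shared chunks once; the recovery cost comes from having to pull and reassemble whole recipes rather than individual files.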
What about Cloud? As more companies evolve their on-premises infrastructure to private clouds and build more applications in the public cloud, infrastructure needs to adapt. Companies need to protect, manage and secure their data at a granular level, with more intelligence, to capture the economics of cloud.
Snapshots
Most of the newer, more innovative approaches to data protection have their roots in snapshots. The term ‘snapshots’ here refers to disk-array-based snapshots, sometimes referred to as hardware-based snapshots, SAN-based snapshots or storage snapshots. Snapshots are not true copies of data. They are ‘virtual copies’ that keep track of changes made to a base copy of a volume, file or file system over time by using metadata pointers or reference markers. Essentially, snapshots allow users to roll back a file, application, virtual machine (VM), and so on, to a previous point in time. The key benefits of using snapshots, compared to traditional backup software, are:
● considerably faster backups; and
● more significantly, faster restores, in part because the snapshots reside on high-performance primary storage systems.
In addition, most snapshot techniques are very space-efficient. A minimal sketch of the pointer-based, copy-on-write mechanism follows.
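The following sketch shows how a copy-on-write snapshot can preserve a point in time without copying data up front, and why rollback is near-instant. The class, block layout and method names are illustrative assumptions, not any particular array's implementation.

```python
# Minimal sketch of a pointer-based (copy-on-write) snapshot.
class Volume:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks        # live data: block number -> contents
        self.snapshots = []         # each snapshot is a dict of preserved blocks

    def take_snapshot(self) -> int:
        """A snapshot starts as an empty set of pointers; it costs almost
        nothing because no data is copied at this point."""
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, block_no: int, data: bytes) -> None:
        """On the first write after a snapshot, preserve the old block for
        that snapshot (copy-on-write), then overwrite the live copy."""
        for snap in self.snapshots:
            snap.setdefault(block_no, self.blocks.get(block_no))
        self.blocks[block_no] = data

    def roll_back(self, snap_id: int) -> None:
        """Restore is near-instant: put the preserved blocks back in place."""
        for block_no, old_data in self.snapshots[snap_id].items():
            self.blocks[block_no] = old_data

# Usage: snapshot, change a block, then roll back to the earlier point in time.
vol = Volume({0: b"base-0", 1: b"base-1"})
snap = vol.take_snapshot()
vol.write(1, b"changed")
vol.roll_back(snap)          # block 1 is b"base-1" again
```

Because only changed blocks are preserved, snapshots are space-efficient, but they consume primary storage capacity as change rates grow, which is one of the drawbacks noted later.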
So are snapshots the complete solution? No, they don’t provide the functionality that users expect from traditional backup applications. Combining snapshots and replication with elements of traditional backup/recovery software, such as indexing, cataloguing and scheduling, brings forth a more holistic solution. Snapshots enable rapid (almost instantaneous) recovery, while backup software provides affordable retention, catalogue-based searches and restores. These hybrid systems are able to resolve the two critical pain points of RTO and RPO, as well as boosting efficiencies in meeting SLA objectives. A further benefit is that the right choice of product could lead to further resource efficiencies and reduced staffing requirements, with associated financial savings. A minimal sketch of catalogue-based search over snapshots follows.
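To illustrate how a catalogue layers file-level search on top of snapshots, here is a minimal sketch. The catalogue entries, paths and snapshot identifiers are assumptions made for the example only.

```python
# Minimal sketch of a backup catalogue over snapshots: the catalogue provides
# searchable, file-level restore points while the snapshot supplies the data.
from datetime import datetime

# catalogue entries: (file path, snapshot id, point in time)
catalogue = [
    ("/db/orders.mdf", "snap-0001", datetime(2024, 5, 6, 2, 0)),
    ("/db/orders.mdf", "snap-0002", datetime(2024, 5, 6, 14, 0)),
    ("/home/reports/q1.xlsx", "snap-0002", datetime(2024, 5, 6, 14, 0)),
]

def find_restore_point(path: str, before: datetime):
    """Catalogue-based search: return the newest snapshot holding `path`
    that was taken before the requested point in time."""
    candidates = [e for e in catalogue if e[0] == path and e[2] <= before]
    return max(candidates, key=lambda e: e[2], default=None)

# Usage: locate the snapshot to restore a single file from; the array would
# then expose that snapshot for a near-instant, granular restore.
hit = find_restore_point("/db/orders.mdf", datetime(2024, 5, 6, 12, 0))
print(hit)   # -> snapshot "snap-0001", taken at 02:00
```

The catalogue supplies the searchability and retention management of traditional backup software, while the snapshot behind each entry supplies the speed.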
Snapshots also have minimal or no impact on the performance of production servers, and they enable application owners (for instance, database administrators and VM administrators) to handle backup and recovery tasks, thus reducing backup-specific staffing requirements. We see immediate as well as longer-term TCO reductions.
However, as with everything there can be drawbacks:
● Making the wrong application decisions could lead to high costs and complexity that are best avoided.
● Inefficient snapshot technology could lead to large volumes of expensive primary storage capacity being used.
Converged and hyper-converged solutions address these issues in next-generation architectures, finally bringing strategic data management solutions in line with modern data centre delivery capability. Moreover, solutions that have been architected with the Cloud in mind should be able to manage and orchestrate data quickly, intelligently and efficiently from on-premises to cloud and back.
Rubrik, the new standard in data management
As has been established, instantaneous application recovery and data delivery remain elusive challenges for any IT organisation. Businesses demand instant recovery from failures, quick access to test and development resources, and the latest data for business intelligence and analytics. Yet to solve data management challenges today, IT has to integrate multiple point technologies built on a legacy architecture of backup software and backup storage, resulting in long deployment cycles, management complexity, an inability to meet SLAs and a lack of scalability.
At Networks Unlimited we offer our customers Rubrik solutions, which redefine how data can be simply managed across public and private Clouds for: data protection, disaster recovery, archival for compliance and long-term retention, application development, and data analytics.
The Rubrik converged data management platform incorporates modern design principles:
● Software convergence: We distil the physically disparate components that comprise a multi-tiered, legacy backup and recovery architecture into a single piece of software.
● Simplicity: We solve for ease of use through simplicity. For example, we design our user interface to display only information that requires user attention, reducing cognitive overload.
● Web-Scale: We adopt the same web-scale technologies used by Google, Facebook, and Amazon, allowing our users to easily handle rapidly increasing volumes of information by adding more appliances to the cluster.
● Efficiency: We build intelligence into our software to help users efficiently manage data without incurring unnecessary costs (for example, zero-byte cloning to save on storage capacity, sending only de-duplicated data to the public cloud to reduce data transfer and storage) and labour (for example, file search across a global index that spans private and public clouds).
Rubrik pioneers a radically innovative approach to data management by distilling formerly discrete, complex components that required manual stitching into a single, elegant software fabric packaged with industry-standard hardware. Thus, our customers experience unprecedented simplicity, ease of use, and substantial cost savings.
Staff Writer