Blog

How Zmanda Manages Your Petabyte-Scale Storage, Backup, & Restore

One of the biggest challenges in IT today is handling voluminous workflows: petabytes of technical and rich content spread across thousands of file systems and billions of files. This massive volume of data must be backed up, restored, and protected securely for decades or even longer. But as data growth runs unchecked into multiple petabytes, backup admins need new ways to offload it, because backing up a petabyte of new data can take weeks, and restoring it takes even longer. And time is not the only concern. Imagine the worst: storage subsystems go out of maintenance, physical storage media degrade with age, and magnetic tape disintegrates as it decomposes. Sadly, no storage medium lasts forever.

You might think that additional machines and network resources will save the day, but unfortunately, a more permanent fix is still required: when it comes to backing up and restoring workloads at this scale, the performance of scale-out or scale-up architectures is often full of compromises.

At this juncture, how would you meet the specific demands of a petabyte-scale backup and storage target, namely:

  • Scaling throughput out or up to support multiple incremental backup and restore streams without any downtime;
  • Keeping data available and intact with tighter security configurations and controls; and
  • Making recovery quick and worry-free.

This article delves into the challenges of petabyte-scale computing and how Zmanda, a comprehensive, robust, and efficient backup and restore solution, consolidates workloads and achieves real-time availability while drastically reducing management costs. You will also explore three powerful ways the backup and recovery solution reduces storage space usage significantly while leaving more bandwidth for your production tasks.

But before diving into the challenges of backing up and storing massive data securely, it’s worth getting a sense of just how big this plethora of data is and the challenges that come with it.

So How Big Is a Petabyte of Storage?

One petabyte = 1,024 terabytes (TB) of data.

That is, roughly one billion times larger than a standard megabyte.

And a petabyte of unstructured data can quickly balloon 5x, 10x, or more! Imagine that you manage company data for about 400 employees whose desktops and laptops generate more than 100 GB of data every day. Given that the PC fleet can hold on the order of 150 TB of data, a backup admin would need to provision at least 75 TB for every full backup, plus enough space for incremental and differential backups. That adds up to a petabyte of storage for PC backups alone.
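To make that sizing concrete, here is a quick back-of-the-envelope calculation in Python. The employee count, daily data volume, and full-backup size come from the example above; the weekly schedule of one full backup plus six incrementals is an assumption added purely for illustration.

```python
# Back-of-the-envelope backup sizing. Figures marked "from the example" come
# from the text above; the weekly schedule is an assumption for illustration.
GB_PER_TB = 1024
TB_PER_PB = 1024

employees = 400
daily_data_gb = 100          # new data per employee per day (from the example)
full_backup_tb = 75          # provisioned for each full backup (from the example)
incrementals_per_week = 6    # assumed: one incremental on each non-full-backup day

daily_growth_tb = employees * daily_data_gb / GB_PER_TB       # ~39 TB/day
weekly_storage_tb = full_backup_tb + incrementals_per_week * daily_growth_tb

print(f"New data per day:        {daily_growth_tb:.1f} TB")
print(f"Backup storage per week: {weekly_storage_tb:.1f} TB")
print(f"Weeks to fill 1 PB:      {TB_PER_PB / weekly_storage_tb:.1f}")
```

Under these assumptions, the backup target absorbs roughly 300 TB per week, so a full petabyte of backup capacity is consumed in a little over three weeks.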

And that’s only a part of the story.

What’s more, some mind-bending stats project data growth all the way to the exabyte level:

  • By 2027, the big data market is projected to reach US$103 billion.
  • In 2020, every person generated an average of 1.7 megabytes of data per second.
  • Every day, 306.4 billion emails are sent.
  • In 2021, we generated 2.5 quintillion bytes of data daily.
  • Data is growing at a CAGR of 10.6%.
  • By 2025, the world will be creating 463 exabytes of data each day.
[Figure: Unstructured data growth spanning tens of petabytes. Image credit: http://datanami.com]

So, you think you can calculate the cost one gigabyte at a time, or that you know what 5 petabytes of data costs? Get realistic: things get much more complicated once performance metrics such as write speed, IOPS, interface throughput, and uptime enter the picture. In fact, the problem runs much deeper than it seems.

The Need: A Deeper Look Into the Problem

With data center storage growing at a compound annual rate of 50-80%, many enterprises are looking for alternatives. There are many variables to consider beyond the obvious challenges, some of which are listed below:

  • NAS and SAN storage were never meant for petabyte-scale workloads.
  • While you can pack more capacity into traditional flash- or disk-based storage, that only makes sense for systems with 24 or 48 drives. For a system holding several thousand drives, storing multiple petabytes on flash without any performance impact is a tall order.
  • The biggest issue is scale-out performance for lots of fast petabytes. Add to that the rising demands for operational smoothness, faster IOPS, more bandwidth, and greater reliability, all of which require careful consideration.
  • Randomized access patterns to datasets for computational workloads become a challenge if querying the data is not supported.
  • Local and distributed storage performance calls for multi-petabyte scale-out infrastructures spanning multiple connections, protocols, and devices that consume and produce data without compromising efficiency and productivity. This is complex to achieve.
  • For systems that scale to petabytes, finding an infrastructure designed and architected to meet scalability, reliability, and availability requirements is a big deal.

It’s time for enterprises to get prepared with the right infrastructure and support to handle this massive amount of data distributed across different environments and systems. But does that mean you need a rip-and-replace approach to data storage, restoration, backup, and security management? Not necessarily, because automated storage tiering is a thing of the past. What you really need is to move the data for backup and recovery around the infrastructure in real time. Thankfully, there is a solution for this.

The Solution: How Zmanda Copes With the Server Load

Reducing the network load and storage costs with Backup Deduplication

This is where Zmanda’s data compression and data deduplication technology comes in. The infrastructure is architected to perform deduplication that maximizes capacity savings while reducing the number of bytes transferred between two endpoints.

  • Data Compression – Given petabyte-scale storage sizes, Zmanda integrates the LZO and LZMA algorithms, which improve the performance of large-scale data analytics to a great extent. You can achieve high compression ratios to store data effectively, saving storage space and reducing retrieval costs and transfer overheads.
  • Data Deduplication – Backups often contain multiple redundant copies of files or VM data blocks that need to be eliminated to free up space and avoid wasting time and resources. With data deduplication enabled, the Zmanda backup and recovery solution employs block-level and file-level deduplication, which identifies duplicate units of data and stores only the unique ones. This makes a backup admin’s life a lot easier: less data is transferred, so more space is left for incremental and differential backups. The result? You save on storage costs because you don’t need deduplication-specific hardware, and you can back up many similar VMs. (A minimal sketch of this idea appears after the figure below.)
[Figure: Deduplication process, in which redundant files are removed during data deduplication]
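To make the two bullets above concrete, here is a minimal, hypothetical sketch of block-level deduplication combined with compression. It is not Zmanda's implementation; it only illustrates the general technique of hashing fixed-size blocks, storing each unique block once, and compressing it (using LZMA from the Python standard library, since LZO has no stdlib binding). The `store` object stands in for whatever key/value backend a real product would use.

```python
import hashlib
import lzma

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (illustrative; real products tune this)

def dedup_and_compress(stream, store):
    """Split a backup stream into blocks, keep only unique ones, compress them.

    `store` is any dict-like object mapping block digests to compressed blocks.
    Returns the ordered list of digests needed to reassemble the stream.
    """
    recipe = []
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:                   # block already stored? skip the write
            store[digest] = lzma.compress(block)  # LZMA from the standard library
        recipe.append(digest)
    return recipe

def restore(recipe, store, out):
    """Rebuild the original stream from the recipe and the block store."""
    for digest in recipe:
        out.write(lzma.decompress(store[digest]))
```

Because two VMs cloned from the same template share most of their blocks, a second backup adds only the blocks that differ, which is where the storage and bandwidth savings come from.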
Restoring voluminous data in less than a second
  • Indeed, a lengthy restore window for backup data on a petabyte-scale server distributed across multiple locations is painful. This is where Zmanda backup and recovery technology drastically decreases the time it takes to restore data across regions. The Zmanda backup engine lets you run a large fleet of MySQL servers using a storage snapshot backup feature to create consistent MySQL full backups. That is to say, you can create a regular reproduction of the MySQL database using third-party snapshots in less than a second, not minutes, regardless of database size.
  • As an IT admin, using a filesystem, volume manager, or disk snapshot mechanism, you can perform a backup with minimal impact on MySQL applications. You can even leverage the snapshot capabilities of supported systems such as Veritas File System, EMC CLARiiON, the ZFS filesystem, and Linux Volume Manager (LVM) to create temporary point-in-time snapshots, which are highly suitable for backing up large databases. (A generic sketch of this flow follows this list.)
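The LVM-based flow described above generally looks like the following hedged sketch: briefly quiesce MySQL, take a near-instant snapshot, release the lock, then copy the snapshot at leisure. The volume names, mount point, credentials, and the use of the PyMySQL client library are placeholders for illustration, not Zmanda's actual commands.

```python
import subprocess
import pymysql  # any MySQL client library works; PyMySQL keeps the example short

# Volume group, logical volume, mount point, and credentials are placeholders.
VG, LV, SNAP = "vg0", "mysql-data", "mysql-snap"
MOUNT_POINT = "/mnt/mysql-snap"
BACKUP_TARGET = "/backups/mysql/"

def run(cmd):
    subprocess.run(cmd, check=True)   # raise if any step fails

def snapshot_backup():
    conn = pymysql.connect(host="localhost", user="backup", password="secret")
    try:
        with conn.cursor() as cur:
            # Quiesce writes just long enough to take the snapshot.
            cur.execute("FLUSH TABLES WITH READ LOCK")
            # Creating the snapshot is near-instant, so the lock is brief.
            run(["lvcreate", "--snapshot", "--size", "10G",
                 "--name", SNAP, f"/dev/{VG}/{LV}"])
            cur.execute("UNLOCK TABLES")
    finally:
        conn.close()

    # Copy the consistent point-in-time image at leisure, then drop the snapshot.
    run(["mount", f"/dev/{VG}/{SNAP}", MOUNT_POINT])
    try:
        run(["rsync", "-a", f"{MOUNT_POINT}/", BACKUP_TARGET])
    finally:
        run(["umount", MOUNT_POINT])
        run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"])
```

The key design point is that the database is locked only for the seconds it takes to create the snapshot, while the slow bulk copy happens afterwards against the frozen snapshot image.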
Reducing recovery time to less than 20 seconds

According to an IDC survey, downtime can cost an enterprise more than $1 million per hour, and a small business $20,000 an hour. But with Zmanda’s instant backup, you can reduce RTOs and RPOs significantly, from hours to seconds! Additionally, you can restore petabyte-scale data with 20X faster storage snapshots for near-real-time RPO.

  • Built on a native code architecture that provides near-real-time RPO, Zmanda Recovery Manager for MySQL running in VMs outperforms other backup solutions by shrinking the backup window with storage snapshots. This leverages instant VMware backups, and you can reduce server recovery time to 15-20 seconds.
  • The integration of vStorage APIs, which let you communicate directly with VMware ESXi host (hypervisor) servers, facilitates live storage migration. Here, not only do you take advantage of flexible vStorage data protection, but you can also recover a raw backup image 20X faster than a logical backup.

Zmanda: Meeting the Challenges

Understandably, petabytes are huge: a single petabyte holds more than one quadrillion bytes of data. Some of the best advantages of the Zmanda management console for managing petabytes of storage are:

  • Zmanda has successfully restored several petabytes of data storage for customers and runs backups of application data with very significant daily incrementals, all managed from a single centralized management console. With the integration of Amazon S3 and Amazon Glacier, you can keep incredible amounts of ‘cold’ (infrequently accessed) data for a long time at no extra setup cost. (A minimal lifecycle-tiering sketch follows this list.)
  • Zmanda instantaneous backup and restore is the only technology that helps you lower the RTO to seconds.
  • Zmanda uses cloud-native or client-native tools for backups and restores, so you have less to worry about when orchestrating the infrastructure via code. If an outage occurs or something fails, you are notified instantly, and the fully automated back end accelerates time to recovery.
  • Such an architectural style not only accelerates application delivery but also ensures 24/7 availability of your mission-critical applications, allowing you to respond rapidly to market conditions.
  • With a unique combination of block, file, and object workloads within a single solution, Zmanda continues to push the limits of petabyte-scale storage capacity. By restoring only the blocks that have changed since the last full backup, Zmanda intelligently reduces backup time to minutes without extending downtime. The Zmanda enterprise backup solution supports several block storage platforms, such as tape, tape libraries, disks (DAS, NAS & SAN backup, file servers, RAID), and optical jukeboxes. For object-based storage architectures, Zmanda offers infinite capacity for scale-out object storage infrastructure, including Amazon’s Simple Storage Service (S3) and S3-compatible service providers, PostgreSQL backup and storage, OpenStack Swift, Azure object storage, and Google Cloud object storage.
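As an illustration of the S3/Glacier tiering mentioned in the first bullet, the following hedged sketch uses boto3 to transition aging backup objects to colder storage classes. The bucket name, prefix, and retention windows are placeholders; the snippet shows only the underlying idea of lifecycle tiering, not how Zmanda configures it.

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Bucket name, prefix, and retention windows are illustrative placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Move backups that are rarely read to colder, cheaper storage.
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Optionally expire very old backups entirely (~7 years here).
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```

The savings come from the fact that cold backup data is written once and read almost never, so it can sit in archival tiers at a fraction of the cost of hot object storage.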

Get Started With Zmanda

Putting all the pieces together, the whole becomes much larger than its parts. Zmanda’s robust and feature-rich backup and recovery suite offers a powerful platform for service providers and customers alike to ensure reliable storage for petabyte-scale backup and data protection.

With a plethora of automated, modern storage systems and state-of-the-art technologies that improve uptime, optimize storage consumption, and reduce your Total Cost of Ownership (TCO), Zmanda has got your back.