Archive for the ‘Cloud Backup’ Category

Amanda Enterprise 3.3.5 Release with Hyper-V and Vaulting Support

Friday, December 6th, 2013

Our objective with Amanda Enterprise is to continue to widen its umbrella to protect the most common IT infrastructure elements deployed in the modern data center. We also seek to support the various workflows for secondary and tertiary data desired by IT administrators. The new release of Amanda Enterprise (3.3.5) delivers two key features toward these objectives.

Hyper-V Support

Hyper-V is a native hypervisor that is getting increasingly popular, especially in Windows Server-centric IT shops. Amanda Enterprise now supports image-level backup of live VMs running on Hyper-V. We support Hyper-V installed as a role on either Windows Server 2012 or Windows Server 2008.

One license for the Zmanda Windows Client and Zmanda Client for Microsoft Hyper-V enables you to back up an unlimited number of VMs on one hypervisor. You run the Zmanda Windows Client on the hypervisor and configure backups of VMs via the Zmanda Management Console (ZMC, the GUI for Amanda Enterprise). ZMC discovers the VMs running on each hypervisor, and you can choose whether to back up all VMs or a selected set on each hypervisor.

A full, consistent image of the VM is backed up to the media of your choice: disk, cloud storage, or tape.

Of course, you can continue to install and run separately licensed Zmanda Clients within your virtual machines (guests) to perform finer-grained backups. This enables you to recover from smaller disasters (like accidental deletion of a file) without recovering the whole VM. Consider having different schedules for your file-level or application-level backups and your VM-level backups; for example, file-level backups may be performed daily and VM-level backups each weekend.

Vaulting Support

While many of our customers have already implemented disk-to-disk-to-cloud (d2d2c) or disk-to-disk-to-tape (d2d2t) data flows using our products, we have formally introduced Vaulting as a feature supported via Zmanda Management Console. There are two key use cases supported by this feature:

  • Backup to disk and then Vault to lower cost disk or remote disk (via NFS)
  • Backup to disk and then Vault to cloud or tape for offsite storage

Our implementation allows very flexible media duplication (e.g., you can vault from multiple disk volumes to a smaller number of physical tapes). You can also choose to vault only the latest full backups for off-site archiving purposes. Any storage device supported by Amanda Enterprise can be a source or destination for vaulting (e.g., one backup set can vault to a tape library while another backup set on the same backup server vaults to cloud storage). The backup catalog of Amanda Enterprise keeps track of the various backup copies and allows restoration from either the backup media volumes or the vaulted volumes.

Note that vaulting is most useful when you want to duplicate your backup images. If you instead want a data flow that periodically moves your backups, e.g. from disk to cloud, you can simply use the staging area in Amanda Enterprise and treat the staging-area storage as your first backup location.

In addition to the above new features, Amanda Enterprise 3.3.5 includes several bug fixes and usability improvements. If you are interested in learning more, you can join one of our upcoming webinars (the next one is scheduled for December 18th): http://zmanda.com/webinars.html

If you are already on the Amanda Enterprise platform, you can discuss upgrading to 3.3.5 with your Zmanda support representative. If you are interested in evaluating Amanda Enterprise for your organization, please contact us at zsales@zmanda.com

Introducing ZCB 4.5: We continue to make it better!

Sunday, March 24th, 2013

We’re excited to inform you that our latest release of Zmanda Cloud Backup – ZCB 4.5 – is now available. It brings many great new features along with usability and performance improvements; below are some of the highlights for our customers:

Hello Hyper-V servers!

Now you can use ZCB to protect your guest virtual machines running on a Hyper-V 2008 Server or Hyper-V 2012 Server. You can back up specific VMs or all of them in a single backup set.

Hyper-V

Both “Saved State” and “Child VM Snapshot” backup mechanisms are supported. Since the latter method doesn’t cause any downtime for the guest VM, ZCB prefers it. You can also disable the “Saved State” method entirely by simply checking the “Backup a running VM only if its hot backup can be performed” checkbox.

In the event of a disaster, restoring your guest VM(s) is easy. You can restore a VM to the same Hyper-V server or a different one. All you need to do is open ZCB on the target Hyper-V server, select the backup you want to restore and click “Restore.” ZCB will take care of the rest.

To get started, log in to your Zmanda account and download the latest 4.5 version from the Download tab!

Introducing Near-Continuous Data Protection (CDP) of SQL Server

With version 4.5, ZCB now supports incremental (log-based) backups of SQL Server. This helps a great deal: log backups record every individual change to the database and hence provide the ability to restore the database to any historic point in time, regardless of when the backups actually ran.

Why do we call this near-CDP and not CDP? The log backups are not fired automatically upon each change event; they are still schedule-based. While this minimizes the backup overhead on your CPU, memory, and bandwidth, it also means that you can only restore to a point in time covered by a log backup that has already completed. That said, you can always choose a higher frequency for log backups (for example, every 15 minutes) to keep this as close to a true CDP system as you want.

Here is how one can specify any point in time for restore:

point in time for restore

While we’re discussing SQL Server backups, there is some more good news: ZCB now performs Virtual Device Interface (VDI) based differential backups that can be significantly more compact and efficient than our earlier VSS-based differential backups!

To see these improvements in action, log in to your Zmanda account and download the latest 4.5 version from the Download tab!

Better performance and usability

Performance and usability are always top of mind for us when it comes to development and in ZCB 4.5, you’ll see some great enhancements, including:

Better network fault tolerance: The era of dial-up connections may be over, but some of our customers still report temporary network outages. We have been following a “learn-fix-test-learn” approach to counter the network issues users may face and have already added several defense mechanisms in past ZCB releases. Taking that approach further, we have now added another fail-safe mechanism to auto-detect and resume tasks that may still fail despite all these measures.

Proactive validation of backup sets: New ZCB users looking to back up Exchange or SQL Server will benefit from a proactive validation mechanism in ZCB 4.5. When you create this type of backup set and go to save it, ZCB will now pop up a message if it detects a problem with your backup environment. Here is an example:

Proactive validation of backup sets

ZCB Global Dashboard improvements: Our dashboard team has been busy rolling out new features at a fast pace. Two of the most recent and powerful features are:

Delete backups from cloud

For our customers backing up to Google Cloud Storage (we’ll update customers using Amazon S3 on this feature in the near future), you can now delete backups from the cloud. Whether you want to make some room in the cloud or simply want to clear out old or unnecessary backups, you can quickly delete the backup runs using the dashboard from anywhere (screenshot below).

Track backup retention periods

The ability to specify the exact retention duration of cloud backups is one of the most valued features of ZCB. From a compliance or backup strategy standpoint, this control is absolutely necessary.

With ZCB 4.5, you can now monitor the retention period of all cloud backups on the dashboard. If a backup is going to be removed by ZCB in the next 7 days (per your retention policy, for example), it is highlighted to get your attention.

dashboard
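The highlighting rule can be sketched as follows. This is a hypothetical illustration of the logic only; the catalog format, function name, and fixed 7-day window are assumptions for the example, not ZCB's actual implementation:

```python
from datetime import datetime, timedelta

def expiring_soon(backups, now, window_days=7):
    """Return backups whose retention expires within the next `window_days`.

    `backups` is a list of (name, backup_time, retention_days) tuples --
    a hypothetical catalog layout used only for this illustration.
    """
    soon = []
    for name, backup_time, retention_days in backups:
        expiry = backup_time + timedelta(days=retention_days)
        if now <= expiry <= now + timedelta(days=window_days):
            soon.append((name, expiry))
    return soon

catalog = [
    ("sql-full-jan", datetime(2013, 1, 1), 90),   # expires April 1
    ("files-weekly", datetime(2013, 3, 1), 30),   # expires March 31
]
# Both backups fall inside the 7-day warning window and get flagged.
print(expiring_soon(catalog, now=datetime(2013, 3, 26)))
```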

How are we doing?

As we continue to focus on making ZCB more useful for our customers, we really need to know how you’re using our service and how we can further improve to meet your needs. Please continue to send your comments and feature requests to zcb@zmanda.com.

Also, if you are a ZCB customer and would like to be kept up-to-date on product enhancements, please consider subscribing to this blog.

Looking forward to hearing from you soon!

-Nik

Amanda Enterprise 3.3 brings advanced backup management features

Wednesday, March 20th, 2013

Built on extensive research and development, combined with active feedback from a thriving open source community, Amanda Enterprise (AE) 3.3 is here! AE 3.3 has significant architecture and feature updates and is a robust, scalable and feature-rich platform that meets the backup needs of heterogeneous environments, across Linux, Windows, OS X and Solaris-based systems.

As we worked to further develop Amanda Enterprise, it was important to us that the architecture and feature updates would provide better control and management for backup administration.  Our main goal was to deliver a scalable platform which enables you to perform and manage backups your way.

Key enhancements in Amanda Enterprise include:

Advanced Cloud Backup Management: AE 3.3 now supports use of many new and popular cloud storage platforms as backup repositories. We have also added cloud backup features to give users more control over their backups for speed and data priority.

Backup Storage Devices Supported by Amanda Enterprise 3.3

Platforms supported now include Amazon S3, Google Cloud Storage, HP Cloud Storage, Japan’s IIJ GIO Storage Service, and private and public storage clouds built on OpenStack Swift. Notably, AE 3.3 supports all current Amazon S3 locations, including various locations in the US (including GovCloud), the EU, Asia, Brazil and Australia.

Cloud Storage Locations Supported by Amanda Enterprise

In addition to the new platforms, you can now control how many parallel backup (upload) or restore (download) streams you want, based on your available bandwidth. You can even throttle upload or download speeds at the backup set level; for example, you can give higher priority to the backup of your more important data.

Optimized SQL Server and Exchange Backups: If you are running multiple SQL Server or Exchange databases on a Windows server, AE 3.3 allows selective backup or recovery of an individual database. This enables you to optimize the use of your backup resources by selecting only the databases you want to back up, or to improve recovery time by enabling recovery of a selected database. Of course, the ability to do an express backup and recovery of all databases on a server is still available.

To further streamline configuration, the Zmanda Management Console (the GUI for Amanda Enterprise) now automatically discovers the databases on a given Windows server, allowing you to simply pick and choose those you want to back up.

Improved Virtual Tape and Physical Tape Management: Our developers have done extensive work in this area to enhance usability, including seamless management of available disk space. With extensive concurrency added to the Amanda architecture, you can eliminate the staging disk for backup-to-disk configurations: AE 3.3 writes parallel streams of backups directly to disk without going through the staging disk. You can still optionally configure a staging disk for backups to tape or cloud to improve fault tolerance and data streaming.

Better Fault Tolerance: When backing up to tapes, AE 3.3 can automatically withstand the failure of a tape drive. Simply configure a backup set to use more than one tape drive in your tape library; if one of the drives is unavailable, AE will automatically start using one of the remaining drives.

NDMP Management Improvements: AE 3.3 allows for selective restore of a file or a directory from a Network Data Management Protocol (NDMP) based backup. Now, you can also recover to an alternative path or an alternative filer directly from the GUI. Support for compression and encryption for NDMP based backups has also been added to the GUI. Plus, in addition to devices from NetApp and Oracle, AE now also supports NDMP enabled devices from EMC.

Scalability, Concurrency and Parallelism: Many more operations can now be executed in parallel. For example, you can run a restore operation while active backups are in progress. Parallelism has also been added across operations, including backups to disk, cloud and tape.

Expanded Platform Support: Our goal is to provide a backup solution that supports all of the key platforms deployed in today’s data centers. We have updated AE 3.3 to support the latest versions of Windows Server, Red Hat Enterprise Linux, CentOS, Fedora, Ubuntu, Debian and OS X. With AE, you have the flexibility to choose the platforms best suited for each application in your environment – without having to worry about the backup infrastructure.

Want to Learn More?

There are many new enhancements to leverage! To help you dive in, we hosted a live demonstration of Amanda Enterprise 3.3. The session provides insights into best practices for setting up a backup configuration for a modern data center.

ZCB 4.4 out for download. How to say “cloud backup” in Australian?

Wednesday, November 28th, 2012

We kicked off the holiday season this year with our latest release - ZCB 4.4. Here is a quick overview of what’s new.

Hello Windows Server 2012 and Windows 8!

This release of ZCB fully supports Windows Server 2012 and Windows 8. All application and system state backups are supported. So go ahead and protect your investment in your latest Windows systems!

Super-fast local backups

While ZCB’s network performance has been quite trendsetting (some of our customers have reported uploads at more than 130 Mbps in their environments!), we recently set our sights on the performance of local backups. This is an important area for ZCB since, unlike many other backup services, it supports unlimited local backups to local, external, and network drives.

I’m happy to share that our engineering team has been able to significantly improve the performance of local backups in ZCB 4.4. Here is a chart to show a comparison:

Note that to reap the full benefits of this improvement, you will need to turn off compression; otherwise the CPU load may become the bottleneck.

User-friendly restore of AES 256 bit encrypted backups

If you lose your encryption key after using it to back up your data, ZCB 4.4 allows you to recreate the decryption key for restore. All you need is the passphrase you used earlier. Of course, you do need to remember that passphrase to take advantage of this feature!
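Passphrase-derived keys make this kind of recovery possible because the key can be regenerated deterministically. Here is a minimal sketch of the general technique using PBKDF2 from Python's standard library; it illustrates the principle only and is not ZCB's actual key-derivation scheme (the salt, iteration count, and function name are assumptions for the example):

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256.

    The same passphrase and salt always produce the same key, which is
    what makes recreating a lost key from the passphrase possible.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=32)

# The salt is not secret; in practice it would be stored with the backup metadata.
salt = b"example-salt"
key1 = derive_key("correct horse battery staple", salt)
key2 = derive_key("correct horse battery staple", salt)
assert key1 == key2          # deterministic: the key can be recreated
assert len(key1) == 32       # 32 bytes = 256 bits, suitable for AES-256
```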

Here is the option to create the decryption key on the Restore page of the ZCB user interface:

A new Amazon region - Asia Pacific (Sydney)

If you are in or near the Australian region and would like to back up to a local data center for compliance and performance reasons, there is good news: ZCB now supports the brand new Amazon S3 region in Sydney.

And as a welcome gesture, like always, we have made usage of this new region absolutely free until December 31, 2012! So purchase ZCB today and give it a spin!

We are excited to hear about our customers’ experience with this new version. As always, please let us know your comments and feature requests at zcb@zmanda.com.

If you are a ZCB customer and would like to see our product updates, please consider subscribing to this blog and our tweets.

Thank you!
-Nik

Zmanda - A Carbonite Company

Wednesday, October 31st, 2012

I am very excited to share that today Zmanda has combined forces with Carbonite - the best known brand in cloud backup. I want to take this opportunity to introduce you to Carbonite and tell you what this announcement means to the extended Zmanda family, including our employees, customers, resellers and partners.

First, we become “Zmanda - A Carbonite Company” instead of “Zmanda, Inc.” and I will continue to lead the Zmanda business operations. Carbonite will continue to focus on backing up desktops, laptops, file servers, and mobile devices. Zmanda will continue to focus on backup of servers and databases. Carbonite’s sales team will start selling Zmanda Cloud Backup directly and through its channels. Since Carbonite already has a much larger installed base of users and resellers, our growth should accelerate considerably next year which will allow us to innovate at an even higher level than before. Zmanda’s direct sales team and resellers will continue to offer the Zmanda products they respectively specialize in.

I’ve gotten to know Carbonite over the last few months and I am very impressed with their organization and am looking forward to joining the management team. One of the things that attracted me to Carbonite was its commitment to customer support. Carbonite has built a very impressive customer support center in Lewiston, Maine, about a two hour drive north of their Boston headquarters, where it now employs a little over 200 people. We’ll be training a technical team in Maine to help us support Zmanda Cloud Backup, and of course we’ll also be keeping our support teams in Silicon Valley and in Pune, India for escalations and support of Amanda Enterprise and Zmanda Recovery Manager for MySQL. Please note that at this point, all our current methods of connecting with customer support including Zmanda Network, will continue as is.

Another thing that makes Carbonite a good fit for us is its commitment to ease of use. Installing and operating Carbonite’s backup software is as easy as it gets. We share this goal, and we hope to learn a thing or two from the Carbonite team on this front as we continue to build on an aggressive roadmap across all our product lines.

We’ve worked hard to make Zmanda products as robust as possible. Our technologies, including our contributions to the open source Amanda backup project, have been deployed on over a million systems worldwide. Amanda has hundreds of man years of contributed engineering. We believe it is one of the most solid and mature backup systems in the world. Much of what we have done for the past five years has been to enhance the open source code and provide top notch commercial support. Carbonite, too, understands that being in the backup business requires the absolute trust of customers and I believe that every day the company works hard to earn that trust: it respects customer privacy, is fanatical about security, and has made a real commitment to high quality support.

I and the other Zmanda employees are very enthusiastic and proud to be joining forces with Carbonite. We look forward to lots of innovation in the Zmanda product lines next year and hope that you will continue to provide us with the feedback that has been so helpful in the evolution of our products.

ZCB 4.3 now available; ever easier cloud backup!

Wednesday, September 5th, 2012

The past few months have been really busy around here. After releasing Zmanda Cloud Backup (ZCB) 4.2, which added a brand new purchasing option with a new cloud storage provider (Google), our task was clearly cut out: make ZCB even easier to understand and use. So we got back to our laundry list and picked the items our users were requesting the most. The result is our latest release, 4.3, available for download to all current and new ZCB customers.

So what’s new? Well, quite a lot:

A brand new help portal

Before we deep dive into the product improvements, let me introduce you to our new help portal:

This portal is available at http://help.zmanda.com and has all the documentation in an easily searchable format. So whether you need help configuring ZCB or working through a technical issue, please search this portal to see if your question has already been answered. We plan to add to the knowledge base constantly, so please leave us your comments on the pages there!

Encryption made easier

We added a new, user-friendly encryption method: passphrase-based AES 256-bit encryption. Compared to our existing PFX-based encryption method, this method is much easier to get started with. Just create your encryption key using a strong password:

AES encryption

and begin using it with your backup sets:

AES encryption use

As with our PFX encryption method, you own your encryption key completely. It is never transferred to Zmanda servers or to cloud storage. What’s more, you can change the encryption key whenever your security policies require it (just make sure you save a copy for future restores!).


Interruption-free data protection for laptops

If you are using ZCB to backup laptops, you will get two significant benefits by using ZCB 4.3:

  • When you lose your network connection (say, when you step out of a WiFi zone), you don’t have to worry about your ongoing upload tasks. These tasks will resume as soon as you get a network connection (on Windows Vista, Windows 7 and Windows 2008 Server) or at the next hour (on Windows XP or Windows 2003 Server).
  • While adding backup schedules, you will see a new option to wake up your computer (if it happens to be sleeping or hibernating at the scheduled backup time) to run the backup job:

wake up computer

Feeling unlucky? Verify your backups!

Moved your external or network drives around? Or simply need reassurance that your cloud data is safe and sound? ZCB 4.3 includes a new “Verify Backup Data” operation which can check the health status of your backups. To run this operation, go to the Report page, right click on a backup run and select the Verify Backup Data option:

verify backup data

If all is well, you will get this reassuring message:

verify backup data result

And if ZCB finds something out of order (we hope not!), you will see this message:

verify backup data result fail

If you see the above message, check the Details column of the backup runs to know the exact error.

Manage your cloud data better

Many ZCB customers value the flexible retention policy ZCB provides for your data. To keep your costs low, it is important to configure the desired retention policy before you start backing up. This ensures that as your data grows older and stops being valuable to you, it is automatically removed by ZCB from the cloud. But what if you forget to set retention policies before backing up? In that case, your data will keep accumulating in the cloud forever, taking up valuable cloud storage space.

There may also be cases where your backup policies change (for example, your old Excel reports prove more valuable than you earlier thought, and you now want to retain them for 1 year instead of 6 months), and hence you need to change the retention policy of backups you have already made.

To support such cases, ZCB 4.3 introduces a new operation that lets you change the retention policy of backups that have already run. Just go to the Report page, select your backup runs (you can even select multiple runs), right-click on them and click the Change Retention Period option, as shown below:

change retention

This will bring up a window where you can change the retention policy as you desire:

change retention 2

Note that ZCB allows you to set independent retention policies for local backups and cloud backups.


Delete your old, unnecessary backup data easily

Sometimes you just want to delete backups older than a particular date. This may happen when you didn’t set retention policies, or you just wish to free up some cloud storage space.

With ZCB 4.3, now you can delete such backups in a single shot by using a new handy option in the File menu:

purge all backup runs before

This will bring up a window with self-explanatory options:


purge all backup runs before window

Just provide your preferences above, and all older data belonging to your computer will be deleted from the cloud. Like other deletions from the cloud, this process is irreversible, so proceed with caution!


Locate a backed up file and restore it quickly

If you are looking for a specific file (say, a meeting document from a year ago), finding it is super easy in ZCB 4.3. Just select your backup set, go to the Restore tab and click the “Locate a Backup File” button:

locate a backed up file

In the next window, just type in what you remember of the desired file name (such as “team_meeting”) and hit the Find button. ZCB will then show you all the matching file names along with the most recent time of their backup. You can select the desired file from the list and restore it in a couple of mouse clicks.
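Conceptually, this is a substring search over the backup catalog that reports the most recent backup time per match. A minimal sketch, assuming a hypothetical catalog layout (a mapping from file paths to backup timestamps; none of these names come from ZCB itself):

```python
def locate(catalog, query):
    """Return each file matching `query` with its most recent backup time.

    `catalog` maps file paths to lists of ISO-format backup timestamps --
    a hypothetical layout used only for this illustration.
    """
    return {
        path: max(times)           # ISO date strings sort chronologically
        for path, times in catalog.items()
        if query.lower() in path.lower()
    }

catalog = {
    r"C:\docs\team_meeting_notes.docx": ["2013-01-10", "2013-02-14"],
    r"C:\docs\budget.xlsx": ["2013-02-01"],
}
print(locate(catalog, "team_meeting"))
# → {'C:\\docs\\team_meeting_notes.docx': '2013-02-14'}
```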


Backup one computer, upload to cloud from another

If some of your computers (such as an old desktop at your home) don’t have the internet bandwidth necessary to finish cloud backups, you no longer have to leave them unprotected. Using ZCB 4.3, you can upload the backup data of such computers from a different computer that does have a better internet connection (such as a server in your office).

The steps to do this are described here, and this can be particularly valuable for finishing the upload of large backups (such as the first full backup).


Other important improvements

In addition to many bug fixes and usability improvements, ZCB 4.3 also features:

More robust backups to your network drives

While storage clouds provide unparalleled reliability, network drives are an excellent way to store your backups locally for faster restores. Unfortunately, due to fluctuating network conditions and changing user demands, network drives are prone to read/write errors.

ZCB 4.3 includes a retransmission mechanism to handle such cases better, which should result in more reliable backups to your network drives.

Mixing of Differential and Incremental backups is now possible

Before 4.3, you had to select one of the two backup levels, incremental or differential, for a backup set. In 4.3 you can combine the two in the same backup set.

The benefit? For starters, you can implement hierarchical backup strategies such as the Grandfather-Father-Son (GFS) scheme, which provides strong redundancy and quicker restores. The combination allows you to reduce your cloud storage cost while keeping your backup windows to a minimum.
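As a rough illustration of how a GFS rotation combines the two levels, the sketch below assigns a backup level to each day: monthly fulls, weekly differentials, daily incrementals. The specific rules and labels here are illustrative assumptions, not ZCB's scheduler:

```python
from datetime import date, timedelta

def gfs_level(day: date) -> str:
    """Classify a day in a simple Grandfather-Father-Son rotation:
    first of the month -> full, Sundays -> differential, else incremental."""
    if day.day == 1:
        return "full"          # grandfather: monthly full backup
    if day.weekday() == 6:     # Sunday
        return "differential"  # father: weekly differential
    return "incremental"       # son: daily incremental

# Print the schedule for the first week of March 2013.
start = date(2013, 3, 1)
for n in range(8):
    d = start + timedelta(days=n)
    print(d.isoformat(), gfs_level(d))
```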

As always, please let us know your comments and feature requests at zcb@zmanda.com.

Also, to receive product related news from us, please subscribe to this blog and our tweets. We look forward to your feedback!

-Nik

Cyberduck with support for Keystone-based OpenStack Swift

Tuesday, August 28th, 2012

Cyberduck is a popular open source storage browser for several cloud storage platforms. For OpenStack Swift, Cyberduck is a neat and efficient client that enables users to upload/download objects to/from their Swift storage clouds. However, the latest version (4.2.1) of Cyberduck does not support the Keystone-based authentication method. Keystone is an identity service used by OpenStack for authentication and authorization, and we expect Keystone to be the standard identity service for future Swift clouds.

There have been intensive discussions on how to make Cyberduck work with Keystone-based Swift, for example [1], and this issue has been listed as the highest priority for the next release of Cyberduck.

So, we decided to dig into making Cyberduck work with Keystone-based Swift. First, we thank the Cyberduck team for making their compilation tools available to enable this task. Second, special thanks to David Kocher for guiding us through the process.

The key is to first make the java-cloudfiles API support Keystone, because Cyberduck uses the java-cloudfiles API to communicate with Swift. We thank AlexYangYu for providing the initial version of the modified java-cloudfiles API that supports Keystone. We made several improvements on top of it, and our fork is available here:

https://github.com/zmanda/java-cloudfiles.git

The high-level steps are to replace the older cloudfiles-1.9.1.jar in the lib directory of Cyberduck with the java-cloudfiles.jar that supports Keystone authentication. We also need to copy org-json.jar from the lib directory of java-cloudfiles to the lib directory of Cyberduck.

To make sure Cyberduck uses the modified java-cloudfiles API, Cyberduck needs to be re-compiled after making the above changes. Generally, we need to follow the steps here to set the Authentication Context Path. We also need to add the following setting to the AppData\Cyberduck.exe_Url_*\[Version]\user.config file:

<setting name="cf.authentication.context" value="/v2.0/tokens" />

After that, we can run the re-compiled Cyberduck and associate it with a Swift cloud. For example,

In the Username field, we need to use the following format: username:tenant_name. The API Access Key is the password for the username. If the authentication information is correct, we will see that Cyberduck has successfully connected to the Keystone-based Swift cloud storage.

The following images show that you can use Cyberduck to save any kind of files, e.g. pictures and documents, on your Swift cloud storage. You can even rename any files and open them for editing.

You can download our version of Cyberduck for Windows with support for Keystone by running git clone https://github.com/zmanda/cyberduck or from here. Once the file is unzipped, you can execute cyberduck.exe to test against your Keystone-based Swift.

If you want to know more detail about how we made this work, or you would like to compile or test for other platforms, e.g. OS X, please drop us a note at swift@zmanda.com

Storing Pebbles or Boulders: Optimizing Swift Cloud for different workloads

Thursday, August 23rd, 2012

While many storage clouds are built as multi-purpose clouds, e.g. to store backup files, images, documents etc., a cloud builder may be tasked to build and optimize a cloud for a specific purpose. For example, a storage cloud made for cloud backup may be optimized for large object sizes and frequent writes, whereas a storage cloud built for storing images may be optimized for relatively small objects with frequent reads.

OpenStack Swift provides a versatile platform to build storage clouds for various needs. As we discussed in our last blog, a cloud builder can choose faster I/O devices for storing the container database to enhance performance under some scenarios. However, careful analysis is required to determine under which scenarios the investment in faster I/O devices for the container DB makes sense. More broadly, we are interested in how to properly provision a Swift cloud for different workloads.

In the first part of this blog, we will focus on how to provision the I/O devices for the container DB. After that, we will generalize the discussion to how to provision the storage nodes for workloads that contain either small or large objects. We understand that in the real world, object sizes in a workload may vary over a wide range. However, in order to study the broad question of provisioning a Swift cloud, it is instructive to consider two extreme workloads, in which most objects are either pebble-sized or boulder-sized.

We will first present the experiments to show how to provision the I/O devices for the container DB with the workloads differing in object sizes.

Experimental Results

Workload Generator

As in our last blog, we use swift-bench as the workload generator to benchmark the Swift cloud in terms of PUT operations per second. We configured swift-bench for our experiments as follows:

object_size: we use 10KB or 1MB as the average object size to simulate two different workloads: (1) the average object size in the workload is relatively small; (2) the average object size in the workload is relatively large. Real-world examples of small objects include PDF files, MS Word documents and JPEG pictures, while backup or archive data is usually large. (Note that real production workloads may have an even larger average object size, but comparing Swift’s behavior for 10KB objects vs. 1MB objects provides useful insights for predicting behavior as objects get larger. Also, an application like Amanda Enterprise will typically chunk archives into smaller objects before transferring them to the cloud.)

concurrency: we set this parameter to 500 in order to saturate the Swift cloud.

num_container: we use 10 containers in our experiments. This could, for example, correspond to 10 users of this storage cloud.

num_put: when the object size is 10KB, we upload (PUT) 10 million such objects to the Swift cloud; when the object size is 1MB, we upload 100K such objects. As discussed in [2], the performance of the container DB degrades (e.g. to 5-10 updates per second) when the number of objects in each container is on the order of millions. Since we have 10 containers, our target is to have 1 million objects in each container, so we set num_put to 10 million for 10KB objects. In order to have an equivalent total upload size, we set num_put to 100K when we upload 1MB objects.
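As a quick sanity check (our own arithmetic, not from the figures), the two parameter sets really do produce equivalent total upload sizes, and the pebble workload hits the 1-million-objects-per-container regime:

```python
KB, MB = 1000, 1000 * 1000  # decimal units

# The two Swift-bench parameter sets described above
workloads = {
    "pebbles":  {"object_size": 10 * KB, "num_put": 10_000_000},
    "boulders": {"object_size": 1 * MB,  "num_put": 100_000},
}

for name, w in workloads.items():
    total_gb = w["object_size"] * w["num_put"] / (1000 * MB)
    per_container = w["num_put"] // 10  # num_container = 10
    print(name, total_gb, per_container)
# pebbles  100.0 1000000  -> 100 GB total, 1M objects per container
# boulders 100.0 10000    -> same 100 GB total
```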

Testing bench

Two types of EC2 instances are used to implement a small-scale Swift cloud with 1 proxy node and 4 storage nodes.

Proxy node: EC2 Cluster Compute Quadruple Extra large Instance (23 GB of memory, 33.5 EC2 Compute Units)

Storage node: High-CPU Extra large Instance (7GB of memory, 20 EC2 Compute Units)

Recently, AWS released new EBS volumes based on Provisioned IOPS, which let the AWS user specify the IOPS (from 100 to 1000) for each EBS volume attached to an EC2 instance. For example, an EBS volume provisioned with 1000 IOPS can achieve a maximum of 1000 IOPS (for 16KB I/O requests) regardless of the I/O access pattern. A cloud builder can therefore use an EBS volume with higher IOPS to simulate a faster I/O device.

As mentioned in our last blog, the current version of Swift-bench only allows using 1 account. But an unlimited number of containers can be stored in that account. So, our benchmark is executed based on the following sequence: log into 1 existing account, then create 10 containers, and then upload 10 million or 100K objects (depending on the object size). In our experiments, we measure the upload (PUT) operations per second.

Two implementations of Swift cloud are compared: (1) Swift with 1000-IOPS based container DB (We call this 1000-IOPS Swift) and (2) Swift with 500-IOPS based container DB (We call this 500-IOPS Swift).

The 1000-IOPS Swift is implemented with 1 proxy node and 4 storage nodes. Each storage node attaches nine 1000-IOPS EBS volumes for storing all objects, one 200-IOPS EBS volume for storing the account DB and one 1000-IOPS EBS volume for storing the container DB.

The 500-IOPS Swift is implemented with 1 proxy node and 4 storage nodes. Each storage node attaches nine 1000-IOPS EBS volumes for storing all objects, one 200-IOPS EBS volume for storing the account DB and one 500-IOPS EBS volume for storing the container DB.

The proxy node has 10Gbps Ethernet, while the storage node has 1Gbps Ethernet.

Software Settings

We use OpenStack Swift version 1.6.1 and the authentication method on the proxy node is TempAuth. All proxy, container, account and object-related parameters are set to defaults, except: in proxy-server.conf, workers = 500; in account-server.conf, workers = 32; in container-server.conf, workers = 32 and db_preallocation = on; in object-server.conf, workers = 32.
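Spelled out as configuration fragments, these overrides look roughly as follows (a sketch for Swift 1.6.x; section names and option placement may differ slightly across versions, so check against your own conf files):

```ini
# proxy-server.conf
[DEFAULT]
workers = 500

# container-server.conf (account-server.conf and object-server.conf
# are analogous, with workers = 32 and no db_preallocation line)
[DEFAULT]
workers = 32
db_preallocation = on
```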

The Swift-bench, proxy and authentication services run on the proxy node and we ensure that the proxy server is never the bottleneck of the Swift cloud. The account, container, object and rsync services run on the storage nodes.

The number of replicas in the Swift cloud is set to two and the main memory of each node is fully utilized for caching the files and data.

Benchmark results

Figure 1 shows the operation rate (operations per second on the Y-axis) of the PUT operation for the two Swift implementations over the benchmark window, when the object size is 10KB. Overall, as seen from Figure 1, when the object size is 10KB the 1000-IOPS Swift achieves a higher operation rate than the 500-IOPS Swift: 68% higher by the time 10 million objects have been uploaded.

Figure 1: Comparing two Swift implementations when the object size is 10KB


To compare with Figure 1, we also plot the operation rate of the PUT operation when the object size is 1MB, as shown in Figure 2.

Figure 2: Comparing two Swift implementations when the object size is 1MB


In contrast with Figure 1, the two Swift implementations show the same performance when the object size is 1MB. Moreover, in Figure 1, while the 10KB objects are being uploaded, the performance of both Swift implementations keeps decreasing from the first object upload onwards. However, when the object size is 1MB (see Figure 2), the performance of both implementations increases initially and then becomes stable.

We summarize the results in Figure 1 and Figure 2 as follows:

(1) For an upload workload that mostly contains small objects (e.g. 10KB in our test), it is good practice to use faster I/O devices for the container DB: because each small object is written to the object devices quickly, the container DB needs a faster I/O device to keep up with the high rate of uploads.

(2) For the upload workload that mostly contains larger objects, using faster I/O devices for the container DB does not make much sense. This is because the storage node spends more time on storing the large objects to the I/O devices and consequently, the update frequency of the container DB is relatively slow. So, there is no need to supply the container DB with faster I/O devices.

Besides the discussion on how to provision the I/O device for the container DB, we also want to discuss how to provision other types of resources in the storage node for these two workloads. To this end, we also monitored the CPU usage, network bandwidth and the I/O devices (that are used for storing the objects) of the storage node during the runs of our benchmarks and summarize our observations below.

CPU: Compared to the case of uploading large objects, CPU usage is higher when small objects are being uploaded. The reasons are: (1) the object service is much busier handling the newly uploaded small objects every second, and (2) the container service has to deal with more updates to the container DB. Thus, more CPU resource in the storage node is consumed when uploading small objects.

Network bandwidth: Uploading large objects consumes more network bandwidth. This can be verified from Figures 1 and 2: in Figure 1, when the 10 million objects have been uploaded, the operation rate of the 1000-IOPS Swift is 361 and the object size is 10KB, so the total network bandwidth is about 3.5 MB/s. However, when uploading large objects (see Figure 2), after 100K objects have been uploaded the operation rate of the 1000-IOPS Swift is 120 and the object size is 1MB, so the total network bandwidth is around 120 MB/s.
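The arithmetic behind those two bandwidth figures can be reproduced directly (using binary units, which matches the blog's rounding of 3.5 MB/s):

```python
KIB, MIB = 1024, 1024 * 1024

def throughput_mib_per_s(ops_per_sec, object_size_bytes):
    """Aggregate bandwidth implied by an operation rate and object size."""
    return ops_per_sec * object_size_bytes / MIB

small = throughput_mib_per_s(361, 10 * KIB)  # 10KB objects at 361 PUT/s
large = throughput_mib_per_s(120, 1 * MIB)   # 1MB objects at 120 PUT/s
print(round(small, 1), round(large, 1))      # 3.5 120.0
```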

I/O devices for storing the objects: The I/O pattern on these devices is more random when small objects are being uploaded. This can be verified by Figure 3, where we plot the distribution of the logical block number (LBN) distance between two successive I/Os. As seen from Figure 3, when uploading 1MB objects, only 9% of successive I/Os are separated by more than 2.5 million LBN. However, when uploading 10KB objects, about 38% of successive I/Os are more than 2.5 million LBN apart. This comparison shows that the I/O pattern generated by uploading 1MB objects is much less random. For reference we also plot the pattern for a large sequential write on the same storage node. We observe that when uploading 1MB objects, 70% of successive I/Os are between 80 and 160 LBN apart, which is also the range into which most of the successive I/Os for the sequential write fall.
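The metric used in Figure 3 is easy to compute yourself from a block trace (e.g. blktrace output). A minimal sketch, with hypothetical toy traces rather than our measured data:

```python
def fraction_far_apart(lbns, threshold=2_500_000):
    """Fraction of successive I/Os whose LBN distance exceeds `threshold`.

    `lbns` is the sequence of logical block numbers of consecutive I/O
    requests, e.g. extracted from a blktrace capture.
    """
    gaps = [abs(b - a) for a, b in zip(lbns, lbns[1:])]
    return sum(g > threshold for g in gaps) / len(gaps)

# Toy traces (hypothetical): a mostly sequential one and a scattered one
sequential = [100, 228, 356, 484, 612]
scattered  = [100, 4_000_000, 4_100_000, 7_500_000, 42]
print(fraction_far_apart(sequential))  # 0.0
print(fraction_far_apart(scattered))   # 0.75
```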

Figure 3: The distribution of logical block number (LBN) distance between two successive I/Os for the 1MB object size and the 10KB object size. (“M” denotes Million on the x-axis)


To summarize, the important take-away points from the above discussion are:

For an upload workload that mostly contains small objects (pebbles), you will be rewarded with a higher operation rate by provisioning the storage node with a faster CPU, faster I/O devices for the container DB and only moderate network bandwidth. Avoid I/O devices with very low random IOPS for storing the objects: the I/O pattern on those devices is far from sequential, and they will become the bottleneck of the storage node. Use of SSDs can therefore be considered for this workload.

For an upload workload that mostly contains large objects, it is adequate to provision the storage node with a commodity CPU and moderate I/O speed for the container DB. In order to get better throughput (MB/s) from the Swift cloud, it is recommended to choose a high-bandwidth network and I/O devices with high sequential throughput (MB/s) for storing the objects. IOPS are not critical for this workload, so standard SATA drives may be sufficient.

Of course, these choices have to be aligned with higher level choices e.g. number of storage and proxy nodes. Overall, cloud builders can benefit from the optimization practices we mentioned in our Swift Advisor blog.

If you are thinking of putting together a storage cloud, we would love to discuss your challenges and share our observations. Please drop us a note at swift@zmanda.com

Sumo nodes for OpenStack Swift: Mixing storage and proxy services for better performance and lower TCO

Monday, June 25th, 2012

The ultimate goal of cloud operators is to provide high-performance and robust computing resources for the end users while minimizing the build and operational costs. Specifically, OpenStack Swift operators want to build storage clouds with higher throughput bandwidth but lower initial hardware purchase cost and lower ongoing IT-related costs (e.g. IT admin, power, cooling etc.).

In this blog, we present a novel way to achieve the above goal by using nodes in a Swift cloud which mix storage and proxy services. This is contrary to common practice in Swift deployments, where proxy and storage services run exclusively on separate nodes: the proxy and authentication services on hardware with high-end CPU and networking resources, and the objects on storage nodes that do not need comparable compute and networking resources. Our proposal provides a new alternative for building cost-effective Swift clouds with higher bandwidth, without compromising reliability. It can be easily adopted by cloud operators who have already built a Swift instance on existing hardware or who are considering buying new hardware for their Swift clouds.

Our idea is based on the following principle: when uploading an object to Swift with M replicas, (M/2)+1 of the M writes must succeed before the uploader/client is told that the upload succeeded. For example, in a default Swift environment that enforces 3 data replicas, once 2 writes complete successfully Swift declares success to the client; the remaining write can be delayed, and Swift relies on the replication process to ensure that the third replica is eventually created.

Based on the above observation, our idea is to speed up the 2 successful writes so that Swift can declare a success as soon as possible in order to increase the overall bandwidth. The third write is allowed to be finished at a slower pace and its goal is to guarantee the data availability when 2 zones fail. To speed up the 2 successful writes, we propose to run storage, proxy and authentication services on high-end nodes.
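The (M/2)+1 rule from the text can be written down as a one-liner (integer division, our own phrasing of the rule rather than Swift's internal code):

```python
def writes_for_ack(num_replicas):
    """Writes that must succeed before Swift acknowledges an upload: (M/2)+1."""
    return num_replicas // 2 + 1

print(writes_for_ack(3))  # 2 -- the third replica may be written lazily
print(writes_for_ack(5))  # 3
```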

We call these mixed nodes Sumo Nodes: Nodes in a Swift cloud which provide proxy and authentication services as well as provide storage services.

The reason we want to mix the storage and proxy services on these high-end nodes is based on the following observation: proxy nodes are usually provisioned with high-end hardware, including 10Gbps networking, powerful multi-core CPUs and a large amount of memory. In many scenarios these nodes are over-provisioned (as we discussed in our last “pitfall” blog). Thus, we want to take full advantage of their resources by consolidating both proxy and storage services on one sumo node, with the goal of completing the 2 successful writes faster.

Sumo nodes will typically be interconnected via 10Gbps network. Writes to sumo nodes will be done faster because they are either local writes (proxy server sends data to storage service within the same node) or the write is done over a 10Gbps network instead of transferring to a storage node connected via a 1Gbps network. So, if two out of three writes for an upload are routed to the sumo nodes, the Swift cloud will return success much faster than the case when an acknowledgement from a traditional storage node is needed.

In the following discussion, we consider a production Swift cloud with five zones, three object replicas and two nodes with proxy services. (Five zones provide redundancy and fault isolation to ensure that three-replica requirement for objects is maintained in the cloud even when an entire zone fails and the repair for that zone takes significant time.)

Key considerations when designing a Swift cloud based on sumo nodes

How should I setup my zones with sumo nodes and storage nodes?

Each sumo node needs to belong to a separate zone (i.e. one zone should not have two sumo nodes). In our example Swift cloud, the two sumo nodes represent two zones and the remaining three zones are distributed across the three storage nodes.

Can you guarantee that all “2 successful writes” will be done on the sumo nodes?

No. Upon each upload, every zone will have a chance to take one copy of the data. Based on the setup of two zones on the two sumo nodes and three zones on the storage nodes, it is even possible that all 3 writes will go to the three zones on the storage nodes.

However, we can increase the probability of having the “2 successful writes” done on the sumo nodes. One straightforward but potentially costly way is to buy and use more sumo nodes. Another way is to tune the “weight” parameter associated with the storage devices on the sumo nodes. (The weight parameter decides how much storage space is used on a device; e.g. a device with weight 200 will store twice as much data as a device with weight 100.) By giving the sumo nodes a higher weight value than the storage nodes, more data will be written to the sumo nodes, thereby increasing the probability of the “2 successful writes” landing on them.
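The effect of the weight parameter on the fast path can be illustrated with a small Monte Carlo sketch. This is a deliberate simplification of ring placement (not Swift's actual algorithm): we assume 5 zones, 2 on sumo nodes and 3 on storage nodes, and that each upload picks 3 distinct zones with probability proportional to weight.

```python
import random

def prob_two_sumo_writes(sumo_weight, storage_weight=100,
                         trials=20_000, seed=1):
    """Estimate the chance that 2 of 3 replicas land in sumo zones."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        remaining = [("sumo", sumo_weight)] * 2 + [("storage", storage_weight)] * 3
        chosen = []
        for _ in range(3):  # weighted sampling without replacement
            total = sum(w for _, w in remaining)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, (zone, w) in enumerate(remaining):
                acc += w
                if r <= acc:
                    chosen.append(zone)
                    del remaining[i]
                    break
        if chosen.count("sumo") == 2:
            hits += 1
    return hits / trials

# Raising the sumo weight raises the odds of the fast "2 sumo writes" path.
print(prob_two_sumo_writes(100) < prob_two_sumo_writes(300))  # True
```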

What is the impact of using higher weight value on the sumo nodes?

As discussed above, when using a higher weight value on the sumo nodes, the probability of the “2 successful writes” being done on those nodes is higher, so the bandwidth of the Swift cloud should be better. However, a higher weight value will also speed up disk space consumption on the sumo nodes, causing them to reach their space limit quickly. Once the space limit on the sumo nodes is reached, all remaining uploads go to the storage nodes, and the performance of the sumo-node based Swift cloud degrades to that of a typical Swift cloud (or worse, if the storage nodes being used are wimpy). The bottom line is that the storage nodes should have enough storage space to guarantee that the data can be completely replicated on them and to accommodate failure of a sumo node. For example, assume each sumo node has 44 available HDD bays and each bay is populated with a 2TB HDD. Once the available 88 TB of space on each sumo node is used up, the users of the Swift cloud will stop enjoying the performance improvements from the sumo nodes (the “sumo effect”), even though there might be additional usable storage capacity in the cloud.

The disk usage of sumo nodes can be easily controlled by the weight value. Overall, the right value of the weight parameter will be determined by the individual cloud operators depending on (1) how much total data they want to save in the sumo-node based Swift cloud and (2) how much is the raw space limit on sumo nodes.

In a sumo-based Swift cloud, the function of storage nodes is to help guarantee the data availability. We expect the storage nodes to be wimpy nodes in this configuration. However, the storage nodes should have enough total disk space to handle the capacity requirements of the cloud. At a minimum, storage nodes should have the combined capacity of at least one sumo node.

Performance will increase when using sumo nodes, but what is the impact on the total cost of ownership of the Swift cloud?

Since the sumo nodes provide functionality and capacity of multiple storage nodes, cloud builders will save on the cost of purchasing and maintaining some storage nodes.

Using sumo nodes does not compromise data availability, because we still keep the same number of zones and data replicas at all times as in the traditional Swift setup. However, we should pay attention to the robustness and health status of the sumo nodes. Failure of a sumo node is much more impactful than failure of a traditional proxy node or storage node: if a sumo node fails, the cloud loses one of its proxy processes and must replicate all of the objects stored on the failed node to the remaining nodes. Therefore, we recommend investing more in robustness features on the sumo nodes, e.g. dual fans, redundant power supplies etc.

To choose the right hardware for sumo nodes and storage nodes, based on your cost, capacity and performance considerations, we recommend using the techniques and process of the Swift Advisor.

What use cases are best suited for sumo-based Swift configurations?

Sumo-based Swift configurations can be best applied in the following use cases: (1) high performance is a key requirement from the Swift cloud, but a moderate amount of usable capacity is required (e.g. less than 100TB) or (2) the total data stored in the Swift cloud will not change dramatically over time, so a cloud builder can plan ahead and pick the right sized sumo nodes for their cloud.

Sumo Nodes and Wimpy Storage Nodes in action

To validate the effectiveness of the sumo based Swift cloud, we ran extensive benchmarks in our labs. We looked at the impact of using Sumo nodes on the observed bandwidth and net usable storage space of the Swift cloud (we assume that cloud operators will not use the additional capacity, if any available, in their Swift cloud once the “sumo effect” wears off).

As before, we use Amanda Enterprise as our application to backup and recover a 15GB data file to/from the Swift cloud to test its write and read bandwidth respectively. We ensure that one Amanda Enterprise server can fully load the Swift cloud in all cases.

Our Swift cloud is built on EC2 compute instances. We use Cluster Compute Quadruple Extra Large Instance (Quad Instance) for the sumo nodes and use Small or Medium Instance for the storage nodes.

To build a minimal production-ready sumo-node based Swift cloud, we use 1 load balancer (pound-based, running on a Quad Instance), 2 sumo nodes and 3 storage nodes. We set 2 zones on the 2 sumo nodes and the remaining 3 zones on the 3 storage nodes. To do an apples-to-apples comparison, we also set up a Swift cloud with the traditional setup (storage services running exclusively on designated storage nodes) using the same EC2 instance types: the load balancer and proxy nodes on Quad Instances, and storage services running on five storage nodes (Small or Medium Instances). All storage nodes in the traditional setup have the same, default value of the weight parameter.

We use Ubuntu 12.04 as base OS and install the Essex version of Keystone and Swift across all nodes.

We first look at the results of upload workload with different weight values on sumo nodes. In sumo-based Swift, we set the weight values on the storage nodes to be always 100. We use “sumo (w:X)” in Figure 1 and 2 to denote that the value of weight parameter on sumo node is X. E.g. sumo (w:300) represents a Swift cloud with sumo nodes with weight value set at 300.

Figure 1 shows the trade-offs between the observed upload bandwidth and projected storage space of a sumo-node based Swift cloud before the space limit of sumo nodes is reached – this is the point when the “sumo effect” of performance improvement tapers off. The projected storage space is calculated by assuming 44 HDD bays on each node and each HDD bay is populated with a 2TB HDD. The readers can have their own calculations on the storage space depending on their specific situation.
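One way to do that calculation is the following back-of-envelope model (our own simplification, not the exact numbers behind Figure 1): data spreads across the 2 sumo and 3 storage nodes in proportion to weight, and the "sumo effect" ends once the sumo nodes' raw capacity is exhausted.

```python
def projected_usable_tb(sumo_weight, storage_weight=100,
                        bays=44, disk_tb=2, replicas=3):
    """Usable (logical) TB stored before the 2 sumo nodes fill up."""
    sumo_raw_tb = 2 * bays * disk_tb  # raw TB across both sumo nodes
    sumo_share = 2 * sumo_weight / (2 * sumo_weight + 3 * storage_weight)
    total_raw_tb = sumo_raw_tb / sumo_share  # cluster-wide raw data at that point
    return total_raw_tb / replicas           # divide out the 3 replicas

print(round(projected_usable_tb(100)))  # 147 -- weight 100: more space, slower
print(round(projected_usable_tb(300)))  # 88  -- weight 300: less space, faster
```

The model reproduces the qualitative trade-off of Figure 1: raising the sumo weight trades usable capacity for bandwidth.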

As we can see from Figure 1, with either Small or Medium Instances used for the storage nodes, the sumo-based Swift cloud with weight 300 always provides the highest upload bandwidth and the lowest usable storage space. On the other hand, the traditional setup and “sumo (w:100)” both provide the largest usable storage space, but even in this case the sumo-based Swift cloud has higher upload bandwidth, as it puts some objects on the faster sumo nodes.

In Figure 1, we also observe that as the weight increases on the sumo nodes, the bandwidth gap between using Small Instances and Medium Instances for the storage nodes shrinks. For example, the bandwidth gap at “sumo (w:200)” is about 24 MB/s, but it is reduced to 13 MB/s at “sumo (w:300)”. This implies that in a sumo-based Swift cloud, you may consider using wimpy storage nodes, as that may be more cost-effective as long as your performance requirements are met. The reason is that the effective bandwidth of a sumo-based Swift cloud is mostly determined by the sumo nodes when their weight is set high.

For completeness, we also show the results of the download workload in Figure 2. Similar to Figure 1, the sumo-based Swift cloud delivers the highest download bandwidth and provides the lowest storage space when using weight 300 for the sumo nodes. On the other hand, when using weight 100, it has the same usable capacity as the traditional setup, but still delivers higher download bandwidth.

From the above experiments, we observe interesting trade-offs between bandwidth achieved and usable storage space until the point the Swift cloud is enjoying the “sumo effect”. Overall, the sumo-based Swift cloud is able to (1) Significantly boost the overall bandwidth with relatively straightforward modifications on the traditional setup and (2) Reduce the TCO by using smaller number of overall nodes, and requiring wimpy storage nodes, while not sacrificing the data availability characteristics of the traditional setup.

The above analysis and numbers should give Swift builders a new option for deploying and optimizing their Swift clouds, and seed other ideas on how to allocate various Swift services across nodes and choose hardware for those nodes.

If you are thinking of putting together a storage cloud, we would love to discuss your challenges and share our observations. Please drop us a note at swift@zmanda.com

Building a Swift Storage Cloud? Avoid Wimpy Proxy Servers and Five other Pitfalls

Wednesday, May 23rd, 2012

We introduced OpenStack Swift Advisor in a previous blog: a set of methods and tools to assist cloud storage builders to select appropriate hardware based on their goals from their Swift Cloud. Here we describe six pitfalls to avoid, when choosing components for your Swift Cloud:

(1) Do not use wimpy servers for proxy nodes

The key functionality of a proxy node is to process a very large number of API requests, receive data from the user applications and send it out to the corresponding storage nodes. The proxy node makes sure that the minimum number of required replicas gets written to storage nodes. Reply traffic (e.g. restore traffic, in case the Swift Cloud is used for cloud backup) also flows through the proxy nodes. Moreover, the authentication services (e.g. Keystone, swauth) may also be integrated into the proxy nodes. Considering these performance-critical functions, we strongly advise cloud storage builders to use powerful servers as proxy nodes. For example, a typical proxy node can be provisioned with 2 or more multi-core Xeon-class CPUs, large memory and 10G Ethernet.

There is some debate on whether a small number of powerful servers or a large number of wimpy servers should be used as the proxy nodes. It is possible that the initial cost outlay of a large number of wimpy proxy nodes may be lower than a smaller number of powerful nodes, while providing acceptable performance. But for data center operators, a large number of wimpy servers will inevitably incur higher IT related costs (personnel, server maintenance, space rental, cooling, energy and so on). Additionally, more servers will need more network switches, thus decreasing some of the cost benefits as well as increasing failure rate. As your cloud storage service gets popular, scalability will be challenging with wimpy proxy nodes.

(2) Don’t let your load-balancer be overloaded

The load-balancer is the first component of a Swift cluster that directly faces the user applications. Its primary job is to take all API requests from the user application and distribute them evenly across the underlying proxy nodes. In some cases, it also has to do SSL termination to authenticate users, which is a very CPU- and network-intensive job. An overloaded load-balancer defeats the purpose of the cluster by becoming the bottleneck of its performance.

As we have discussed in a previous blog (Next Steps with OpenStack Swift Advisor), the linear scalability of a Swift cluster on performance can be seriously inhibited by a load-balancer which doesn’t keep up with the load. To reap the benefits of your investment in proxy and storage nodes, you should make sure that the load-balancer is not underpowered especially for peak load conditions on your storage cloud.

(3) Do not under-utilize your proxy nodes

The proxy node is usually one of the most expensive components in the Swift cluster. Therefore, it is desirable for cloud builders to fully utilize the resources in their proxy nodes. A good question our customers ask is: how many storage nodes should I attach to a proxy node, or what is the best ratio between proxy and storage nodes? If your cloud is built with too few storage nodes per proxy node, you may be under-utilizing your proxy nodes, as shown in Figure 1(a). (While we have simplified the illustrations, the performance changes indicated in the following figures are based on actual observations in our labs.) In this example, the Swift cluster initially consists of 3 nodes: 1 proxy node and 2 storage nodes (we use capital P and S in the picture to denote proxy and storage nodes respectively). The write throughput of that 3-node Swift cluster is X MB/s. However, if we add two more storage nodes, as shown in Figure 1(b), the throughput of the 5-node Swift cluster becomes 2X MB/s. So the throughput, along with the capacity, of the Swift cluster can be doubled by simply adding two storage nodes. In terms of cost per throughput and cost per GB, the 5-node Swift cluster in this example will likely be more efficient.

(4) Do not over-utilize the proxy nodes

On the other hand, you can’t keep attaching storage nodes without at some point increasing your proxy nodes. In Figure 2(a), 1 proxy node is well-utilized by 4 storage nodes with 2X MB/s throughput. If more storage nodes are attached to the proxy node, as shown in Figure 2(b), throughput will not increase because the proxy node is already saturated by the 4 storage nodes. Therefore, attaching more storage nodes to a well-utilized (nearly 100% busy) proxy node only makes the Swift cluster less efficient in terms of cost per throughput. Note, however, that you may decide to over-subscribe proxy nodes if you are willing to sacrifice the potential performance gains of adding more proxy nodes and simply want to add capacity for now. But to increase capacity, first make sure you are adding enough disks to each storage node, as described in the next pitfall.

(5) Avoid disk-bounded storage nodes

Another common question we get is: how many disks should I put into my storage node? This is a crucial question with implications for cost/performance and cost/capacity. In general, you want to avoid storage nodes whose performance is bottlenecked by too few disk spindles, as illustrated by the following picture.

Figure 3(a) shows a Swift cluster consisting of 1 proxy node and 2 storage nodes, with each storage node attached to 1 disk. Let’s assume the throughput of this Swift cluster is Y MB/s. If we add one more disk to each storage node, we have two disks per storage node, as shown in Figure 3(b). Based on our observations, the throughput of the new Swift cluster may increase to as much as 1.5Y MB/s. The reason throughput improves by simply attaching more disks is that in Figure 3(a), the single disk in each storage node can easily be overwhelmed (i.e. 100% busy) when transferring data, while other resources (e.g. CPU, memory) in the storage node are not fully utilized; the storage node is “disk-bounded”. Once more disks are added to each storage node and all disks can work in parallel during data transfers, the bottleneck of the storage node shifts from the disks to other resources, and the throughput of the Swift cluster improves. In terms of cost per throughput, Figure 3(b) is more efficient than Figure 3(a), since the cost of adding a disk is significantly less than the cost of a whole server.

An immediate follow-up question is: can throughput keep increasing as we attach more disks to each storage node? The answer is no. Figure 4 shows the relationship between the number of disks attached to each storage node and the throughput of the Swift cluster. As the number of disks increases from 1, throughput improves, but after some point (the “turning point”) it stops increasing and becomes almost flat.

Even though the throughput of the Swift cluster cannot keep improving by attaching more disks, some cloud storage builders may still want to put a large number of disks in each storage node, as doing so does not hurt performance. Another metric, cost per MB/s per GB of available capacity, tends to be minimized by adding more disks.

(6) Do not rely on two replicas of data

One more question our customers frequently ask is: can we use 2 replicas of data in the Swift cluster in order to save on the cost of storage space? Our recommendation is no. Here is why:

Performance: it may seem that a Swift cluster which maintains 2 replicas of data will have better write performance than a cluster which maintains 3 replicas (which has one more write stream to the storage nodes). However, in actuality, when the proxy node attempts to write N replicas, it only requires (N/2)+1 successful responses out of N to declare a successful write. That is to say, only (N/2)+1 of the N concurrent writes are synchronous; the rest can be asynchronous, and Swift relies on the replication process to ensure that the remaining copies are successfully created.

Based on the above, in our tests comparing a “3-replica Swift cluster” and a “2-replica Swift cluster”, both generate 2 concurrent synchronous writes to the storage nodes.

Risk of data loss: We recommend using commodity off-the-shelf storage for Swift storage nodes, without even using RAID, so the replicas maintained by Swift are your defense against data loss. Also, let’s say a Swift cluster has 5 zones (the minimum recommended number) and 3 replicas of data. With this setup, up to two zones can fail at the same time without any data loss. However, if we reduce the number of replicas from 3 to 2, the risk of data loss is doubled, because the data can only survive one zone failure.

Avoiding above pitfalls will help you to implement a high-performance and robust Swift Cloud, which will scale to serve your cloud storage needs for several years to come.

If you are thinking of putting together a storage cloud, we would love to discuss your challenges and share our observations. Please drop us a note at swift@zmanda.com