Posts Tagged ‘Swift Benchmark’

How swift is your Swift? Benchmarking OpenStack Swift.

Monday, October 8th, 2012

The OpenStack Swift project has been developing at a tremendous pace. Version 1.6.0 was released in August, followed by 1.7.4 (Folsom) just two months later! These two releases implemented many important features, for example optimizations for SSDs, object versioning, StatsD logging and much more. Many of these features have significant implications for performance planning by cloud builders and operators.

As an integral part of deploying a cloud storage platform based on OpenStack Swift, benchmarking a Swift cluster implementation is essential before the cluster is deployed for production use. Preferably the benchmark should simulate the eventual workload that the cluster will be subjected to.

In this blog, we discuss the following Swift benchmarking concepts:
(1) Benchmark dimensions for a Swift cluster: performance, scalability and degraded-mode performance (i.e. performance when hardware or software failures happen).
(2) Sample workloads for a Swift cluster

Benchmark Tools for Swift

There are currently two Swift benchmark tools available: swift-bench and COSBench.

swift-bench is a command-line benchmark tool that ships with the Swift distribution. Recently, we improved swift-bench to allow for random object sizes and better usability.

COSBench is a fairly new web-based benchmark tool developed by researchers at Intel. We obtained a trial version of COSBench, and based on our initial experience with it, we believe it is a very helpful tool that may become the de facto Swift benchmarking tool in the future.

Benchmark Dimensions

Dimension 1 – Performance

The performance dimension measures how the Swift cluster performs when it is under a certain load. The performance metrics can be specified in many ways. In most cases, cloud operators will be interested in the following four performance metrics:

(1) The average throughput (number of operations per second)
(2) The average bandwidth (MB/s)
(3) The average response time of all requests
(4) The response time for a certain percentile of requests (e.g. the 95th percentile)
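Given per-request records from the benchmark client (start time, end time, bytes transferred), these four metrics can be computed directly. The Python sketch below is illustrative only; the record layout is an assumption, not the actual swift-bench or COSBench log format.

    # Compute the four performance metrics from per-request records.
    # Each record is assumed to be (start_time, end_time, bytes_transferred);
    # this layout is illustrative, not the actual benchmark log format.

    def summarize(records):
        latencies = sorted(end - start for start, end, _ in records)
        total_bytes = sum(nbytes for _, _, nbytes in records)
        wall_clock = max(end for _, end, _ in records) - min(start for start, _, _ in records)

        throughput = len(records) / wall_clock            # operations per second
        bandwidth = total_bytes / wall_clock / 2**20      # MB/s
        avg_response = sum(latencies) / len(latencies)    # average response time (s)
        p95_response = latencies[int(0.95 * (len(latencies) - 1))]  # 95th percentile (s)
        return throughput, bandwidth, avg_response, p95_response

    # Example: three requests of 10 KB each.
    print(summarize([(0.0, 0.2, 10240), (0.1, 0.5, 10240), (0.2, 0.4, 10240)]))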

To measure the performance, we first need to populate a Swift cluster with some data (i.e. objects) to simulate an initial stage. The size of the initially loaded objects can be controlled by the inputs of the benchmark client. Subsequently, a pre-defined workload is executed against the Swift cluster while the performance is measured.

When measuring the performance, there is one key issue we need to pay attention to: the number of threads, because it determines how much load the benchmark clients generate against the Swift cluster. Since we want to measure the performance of the Swift cluster when it is under load or saturated, we need to keep increasing the number of threads until the bandwidth/throughput levels off and the average response time starts to increase sharply.
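A minimal sketch of this ramp-up procedure is shown below. The run_benchmark() helper is hypothetical: it stands in for launching one swift-bench or COSBench run at a given thread count and returning the measured throughput and average response time, and the 5%/50% thresholds are arbitrary examples.

    # Ramp up the number of benchmark threads until throughput plateaus and
    # latency starts to climb sharply. run_benchmark() is a hypothetical
    # stand-in for one swift-bench/COSBench run at the given concurrency.

    def find_saturation(run_benchmark, start=16, factor=2, max_threads=2048):
        prev_tput = prev_lat = None
        threads = start
        while threads <= max_threads:
            tput, lat = run_benchmark(threads)   # (ops/s, average response time in s)
            if prev_tput is not None:
                tput_gain = (tput - prev_tput) / prev_tput
                lat_growth = (lat - prev_lat) / prev_lat
                # Saturated: throughput barely improves while latency rises sharply.
                if tput_gain < 0.05 and lat_growth > 0.5:
                    return threads, tput, lat
            prev_tput, prev_lat = tput, lat
            threads *= factor
        return max_threads, prev_tput, prev_lat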

As the number of threads increases, the benchmark client itself gets busier. We need to make sure that it has enough resources (CPU, memory, network bandwidth) so that it does not become the performance bottleneck.

While the performance of the client software that connects to Swift (Cyberduck, cloud backup software, etc.) is an important factor in the overall usability of the storage cloud, the scope of this blog is the performance of the storage cloud platform itself.

Dimension 2 – Scalability

The scalability benchmark tests whether a Swift cluster can scale out gracefully by adding more servers and other resources. We conduct this benchmark as follows: we proportionally add more servers for each type of node in the Swift cluster. For example, we double the number of storage nodes and proxy nodes with the same hardware and software configurations. Then, we run the same workloads to measure the performance. If a Swift cluster scales out nicely, its bandwidth/throughput will increase in proportion to the number of new servers added. Otherwise, cloud operators should analyze which bottleneck prevents it from scaling well.
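One simple way to quantify "increased in proportion" is a scaling-efficiency ratio. The small sketch below is an illustrative calculation, not part of any benchmark tool; the numbers in the example are made up.

    # Scaling efficiency: 1.0 means throughput grew in exact proportion to the
    # number of servers added; values well below 1.0 point to a bottleneck.

    def scaling_efficiency(tput_before, tput_after, nodes_before, nodes_after):
        return (tput_after / tput_before) / (nodes_after / nodes_before)

    # Example: doubling the cluster (4 -> 8 nodes) raises throughput from
    # 400 to 720 ops/s, i.e. 90% scaling efficiency.
    print(scaling_efficiency(400, 720, 4, 8))   # 0.9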

To simulate a real-world scenario, we need to test the scalability of a Swift cluster while it is running. As suggested in a blog from SwiftStack, cloud operators may consider adding new servers gradually in order to avoid the performance degradation caused by data movement between the existing and new servers. During the measurement, we want to observe: (1) whether the Swift cluster operates normally (i.e. no period of service disruption) and (2) the increase in performance as the new servers are added to the Swift cluster.

Dimension 3 – Degraded Mode Performance

Cloud operators will face hardware or software failures at some point. If their objective is to ensure that their clusters will perform at a certain level (e.g. abide by a performance SLA) even in the face of failures, they should benchmark their Swift cluster appropriately up front.

The most straightforward way to measure the availability of a Swift cluster is to intentionally shut down some nodes and measure the number of errors (e.g. failed operations) and the performance degradation while Swift is running in degraded mode.

Some factors increase the complexity of benchmarking a degraded Swift cluster. Failures can happen at every possible system level: I/O devices, the OS, Swift processes or even an entire server, and the impact of a failure differs depending on the level at which it occurs. So, failure scenarios at all system levels need to be considered. For example, to simulate a disk failure, we may intentionally unmount the disk; to simulate a Swift process failure, we can kill some or all Swift processes on a node; to simulate an OS or entire server failure, the server can be temporarily powered off; or a whole zone can be powered off (to simulate a power failure of an entire rack of servers).
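For illustration, these failure scenarios can be driven from a small script such as the sketch below. The device mount point, service name and host are deployment-specific assumptions, and such a script should of course only be pointed at a test cluster.

    # Illustrative failure-injection helpers for degraded-mode benchmarking.
    # Mount points, service names and hosts are deployment-specific assumptions;
    # run these only against a test cluster.
    import subprocess

    def fail_disk(mount_point="/srv/node/sdb1"):
        # Simulate a disk failure by unmounting one object device.
        subprocess.run(["umount", "-l", mount_point], check=True)

    def fail_swift_process(service="object-server"):
        # Simulate a Swift process failure by stopping one service via swift-init.
        subprocess.run(["swift-init", service, "stop"], check=True)

    def fail_server(host):
        # Simulate an OS/server failure by powering the node off over SSH.
        subprocess.run(["ssh", host, "poweroff"], check=True)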

Combining the above considerations, we notice that the total problem space for analyzing all failure scenarios can be very large for a large-scale Swift cluster. So, it is more practical to prioritize the failure scenarios, for example by evaluating only the worst or the most common scenarios first.

In our presentation at the upcoming OpenStack Summit, we will present empirical results showing how a Swift cluster performs when hardware failures occur.

Sample Workloads

The COSBench tool allows users to define a Swift workload based on the following two aspects: (1) the range of object sizes in the workload (e.g. from 1MB to 10MB) and (2) the ratio of PUT, GET and DELETE operations (e.g. 1:8:1).

The object sizes in a workload may follow certain distributions, for example uniform, Zipfian and others. At this point, based on our experience with COSBench, it assumes the object sizes are uniformly distributed within the pre-defined range, and it assumes all objects have an equal probability of being accessed by the GET operation. Adding more choices of distribution for object size and access pattern would be a good direction for COSBench.
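To illustrate the difference between these distributions, the sketch below samples object sizes from a uniform range versus a Zipf-like distribution that favors small objects. It is a standalone illustration and not part of COSBench; the skew parameter and bucket count are arbitrary.

    # Compare uniformly distributed object sizes with a Zipf-like distribution
    # that skews toward small objects.
    import random

    def uniform_sizes(n, low=1024, high=100 * 1024):
        return [random.randint(low, high) for _ in range(n)]

    def zipf_sizes(n, low=1024, high=100 * 1024, skew=1.2):
        # Zipf-like sampling over 100 buckets spanning [low, high]:
        # low-numbered (small) buckets are chosen far more often.
        buckets = 100
        weights = [1.0 / (rank ** skew) for rank in range(1, buckets + 1)]
        step = (high - low) / buckets
        picks = random.choices(range(buckets), weights=weights, k=n)
        return [int(low + b * step + random.uniform(0, step)) for b in picks]

    print(sum(uniform_sizes(10000)) / 10000)  # mean near (low + high) / 2
    print(sum(zipf_sizes(10000)) / 10000)     # mean pulled toward the small end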

Below, we provide some sample Swift workloads.

Small Objects (size range: 1KB – 100KB)

Upload intensive (GET: 5%, PUT: 90%, DELETE: 5%). Example: online gaming hosting service, where game sessions are periodically saved as small files that record user profiles and game information in time-series order.

Download intensive (GET: 90%, PUT: 5%, DELETE: 5%). Example: website hosting service, where lots of read requests hit a new webpage once it is published by its owner.

Large Objects (size range: 1MB – 10MB)

Upload intensive (GET: 5%, PUT: 90%, DELETE: 5%). Example: enterprise backup, where small files are compressed into large chunks of data and backed up to cloud storage; recovery and delete operations are needed only occasionally.

Download intensive (GET: 90%, PUT: 5%, DELETE: 5%). Example: online video sharing service, where lots of download traffic is generated as people watch newly uploaded video clips.

In addition, benchmark users are free to define their own workloads based on these two inputs: the range of object sizes and the ratio of PUT, GET and DELETE operations.
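For readers who prefer to script such a workload directly against a Swift cluster, the sketch below uses python-swiftclient to mix PUT, GET and DELETE operations in a chosen ratio. The endpoint, credentials, container name and 1:8:1 ratio are placeholders, and error handling and measurement are omitted.

    # Minimal mixed-workload sketch against a Swift cluster using python-swiftclient.
    # The TempAuth endpoint, credentials, container name and the 1:8:1
    # PUT:GET:DELETE ratio are illustrative placeholders.
    import os
    import random
    from swiftclient.client import Connection

    conn = Connection(authurl="http://proxy.example.com:8080/auth/v1.0",
                      user="test:tester", key="testing")

    container = "bench"
    conn.put_container(container)
    uploaded = []

    for i in range(1000):
        op = random.choices(["PUT", "GET", "DELETE"], weights=[1, 8, 1])[0]
        if op == "PUT" or not uploaded:
            name = "obj-%06d" % i
            size = random.randint(1 * 2**20, 10 * 2**20)   # 1MB - 10MB
            conn.put_object(container, name, contents=os.urandom(size))
            uploaded.append(name)
        elif op == "GET":
            conn.get_object(container, random.choice(uploaded))
        else:  # DELETE
            conn.delete_object(container, uploaded.pop())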

We will discuss the above dimensions and benchmark workloads in detail in future blogs, as well as in our presentation at the OpenStack Summit in San Diego (at 4:10 PM on October 18th). We hope to see you there.

If you are thinking of putting together a storage cloud, we would love to discuss your challenges and share our observations. Please drop us a note at  swift@zmanda.com

Storing Pebbles or Boulders: Optimizing Swift Cloud for different workloads

Thursday, August 23rd, 2012

While many storage clouds are built as multi-purpose clouds, e.g. to store backup files, images and documents, a cloud builder may be tasked to build and optimize the cloud for a specific purpose. For example, a storage cloud built for cloud backup may be optimized for large object sizes and frequent writes, whereas a storage cloud built for storing images may be optimized for relatively smaller objects with frequent reads.

OpenStack Swift provides a versatile platform to build storage clouds for various needs. As we discussed in our last blog, a cloud builder can choose faster I/O devices for storing the container database to enhance performance under some scenarios. However, a careful analysis is required to determine under which scenarios the investment in faster I/O devices for the container DB makes sense. More broadly, we are interested in how to properly provision the Swift cloud for different workloads.

In the first part of this blog, we will focus on how to provision the I/O devices for the container DB. After that, our discussion will be generalized to how to provision the storage nodes under workloads that contain either small or large objects. We understand that in the real world, the object sizes in a workload may vary over a wide range. However, in order to study the broad question of provisioning the Swift cloud, it is instructive to consider two extreme workloads in which most objects are either pebble-sized or boulder-sized.

We first present experiments showing how to provision the I/O devices for the container DB under workloads that differ in object size.

Experimental Results

Workload Generator

As we did in our last blog, we use swift-bench as the workload generator to benchmark the Swift cloud in terms of PUT operations per second. We configured swift-bench for our experiments as follows:

object_size: we use 10KB or 1MB as the average object size to simulate two different workloads: (1) the average size of the objects in the workload is relatively small, and (2) the average size of the objects in the workload is relatively large. Some real-world examples of small objects are PDF and MS Word documents or JPEG pictures, while backup or archive data is usually large in size. (Note that real production workloads may have an even larger average object size, but comparing Swift's behavior for 10KB objects vs. 1MB objects provides useful insights for predicting behavior as objects get larger. Also, an application like Amanda Enterprise will typically chunk archives into smaller objects before transferring them to the cloud.)

concurrency: we set this parameter to 500 in order to saturate the Swift cloud.

num_container: we use 10 containers in our experiments. This could, for example, correspond to 10 users of this storage cloud.

num_put: when the object size is 10KB, we upload (PUT) 10 million such objects to the Swift cloud; when the object size is 1MB, we upload (PUT) 100K such objects. As discussed in [2], the performance of the container DB degrades (e.g. to 5-10 updates per second) when the number of objects in each container is on the order of millions. Since we have 10 containers, our target is to have 1 million objects in each container, so we set num_put to 10 million for 10KB objects. In order to have an approximately equivalent total upload size, we set num_put to 100K when we upload 1MB objects.
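As a quick sanity check on these numbers (an illustrative calculation only):

    # The two workloads give roughly the same total upload size, and the
    # 10KB workload hits the 1-million-objects-per-container target.
    GB = 1024 ** 3
    print(10_000_000 * 10 * 1024 / GB)   # ~95 GB for 10 million 10KB objects
    print(100_000 * 1024 ** 2 / GB)      # ~98 GB for 100K 1MB objects
    print(10_000_000 // 10)              # 1,000,000 objects per container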

Test Bed

Two types of EC2 instances are used to implement a small-scale Swift cloud with 1 proxy node and 4 storage nodes.

Proxy node: EC2 Cluster Compute Quadruple Extra large Instance (23 GB of memory, 33.5 EC2 Compute Units)

Storage node: High-CPU Extra large Instance (7GB of memory, 20 EC2 Compute Units)

Recently, AWS released new EBS volumes with Provisioned IOPS, which let the AWS user specify the IOPS (from 100 to 1000) for each EBS volume attached to an EC2 instance. For example, an EBS volume with 1000 IOPS can achieve a maximum of 1000 IOPS (for a 16KB I/O request size) regardless of the I/O access pattern. So, a cloud builder can experiment with an EBS volume with higher IOPS to simulate a faster I/O device.

As mentioned in our last blog, the current version of swift-bench only allows using one account, but an unlimited number of containers can be stored in that account. So, our benchmark is executed in the following sequence: log into the one existing account, create 10 containers, and then upload 10 million or 100K objects (depending on the object size). In our experiments, we measure the upload (PUT) operations per second.

Two implementations of the Swift cloud are compared: (1) Swift with a 1000-IOPS-based container DB (we call this 1000-IOPS Swift) and (2) Swift with a 500-IOPS-based container DB (we call this 500-IOPS Swift).

The 1000-IOPS Swift is implemented with 1 proxy node and 4 storage nodes. Each storage node attaches nine 1000-IOPS EBS volumes for storing objects, one 200-IOPS EBS volume for the account DB and one 1000-IOPS EBS volume for the container DB.

The 500-IOPS Swift is implemented with 1 proxy node and 4 storage nodes. Each storage node attaches nine 1000-IOPS EBS volumes for storing objects, one 200-IOPS EBS volume for the account DB and one 500-IOPS EBS volume for the container DB.

The proxy node has 10Gbps Ethernet, while the storage node has 1Gbps Ethernet.

Software Settings

We use OpenStack Swift version 1.6.1, and the authentication method on the proxy node is TempAuth. All proxy, container, account and object-related parameters are left at their defaults, except: in proxy-server.conf, workers = 500; in account-server.conf, workers = 32; in container-server.conf, workers = 32 and db_preallocation = on; in object-server.conf, workers = 32.

The swift-bench, proxy and authentication services run on the proxy node, and we ensure that the proxy server is never the bottleneck of the Swift cloud. The account, container, object and rsync services run on the storage nodes.

The number of replicas in the Swift cloud is set to two and the main memory of each node is fully utilized for caching the files and data.

Benchmark results

Figure 1 shows the operation rate (operations per second on the Y-axis) of the PUT operation for the two Swift implementations over the benchmark window, when the object size is 10KB. Overall, we notice that when the object size is 10KB, the 1000-IOPS Swift achieves a higher operation rate than the 500-IOPS Swift, with a 68% higher operation rate observed by the time 10 million objects have been uploaded.

Figure 1: Comparing two Swift implementations when the object size is 10KB


To compare with Figure 1, we also plot the operation rate of the PUT operation when the object size is 1MB, as shown in Figure 2.

Figure 2: Comparing two Swift implementations when the object size is 1MB


In contrast with Figure 1, the two Swift implementations show the same performance when the object size is 1MB. Moreover, in Figure 1, while the 10KB objects are being uploaded, the performance of both Swift implementations keeps decreasing from the first object upload onwards. However, when the object size is 1MB (see Figure 2), the performance of both Swift implementations increases initially and then becomes stable after the initial stage.

We summarize the results in Figure 1 and Figure 2 as follows:

(1) For an upload workload that mostly contains small objects (e.g. 10KB in our test), it is a good practice to use faster I/O devices for the container DB: because each small object is written to the object I/O devices quickly, the container DB needs a faster I/O device to keep up with the high rate of small-object uploads.

(2) For an upload workload that mostly contains larger objects, using faster I/O devices for the container DB does not help much. This is because the storage node spends more time storing the large objects on its I/O devices, and consequently the container DB is updated relatively infrequently. So, there is no need to supply the container DB with faster I/O devices.

Besides the discussion on how to provision the I/O device for the container DB, we also want to discuss how to provision other types of resources in the storage node for these two workloads. To this end, we also monitored the CPU usage, network bandwidth and the I/O devices (that are used for storing the objects) of the storage node during the runs of our benchmarks and summarize our observations below.

CPU: Compared to the case of uploading large objects, CPU usage is higher when small objects are being uploaded, because (1) the object service is much busier handling the newly uploaded small objects every second, and (2) the container service has to deal with more updates to the container DB. Thus, more CPU resource in the storage node is consumed when uploading small objects.

Network bandwidth: Uploading large objects consumes more network bandwidth. This can be verified from Figure 1 and Figure 2: in Figure 1, by the time 10 million objects have been uploaded, the operation rate of the 1000-IOPS Swift is 361 and the object size is 10KB, so the total network bandwidth is about 3.5 MB/s. However, while uploading the large objects (see Figure 2), when 100K objects have been uploaded, the operation rate of the 1000-IOPS Swift is 120 and the object size is 1MB, so the total network bandwidth is around 120 MB/s.

I/O devices for storing the objects: The I/O pattern on these devices is more random when small objects are being uploaded. This can be verified by Figure 3, where we plot the distribution of the logical block number (LBN) distance between two successive I/Os. As seen from Figure 3, when uploading objects of 1MB size, only 9% of successive I/Os are separated by more than 2.5 million LBN. However, when uploading objects of 10KB size, about 38% of successive I/Os are more than 2.5 million LBN apart. This comparison shows that the I/O pattern generated by uploading 1MB objects is much less random. For reference, we also plot the pattern for a large sequential write on the same storage node. We observe that for the case of uploading 1MB objects, 70% of successive I/Os are more than 80 and less than 160 LBN apart, which is also the range into which most of the successive I/Os for the sequential write fall.

Figure 3: The distribution of the logical block number (LBN) distance between two successive I/Os for the 1MB and 10KB object sizes. (“M” denotes million on the x-axis)
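The LBN-distance metric itself is straightforward to compute from a block trace. The sketch below assumes a list of starting LBNs of successive I/Os (e.g. parsed from a blktrace capture; the parsing is omitted), with bucket boundaries mirroring the 160 and 2.5 million LBN thresholds discussed above.

    # Compute the distribution of distances (in logical block numbers) between
    # successive I/Os. The input is assumed to be a list of starting LBNs in
    # issue order, e.g. parsed from a blktrace capture (parsing omitted).

    def lbn_distance_histogram(lbns, near=160, far=2_500_000):
        counts = {"<=160": 0, "161-2.5M": 0, ">2.5M": 0}
        for prev, cur in zip(lbns, lbns[1:]):
            dist = abs(cur - prev)
            if dist <= near:
                counts["<=160"] += 1
            elif dist <= far:
                counts["161-2.5M"] += 1
            else:
                counts[">2.5M"] += 1
        total = max(len(lbns) - 1, 1)
        return {k: v / total for k, v in counts.items()}

    print(lbn_distance_histogram([0, 80, 200, 3_000_000, 3_000_120]))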


To summarize, the important take-away points from the above discussion are:

For an upload workload that mostly contains small objects (pebbles), provisioning the storage node with a faster CPU, faster I/O devices for the container DB and only moderate network bandwidth will be rewarded with a higher operation rate. We should avoid using I/O devices with very low random IOPS for storing the objects, because the I/O pattern on those devices is far from sequential and they would become the bottleneck of the storage node. So, use of SSDs can be considered for this workload.

For an upload workload that mostly contains large objects, it is adequate to provision the storage node with a commodity CPU and moderate I/O speed for the container DB. In order to achieve better throughput (MB/s) from the Swift cloud, it is recommended to choose a high-bandwidth network and I/O devices with high sequential throughput (MB/s) for storing the objects. IOPS are not critical for this workload, so standard SATA drives may be sufficient.

Of course, these choices have to be aligned with higher-level choices, e.g. the number of storage and proxy nodes. Overall, cloud builders can benefit from the optimization practices we mentioned in our Swift Advisor blog.

If you are thinking of putting together a storage cloud, we would love to discuss your challenges and share our observations. Please drop us a note at  swift@zmanda.com