Archive for the ‘Network Backup and Recovery’ Category

Amanda Enterprise 3.3.5 Release with Hyper-V and Vaulting Support

Friday, December 6th, 2013

Our objective with Amanda Enterprise is to continue widening its umbrella to protect the most common IT infrastructure elements deployed in the modern data center. We also seek to support the various workflows for secondary and tertiary data desired by IT administrators. The new release of Amanda Enterprise (3.3.5) delivers two key features toward these objectives.

Hyper-V Support

Hyper-V is a native hypervisor that is becoming increasingly popular, especially in Windows server-centric IT shops. Amanda Enterprise now supports image-level backup of live VMs running on Hyper-V. We support Hyper-V installed as a role on either Windows Server 2012 or Windows Server 2008.

One license for the Zmanda Windows Client and Zmanda Client for Microsoft Hyper-V enables you to back up an unlimited number of VMs on one hypervisor. You run the Zmanda Windows Client on the hypervisor and configure backups of VMs via the Zmanda Management Console (ZMC, the GUI for Amanda Enterprise). ZMC discovers the VMs running on each hypervisor, and you can choose whether to back up all VMs or a selected set on each hypervisor.

A full, consistent image of the VM is backed up to the media of your choice - disk, cloud storage or tape.

Of course, you can continue to install and run separately licensed Zmanda Clients within your virtual machines (guests) to perform finer-grained backups. This enables you to recover from smaller disasters (like accidental deletion of a file) without recovering the whole VM. Consider having different schedules for your file-level or application-level backups and your VM-level backups - e.g., file-level backups may be performed daily while VM-level backups are performed each weekend.

Vaulting Support

While many of our customers have already implemented disk-to-disk-to-cloud (d2d2c) or disk-to-disk-to-tape (d2d2t) data flows using our products, we have formally introduced Vaulting as a feature supported via Zmanda Management Console. There are two key use cases supported by this feature:

  • Backup to disk and then Vault to lower cost disk or remote disk (via NFS)
  • Backup to disk and then Vault to cloud or tape for offsite storage

Our implementation allows very flexible media duplication (e.g., you can vault from multiple disk volumes to a smaller number of physical tapes). You can also choose to vault only the latest full backups for off-site archiving purposes. All storage devices supported by Amanda Enterprise can serve as a source or destination for vaulting (e.g., for one backup set you can vault to a tape library, and for another backup set on the same backup server you can vault to cloud storage). The backup catalog of Amanda Enterprise keeps track of the various backup copies and allows restoration from either the original backup media volumes or the vaulted volumes.

Note that vaulting is most useful when you want to duplicate your backup images. If you simply want a data flow that periodically moves your backups, e.g. from disk to cloud, you can use the staging area in Amanda Enterprise and treat the staging-area storage as your first backup location.

In addition to the above new features, Amanda Enterprise 3.3.5 has several bug fixes and usability improvements. If you are interested in learning more, you can join one of our upcoming webinars (the next one is scheduled for December 18th): http://zmanda.com/webinars.html

If you are already on the Amanda Enterprise platform, you can discuss upgrading to 3.3.5 with your Zmanda support representative. If you are interested in evaluating Amanda Enterprise for your organization, please contact us at zsales@zmanda.com

Amanda Enterprise 3.3 brings advanced backup management features

Wednesday, March 20th, 2013

Built on extensive research and development, combined with active feedback from a thriving open source community, Amanda Enterprise (AE) 3.3 is here! AE 3.3 has significant architecture and feature updates and is a robust, scalable and feature-rich platform that meets the backup needs of heterogeneous environments, across Linux, Windows, OS X and Solaris-based systems.

As we worked to further develop Amanda Enterprise, it was important to us that the architecture and feature updates would provide better control and management for backup administration.  Our main goal was to deliver a scalable platform which enables you to perform and manage backups your way.

Key enhancements in Amanda Enterprise include:

Advanced Cloud Backup Management: AE 3.3 now supports use of many new and popular cloud storage platforms as backup repositories. We have also added cloud backup features to give users more control over their backups for speed and data priority.

Backup Storage Devices Supported by Amanda Enterprise 3.3

Platforms supported now include Amazon S3, Google Cloud Storage, HP Cloud Storage, Japan’s IIJ GIO Storage Service, and private and public storage clouds built on OpenStack Swift. Notably, AE 3.3 supports all current Amazon S3 locations including various locations in US (including GovCloud), EU, Asia, Brazil and Australia.

Cloud Storage Locations Supported by Amanda Enterprise

In addition to the new platforms, you can now control how many parallel backup (upload) or restore (download) streams you want based on your available bandwidth. You can even throttle upload or download speeds at the backup-set level; for example, you can give higher priority to the backup of your more important data.

Optimized SQL Server and Exchange Backups: If you are running multiple SQL Server or Exchange databases on a Windows server, AE 3.3 allows selective backup or recovery of an individual database. This enables you to optimize the use of your backup resources by selecting only the databases you want to back up, or to improve recovery time by enabling recovery of a selected database. Of course, the ability to do an express backup and recovery of all databases on a server is still available.

To further streamline configuration, the Zmanda Management Console (the GUI for Amanda Enterprise) now automatically discovers databases on a specific Windows server, allowing you to simply pick and choose those you want to back up.

Improved Virtual Tape and Physical Tape Management: Our developers have done extensive work in this area to enhance usability, including seamless management of available disk space. With extensive concurrency added to the Amanda architecture, you can eliminate the staging disk for backup-to-disk configurations: AE 3.3 writes parallel streams of backups directly to disk without going through a staging disk. You can still optionally configure a staging disk for backups to tape or cloud to improve fault tolerance and data streaming.

Better Fault Tolerance: When backing up to tapes, AE 3.3 can automatically withstand the failure of a tape drive. Simply configure a backup set to use more than one tape drive in your tape library; if one of the drives is unavailable, AE will automatically start using one of the available drives.

NDMP Management Improvements: AE 3.3 allows for selective restore of a file or a directory from a Network Data Management Protocol (NDMP) based backup. Now, you can also recover to an alternative path or an alternative filer directly from the GUI. Support for compression and encryption for NDMP based backups has also been added to the GUI. Plus, in addition to devices from NetApp and Oracle, AE now also supports NDMP enabled devices from EMC.

Scalability, Concurrency and Parallelism: Many more operations can now be executed in parallel. For example, you can run a restore operation while active backups are in progress. Parallelism has also been added to various operations, including backup to disk, cloud and tape.

Expanded Platform Support: Our goal is to provide a backup solution which supports all of the key platforms deployed in today’s data centers. We have updated AE 3.3 to support the latest versions of Windows Server, Red Hat Enterprise Linux, CentOS, Fedora, Ubuntu, Debian and OS X. With AE, you have the flexibility to choose the platforms best suited for each application in your environment – without having to worry about the backup infrastructure.

Want to Learn More?

There are many new enhancements to leverage! To help you dive in, we hosted a live demonstration of Amanda Enterprise 3.3. The session provides insights into best practices for setting up a backup configuration for a modern data center.

Zmanda “googles” cloud backup!

Friday, May 11th, 2012

Today, we are thrilled to announce a new version of Zmanda Cloud Backup (ZCB) that backs up to Google Cloud Storage. It feels great to support perhaps the first mainstream cloud storage service we were introduced to (via the breakthrough Gmail and other Google services), and considering the huge promise shown by Google’s cloud services, we are sure that this version will be very useful to many of our customers.

However, a new cloud storage partner explains only part of the excitement. :) What makes this version more significant to us is its new packaging. As you may be aware, until now ZCB came only in a pay-as-you-go format. While this option has been great for customers who value its flexibility, we realized that other customers (such as government agencies) need a fixed amount to put down in their proposals and budget provisions. To put it differently, these customers would rather trade off some flexibility for certainty.

So with these customers in mind, we chose to offer this ZCB version in the following prepaid usage quota based plans:

  • $75/year for 50 GB
  • $100/year for 100 GB
  • $1,000/year for 1000 GB
  • $10,000/year for 10000 GB

Note that the above GB values are the maximum size of data users can store on the cloud at any point in time. The prices above are inclusive of all costs of cloud storage and remain unaffected even if you wish to protect multiple (unlimited!) systems.

So what are the benefits of this new pricing option? Here are some:

  • Budget friendly: Whether you are an IT manager submitting your annual IT budget for approval or a service provider vying for a client’s business, the all-inclusive yearly plans are a great option, one you can confidently put down in writing.
  • Cost effective: If you know your requirements well, this option turns out to be dramatically cost effective. Here is a rough comparison of our pricing with some other well-known providers:

    Note:
    Zmanda Cloud Backup: the annual plan pricing for the Google Cloud Storage version was used.
    MozyPro: based on http://mozy.com/pro/pricing/; the “Server Pass” option was chosen since ZCB protects server applications at no extra cost.
    JungleDisk: based on https://www.jungledisk.com/business/server/pricing/; the Rackspace storage option was used since it was the only “all-inclusive” price option.

  • More payment options: In addition to credit cards, this version supports a variety of payment options (such as bank transfer, checks, etc.). So whether you are a government agency or an international firm, the mode of payment will never be an issue.
  • Simplified billing and account management: Since this aspect is handled entirely by Zmanda, managing your ZCB subscription is much easier and more user friendly. No more hassle of updating your credit card information, and no need to manage multiple accounts. When you need help, just write to a single email address (zcb@zmanda.com) or open a support case with us, and we will assist you with everything you need.
  • Partner friendly: The direct result of all the above benefits is that reselling this ZCB version is much simpler and more rewarding. If you are interested in learning more, do visit our new reseller page for details.

So with all the great benefits above, do we still expect some customers to choose our current pay-as-you-go ZCB version for Amazon S3? Of course! As we said, if your needs are currently small or unpredictable, the flexibility of scaling up and down without committing to a long term plan is a sensible option. And the 70 GB free tier and volume discount tier offered on this ZCB version can keep your monthly costs very low.

Oh, and I almost forgot - along with this version, we have also announced the availability of the ZCB Global Dashboard, a web interface to track usage and backup activity of multiple ZCB systems in a single place. If you have multiple ZCB systems in your environment, or you are a reseller, it will be extra useful to you.

As we work on further enhancing our ZCB solution, please keep sending us your feedback at zcb@zmanda.com. Much more is cooking with cloud backup at Zmanda. We will be back with more exciting news soon!

-Nik

Zmanda @ Oracle OpenWorld 2010

Tuesday, September 7th, 2010


If you are coming to this year’s Oracle OpenWorld 2010, please do visit us at Booth #3824.

We will have our backup solution experts at the show to discuss any of your database or infrastructure backup needs.

When it comes to backing up various products offered by Oracle, we have several solutions:

We hope to see you at the show!

Go Tapeless - Use Zmanda Cloud Backup for backup and disaster recovery

Wednesday, June 23rd, 2010

If you are in charge of ensuring backup and disaster recovery of critical servers for your business, you have undoubtedly grappled with unwieldy tapes. In this age of digital everything, writing to tapes and then shipping them to a remote location seems like a relic from another era. Advances in Cloud based services, e.g. those offered by Amazon Web Services, provide an excellent alternative to tapes for backup and disaster recovery.

We have been offering an Amazon S3-based cloud backup solution for about three years now. Today we are announcing the third generation of our Zmanda Cloud Backup product. Particularly exciting for me is the support for the Asia Pacific region.

Cloud Backup to Three Continents

For many of the same reasons that Amazon picked Singapore as their first Asia Pacific Region, Singapore is a great destination to preserve your valuable assets. The performance and robustness provided by Singapore’s Internet connectivity are a major plus for backup and disaster recovery needs.

Backing up your data to the cloud requires several steps. You need to (1) plan what you want to back up and when; (2) extract data out of your live applications, e.g. SQL Server or Exchange; (3) stage this backup image for transfer to the cloud; (4) monitor the transfer for any Internet hiccups and take corrective actions; and (5) delete backup images which have expired per your retention policy. Zmanda Cloud Backup automates these steps through easy GUI-based backup configuration and management. ZCB integrates with S3’s REST API to coordinate the transfer of on-premises data to the storage cloud.
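To make two of those steps concrete, here is a hypothetical sketch (not ZCB's actual implementation) of how an upload loop can ride out Internet hiccups with retries, and how expired images can be pruned per a retention policy. The function and field names are illustrative only:

```python
import time
from datetime import datetime, timedelta

def upload_with_retries(transfer, image, max_attempts=5, base_delay=1.0):
    """Try transfer(image) up to max_attempts times, backing off between failures."""
    for attempt in range(max_attempts):
        try:
            return transfer(image)
        except IOError:
            if attempt == max_attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

def prune_expired(images, retention_days, now=None):
    """Keep only images whose creation time is within the retention window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [img for img in images if img["created"] >= cutoff]
```

With a 7-day retention policy, an image created 9 days ago would be dropped by `prune_expired`, while a transient network failure during upload is simply retried rather than failing the whole run.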

In the third-generation ZCB we also added support for international character sets. So ZCB is friendly with files and folders named in, for example, Chinese (Simplified or Traditional), Japanese or Korean.

Backup What screenshot - Chinese filenames

Backup What screenshot - Japanese filenames

While a lot of Zmanda’s customers back up to local disks or tapes, cloud backup is the fastest-growing part of our business. In many environments, customers are backing up some backup sets to local media and other backup sets to the cloud - with plans to move all backups to cloud storage in a few years. We have seen this adoption across the board, including in the traditionally conservative financial industry. So it appears more and more IT managers are daring to go tapeless when it comes to their backup operations!

Disaster Recovery in the Cloud

Monday, June 21st, 2010

Most small and medium-sized businesses do not have a formal Disaster Recovery (DR) plan and implementation because of its cumbersome and costly nature. Various factors make DR complex, including: (1) allocation and administration of remote compute and storage resources; (2) the data transport mechanism, e.g. tape shipment or data replication; and (3) application environment synchronization. To make matters worse, regular testing of a DR implementation tends to be complicated, and in many cases not practical.

Cloud Computing provides an excellent means to radically simplify the DR process. This is achieved by backing up your critical applications to a storage cloud (e.g. Amazon S3) and preparing to quickly recover in a nearby compute cloud (e.g. Amazon EC2).

We have two solutions for backup and DR in the cloud: Amanda Enterprise (with the Amazon S3 Option) and Zmanda Cloud Backup (ZCB). Amanda Enterprise is meant for environments with heterogeneous systems, whereas ZCB is targeted at small businesses with a handful of Windows servers and desktops.

Setup of Amanda Enterprise for Cloud Based DR

Setup of Zmanda Cloud Backup for Cloud Based DR

The process of setting up DR in the cloud is as follows:

  1. Set up backup process to Amazon S3.
  2. Complete first backup of applications on primary site to S3.
  3. Configure standby VMs on EC2 to match the OS (and patch level) of the corresponding systems on your primary site. For all data storage, use Elastic Block Storage, so you have persistent data across reboots.
  4. Install Zmanda backup software on these standby VMs.
  5. Install the same S3 certificate that is used in step #1 on the standby VMs.
  6. In the case of Amanda Enterprise, set up the AE-DR option to replicate the backup catalog and configuration to the standby VM running the AE server.
  7. Perform full recovery from S3 to standby VMs.
  8. Take a snapshot of the standby VMs.
  9. Shutdown standby VMs.
  10. Optionally, start the standby VMs periodically to repeat steps #7-#9. This reduces the time to recover after a disaster and also tests your DR process.

If you are considering the cloud for your DR needs, come join us tomorrow (June 22nd) for a webinar, Leveraging the Cloud for Radically Simple and Cost-Effective Disaster Recovery. Noted storage analyst Lauren Whitehouse from Enterprise Strategy Group will be joining me.

What’s New in Amanda Community: Postgres Backups

Thursday, March 25th, 2010

Second installment in a series of posts about recent work on Amanda.

The Application API allows Amanda to back up structured data — data that cannot be handled well by dump or tar. Most databases fall into this category, and with the 3.1 release, Amanda Community Edition ships with ampgsql, which supports backing up Postgres databases using the software’s point-in-time recovery mechanism.

The how-to for this application is on the Amanda wiki.

Operation

Postgres, like most “advanced” databases, uses a logging system to ensure consistency even in the face of (some) hardware failures. In essence, it writes every change that it makes to the database to the logfile before changing the database itself. This is similar to the operation of logging filesystems. The idea is that, in the face of a failure, you just replay the log to re-apply any potentially corrupted changes.

Postgres calls its log files WAL (write-ahead log) files. By default, they are 16MB. Postgres runs a shell command to “archive” each logfile when it is full.
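The "archive" shell command mentioned above is configured via Postgres's archive_command setting. As an illustration only (the archive directory path here is made up; use whatever location your backup layout calls for), a minimal postgresql.conf setup might look like this:

```ini
# postgresql.conf -- illustrative WAL archiving setup
archive_mode = on                                        # enable WAL archiving
archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f'  # %p = path to the WAL file, %f = its file name
```

Postgres runs this command once per completed WAL file; a non-zero exit status makes it retry, so the command must be safe to re-run.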

So there are two things to back up: the data itself, which can be quite large, and the logfiles. A full backup works like this:

  • Execute PG_START_BACKUP(ident) with some unique identifier.
  • Dump the data directory, excluding the active WAL logs. Note that the database is still in operation at this point, so the dumped data, taken alone, will be inconsistent.
  • Execute PG_STOP_BACKUP(). This archives a text file with the suffix .backup that indicates which WAL files are needed to make the dumped data consistent again.
  • Dump the required WAL files

An incremental backup, on the other hand, only requires backing up the already-archived WAL files.

A restore is still a manual operation — a DBA would usually want to perform a restore very carefully. The process is described on the wiki page linked above, but boils down to restoring the data directory and the necessary WAL files, then providing postgres with a shell command to “pull” the WAL files it wants. When postgres next starts up, it will automatically enter recovery mode and replay the WAL files as necessary.
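For illustration (the archive path below is made up, and this reflects the recovery.conf mechanism of the Postgres versions this post describes), the "pull" shell command lives in a recovery.conf file in the data directory:

```ini
# recovery.conf -- illustrative; Postgres enters recovery mode on next startup
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'  # fetch each WAL file Postgres asks for
```

On startup, Postgres invokes this command for each WAL file it needs and replays them until the database is consistent.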

Quiet Databases

On older Postgres versions, making a full backup of a quiet database is actually impossible. After PG_STOP_BACKUP() is invoked, the final WAL file required to reconstruct a consistent database is still “in progress” and thus not yet archived. Since the database is quiet, Postgres never gets any closer to archiving that WAL file, and the backup hangs (or, in the case of ampgsql, times out).

Newer versions of Postgres do the obvious thing: PG_STOP_BACKUP() “forces” an early archiving of the current WAL file.

The best solution for older versions is to make sure transactions are being committed to the database all the time. If the database is truly silent during the dump (perhaps it is only accessed during working hours), then this may mean writing garbage rows to a throwaway table:

CREATE TABLE push_wal AS SELECT * FROM GENERATE_SERIES(1, 500000);  -- write enough WAL to force archiving
DROP TABLE push_wal;                                                -- the rows themselves are throwaway

Note that using CREATE TEMPORARY TABLE will not work, as temporary tables are not written to the WAL file.

As a brief encounter in #postgres taught me, another option is to upgrade to a more modern version of Postgres!

Log Incremental Backups

DBAs and backup admins generally want to avoid making frequent full backups, since they’re so large. The usual pattern is to make a full backup and then dump the archived log files on a nightly basis for a week or two. As the log files are dumped, they can be deleted from the database server, saving considerable space.

In Amanda terms, each of these dumps is an incremental, and is based on the previous night’s backup. That means that the dump after the full is level 1, the next is level 2, and so on. Amanda currently supports 99 levels, but this limit is fairly arbitrary and can be increased as necessary.

The problem in ampgsql, as implemented, is that it allows Amanda to schedule incremental levels however it likes. Amanda considers a level-n backup to be everything that has changed since the last level-n-1 backup. This works great for GNU tar, but not so well for Postgres. Consider the following schedule:

Monday level 0
Tuesday level 1
Wednesday level 2
Thursday level 1

The problem is that the dump on Thursday, as a level 1, needs to capture all changes since the previous level 0, on Monday. That means that it must contain all WAL files archived since Monday, so those WAL files must remain on the database server until Thursday.

The fix to this is to only perform level 0 or level n+1 dumps, where n is the level of the last dump performed. In the example above, this means either a level 0 or level 3 dump on Thursday. A level 0 is a full backup and requires no history. A level 3 would only contain WAL files archived since the level 2 dump on Wednesday, so any WAL files before that could be deleted from the database server.
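The proposed constraint is easy to state in code. A small sketch (hypothetical helper names, not ampgsql's actual implementation): given the level of the last dump performed, the only permitted choices are a fresh full (level 0) or exactly one level deeper.

```python
def allowed_next_levels(last_level):
    """Return the dump levels permitted under the level-0-or-(n+1) rule.

    last_level is the level of the most recent dump, or None if no dump
    has been performed yet (in which case a full is the only option).
    """
    if last_level is None:
        return [0]
    return [0, last_level + 1]

def choose_level(last_level, want_full=False):
    """Pick the next dump level: a full on demand, otherwise one level deeper."""
    if want_full or last_level is None:
        return 0
    return last_level + 1
```

For the Monday 0 / Tuesday 1 / Wednesday 2 schedule above, Thursday's choices are level 0 or level 3; a level 3 contains only WAL files archived since Wednesday, so anything older can be deleted from the database server.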

Summary

A powerful open source database system and the open source ampgsql plugin combine to produce a powerful protected storage system for your mission-critical data. We will continue to develop additional Application API plugins, and encourage you and other members of the community to do the same!

What’s New in Amanda: Automated Tests

Friday, March 12th, 2010

This is the first in what will be a series of posts about recent work on Amanda. Amanda has a reputation as old and crusty — not so! Hopefully this series will help to illustrate some of the new features we’ve completed, and what’s coming up. I’m cross-posting these on my own blog, too.

Among open-source applications, Amanda is known for being stable and highly reliable. To ensure that Amanda lives up to this reputation, we’ve constructed an automated testing framework (using Buildbot) that runs on every commit. I’ll give some of the technical details after the jump, but I think the numbers speak for themselves. The latest release of Amanda (which will soon be 3.1.0) has 2936 automated tests!

These tests range from highly-focused unit tests, for example to ensure that all of Amanda’s spellings of “true” are parsed correctly, all the way up to full integration: runs of amdump and the recovery applications.

The tests are implemented with Perl’s Test::More and Test::Harness. The result for the current trunk looks like this:

=setupcache.....................ok
Amanda_Archive..................ok
Amanda_Changer..................ok
Amanda_Changer_compat...........ok
Amanda_Changer_disk.............ok
Amanda_Changer_multi............ok
Amanda_Changer_ndmp.............ok
Amanda_Changer_null.............ok
Amanda_Changer_rait.............ok
Amanda_Changer_robot............ok
Amanda_Changer_single...........ok
Amanda_ClientService............ok
Amanda_Cmdline..................ok
Amanda_Config...................ok
Amanda_Curinfo..................ok
Amanda_DB_Catalog...............ok
Amanda_Debug....................ok
Amanda_Device...................ok
        211/428 skipped: various reasons
Amanda_Disklist.................ok
Amanda_Feature..................ok
Amanda_Header...................ok
Amanda_Holding..................ok
Amanda_IPC_Binary...............ok
Amanda_IPC_LineProtocol.........ok
Amanda_Logfile..................ok
Amanda_MainLoop.................ok
Amanda_NDMP.....................ok
Amanda_Process..................ok
Amanda_Recovery_Clerk...........ok
Amanda_Recovery_Planner.........ok
Amanda_Recovery_Scan............ok
Amanda_Report...................ok
Amanda_Tapelist.................ok
Amanda_Taper_Scan...............ok
Amanda_Taper_Scan_traditional...ok
Amanda_Taper_Scribe.............ok
Amanda_Util.....................ok
Amanda_Xfer.....................ok
amadmin.........................ok
amarchiver......................ok
amcheck.........................ok
amcheck-device..................ok
amcheckdump.....................ok
amdevcheck......................ok
amdump..........................ok
amfetchdump.....................ok
amgetconf.......................ok
amgtar..........................ok
amidxtaped......................ok
amlabel.........................ok
ampgsql.........................ok
        40/40 skipped: various reasons
amraw...........................ok
amreport........................ok
amrestore.......................ok
amrmtape........................ok
amservice.......................ok
amstatus........................ok
amtape..........................ok
amtapetype......................ok
bigint..........................ok
mock_mtx........................ok
noop............................ok
pp-scripts......................ok
taper...........................ok
All tests successful, 251 subtests skipped.
Files=64, Tests=2936, 429 wallclock secs (155.44 cusr + 31.48 csys = 186.92 CPU)

The skips are due to tests that require external resources - tape drives, database servers, etc. The first part of the list contains tests for almost all perl packages in the Amanda namespace. These are generally unit tests of the new Perl code, although some tests integrate several units due to limitations of the interfaces. The second half of the list is tests of Amanda command-line tools. These are integration tests, and ensure that all of the documented command-line options are present and working, and that the tool’s behavior is correct. The integration tests are necessarily incomplete, as it’s simply not possible to test every permutation of this highly flexible package.

The =setupcache test at the top is interesting: because most of the Amanda applications need some dumps to work against, we “cache” a few completed amdump runs using tar, and re-load them as needed during the subsequent tests. This speeds things up quite a bit, and also removes some variability from the tests (there are a lot of ways an amdump can go wrong!).

The entire test suite is run at least 54 times for every commit by Buildbot. We test on 42 different architectures - about a dozen Linux distros, in both 32- and 64-bit varieties, plus Solaris 8 and 10, and Darwin 8.10.1 on both x86 and PowerPC. The remaining runs cover special configurations — server-only, client-only, special runs on a system with several tape drives, and so on.

Red Hat Enterprise Linux and Amanda Enterprise: IT Manager’s Backup Solution

Thursday, January 14th, 2010

A backup server represents a very important component of any IT infrastructure. You need to pick the right components to implement a scalable, robust and secure backup server, and the choice of operating system has crucial implications. Red Hat Enterprise Linux (RHEL) provides many of the features needed from an ideal OS for a backup server. Some of these include:

Virtualization: RHEL includes a modern hypervisor (Red Hat Enterprise Virtualization Hypervisor) based on the Kernel-Based Virtual Machine (KVM) technology.  Amanda backup server can be run as a virtual machine on this hypervisor. This virtual backup server can be brought up as needed. This provides optimal resource management, e.g. you can bring up the backup server just at the time of backup window or for restores. A virtualized backup server also makes it much more flexible to change the resource levels depending on the business needs, e.g. if more oomph is needed from the backup server prior to a data center move.

High I/O Throughput: A backup server generates heavy I/O, typically characterized by large sequential writes. RHEL, as both a physical and a virtual system, provides the high I/O throughput needed for a backup-server workload. RHEL 5 allows switching I/O schedulers on the fly, so a backup administrator can fine-tune I/O activity to match the higher-level function (e.g. write-heavy backups vs. read-heavy restores).

Security: Securing a backup server is critical in any overall IT security planning. In a targeted attack, a backup server provides a juicy target, because data deemed important by an organization can all be found in one place. Security-Enhanced Linux (SELinux) in RHEL implements a variety of security policies, including U.S. Department of Defense style mandatory access controls, through the use of Linux Security Modules (LSM) in the Linux kernel. Amanda supports the RHEL SELinux configuration, allowing users to run the backup server in a secure environment.

Scalable Storage: Storage technologies built into RHEL provide the scalability needed from backup storage. The ext3 filesystem supports file systems of up to 16TB. Logical Volume Manager (LVM) allows backup storage on a pool of devices which can be extended when needed. System administrators can also leverage Global File System (GFS) to give the backup server direct access to the data to be backed up, bypassing the production network.

Compatibility: RHEL is found on the compatibility matrix of any modern secondary storage device - whether a tape drive, tape library or NAS device. RHEL also supports a wide variety of SAN architectures, including iSCSI and Fibre Channel. This, along with Amanda’s use of native drivers to access secondary media, gives IT managers the widest choice in the market for devices to store backup archives.

Manageability: An easy update mechanism, e.g. using yum with the Red Hat Network, makes it easier for the administrator to keep the backup server updated with the latest fixes (including security patches). Amanda depends on some system libraries and tools to perform backup and recovery operations. A system administrator can pare down a RHEL environment to the bare-minimum set of packages needed for Amanda, and then use RHN to keep those packages up to date.

Long Retention Lifecycle: Many organizations need to retain their backup archives for several years for business or compliance reasons. Each version of RHEL comes with seven years of support. This, combined with the open formats used by Amanda Enterprise, makes it practical for IT managers to implement truly long-term retention policies, confident that they will be able to recover their data several years from now.

In summary, if you are in the process of making a choice for your backup server, RHEL should certainly be on the short-list for operating systems, and (yes, we are biased) Amanda on the short-list for backup software. We will discuss this combination in detail in a webinar on January 21st. Red Hat is warming up this webinar by offering a $10 Starbucks card to every attendee. Join us!

Windows XP -> Cloud -> Windows 7

Monday, November 23rd, 2009

We recently added support for Windows 7 to both Zmanda Cloud Backup and Amanda Enterprise. Zmanda Cloud Backup stores its backup archives on the Amazon S3 storage cloud; Amanda Enterprise has the option to do so. Users can back up Windows file systems and system state, as well as various Microsoft applications and Oracle and MySQL databases. We now support all Windows versions supported by Microsoft, including Windows 7.

To upgrade from Windows XP to Windows 7, Microsoft recommends that users back up their Windows XP system to an external hard disk and then install Windows 7. Backup to (and restore from) the cloud offers another alternative, which we have tested in our labs.


If you have your backups (created by either ZCB or Amanda Enterprise) in the cloud, you can upgrade to Windows 7 and restore your file system after the Windows 7 installation. The system state backup includes application state that can be restored to an alternate location and selectively restored to Windows 7. As an added benefit, your data is preserved in a secure off-site location until you are sure your new environment is stable.