Amazon S3 vs. Amazon Glacier: a simple backup strategy in the cloud

When you set out to design your first application for hosting on AWS (Amazon Web Services), you will eventually end up considering your options for protecting your and your customers’ data against accidental loss.
While you may have designed a highly resilient and durable solution, this does not necessarily protect you from administrative mishaps, data corruption or malicious attacks against your system. Those risks can only be mitigated with an effective backup strategy.
Thanks to Amazon’s Simple Storage Service (S3) and its younger sibling Amazon Glacier, you have the right services at hand to establish a cost-effective, yet practical backup solution.

Within Amazon S3, data is managed as individual objects. This is in contrast to Amazon’s Elastic Block Store (EBS) or the local file system of your traditional PC, where data is managed in a directory hierarchy.
The abstraction away from the lower layers of storage and the separation of data from its metadata come with a number of benefits. For one, Amazon can provide a highly durable storage service at a fraction of the cost of block storage. You also only pay for the amount of storage you actually use, so you don’t need to second-guess your needs and pre-allocate disk space upfront.

Hierarchical storage with Amazon Glacier

Lifecycle rules allow you to manage the life cycle of the objects stored in S3. After a set period of time, you can have your objects either automatically deleted or archived off to Amazon Glacier.
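
To illustrate, below is a minimal sketch of such a lifecycle rule defined through the AWS SDK for Python (boto3); the bucket name, prefix and time periods are hypothetical examples.

import boto3

s3 = boto3.client("s3")

# Transition objects under the given prefix to the Glacier storage class
# after 30 days and delete them entirely after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)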


Amazon Glacier is marketed by AWS as “extremely low cost storage”. The cost per terabyte of storage per month is again only a fraction of the cost of S3. Amazon Glacier is pretty much designed as a write once, retrieve never (or rather rarely) service. This is reflected in the pricing, where extensive restores come at an additional cost and the restore of objects requires lead times of up to 5 hours.

Amazon S3 with Glacier vs. Amazon Glacier

At this stage we need to highlight the difference between the ‘pure’ Amazon Glacier service and the Glacier storage class within Amazon S3. S3 objects that have been moved to Glacier storage using S3 lifecycle policies can only be accessed (or shall I say restored) using the S3 API endpoints. As such they are still managed as objects within S3 buckets, instead of as Archives within Vaults, which is the Glacier terminology.
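
As a consequence, a restore of such an object is requested through the S3 API rather than the Glacier API. Below is a minimal sketch using boto3; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

# Ask S3 to stage a temporary copy of the archived object. Once the
# restore job completes (after the lead time discussed above), the copy
# remains retrievable from S3 for the number of days specified.
s3.restore_object(
    Bucket="my-backup-bucket",     # hypothetical bucket name
    Key="backups/db-dump.sql.gz",  # hypothetical object key
    RestoreRequest={"Days": 7},
)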

This differentiation is important when you look at the costs of the services. While Amazon Glacier is much cheaper than S3 on storage, charges are approximately ten times higher for archive and restore requests. This reiterates the store once, retrieve seldom pattern. Amazon also reserves 32 KB for metadata per Archive within Glacier, instead of 8 KB per Object in S3, both of which are charged back to the user. This is important to keep in mind for your backup strategy, particularly if you are storing a large number of small files. If those files are unlikely to require restoring in the short term, it may be more cost effective to combine them into an archive and store them directly within Amazon Glacier.
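
A minimal sketch of that approach, assuming a local directory of small files and a pre-existing Glacier vault (both names are hypothetical):

import tarfile
import boto3

# Bundle the small files into a single compressed archive first, so only
# one Archive (and one 32 KB metadata overhead) is stored in Glacier.
archive_name = "small-files.tar.gz"
with tarfile.open(archive_name, "w:gz") as tar:
    tar.add("small-files/")

glacier = boto3.client("glacier")
with open(archive_name, "rb") as body:
    response = glacier.upload_archive(
        vaultName="my-backup-vault",  # hypothetical vault name
        archiveDescription="Bundled small files",
        body=body,
    )

# Keep the returned archive ID safe - it is the handle for any later retrieval.
print(response["archiveId"])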

Tooling

Fortunately, there is a large variety of tools available on the web that allow you to consume the AWS S3 and Glacier services to create backups of your data. They range from stand-alone tools for your local PC to enterprise storage solutions.

Just bear in mind that whatever third-party tool you are using, you will need to grant it access to your AWS account. You need to ensure that the backup tool only gets the minimum amount of access required to perform its duties. For this reason it is best to issue a separate set of access keys for this purpose. You may also want to consider backing up your data to an entirely independent AWS account. Depending on your individual risk profile, and considering that your backups tend to provide the last-resort recovery option after a major disaster, it may be wise to keep those concerns separated. This particularly protects you against cases like Code Spaces, where all services and data within the account got wiped out entirely.
For reference, we have included instructions below for the configuration of dedicated backup credentials, using my backup tool of choice, CloudBerry, as an example.

AWS Identity and Access Management

Identity and Access Management (IAM) allows you to manage users and groups for your AWS account and to define fine-grained policies for access management of the various services and resources. To get started, log in to the AWS Management Console and open the link for IAM.

This opens the IAM Dashboard. Once in the Dashboard you can navigate to Users and select the Create New Users option. Selecting the “Generate an access key for each User” option ensures that an access key is issued for each user at creation time. Should you miss that step, an access key can also be issued at a later time.

After confirming the dialogue you will be given the opportunity to download the Security Credentials, consisting of a unique Access Key identifier and the Secret Access Key itself. Naturally, these credentials should be stored in a secure place.
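
If you prefer to script these steps, below is a minimal sketch of the same user and access key creation via boto3; the user name is a hypothetical example.

import boto3

iam = boto3.client("iam")

# Create the dedicated backup user.
iam.create_user(UserName="backup-tool")

# Issue the access key pair. The secret is returned only once, so store
# it in a secure place immediately.
key = iam.create_access_key(UserName="backup-tool")
print(key["AccessKey"]["AccessKeyId"])
print(key["AccessKey"]["SecretAccessKey"])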

By default, new users will not have any access to any of the resources within the account. Access is granted by attaching an IAM policy directly to a user account or by adding the user to a group with an IAM policy attached. To attach a user policy to an account, select the user and open the Permissions tab.

IAM policies allow for very granular access control to AWS resources; hence I am not going into too much detail here. Policies can be defined using pre-defined templates or the policy generation tool. For the purpose of allowing your backup tool write access to your AWS S3 bucket, just select the Custom Policy option.

The policy below grants three different sets of rights:

  • Access to AWS S3 to list all buckets for the account,
  • Access to the bucket MyBucketName, and
  • The ability to read, write and delete objects within the MyBucketName bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::MyBucketName"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::MyBucketName/*"]
    }
  ]
}

If you don’t want to give access to list all available buckets within your account, just omit the first statement, shown below, from the policy document. In this case, though, the bucket name cannot be selected within the application.

    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
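
For completeness, the policy can also be attached programmatically instead of through the Permissions tab. Below is a minimal sketch using boto3; it assumes the user created earlier and that the JSON document above has been saved as backup-policy.json (both names are hypothetical).

import boto3

iam = boto3.client("iam")

# Attach the policy document as an inline policy on the backup user.
with open("backup-policy.json") as f:
    policy_document = f.read()

iam.put_user_policy(
    UserName="backup-tool",         # hypothetical user name
    PolicyName="backup-s3-access",  # hypothetical policy name
    PolicyDocument=policy_document,
)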

Finally

While this post primarily focussed on backup options for your hosted environments, it is not limited to those. Amazon S3 and Glacier are available worldwide through public API endpoints.
Additionally, enterprises can make use of the AWS Storage Gateway to back up their on-premises data to AWS. Well-known enterprise backup software from vendors like Commvault, EMC or Symantec also provides options to utilise Amazon’s cloud storage as an additional storage tier within your backup strategy.