Architecting on AWS: Dynamic configuration vs master AMIs

Infrastructure as a Service holds the promise of reduced costs and increased flexibility, enabled through ease of operation and management. To seize that opportunity when architecting on AWS, however, we as IT professionals need to adapt how we view, manage and operate today’s technology.

The desire to respond with more agility to changing business needs and the ever increasing pace of innovation have helped form the DevOps service delivery model, in which the development and operations domains move closer together.
Modern cloud service providers support and continuously advance the model of coded infrastructure. As part of that they keep abstracting further away from our traditional understanding of IT infrastructure, to the point where compute services become a true commodity, similar to power or tap water.

If you are new to cloud computing and coded infrastructure, it is important to understand these underlying basics, as we are going to build on them at a later stage, as indicated in the ‘Outlook’ below.


When you create a new AWS EC2 instance from one of the (admittedly large) variety of Amazon Machine Images (AMIs), you will eventually need to customise certain configuration settings or deploy additional applications to tailor it for your solution. The base Microsoft Windows Server AMI, for example, doesn’t have a web server pre-installed. This provides you with the flexibility to configure the web server of your choice.
While we could log on to the machine after launch and manually deploy and configure our web server, this is obviously not going to be good enough long term, particularly not if we eventually want to be able to dynamically scale our environment. Even if you just want to make a single instance more fault tolerant, as described in our previous post in this series, you need to employ a basic level of automation.

Architecting on AWS: the options continuum

As with any good IT problem, there is more than one solution. Naturally, those options sit at opposing ends of a scale. Your task is to weigh up the advantages and disadvantages of each option to find the optimal solution for your needs.

Dynamic configuration

The standard AWS AMIs can be instructed to perform automated tasks or configuration actions at launch time. This is enabled by the EC2Config service for Windows and cloud-init scripts under Linux. You provide those instructions as “user data” as part of the advanced launch configuration of your instances as shown in the example below.
[Screenshot: providing user data in the advanced launch configuration of a new instance]

The user data instructions can contain either batch commands and PowerShell scripts on Windows, or shell scripts and cloud-init directives on Linux-based AMIs. The actual types of actions performed are limited only by your imagination and the total size limit of 16 kilobytes (a minor but important detail).
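As a minimal sketch of the Linux case (package names assume an Amazon Linux or similarly yum-based AMI), a user data script that installs and starts a web server at launch could look like this:

#!/bin/bash
# Minimal user data sketch: install and start a web server at launch.
# Assumes a yum-based AMI such as Amazon Linux.
yum update -y
yum install -y httpd
service httpd start
chkconfig httpd on
# Drop a simple placeholder page so the instance responds to requests straight away.
echo "<h1>Deployed via user data</h1>" > /var/www/html/index.html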

Pre-baked AMIs

Instead of configuring your instances dynamically at launch time, you can also create your own version of an Amazon Machine Image. Just launch a new instance, ensure that all your ‘static’ applications and settings have been applied, and finally create a new image from that instance. This is done in the AWS console using the Create Image option from the instance Actions menu, or using the create-image command from the Command Line Interface.
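For the CLI route, a hedged sketch (the instance ID, image name and description are placeholders) would be:

# Sketch: create a custom AMI from a prepared instance.
# Instance ID, name and description are placeholders.
aws ec2 create-image \
  --instance-id i-12345678 \
  --name "web-server-base-v1" \
  --description "Web server plus baseline configuration"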


Trade-offs

Whether to decide for a dynamic configuration or a master image approach depends on your individual use case. Each option has advantages and disadvantages that you need to understand and assess against each other in order to find the best solution for your scenario.

One advantage of using pre-baked AMIs is the reduced time to get a new instance from ‘launch’ to ‘ready’. With all components pre-configured and applications installed, you just need to wait for the instance to launch.
This obviously comes at a cost, as the image requires constant maintenance. Even if your application code is fairly static, you still need to patch your images regularly so that the resulting instances are not exposed to any new security threats.

On the other hand, dynamic configuration provides you with a lot of flexibility. Every instance you launch can have an ever so slightly different configuration.
Since you always start with an AWS-managed AMI, your security patches are ‘reasonably’ up to date (i.e. usually within five business days after Microsoft’s Patch Tuesday for Windows AMIs).
You are ‘paying’ for this additional service through the time it takes for your instance to get itself ‘ready’ while executing all launch scripts. You also need to be aware that the AMI ID changes whenever AWS releases a new, patched version of the image. This is particularly important for your scripted launches or AutoScaling configurations, as described in our previous post on this topic.
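One way to cope with changing AMI IDs in scripted launches is to resolve the most recent image at run time rather than hard-coding the ID. A rough sketch using the AWS CLI (the name filter is only an example and depends on the AMI family you actually use):

# Sketch: look up the most recent Amazon-owned Windows Server base AMI.
# The name pattern is an assumption; adjust it to the AMI family you use.
aws ec2 describe-images --owners amazon \
  --filters "Name=name,Values=Windows_Server-2012-R2_RTM-English-64Bit-Base-*" \
  --query "sort_by(Images, &CreationDate)[-1].ImageId" \
  --output text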

Fortunately, we are able to combine the two options to get the best of both worlds. For this scenario you would create an AMI that contains the applications and configuration items that change infrequently (e.g. Internet Information Services, Windows Update configuration, etc.). Items that change frequently (e.g. your own application) are then injected as part of the dynamic launch configuration.
This approach minimises the time to get a new instance to the ‘ready’ state, yet still provides you with a level of flexibility to influence the final result through the user data instructions.
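As a hedged illustration of this hybrid approach (the bucket name and paths are placeholders, and the AMI is assumed to already contain the web server), the user data of a pre-baked image might do nothing more than pull and deploy the latest application build:

#!/bin/bash
# Hybrid sketch: the AMI already contains the web server and static configuration;
# user data only deploys the frequently changing application artefact.
# The S3 bucket and paths are placeholders.
aws s3 cp s3://my-deployment-bucket/releases/latest.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html
service httpd restart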

Outlook

While this post provided you with an introduction to the entry-level functionality offered by AWS, this is really just the tip of the iceberg, intended to get your head into the right space for the concept of a coded infrastructure.
Auxiliary configuration management solutions like Chef, Puppet and PowerShell DSC provide you with additional flexibility and control over your larger deployments.
Based on Chef, AWS OpsWorks also provides you with an application management solution, which is currently limited to Linux based AMIs.

At re:Invent 2014 AWS also released AWS CodeDeploy, which supports the automated deployment of code updates to your Linux and Windows environments and is currently available in the North Virginia and Oregon regions. Knowing AWS, though, this is only going to be a short-term limitation, and we’ll be looking at this service, probably also in combination with Elastic Beanstalk and CloudFormation, at a later stage. In the interim you can start to learn more about the individual AWS services using the courses that are available from CloudAcademy.

DISCLOSURE: This post has originally been created for and sponsored by CloudAcademy.com.

Architecting on AWS: utilising elastic compute

Our last post in this series provided you with an overview of our example architecture on AWS. In this post we go into some more detail, focusing on elasticity using AWS EC2 (Elastic Compute Cloud); in particular, we will see how to use AutoScaling to make your computing infrastructure elastic and highly available.

But what is this elasticity that people keep going on about? According to Wikipedia, elasticity is defined as “the degree to which a system is able to adapt to workload changes by provisioning and deprovisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible.”
This is different to scalability, or, if you like, a specialisation of scalability. Scalability provides the ability to increase (or decrease) the amount of resources by scaling up (more powerful instances) or out (additional instances), which is usually done through manual intervention. Elasticity does the same, but in an autonomic manner, independent of human interaction.

But what does that mean for EC2? EC2 instances are sometimes regarded as nothing more than virtual machines hosted in the cloud. However, this view doesn’t take into account the auxiliary services that come as part of EC2, and it misses one key enabler of elasticity as defined above: AutoScaling.
AutoScaling is the ‘magic’ ingredient that allows a system hosted on EC2 to dynamically adapt to changes in demand. But how does it actually work?

How AutoScaling works

AutoScaling has two components: Launch Configurations and Auto Scaling Groups.

  • Launch Configurations hold the instructions for the creation of new instances. The instructions describe what type of instance AutoScaling needs to launch (e.g. t2.medium, m3.large), what Amazon Machine Image (AMI) the new instance is going to be based on, what roles or what storage is going to be associated with the instance, and so on.
  • Auto Scaling Groups on the other hand manage the scaling rules and logic, which are defined in policies. Those can be based on a schedule or on CloudWatch metrics. The CloudWatch service allows you to monitor all resources and applications that you have deployed on AWS. CloudWatch allows you to define alarms on metrics, which the AutoScaling policies subscribe to. Through the use of metrics you can, for example, implement rules that elastically scale your environment based on the performance of your deployed instances or on traffic volumes on the network, as shown in the sketch after this list.
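As a rough sketch of how these pieces fit together via the AWS CLI (resource names, the AMI ID, subnets and thresholds are placeholders, not values from a real environment):

# Sketch only: names, IDs and thresholds are placeholders.

# 1. Launch configuration: what to launch (optionally with user data for bootstrapping).
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc-v1 \
  --image-id ami-12345678 \
  --instance-type t2.medium \
  --user-data file://bootstrap.sh

# 2. Auto Scaling group: where to launch and how many instances to keep.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc-v1 \
  --min-size 2 --max-size 6 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# 3. Scaling policy: add one instance whenever the policy is triggered.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name scale-out-on-cpu \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# 4. CloudWatch alarm: trigger the policy on sustained high CPU across the group.
aws cloudwatch put-metric-alarm \
  --alarm-name web-asg-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=web-asg \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 70 --comparison-operator GreaterThanThreshold \
  --alarm-actions "<ARN returned by put-scaling-policy>"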

This doesn’t have to be the limit though. Since CloudWatch collects metrics from each and every resource deployed within your environment, you can choose a variety of different sources as inputs to your scaling events. Assume you have deployed an application on EC2 that processes requests from a queue like the Simple Queue Service (SQS). With CloudWatch you can monitor the length of the queue and scale your computing environment in or out based on the number of items in the queue at the time. And since CloudWatch also supports the creation of custom metrics through the API, you can use any of your application logging outputs as a trigger for utility compute scaling events.
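Publishing such a custom metric is a single API call; as a sketch (namespace, metric name and value are placeholders):

# Sketch: publish a custom metric value, e.g. derived from application logs,
# which a CloudWatch alarm and scaling policy can then act upon.
aws cloudwatch put-metric-data \
  --namespace "MyApplication" \
  --metric-name PendingJobs \
  --value 42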

How to use AutoScaling to achieve elastic computing

Independently of CloudWatch, you can also use the AutoScaling APIs to amend your scaling configuration, trigger scaling events or define the health of an instance. Defining the health status of your instances allows you to go beyond the internal health checking done by AutoScaling, which basically just confirms whether an instance is still running or not. As part of your internal application logic, you could set the health status as a result of certain error conditions. Once an instance is set to unhealthy, AutoScaling will take it out of service and spin up a fresh instance instead.
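Setting the health status from your own logic is again a single call; as a sketch (the instance ID is a placeholder):

# Sketch: mark an instance as unhealthy from application logic,
# prompting AutoScaling to replace it with a fresh one.
aws autoscaling set-instance-health \
  --instance-id i-12345678 \
  --health-status Unhealthy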

Auto Scaling can also be useful outside of traditional elasticity needs. It is commonly used in smaller environments to ensure that no fewer than a certain number of instances are running at any point in time. So if you are just starting up with that flashy new application that no one knows about just yet, or you are deploying an internal-facing business application, it is still good practice to make those instances part of an Auto Scaling group. This brings a number of advantages with it.
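As a minimal sketch of such a ‘self-healing’ single-instance group (names reuse the placeholders from the earlier sketch), you simply pin the minimum, maximum and desired capacity to one; if the instance fails its health check, AutoScaling replaces it:

# Sketch: a group of exactly one instance that is automatically replaced on failure.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name single-web-asg \
  --launch-configuration-name web-lc-v1 \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"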

Firstly, and most importantly, you are forcing yourself to design your application in a way that lends itself to the paradigm of disposable infrastructure. You therefore ensure that no state or data is ever stored on the instance itself.

Secondly, you ensure that the launch of a new instance is fully automated. While you may not yet use configuration management tools like Chef, Puppet or PowerShell DSC, you will set yourself on the right path by either maintaining a ‘master’ AMI or making use of the default AMIs in combination with bootstrapping through the instance’s user data.

Finally, with the first two strategies implemented, you are ready to scale your environment in case your idea becomes the hype of the month.

Summary

In summary, we have provided you with a variety of examples to help you understand the use of elasticity and scalability in relation to EC2, along with a summary of the services involved.

When scaling, particularly when using elastic scaling, you need to be conscious of the other services in your environment that form part of your solution. For example, you may need to consider whether your relational database can continue to respond to the increase in demand from the additional web or application servers. If you are utilising the Elastic Load Balancer (ELB) to distribute the load between your instances, you need to be aware that the ELB is itself designed as an elastic service based on EC2. For huge, sudden spikes in demand you unfortunately don’t quite get the elasticity you would wish for. Just as you ‘warm up’ your own environment by spinning up new instances in anticipation of an expected increase in demand (e.g. the launch of a marketing campaign), you are best advised to also contact AWS support in advance of the expected spike to ensure that the ELB is ready to respond to the demand immediately.

You can learn more about how to design a scalable and elastic infrastructure on AWS using the courses that are available from CloudAcademy. In particular, you might benefit from watching our course “How to Architect with a Design for Failure Approach”, where AutoScaling is used to help achieve high availability and fault tolerance in a common architecture.

DISCLOSURE: This post has originally been created for and sponsored by CloudAcademy.com.

Architecting on AWS: the best services to build a two-tier application

The notion of a scalable, on-demand, pay-as-you-go cloud infrastructure tends to be easily understood by the majority of today’s IT specialists. However, in order to fully reap the benefits of hosting solutions in the cloud, you will have to rethink traditional ‘on-premises’ design approaches. This should happen for a variety of reasons, the most prominent ones being design-for-cost and the adoption of a design-for-failure approach.

This is the first of a series of posts in which we will introduce you to a variety of entry-level AWS services, using the example of architecting a common two-tier application deployment (e.g. a mod_php LAMP stack) on AWS. We will use the architecture to explain common infrastructure and application design patterns pertaining to cloud infrastructure.
To start things off, we provide you with a high-level overview of the system and a brief description of the services utilised.

[Diagram: high-level overview of the two-tier reference architecture on AWS]

Virtual Private Cloud (VPC)

The VPC allows you to deploy services into segmented networks to reduce the vulnerability of your services to malicious attacks from the internet. Separating the network into public and private subnets allows you to safeguard the data tier behind a firewall and to connect only the web tier directly to the public internet. The VPC service provides flexible configuration options for routing and traffic management rules. An Internet Gateway enables connectivity to the Internet for resources that are deployed within public subnets.
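As a rough sketch (the CIDR blocks and resource IDs are placeholders), the basic VPC plumbing can be put in place via the CLI roughly as follows:

# Sketch: create a VPC and a public subnet, attach an Internet Gateway
# and route internet-bound traffic to it. CIDR blocks and IDs are placeholders.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-22222222 --vpc-id vpc-11111111
aws ec2 create-route --route-table-id rtb-33333333 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-22222222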

Redundancy

In our reference design we have spread all resources across two availability zones (AZs) to provide redundancy and resilience, catering for unexpected outages or scheduled system maintenance. As such, each availability zone hosts at least one instance per service, except for services that are redundant by design (e.g. Simple Storage Service, Elastic Load Balancer, Route 53, etc.).

Web tier

Our web tier consists of two web servers (one in each availability zone) deployed on Elastic Compute Cloud (EC2) instances. We balance external traffic to the servers using an Elastic Load Balancer (ELB). Dynamic scaling policies allow us to elastically scale the environment by adding web instances to, or removing them from, the Auto Scaling group. Amazon CloudWatch allows us to monitor demand on our environment and to trigger scaling events using CloudWatch alarms.

Database tier

Amazon’s managed Relational Database Service (RDS) provides the relational (MySQL, MS SQL or Oracle) environment for this solution. In this reference design it is established as a multi-AZ deployment. The multi-AZ deployment includes a standby RDS instance in the second availability zone, which provides increased availability and durability for the database service by synchronously replicating all data to the standby instance.
Optionally, we can also provision read replicas to reduce the demand on the master database. To optimise costs, our initial deployment may only include the master and standby RDS instances, with additional read replicas created in each AZ as dictated by demand.
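Adding a read replica later on is a single CLI call; as a sketch (the instance identifiers are placeholders):

# Sketch: create a read replica to offload read traffic from the master database.
aws rds create-db-instance-read-replica \
  --db-instance-identifier app-db-replica-1 \
  --source-db-instance-identifier app-db-master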

Object store

Our file objects are stored in Amazon’s Simple Storage Service (S3). Objects within S3 are managed in buckets, which provide virtually unlimited storage capacity. Object lifecycle management within an S3 bucket allows us to archive (transition) data to the more cost-effective Amazon Glacier service and/or to remove (expire) objects from the storage service based on policies.
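A hedged sketch of such a lifecycle policy applied via the CLI (the bucket name, prefix and retention periods are placeholders):

# Sketch: transition objects under 'logs/' to Glacier after 30 days and expire them after a year.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-app-assets \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-then-expire-logs",
      "Prefix": "logs/",
      "Status": "Enabled",
      "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
      "Expiration": { "Days": 365 }
    }]
  }'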

Latency and user experience

For minimised latency and an enhanced user experience for our world-wide user base, we utilise Amazon’s CloudFront content distribution network. CloudFront maintains a large number of edge locations across the globe. An edge location acts like a massive cache for web and streaming content.

Infrastructure management, monitoring and access control

Any AWS account should be secured using Amazon’s Identity and Access Management (IAM). IAM allows for the creation of users, groups and permissions to provide granular, role-based access control over all resources hosted within AWS.
The provisioning of the above solution to the regions is achieved using Amazon CloudFormation. CloudFormation supports the provisioning and management of AWS services and resources through scriptable templates. Once created, CloudFormation can also update the provisioned environment based on changes made to the ‘scripted infrastructure definition’.
We use the Route 53 domain name service for the registration and management of our Internet domain.
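As a sketch of working with CloudFormation from the CLI (the stack name and template file are placeholders), provisioning and later updating such a scripted definition could look like this:

# Sketch: provision a stack from a template, then roll template changes into the running environment.
aws cloudformation create-stack \
  --stack-name two-tier-app \
  --template-body file://two-tier-app.template
aws cloudformation update-stack \
  --stack-name two-tier-app \
  --template-body file://two-tier-app.template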

In summary, we have introduced you to a variety of AWS services, each of which has been chosen to address one or more specific concerns in regard to the functional and non-functional requirements of the overall system. In our upcoming posts we’ll investigate a number of the above services in more detail, discussing major design considerations and trade-offs in selecting the right service for your solution. In the meantime you can start to learn more about the individual AWS services using the courses that are available from CloudAcademy.

DISCLOSURE: This post has originally been created for and sponsored by CloudAcademy.com.

Hadoop streaming R on Hortonworks Windows distribution

The ability to combine executables with the Hadoop map/reduce engine is a very powerful concept known as streaming. Thanks to a very active community, there are ample examples available on the web catering for a large variety of languages and implementations. Sources like this and this have also provided some R language examples that are very easy to follow.
In my case I only needed to look at the integration between the two tools. Unfortunately, though, I hadn’t so far been able to find any detail on how to implement streaming of R on the Hortonworks Windows distribution.

Why Windows, you ask? Well, I guess this is based on the same reason for which Hortonworks decided to even consider shipping a Windows distribution in the first place. Sometimes it is just easier to reach a market in a perceived familiar environment. But this may become a topic for another post someday.

In hindsight everything always appears quite straightforward. Still, I would like to briefly share my findings here to reduce the research time for anyone else who is presented with a similar challenge.

As a first requirement R obviously needs to be installed on every data node. This also applies to all additional R packages that are used within your application.

Next, you create two files containing the instructions for the map and reduce tasks in R. In the example below the files are named map.R and reduce.R.

Assuming that your data is already loaded into HDFS, you issue the following command on the Hadoop command line:

hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.4.0.2.1.1.0-1621.jar -files "file:///c:/Apps/map.R,file:///c:/Apps/reduce.R" -mapper "C:\R\R-3.1.0\bin\x64\Rscript.exe map.R" -reducer "C:\R\R-3.1.0\bin\x64\Rscript.exe reduce.R" -input /Logs/input -output /Logs/output

A couple of comments regarding the streaming jar options used in this command under Windows:

-files: in order to submit multiple files to the streaming process, the comma separated list needs to be encapsulated in double quotes. Access to the local file system is provided using the file:/// scheme.
-mapper and -reducer: since the R script can’t be prefaced with a hashbang on Windows, we need to provide the execution environment as part of the option. As above, the path to the Rscript executable and the name of the R file need to be encapsulated in double quotes.

Using IAM credentials to grant access to AWS services

Having used Amazon Web Services (AWS) for quite some time now, I realised it was time to start sharing some of my experiences on this blog, particularly considering that I haven’t contributed to the World Wide Web community in a while.
Today’s post is about Amazon’s Identity and Access Management (IAM) service and why it is a good idea to use it.
I am using the great backup solution from CloudBerry to back up important files from my laptop to Amazon S3. While I am absolutely excited about the capabilities of the application, I still did not feel comfortable providing my AWS root account access keys (as described in CloudBerry’s help file) to the application.

Why is that?
To fully understand the issue we need to be aware of the difference between the AWS root credentials and an IAM identity. The root credentials are provisioned for every AWS account and have full access to all resources and services in the account. Sharing the secret access key for this identity with a 3rd party can potentially get you into big trouble; a malicious piece of software may use the credentials to wipe out all your data, terminate instances or, potentially worse, subscribe to a raft of additional services that you will have to pay for at the end of the month. Scary stuff, right? So please read on.
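A hedged sketch of the alternative (the user name, policy name and bucket are placeholders, not the exact setup from this post): create a dedicated IAM user whose permissions are restricted to the backup bucket, and hand its access keys to the application instead of the root keys.

# Sketch: create an IAM user that can only touch a single S3 backup bucket.
# User name, policy name and bucket name are placeholders.
aws iam create-user --user-name cloudberry-backup
aws iam put-user-policy --user-name cloudberry-backup \
  --policy-name s3-backup-bucket-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::my-backup-bucket", "arn:aws:s3:::my-backup-bucket/*"]
    }]
  }'
# Generate access keys for this user and configure those (not the root keys) in the application.
aws iam create-access-key --user-name cloudberry-backup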