Infrastructure as a Service holds the promise of reduced costs and increased flexibility, enabled through ease of operation and management. To seize that opportunity as IT professionals architecting on AWS, however, we need to adapt how we view, manage and operate today’s technology.
The desire to respond more nimbly to changing business needs and the ever-increasing pace of innovation has helped form the DevOps service delivery model, where the development and operations domains move closer together.
Modern cloud service providers support and continuously advance the model of coded infrastructure. As part of that, they keep abstracting further away from our traditional understanding of IT infrastructure, to the point where compute services become a true commodity, similar to power or tap water.
If you are new to cloud computing and coded infrastructure, it is important to understand these underlying basics, as we are going to build on them at a later stage, as indicated in the ‘Outlook’ below.
When you create a new AWS EC2 instance from one of the (admittedly large) variety of Amazon Machine Images (AMIs), you will eventually need to customise certain configuration settings or deploy additional applications to tailor it to your solution. The base Microsoft Windows Server AMI, for example, doesn’t have a web server pre-installed. This gives you the flexibility to configure the web server of your choice.
While we could log on to the machine after launch and manually deploy and configure our web server, this is obviously not going to be good enough in the long term. Particularly not if we eventually want to dynamically scale our environment. Even if you just want to make a single instance more fault tolerant, as described in our previous post in this series, you need to employ a basic level of automation.
Architecting on AWS: the options continuum
As with any good IT problem, there is more than one solution. Naturally, the options sit at opposing ends of a scale, and your task is to weigh the advantages and disadvantages of each to find the optimal solution for your needs.
The standard AWS AMIs can be instructed to perform automated tasks or configuration actions at launch time. This is enabled by the EC2Config service on Windows and cloud-init on Linux. You provide those instructions as “user data” in the advanced launch configuration of your instances, as shown in the example below.
The user data instructions can contain batch commands and PowerShell scripts on Windows, or shell scripts and cloud-init directives on Linux-based AMIs. The actual types of actions performed are only limited by your imagination and the total size limit of 16 kilobytes (a minor but important detail).
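As an illustration, a minimal Linux user data script might look like the following sketch. It assumes an Amazon Linux AMI (hence `yum`), and the choice of Apache is just an example:

```shell
# Write a hypothetical user data script to a local file for review; on an
# Amazon Linux instance this content would be pasted into the "User data"
# field (or passed via --user-data) and executed as root at first boot.
cat > userdata.sh <<'EOF'
#!/bin/bash
yum update -y                 # apply the latest security patches
yum install -y httpd          # install the Apache web server
service httpd start
chkconfig httpd on            # start the web server on every boot
echo "<h1>Deployed via user data</h1>" > /var/www/html/index.html
EOF
bash -n userdata.sh && echo "user data syntax OK"
```

Every instance launched with this user data configures itself identically, without anyone logging on after launch.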
Instead of configuring your instances dynamically at launch time, you can also create your own version of an Amazon Machine Image. Just launch a new instance, ensure that all your ‘static’ applications and settings have been applied, and finally create a new image from that instance. This is done in the AWS console using the Create Image option from the instance Actions menu, or using the create-image command from the Command Line Interface.
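Scripted, that might look like the sketch below. The instance ID is a placeholder, and the leading `echo` keeps the call a dry run:

```shell
# Compose the create-image call; the instance ID below is a placeholder.
# Remove the leading "echo" to actually create the image.
INSTANCE_ID="i-1234567890abcdef0"
IMAGE_NAME="web-base-$(date +%Y-%m-%d)"
echo aws ec2 create-image \
    --instance-id "$INSTANCE_ID" \
    --name "$IMAGE_NAME" \
    --description "Base image with static applications pre-installed"
```

The real command returns the ID of the new AMI, which you would then reference in subsequent launches.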
Whether you decide on a dynamic configuration or a master image approach depends on your individual use case. Each option has its advantages and disadvantages, which you need to understand and assess against each other in order to find the best solution for your scenario.
One advantage of using pre-baked AMIs is the reduced time to get a new instance from ‘launch’ to ‘ready’. With all components pre-configured and applications installed, you just need to wait for the instance to launch.
This obviously comes at a cost, as the image requires constant maintenance. Even if your application code is fairly static, you still need to keep your images regularly patched so that the resulting instances are not exposed to new security threats.
On the other hand, dynamic configuration provides you with a lot of flexibility: every instance you launch can have an ever-so-slightly different configuration.
Since you always start from an AWS-managed AMI, your security patches are reasonably up to date (usually within 5 business days after Microsoft’s Patch Tuesday for Windows AMIs).
You ‘pay’ for this additional service through the time it takes for your instance to get itself ‘ready’ while executing all launch scripts. You also need to be aware that the AMI ID changes whenever AWS releases a newly patched version of the image. This is particularly important for your scripted launches or Auto Scaling configurations, as described in our previous post on this topic.
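Rather than hard-coding an AMI ID that goes stale with each patched release, your launch scripts can look up the latest matching image at run time. A sketch using the describe-images command (the name filter is an example, and the leading `echo` keeps it a dry run):

```shell
# Find the most recent Amazon-owned AMI matching a name pattern instead
# of hard-coding an ID. The filter value is an example; remove "echo"
# to run the lookup for real.
FILTER="Windows_Server-2012-R2_RTM-English-64Bit-Base-*"
echo aws ec2 describe-images --owners amazon \
    --filters "Name=name,Values=$FILTER" \
    --query "sort_by(Images, &CreationDate)[-1].ImageId" \
    --output text
```

The real command prints a single AMI ID, which you can feed straight into your launch or Auto Scaling configuration.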
Fortunately, we can combine the two options to get the best of both worlds. In this scenario, you create an AMI that contains the applications and configuration items that change infrequently (e.g. Internet Information Services, Windows Update configuration, etc.). Items that change frequently (e.g. your own application) are then injected as part of the dynamic launch configuration.
This approach minimises the time to get a new instance to the ‘ready’ state, yet still provides you with a level of flexibility to influence the final result through the user data instructions.
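The split could look like the following sketch: the AMI is pre-baked with the web server, and the user data only fetches the current application build. The bucket and file names are assumptions for illustration:

```shell
# Hypothetical user data for a pre-baked AMI that already includes the
# web server: only the frequently changing application is deployed here.
cat > userdata-combined.sh <<'EOF'
#!/bin/bash
# Bucket and key are placeholders; the instance needs an IAM role with
# read access to the bucket.
aws s3 cp s3://my-deploy-bucket/releases/app-latest.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html
service httpd restart
EOF
bash -n userdata-combined.sh && echo "combined user data syntax OK"
```

Releasing a new application version then only requires uploading a new archive, not rebuilding the image.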
While this post provided an introduction to the entry-level functionality AWS offers, this is really just the tip of the iceberg on the way towards the concept of a coded infrastructure.
Auxiliary configuration management solutions like Chef, Puppet and PowerShell DSC provide you with additional flexibility and control over your larger deployments.
Based on Chef, AWS OpsWorks also provides you with an application management solution, which is currently limited to Linux based AMIs.
At re:Invent 2014 AWS also released AWS CodeDeploy, which supports the automated deployment of code updates to your Linux and Windows environments and is currently available in the Northern Virginia and Oregon regions. Knowing AWS, though, this is only going to be a short-term limitation, and we’ll be looking at this service, probably in combination with Elastic Beanstalk and CloudFormation, at a later stage. In the interim, you can start to learn more about the individual AWS services using the courses available from CloudAcademy.
DISCLOSURE: This post has originally been created for and sponsored by CloudAcademy.com.