Automated infrastructure deployments with CloudFormation

If you are like me (and many other people in the industry), I am sure you can relate to this: you get your hands on a new toy (like CloudFormation) and just want to get going. So you ignore all the online help material that has been kindly written and published by a myriad of technical writers. Instead you just ‘borrow’ a number of existing scripts (or templates in this case) from the web and tweak them for your purpose. Until you hit a dead end, where something doesn’t work as expected and you need to start debugging the work you have borrowed.

This is likely the time when you wish you had a better understanding of the inner workings of the tool. I have used CloudFormation for quite some time now. Looking back at my path of enlightenment, I remember a number of things I wish I had understood better or paid more attention to at the time. For that reason I would like to share a few nuggets that should give newcomers to the topic a somewhat flatter learning curve and provide an outlook on the opportunities and challenges that follow your first discoveries with CloudFormation templates.

Open your mind to automation

From the standpoint of a new adopter of AWS cloud services you may be tempted to disregard CloudFormation templates as a waste of time: automating the deployment of a vanilla EC2 instance with CloudFormation doesn’t save you any time over manually provisioning the resources through the AWS Console.
However, we need to be careful not to cheat ourselves here. In the ‘old’ days, when did you ever have to provision a bare-metal server containing only the core operating system? Never! One always had to touch the machine to install or at least configure the application stack, database environment and so on. You name it. Back then it was also commonly accepted that the process of ‘crafting’ a new server instance took days if not weeks. And that doesn’t even include the time for ordering and delivery.

Also, when was the last time that you only ever needed one of each type? That must have been on one of your side projects, where development, test and production were all based on a single code base running on a server hidden somewhere in the attic. Today you tend to require at least four environments to facilitate the software development life cycle. For good measure you probably also want to add one more for a blue/green deployment.
This is the time where you (should) start to manage your infrastructure as code. And at the same time automation becomes the king of the kingdom.

AWS provides a good variety of helper scripts that come pre-installed with all Amazon-provided machine images or as executables for installation on your own images.
In combination with the instructions you provide within the stack templates, those automation scripts enable you to deploy an entire infrastructure stack with the click of a few buttons – unless of course you automate even this bit through the use of the CloudFormation API.
Understanding the interdependencies and differences between the various sections of the template and the automation scripts helps you with the successful development of your stack.
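If you do want to script that last step as well, the AWS CLI exposes the same API. As a minimal sketch (the stack name, template file name and parameter are placeholders of my own choosing, not from any particular template), launching a stack from the command line could look like this:

aws cloudformation create-stack \
    --stack-name my-demo-stack \
    --template-body file://my-demo-stack.template \
    --parameters ParameterKey=InstanceType,ParameterValue=t2.micro

The same call is available through the various AWS SDKs if you prefer to drive deployments from code rather than from the shell.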

CloudFormation init (cfn-init)

The most commonly used script, aside from cloud-init (but more on that in a later post), would arguably be cfn-init (CloudFormation init).
CloudFormation init (cfn-init) reads and processes the instructions provided inside the instance metadata as part of the CloudFormation template.
To run cfn-init you need to call it from inside the user data instructions or as part of any of the start-up processes of your own image.

You might like to know that the user data instructions are ‘magically’ executed by cloud-init. Cloud-init is an open source package that is widely used for the bootstrapping of cloud instances and we can dive deeper into this tool set in another post.

Cfn-init accepts a number of command line options. As a minimum you need to provide the name of the CloudFormation stack and the name of the element that contains the instance metadata instructions.

/opt/aws/bin/cfn-init -v --stack YourStackName --resource YourResourceName

This is either the launch configuration or an EC2 instance definition inside the CloudFormation template.
It is important to understand that the instance itself does not get ‘seeded’ with the template instructions as part of the launch. In fact, the instance has no appreciation of the fact that its launch was initiated by CloudFormation. Instead, the cfn-init script reaches out to the public CloudFormation API endpoint to retrieve the template instructions. This is important to realise if you are launching your instance inside a VPC that has no connectivity to the internet, or that provides connectivity via a proxy server which requires explicit configuration.
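To make that relationship concrete, here is a minimal sketch in YAML syntax (the resource name, file and AMI ID are placeholders of my own) of an EC2 instance whose user data calls cfn-init, which in turn pulls the AWS::CloudFormation::Init metadata of that same resource from the CloudFormation endpoint:

WebServer:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        files:
          /tmp/hello.txt:
            content: Hello from cfn-init
            mode: "000644"
            owner: root
            group: root
  Properties:
    ImageId: ami-12345678          # placeholder AMI ID
    InstanceType: t2.micro
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}

Note that the --resource argument points at the logical resource name that carries the metadata, which is exactly what cfn-init asks the CloudFormation endpoint for at boot time.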

Configuration sets

CloudFormation init instructions can be grouped into multiple configuration sets.
I strongly suggest you take advantage of this functionality to support the separation of concerns and enable reuse of template fragments.
With its procedural template instructions, CloudFormation doesn’t necessarily support DRY coding practices – and neither is it supposed to do so.
However, if your set-up requires a common set of applications or configurations to be installed and configured on each instance (think anti-virus, compliance or log forwarding agents, etc.), you are well placed to keep those parts separated in their own configuration set, as sketched below. In combination with a centralised source control management system or an advanced text editor like Sublime Text or Notepad++, you can then fairly easily maintain and quickly re-use those common parts of your stack.
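As a hedged illustration of that idea (the set names and the awslogs agent are simply examples I picked, not a prescription), the metadata below declares a ‘common’ configuration set next to an application-specific one and runs both as part of the default configSet:

AWS::CloudFormation::Init:
  configSets:
    default:
      - common
      - webserver
  common:
    packages:
      yum:
        awslogs: []                 # e.g. a log forwarding agent
    services:
      sysvinit:
        awslogs:
          enabled: "true"
          ensureRunning: "true"
  webserver:
    packages:
      yum:
        httpd: []

On the instance you would then call cfn-init with the additional -c default option (or --configsets) so that both sets are processed in order.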
Bear in mind though that this isn’t the only solution to ensure common components are always rolled into the stack. In a previous post I have written about the advantages and trade-offs for scripted launches of instances vs the use of pre-baked, customised machine images.
The above solution doesn’t scale well for larger environments, though. If you want to automate your infrastructure across tens or hundreds of templates, you will soon hit the limits: as your environment requires patching and you start re-factoring your code fragments, you need to ensure that every stack in your environment is kept up to date.
Once you have reached that point, you will start to investigate the use of continuous integration solutions that hook into AWS for a more automated management of stacks across multiple environments.

Be mindful of alternatives

Which leads me nicely to my closing words. I am sure everyone is aware of the popular saying that goes along the line of:

‘if all you have as a tool is a hammer,
everything looks like a nail’.

Rest assured that your infrastructure and deployment solution is subject to the same paradigm. When I started with scripted deployments in AWS I made good use of the user data script. I partitioned the various steps within my script and made it re-usable. I split it up into individual bash or PowerShell scripts, deployed them to the instances and called them from within the user data, or cascaded them amongst each other. And I felt very clever! Until my fleet of instances started growing.
That is when I discovered that a lot of the effort of managing the fleet could be saved by using CloudFormation. As part of this the instance definitions moved into CloudFormation Init metadata, which provided me with additional flexibility. CloudFormation Init allowed me to define in a declarative way what actions I wanted to perform on an instance and in which order – much like the YAML-based cloud-init configuration, but at the scale of a whole stack, not just a single instance. No longer did I have to navigate to a specific directory, download an RPM package using wget or curl, install it using the package manager, ensure the application is started at boot time, and so on. Instead I can just provide declarative instructions inside one or more of the seven supported configuration keys, as illustrated below.
As discussed earlier, I yet again felt very smart. I started to organise my individual declarative instructions into configuration sets, managed them in a central repository for re-use, and so on. Until, well, you can probably already guess it by now: until I discovered that it is worth considering the use of AWS OpsWorks and Elastic Beanstalk resources inside a CloudFormation stack. AWS OpsWorks abstracts your configuration instructions further away from the declarative configuration in the init metadata. Using a managed Chef service, you have access to a large variety of pre-defined recipes for the installation and configuration of additional components of your system.
Since those recipes are continuously maintained and updated by the wider community you don’t need to re-invent the wheel over and over again.
While I do have to admit that the re-invention of the wheel has served humankind quite well to date (imagine if our cars still used stone wheels), it is quite obvious that the wisdom and throughput of a whole community can be much higher than the capability of an individual.
The same can be said about Elastic Beanstalk. Where OpsWorks helps you to accelerate the deployment of common components, Elastic Beanstalk allows you to automate the resilient and scalable deployment of your application into the stack without you even having to describe or configure the details of load balancing and scaling.
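For a flavour of what this looks like inside a template, a CloudFormation stack can declare Elastic Beanstalk resources alongside everything else. The minimal sketch below (the resource name and description are my own) only creates the application shell; in practice you would pair it with AWS::ElasticBeanstalk::ApplicationVersion and AWS::ElasticBeanstalk::Environment resources that describe what to deploy and where:

SampleApplication:
  Type: AWS::ElasticBeanstalk::Application
  Properties:
    Description: Application shell managed as part of the CloudFormation stack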

In summary

The point I would like to make is that in a world where “the fast eat the slow”, we can never settle on a given solution at any given time. Our whole community, including AWS, is constantly evolving to allow organisations to innovate, develop and ship features at an ever increasing rate. This is achieved by continuously abstracting away from the core underlying infrastructure and services, and by combining traditional features with new functionality and innovation.
To stay on top of the game as an IT professional, you need to constantly challenge the status quo and, where applicable, take the leap of faith to investigate and learn new ways of doing business.

Hadoop streaming R on Hortonworks Windows distribution

The ability to combine executables with the Hadoop MapReduce engine is a very powerful concept known as streaming. Thanks to a very active community, there are ample examples available on the web that cater for a large variety of languages and implementations. Sources like this and this have also provided some R language examples that are very easy to follow.
In my case I only needed to look at the integration between these two tools. Unfortunately though, I hadn’t so far been able to find any detail on how to implement streaming of R on the Hortonworks Windows distribution.

Why Windows, you ask? Well, I guess this is based on the same reasoning that led Hortonworks to ship a Windows distribution in the first instance: sometimes it is just easier to reach a market through a perceived familiar environment. But this may become a topic for another post someday.

In hindsight everything always appears quite straightforward. Still, I would like to briefly share my findings here to reduce the research time for anyone else who is presented with a similar challenge.

As a first requirement R obviously needs to be installed on every data node. This also applies to all additional R packages that are used within your application.

Next, you create two files containing the instructions for the map and reduce tasks in R. In the example below the files are named map.R and reduce.R.
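The original scripts aren’t reproduced here, but as a rough, untested sketch of what such a pair could contain, a classic word count in R reads records from stdin and writes tab-separated key/value pairs to stdout:

# map.R - emit "word <tab> 1" for every word read from stdin
con <- file("stdin", open = "r")
while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
  words <- unlist(strsplit(line, "[[:space:]]+"))
  for (w in words[words != ""]) {
    cat(w, "\t", 1, "\n", sep = "")
  }
}
close(con)

# reduce.R - sum the counts per word coming out of the shuffle phase
con <- file("stdin", open = "r")
counts <- list()
while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
  parts <- strsplit(line, "\t")[[1]]
  if (length(parts) < 2) next
  key <- parts[1]
  val <- as.integer(parts[2])
  counts[[key]] <- if (is.null(counts[[key]])) val else counts[[key]] + val
}
close(con)
for (key in names(counts)) {
  cat(key, "\t", counts[[key]], "\n", sep = "")
}

To line up with the streaming command further down, the two files would then be saved as C:\Apps\map.R and C:\Apps\reduce.R.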

Assuming that your data is already loaded into HDFS, you issue the following command on the Hadoop command line:

hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.4.0.2.1.1.0-1621.jar -files "file:///c:/Apps/map.R,file:///c:/Apps/reduce.R" -mapper "C:\R\R-3.1.0\bin\x64\Rscript.exe map.R" -reducer "C:\R\R-3.1.0\bin\x64\Rscript.exe reduce.R" -input /Logs/input -output /Logs/output

A couple of comments regarding the streaming jar options used in this command under Windows:

-files: in order to submit multiple files to the streaming process, the comma-separated list needs to be encapsulated in double quotes. Access to the local file system is provided using the file:/// scheme.
-mapper and -reducer: since the R scripts can’t be prefaced with a shebang (hashbang) line on Windows, we need to provide the execution environment as part of the option. As above, the path to the Rscript executable and the name of the R file need to be encapsulated in double quotes.

Using regular expressions with Connections Profiles population

Today I was challenged to make use of regular expressions in Tivoli Directory Integrator to further filter LDAP distinguished names for the import into IBM Connections Profiles.

According to the Connections 3.0.1 documentation this can be achieved in specifying your regular expression in the source_ldap_required_dn_regex property of the profiles_tdi.PROPERTIES file.

Looking at the underlying TDI Assembly Line code though I discovered that source_ldap_required_dn_regex is never being read. Instead a property of source_ldap_required_dn_regex_pattern is being evaluated.


Using the right property unfortunately still doesn’t make the regular expression filter work properly. Instead I received the following error in the ibmdi.log file when running the sync_all_dns Assembly Line:

ERROR [AssemblyLine.AssemblyLines/_internal_ldap_iterate.5] - [ldap_iterate]
CTGDIS181E Error while evaluating Hook 'After GetNext' in the Component
'ldap_iterate' (ldap_iterate.after_getnext).
com.ibm.jscript.InterpretException: Script interpreter error, line=26, col=74:
Unknown member 'test' in Java class 'java.lang.String'

Hunting down the offending line of code the cause of the error becomes quite obvious. According to the JavaScript introduction from the TDI Users online community, TDI specific objects are Java objects instead of JavaScript objects. In our example the source_ldap_required_dn_regex_pattern variable is a java.lang.String object, which does not have a method named test. The test method is part of the JavaScript RegExp object, which apparently has been expected in this case.

After a brief moment where I was wondering whether IBM ever tested that bit of code, or whether I am the only one with this issue, I replaced:

if(!lcConf.source_ldap_required_dn_regex_pattern.test(sdn)) {

with:

var re = new RegExp(lcConf.source_ldap_required_dn_regex_pattern);
if(!re.test(sdn)) {

This results in my ‘re’ variable being properly instantiated as a JavaScript RegExp object with the value of the source_ldap_required_dn_regex_pattern string variable. I can then successfully use the JavaScript RegExp.test function to test the value of sdn. As another way of solving the issue, I could also have rewritten the above line of code to use the matches method of the java.lang.String class, passing in the regular expression itself.
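For completeness, that alternative would look roughly like the line below (untested in my environment, and bear in mind that java.lang.String.matches() requires the pattern to match the entire distinguished name, so the expression itself may need adjusting):

// Alternative sketch: let java.lang.String do the matching instead of a JavaScript RegExp
if (!sdn.matches(lcConf.source_ldap_required_dn_regex_pattern)) {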

Migrate in-line styles to CSS with the XPages editor

There is a nice little option on the Style tab of the Properties view in the XPages editor. It allows style parameters that were previously defined in-line in the code to be exported into a CSS file and automatically applied to the selected control. I like that!
(Screenshot: Style Migration)