Escape into the Cloud with AWS Elastic Beanstalk and Docker, or: How “Release On Steroids” actually looks and feels.


BY Bogdan Kulbida / ON Dec 12, 2018

Convenience and efficiency are the two prime motivators that nudge reluctant teams into the world of Cloud computing.

For many people, “the Cloud” is a buzzword that is interpreted in many different ways. In this article, we will show you how your release process can be dramatically improved, simplified, and streamlined. We will focus mostly on Amazon Web Services – specifically, on how to use Amazon’s Elastic Beanstalk product in combination with Docker to take your release process to the next level. That is what we’re going to call “Release On Steroids”.

This article is the product of multiple real-world iterations on improving infrastructure: making it simpler and cheaper to maintain, so that one command or button deploys your code to any of several environments, such as production or staging, and every code change can go live within a few minutes.

So, what is AWS Elastic Beanstalk? From the AWS website, it is: “an orchestration service offered from Amazon Web Services for deploying infrastructure which orchestrates various AWS services, including EC2, S3, Simple Notification Service, CloudWatch, autoscaling, and Elastic Load Balancers.”

As you can see, it is a massive platform that lets you build (or provision), monitor, and operate a practically unlimited number of (micro)services (or instances) and containers in a secure and efficient way, and feel the joy of being in the Cloud. Thanks to Beanstalk, you can stop being afraid that deploying something new will break something else.

Despite requiring a small upfront investment, Elastic Beanstalk pays off in the long term for the entire organization and helps improve its overall IT posture.

Elastic Beanstalk also takes on some of the deployment risks and simplifies your operations. These days, at least, being in the cloud ends up being cost-effective.

The other part of the puzzle is Docker. From the official website: “Docker is a computer program that performs operating-system-level virtualization, also known as “containerization”. It was first released in 2013 and is developed by Docker, Inc. Docker is used to run software packages called “containers”.” In combination, these two become very powerful tools for building small- to medium-size enterprise infrastructures. If you are about to rely heavily on Cloud solutions, you will want to make sure you do it the right way, as recommended by the vendor.

Let’s look at how an old, legacy system might be run. As we probably all know, legacy systems tend to be resistant to change; they are fragile; and, most likely, they must be configured manually. Even if you have provisioning bash scripts, we still consider that a poor setup, since you most likely cannot simply upgrade your service dependencies or the host operating system.


So, as a result of manual provisioning, we have a single server that most likely runs multiple services. That is what bad infrastructure smells like: it is prone to errors and disintegration and, as a result, to interruptions.

This is where Docker comes in. Let’s see what dockerization actually is. By “dockerization” we mean the process of extracting each service from your legacy system and isolating it in its own container, which can run on a host machine (or on a few hosts, if your architecture and business needs require that) alongside other containers, providing resiliency and high availability for your system. But here is a possible objection.
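As a sketch of what a dockerized service looks like, here is an illustrative Dockerfile for a hypothetical Python micro-service; the base image, file names, and app module are assumptions, not a prescription:

```dockerfile
# Illustrative Dockerfile for a hypothetical Python micro-service.
FROM python:3.7-slim

# Run as an unprivileged user rather than root.
RUN useradd --create-home appuser
WORKDIR /home/appuser/app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
USER appuser

# Expose only the single port the service needs.
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:application"]
```

One container, one service, one exposed port: that isolation is what makes the container disposable and swappable later on.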

We said “host machine”. Does that still sound like an old-school approach? Well, no, and here is why. A host machine, provisioned as a self-disposable instance (or server), needs only a bare minimum of software to run: practically just the Docker binary and its dependencies.

Let’s clarify what a self-disposable instance is.

A self-disposable instance allows you, as a system admin, to swap servers (or instances, in AWS terms) easily and quickly, since provisioning is done automatically. You can now provision it with a script using AWS cloud tooling such as AWS Elastic Beanstalk. What that buys you is a self-disposable instance running within a minute or so, with Docker installed, and everything else done for you by AWS (well, almost for free: hourly rates for the resources used still apply).
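With the EB CLI installed and AWS credentials configured, spinning up such self-disposable instances can look roughly like the session below; the application and environment names are placeholders:

```shell
# Illustrative EB CLI session; "my-app" and environment names are made up.
eb init -p docker my-app    # register the application on a Docker platform
eb create my-app-staging    # provision a disposable staging environment
eb create my-app-production # provision production the same way
```

Each `eb create` provisions the instance, installs Docker, and wires up monitoring without any manual steps, which is exactly what makes the instance disposable.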

So now we have an environment where we can spin up the containers that host our micro-services. The next step is to dockerize each micro-service and deploy it to AWS Elastic Beanstalk.

The best starting point is, of course, writing a set of Dockerfiles and eventually orchestrating them using Dockerrun template files and a handful of other Elastic Beanstalk extension files for cross-host provisioning. Here at Konstankino, we dockerize services and harden their execution environment so that it is optimized from a performance and security point of view. Once services are dockerized, we need a secure place to push our Docker images to. This is where the AWS Elastic Container Registry (ECR) comes into play. AWS ECR is secure, efficient, and cost-effective storage for all your Docker images.
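A typical build-and-push round trip to ECR might look like the following sketch; the account ID, region, repository, and tag are all placeholders, and `aws ecr get-login-password` assumes a reasonably recent AWS CLI:

```shell
# Illustrative build-and-push sequence; registry details are placeholders.
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com

# Authenticate the local Docker client against ECR.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Build, tag, and push the service image.
docker build -t my-service:1.0 .
docker tag my-service:1.0 "$REGISTRY/my-service:1.0"
docker push "$REGISTRY/my-service:1.0"
```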

Once all your Docker images are built and pushed to AWS ECR, you can create a multi-container Docker Elastic Beanstalk environment, deploy those images, and provision your cross-host resources, such as AWS Elastic File System, AWS Security Groups, an AWS Load Balancer, etc. All of these tasks can be done using AWS Elastic Beanstalk extensions.
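In a multi-container environment, Elastic Beanstalk reads a `Dockerrun.aws.json` (version 2) file describing the containers to run. A minimal, illustrative example with a hypothetical web service linked to a Redis cache (image names and memory limits are assumptions):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:1.0",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8000 }
      ],
      "links": ["cache"]
    },
    {
      "name": "cache",
      "image": "redis:5-alpine",
      "essential": false,
      "memory": 128
    }
  ]
}
```

The `links` entry is what ties containers together on the host, so the web service can reach the cache by name without exposing the cache to the outside world.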

A challenging part of coming up with the most efficient and appropriate architecture is understanding your platform’s usage and exposure, so that, for example, you can correctly set up Security Groups, link containers together, and then scale them.
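As a sketch of how Elastic Beanstalk extensions can help here, the `.ebextensions` snippet below opens only HTTPS ingress on the environment’s security group; the file name and CIDR range are illustrative, while `AWSEBSecurityGroup` is the logical name Beanstalk gives the instance security group:

```yaml
# .ebextensions/https-ingress.config (illustrative)
Resources:
  httpsIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      CidrIp: 0.0.0.0/0
```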


Here at Konstankino, we design systems with close attention to security. Our aim is for services to always run with least-privilege user roles and permissions, and to expose only the ports needed for their normal operation.

Once all your containers are deployed in the cloud, you need to decide how the system should scale out and in based on load. This is where AWS Auto Scaling and AWS Load Balancer come into play. If load balancing is not a concern, however, you can save some money by using the Let’s Encrypt certificate authority to issue free SSL certificates and deploying them within your services.
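To give just a flavor of it, scaling bounds and triggers can also be declared in an `.ebextensions` file; the values below are purely illustrative:

```yaml
# .ebextensions/scaling.config (illustrative values)
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 6
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Unit: Percent
    LowerThreshold: 20
    UpperThreshold: 70
```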

We are not going to dive into the details of AWS Auto Scaling today; it is a good topic for another article.

Here at Konstankino, we currently use AWS Elastic Beanstalk, with some modifications and other AWS services involved, for all our production and staging projects. It has shown us how robust, powerful, and elastic the platform can be, effective in both cost and labor. We deploy our products with a single command that can be run on a development machine as well as on our Continuous Integration server after a successful build.
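In EB CLI terms, that single command is essentially the following (the environment name is a placeholder):

```shell
# Deploy the current application version to an environment with one command.
eb deploy my-app-production
```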

If you need more assistance, we are ready to help you with your project.