End to End Deployment in Production | Going Serverless with AWS Fargate

Varun Kumar
6 min read · Aug 22, 2020


So, you are a developer and have been working on your super awesome application for a long time. You have spent days and weeks fine-tuning it, and now it’s ready to hit the market. Hmmm, so you want to deploy it, but what are your options?

I remember my first deployment. We were using shared web hosting from GoDaddy and we used to transfer files using FileZilla/cPanel. That was sufficient for our college project built with PHP and MySQL. But then Node.js came into our lives, and it was not supported by those shared hosting providers. We learnt about VPS (virtual private servers, e.g. DigitalOcean Droplets) and cloud virtual machines (e.g. AWS EC2 instances, which run inside a VPC, a Virtual Private Cloud). To deploy your app, you log in to your server over SSH, install the necessary software and take a git pull of your code. Pretty cool, but how will you scale your app horizontally? Probably by repeating the same setup on 2–3 VPS machines connected via a load balancer, which is not cool. If you are using AWS, you would instead launch additional EC2 instances from a stored AMI as per your Auto Scaling policy.

Docker & Microservices

Docker and the microservice architecture are the latest trend in the industry. You no longer need to set up an execution environment for your app by hand; a Dockerfile does it for you. Your app now consists of many microservices running independently in Docker containers. You would use a docker-compose.yml to connect all your microservices, and with one command your super awesome app would start spinning on a single machine. This is fine for development, but in production you would want to distribute your containers dynamically over multiple machines. This is where container orchestration services such as Docker Swarm, AWS ECS and AWS EKS come into the picture.

AWS ECS

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. According to its documentation, ECS is a highly scalable, fast container management service that makes it easy to run, stop and manage containers on a cluster. Your containers are defined in a task definition, which you use to run individual tasks or tasks as part of a service.

AWS ECS Architecture

ECS gives you two options. You can run your tasks and services on serverless infrastructure managed by AWS Fargate or, for more control over your infrastructure, you can run them on a cluster of Amazon EC2 instances that you manage yourself. I would recommend going serverless with AWS Fargate, since it eliminates the need to continuously monitor your current EC2 capacity and make sure that the cluster has enough EC2 resources in case there is a sudden spike in traffic.

Serverless Deployment using AWS Fargate

With the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary. Now, coming back to your super awesome app, let’s say that it consists of 3 microservices:
1. Frontend: the user-facing side, running on port 4000 and mounted on the “/” route
2. Admin Panel: an internal tool, running on port 5000 and mounted on the “/admin” route
3. Backend: the API server for both the Frontend and the Admin Panel, running on port 3000 and mounted on the “/api” route

You want the end result to look something like this:

Frontend mounted on the “/” route and communicating with the Backend
Admin Panel mounted on the “/admin” route and communicating with the Backend
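
For local development, a docker-compose file along the lines of this hedged sketch can wire the three microservices together; the service names and build contexts (./frontend, ./backend, ./admin) are assumptions and not taken from the actual repository.

cat > docker-compose.yml <<'EOF'
services:
  backend:
    build: ./backend       # API server on port 3000
    ports:
      - "3000:3000"
  frontend:
    build: ./frontend      # user-facing app on port 4000
    ports:
      - "4000:4000"
    depends_on:
      - backend
  admin:
    build: ./admin         # internal admin panel on port 5000
    ports:
      - "5000:5000"
    depends_on:
      - backend
EOF

# One command and the whole app spins up on a single machine.
docker compose up --build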

With this objective in mind, let’s proceed and see how you can achieve it. In this article I’ll walk through the deployment steps at a high level and answer some of the queries that I struggled with. However, if you are interested in more details, you can find this project on GitHub.

Deployment Steps

Following are the steps required to deploy your super awesome app to production. You can perform these actions using the AWS CLI, the AWS Console, or a combination of both.

  • Create an AWS Identity and Access Management (IAM) user and attach the AmazonEC2ContainerRegistryFullAccess and AmazonECS_FullAccess permission policies to it
  • Configure the AWS CLI with the credentials of the IAM user created above (see the IAM sketch after this list)
  • Build a Docker image for each of the 3 microservices, i.e. Frontend, Backend & Admin Panel
  • Authenticate Docker to your Amazon ECR registry
  • Create ECR repositories for all 3 microservices and push the Docker images (see the ECR sketch after this list)
  • Create and configure an Amazon VPC
  • Create an Elastic Load Balancer
  • Create target groups for all 3 microservices
  • Create a listener for the ELB and add path based rules to connect all 3 target groups, i.e. “/” for the Frontend target group, “/api” for the Backend and “/admin” for the Admin Panel (see the load balancer sketch after this list)
  • Create a cluster in AWS ECS
  • Register a Task Definition for each of the 3 microservices
  • Create ECS services for all 3 microservices using their respective Task Definitions
  • Create a Log Group on Amazon CloudWatch to monitor logs for all 3 microservices (see the ECS sketch after this list)
  • Using the DNS name of your load balancer, try visiting your app from a browser
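
Here is a hedged AWS CLI sketch of the IAM and CLI configuration steps; the user name ecs-deployer is a placeholder, and both policies are AWS managed policies.

# Create a dedicated IAM user and attach the two managed policies.
aws iam create-user --user-name ecs-deployer
aws iam attach-user-policy --user-name ecs-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-user-policy --user-name ecs-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess

# Generate an access key for the user and feed it to the CLI.
aws iam create-access-key --user-name ecs-deployer
aws configure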
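
Next, a sketch of the image build and ECR push steps; the account ID and region are placeholders, and it assumes each microservice lives in its own folder with its own Dockerfile.

AWS_ACCOUNT_ID=123456789012   # placeholder
AWS_REGION=us-east-1          # placeholder
ECR_URL="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# Authenticate Docker to the ECR registry.
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$ECR_URL"

# Create a repository per microservice, then build, tag and push its image.
for SERVICE in frontend backend admin; do
  aws ecr create-repository --repository-name "$SERVICE" --region "$AWS_REGION"
  docker build -t "$SERVICE" "./$SERVICE"
  docker tag "$SERVICE:latest" "$ECR_URL/$SERVICE:latest"
  docker push "$ECR_URL/$SERVICE:latest"
done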
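
A sketch of the load balancer pieces; the subnet, security group and VPC IDs are placeholders, and the values in angle brackets come from the output of the preceding commands.

# Application Load Balancer in front of the cluster.
aws elbv2 create-load-balancer --name super-awesome-alb \
  --subnets subnet-aaaa subnet-bbbb --security-groups sg-cccc

# One target group per microservice (target-type ip is required for Fargate).
aws elbv2 create-target-group --name frontend-tg --protocol HTTP --port 4000 \
  --vpc-id vpc-dddd --target-type ip
aws elbv2 create-target-group --name backend-tg --protocol HTTP --port 3000 \
  --vpc-id vpc-dddd --target-type ip
aws elbv2 create-target-group --name admin-tg --protocol HTTP --port 5000 \
  --vpc-id vpc-dddd --target-type ip

# The listener forwards "/" traffic to the Frontend by default...
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<frontend-tg-arn>

# ...while path based rules route /api/* and /admin/* to the other target groups.
aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=<backend-tg-arn>
aws elbv2 create-rule --listener-arn <listener-arn> --priority 20 \
  --conditions Field=path-pattern,Values='/admin/*' \
  --actions Type=forward,TargetGroupArn=<admin-tg-arn>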
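
Finally, a sketch of the ECS side for the Frontend service (the Backend and Admin Panel are analogous); the execution role, account ID, region, subnets and security group are placeholders.

# Cluster and the CloudWatch log group referenced by the task definition below.
aws ecs create-cluster --cluster-name super-awesome-cluster
aws logs create-log-group --log-group-name /ecs/frontend

# Minimal Fargate task definition for the Frontend.
cat > frontend-task.json <<'EOF'
{
  "family": "frontend",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "frontend",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest",
      "portMappings": [{ "containerPort": 4000, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/frontend",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "frontend"
        }
      }
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://frontend-task.json

# The service keeps the desired number of tasks running and registers them
# with the Frontend target group created earlier.
aws ecs create-service --cluster super-awesome-cluster --service-name frontend \
  --task-definition frontend --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa,subnet-bbbb],securityGroups=[sg-cccc],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<frontend-tg-arn>,containerName=frontend,containerPort=4000"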

I know, I know, this is a long list and you are lazy. But hey, we are talking about end to end deployment here, so you must get out of your comfort zone. Besides, most of these steps are required only once, and once you get familiar with them, it will be a piece of cake. Visit https://github.com/varunon9/aws-ecs-getting-started for detailed instructions along with screenshots and sample code.

ELB path based rules for Listener

Common queries

When I started with ECS, I had tons of queries and doubts. I spent days googling, watching tutorials and talking to multiple people. I know you must have some questions too. Here is my attempt to answer some of them:

Shall I dockerize my database?

No no, never do that in production, or your friends and family will stop talking to you. Containers are meant to be stateless so that they can be scaled up and down simply by adding or removing them.

Where should I host my database then?

You can go for a fully managed database service such as Amazon RDS, or host your own database on an EC2 instance. Of course, in the latter case you will have to maintain it yourself (backups, monitoring, alerting, scaling).
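
For instance, a deliberately minimal sketch of the managed route with Amazon RDS; the identifier, credentials and instance size are placeholders, and in practice you would also pass your DB subnet group and VPC security groups.

aws rds create-db-instance \
  --db-instance-identifier super-awesome-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me-please'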

Where can I monitor logs and set alerts for my containers?

Amazon CloudWatch is the answer. You can check logs, gain insights into your containers, set up alarms, scale your containers up and down, and much more.
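
For example, assuming the /ecs/frontend log group and the cluster/service names from the deployment sketches above (the SNS topic ARN is a placeholder):

# Inspect recent log events, or follow them live (the latter needs AWS CLI v2).
aws logs filter-log-events --log-group-name /ecs/frontend --limit 50
aws logs tail /ecs/frontend --follow

# A simple high-CPU alarm on the Frontend service.
aws cloudwatch put-metric-alarm --alarm-name frontend-high-cpu \
  --namespace AWS/ECS --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=super-awesome-cluster Name=ServiceName,Value=frontend \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions <sns-topic-arn>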

CloudWatch Console

How do I redeploy my code after making changes to the codebase?

You will have to rebuild the Docker image, push it to ECR, update the Task Definition to consume the updated image and, finally, update the service to use the latest revision of the Task Definition. Rather than doing this manually, you can set up a CI/CD pipeline using GitHub Actions and AWS CodeBuild.
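
If your Task Definition points at a mutable tag such as :latest, a manual redeploy can be as simple as the sketch below (it reuses the placeholder names from the deployment sketches); with immutable tags you would instead register a new Task Definition revision and pass it to update-service via --task-definition.

# Rebuild and push the updated image...
docker build -t frontend ./frontend
docker tag frontend:latest "$ECR_URL/frontend:latest"
docker push "$ECR_URL/frontend:latest"

# ...then force the service to launch fresh tasks that pull the new image.
aws ecs update-service --cluster super-awesome-cluster \
  --service frontend --force-new-deployment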

Conclusion

That’s all, folks, for this article. There are many more things that I didn’t talk about, e.g. setting up SSL, associating your custom domain name with your ELB, setting up the database, and deploying an NGINX reverse proxy sidecar container for added security and performance. I am sure that once you get started you will eventually explore these. Hopefully you will find this article and the GitHub repository useful on your journey of deployment in production :D

Thank you Keshav for your inputs, suggestions and validation of the content.
Thank you Om for your feedback.
