
Amazon Elastic Container Service (Amazon ECS) is a highly scalable and fast container management service. You can use it to run, stop, and manage containers on a cluster. With Amazon ECS, your containers are defined in a task definition that you use to run an individual task or tasks within a service. In this context, a service is a configuration that you can use to run and maintain a specified number of tasks simultaneously in a cluster. You can run your tasks and services on a serverless infrastructure that’s managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon EC2 instances that you manage.
Launch types
There are two models that you can use to run your containers:
Fargate launch type – This is a serverless pay-as-you-go option. You can run containers without needing to manage your infrastructure.
EC2 launch type – Configure and deploy EC2 instances in your cluster to run your containers.
The Fargate launch type is suitable for the following workloads:
– Large workloads that need to be optimized for low overhead
– Small workloads that have occasional bursts
– Tiny workloads
– Batch workloads
The EC2 launch type is suitable for the following workloads:
– Workloads that require consistently high CPU core and memory usage
– Large workloads that need to be optimized for price
– Applications that need to access persistent storage
– Workloads where you must directly manage your infrastructure
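Either way, the container and the task definition stay the same; the launch type is just a parameter you choose when you run a task or create a service. Here is a minimal sketch with boto3 (the cluster name, task definition, and subnet ID are placeholders for illustration only):

import boto3

ecs = boto3.client('ecs')

# Fargate launch type: AWS provisions the compute, you only supply the networking.
ecs.run_task(
    cluster='my-cluster',                    # hypothetical cluster name
    taskDefinition='my-task:1',              # hypothetical task definition
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],   # placeholder subnet
            'assignPublicIp': 'ENABLED',
        }
    },
)

# EC2 launch type: the task is placed on container instances you manage yourself.
# (If the task definition uses awsvpc networking, this call also needs a networkConfiguration.)
ecs.run_task(
    cluster='my-cluster',
    taskDefinition='my-task:1',
    launchType='EC2',
)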
Step-by-Step Guide to AWS Elastic Container Service (With Images)
Deploy Your Container and Configure It to Access AWS Resources
In this post, I will walk through the process of deploying a Docker container using AWS ECS and configuring it to access other AWS resources. Now, let’s get right into the core.
Prerequisites: a basic understanding of Docker and AWS, and an AWS account with admin access.
Build a Container
If you already have a container ready to deploy, you can skip this part. If not, I wrote a very simple Python project here, which has the most basic structure of a Docker container:
ecs-example:
|_ model
| |_ run.py
| |_ sample.txt
|
|_ docker-entrypoint.sh
|_ Dockerfile
|_ requirements.txt
The main file run.py resides in the model folder, and all it does is upload sample.txt into S3 (S3 is AWS’s Simple Storage Service for storing data):
import boto3
import logging
import os

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('example_write_s3')

bucket = 'projectjz'
key = 'ecs-example/sample.txt'
local_file_path = 'model/sample.txt'

if __name__ == "__main__":
    logger.info("writing {} -> {}".format(local_file_path, os.path.join(bucket, key)))
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucket)
    data = open(local_file_path, 'rb')
    bucket.put_object(Key=key, Body=data)
The docker-entrypoint.sh defines the command to run at the start of the container:
#!/usr/bin/env bash
export PYTHONPATH=.
python3 model/run.py
So when the container starts, it calls the run.py written above and uploads the designated file to S3.
Now let’s have a look at our Dockerfile:
FROM python:3.7-slim
ENV APP_DIR /ecs-example
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
ADD ./requirements.txt ${APP_DIR}
RUN pip install -r requirements.txt
COPY ./model ${APP_DIR}/model
COPY ./docker-entrypoint.sh ${APP_DIR}/docker-entrypoint.sh
ENTRYPOINT ${APP_DIR}/docker-entrypoint.sh
It does the following things:
- Sets the working directory
- Installs the required packages
- Copies your local files into the Docker container
- Sets the command to run when the container starts
Now that our demo project is all set, let’s build and tag our container and push it to your own Docker Hub repo.
Inside your project root directory, run:
docker build -t ecs-example .
(Optional) docker run ecs-example
Then tag and push it to your private repo:
docker tag ecs-example {YOUR_REPO_ACCOUNT}/ecs-example:v1
docker push {YOUR_REPO_ACCOUNT}/ecs-example:v1
Now go to your private Docker registry; you should see the image inside your repo like this:

Now that we have our project containerised and published, let’s proceed to deployment.
Deploy on ECS
ECS stands for Elastic Container Service. Just as its name says, it is a service dedicated to managing Docker containers, with the advantage of simplifying container deployment and taking the heavy lifting away from users. In a nutshell, it is probably the easiest way to deploy your container in the cloud with proper standards. (For more info, you can read the docs here.)
Overview of ECS
Firstly, go to your AWS console and search for elastic container service or just ECS.
The quickest way to start might be to click the Get Started button on the front page and follow the guidance, but in this post we will build each component manually, which I believe will give you a deeper understanding of each part. Also, for our container to access S3, it requires extra configuration that the Get Started flow does not enable.
First, let’s have an overview of the components in ECS:

- Container definition: defines your container image, environment variables, storage mounts, etc.
- Task definition: wraps your container and is a blueprint of your deployment; it specifies roles for your container, how to pull images, etc.
- Service: manages how many instances of your container to deploy, which network they should reside in, etc.
- Cluster: contains the VPC, subnets, etc.
Create a Cluster
To start, let’s create a cluster. Choose Clusters on the sidebar and select Create Cluster.

Choose Networking only and select Next step. The Networking only option is built for Fargate tasks. As opposed to EC2 tasks,
the Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. When you run a task with a Fargate-compatible task definition, Fargate launches the containers for you.
Basically, with this type, AWS handles most of the infrastructure for you, so you will have the least to worry about.
In the next step, input your cluster name (here I use ecs-example-cluster), select Create new VPC (if you don’t have one), and enable Container Insights (this gives you CloudWatch monitoring and helps with debugging).

And hit Create. Your cluster should be ready in a minute.

A bunch of resources are set up for your cluster. Now let’s move to Task Definition.
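If you prefer to script this step, here is a minimal sketch of the same cluster creation with boto3, assuming your AWS credentials are already configured locally. Note that, unlike the console wizard, this call does not create a VPC for you.

import boto3

ecs = boto3.client('ecs')

response = ecs.create_cluster(
    clusterName='ecs-example-cluster',
    settings=[
        # same effect as ticking "Enable container insights" in the console
        {'name': 'containerInsights', 'value': 'enabled'},
    ],
)
print(response['cluster']['clusterArn'])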
Create Task Definition
Now let’s get to the creation of your container blueprint. On the side bar, choose Task Definitions and select Create new Task Definition.
Choose Fargate (if you would like AWS to handle the infra for you), and it will lead you to:

Create Task Role
Give your task definition a name, and select a Task Role for your container. But wait, what is a Task Role? As explained under the input:
Task Role: Optional IAM role that tasks can use to make API requests to authorized AWS services. Create an Amazon Elastic Container Service Task Role in the IAM Console
Remember that our container needs to upload a file into S3? To allow your container to access S3, it needs a role that authorises it to do so. (For more info, see the official docs here.)
Now open your AWS console in a new tab.
Choose service IAM → select Roles → Create Role
For Select type of trusted entity section, choose AWS service.
For Choose the service that will use this role, choose Elastic Container Service.
For Select your use case, choose Elastic Container Service Task and choose Next: Permissions.

For our service to access S3, choose AmazonS3FullAccess. (Granting full S3 access to a service is not best practice; you may want to restrict the service to a certain bucket by creating a new policy.)
For Add tags (optional), enter any metadata tags you want to associate with the IAM role, and then choose Next: Review.
For Role name, enter a name for your role. For this example, type ECSs3AccessTaskRole to name the role, and then choose Create role to finish.
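If you would rather create this role programmatically, here is a minimal sketch with boto3. It mirrors the console steps above: the trust policy lets ECS tasks assume the role, and the AmazonS3FullAccess managed policy is attached to it.

import json
import boto3

iam = boto3.client('iam')

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName='ECSs3AccessTaskRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the (broad) AmazonS3FullAccess managed policy, as in the console steps.
iam.attach_role_policy(
    RoleName='ECSs3AccessTaskRole',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
)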
NOW BACK TO TASK DEFINITION
Select the role you just created as the Task Role.
For Task execution IAM role: this role is used to pull images from your Docker registry, so first we need to create it if you don’t have it already. The role creation process is similar to the above and you can follow the steps here. (The role created here is named ecsTaskExecutionRole.)
And for Task size, we choose the minimum, since our task is simple and does not require many resources.

The task execution role is different from the task role we just created, as stated in the creation process:
Task Execution Role: This role is required by tasks to pull container images and publish container logs to Amazon CloudWatch on your behalf.
Task Role: Optional IAM role that tasks can use to make API requests to authorized AWS services. Create an Amazon Elastic Container Service Task Role in the IAM Console
A good explanation of the difference can be found on Stack Overflow here.
Add Container
Now most importantly, let’s get to our container definition. Input the following:

For Image*, input your private image registry URL. Tick Private repository authentication because our image resides in our private repo on Docker Hub. In order for our service to access our private registry, we need to give the agent our account and password. The way to manage secrets on AWS is with Secrets Manager.
Create Secret
Go to your AWS console again → select Secrets Manager → Store a new secret → Other types of secrets
Enter plain text:
{
"username": {YOUR_DOCKER_ACCOUNT},
"password": {YOUR_DOCKER_PWD}
}
Choose Next, give your secret a name, and just go with the defaults for the rest.
After creation you should have something like this:

Click your secret to retrieve its ARN, and input this ARN into the container definition above!
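For reference, the same secret can be stored with a few lines of boto3; the secret name here is hypothetical and the credential values are placeholders you would replace with your own.

import json
import boto3

secrets = boto3.client('secretsmanager')

response = secrets.create_secret(
    Name='ecs-example/dockerhub',            # hypothetical secret name
    SecretString=json.dumps({
        'username': 'YOUR_DOCKER_ACCOUNT',   # placeholder
        'password': 'YOUR_DOCKER_PWD',       # placeholder
    }),
)
print(response['ARN'])  # this is the ARN you paste into the container definition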
Although we have input our secret in the container definition, remember that the Fargate agent uses the task execution role to pull images from your private registry. You can understand the process this way:

So the next step is to add the secret to the execution role we created above.
Add Secret to Task Execution Role
Now go to the execution role under IAM again and add an inline policy:

Choose the JSON format and input:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue"
],
"Resource": [
{YOUR_SECRET_ARN}
]
}
]
}
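As a rough sketch, the same inline policy can be attached with boto3; the secret ARN is a placeholder and the inline policy name is hypothetical.

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["secretsmanager:GetSecretValue"],
        "Resource": ["YOUR_SECRET_ARN"],   # placeholder: the ARN from Secrets Manager
    }],
}

iam.put_role_policy(
    RoleName='ecsTaskExecutionRole',
    PolicyName='GetDockerHubSecret',       # hypothetical inline policy name
    PolicyDocument=json.dumps(policy),
)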
I know it is exhausting, but we are about to finish! Go back to the task definition, leave everything else as default, and hit Create. Now our task definition is done!
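For completeness, here is a minimal sketch of an equivalent task definition registered with boto3. The family name is hypothetical, the account IDs in the ARNs are placeholders, and the settings mirror the console choices above (Fargate, minimum task size, both roles, and the private repository credentials).

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='ecs-example-task',               # hypothetical family name
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',                               # smallest Fargate task size
    memory='512',
    taskRoleArn='arn:aws:iam::123456789012:role/ECSs3AccessTaskRole',        # placeholder account id
    executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',  # placeholder account id
    containerDefinitions=[{
        'name': 'ecs-example',
        'image': 'YOUR_REPO_ACCOUNT/ecs-example:v1',    # placeholder image
        'essential': True,
        'repositoryCredentials': {
            'credentialsParameter': 'YOUR_SECRET_ARN',  # the Docker Hub secret ARN
        },
    }],
)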
Create Service and Deploy Task Definition
We’ve now gone through the hardest part. What we need now is to create the last component, the service, and deploy our task!
On the side panel, click Clusters
Choose ecs-example-cluster

Under the Service tab, choose Create, and input the following:

Choose Next step. For VPC, just choose the VPC and subnets we created in the Create a Cluster section, leave the rest as default, and finally create the service.
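If you want to script this step too, here is a minimal sketch of the equivalent create_service call with boto3; the service name is hypothetical and the subnet and security group IDs are placeholders for the VPC created with the cluster.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='ecs-example-cluster',
    serviceName='ecs-example-service',       # hypothetical service name
    taskDefinition='ecs-example-task',       # the task definition registered above
    desiredCount=1,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],       # placeholder
            'securityGroups': ['sg-0123456789abcdef0'],    # placeholder
            'assignPublicIp': 'ENABLED',   # needed to pull the image from Docker Hub without a NAT
        }
    },
)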
Now go to the service tab; you should see the task running…

And to validate our work, go to S3 and you should see this (prerequisite: you should have created your bucket before this):

Done! Our container is successfully deployed and the file is uploaded to S3!
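If you prefer to check from code instead of the console, a quick sanity check with boto3 looks like this; the bucket and key match the values used in run.py above.

import boto3

s3 = boto3.client('s3')
response = s3.head_object(Bucket='projectjz', Key='ecs-example/sample.txt')
print('Found object, size:', response['ContentLength'], 'bytes')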
Delete Cluster
To delete our cluster, delete the service first and then delete the cluster; it takes a few minutes to take effect.
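The same cleanup can be sketched with boto3, using the same (hypothetical) service and cluster names as above: scale the service to zero, delete it, then delete the cluster.

import boto3

ecs = boto3.client('ecs')

# Stop running tasks, then remove the service and the cluster.
ecs.update_service(cluster='ecs-example-cluster', service='ecs-example-service', desiredCount=0)
ecs.delete_service(cluster='ecs-example-cluster', service='ecs-example-service')
ecs.delete_cluster(cluster='ecs-example-cluster')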