Using Terraform to provision Amazon's ECR and ECS to manage containers (Docker)
AWS provides a lot of cloud-based services, and Elastic Container Service (ECS) is just one of many. ECS, much like Kubernetes, helps you manage containers. To make full use of ECS, you need a good understanding of what containers and images are all about.
Quick summary, using Docker as an example to explain containers and images: Docker is a software tool that lets a single OS run multiple containers with the help of a container runtime engine. The engine allocates system resources through the kernel, which makes each container run seamlessly as though it had its own OS. With Docker, you can create an image (an app or code packaged with all its dependencies). A container, then, is a running instance of that image.
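As a minimal illustration of that image/container relationship (assuming Docker is installed and a Dockerfile exists in the current directory; my-app is just a placeholder name):
docker build -t my-app .            # build an image from the Dockerfile
docker run -d --name my-app-1 my-app   # start a container from that image
docker ps                           # list running containers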
Docker provides a platform to host images called Docker Hub, along with commands for pushing and pulling images to and from a remote image repository. In our case, however, we will use Amazon's Elastic Container Registry (ECR). ECR is Amazon's version of Docker Hub; with ECR, you can create a remote repository to host all your images.
To have ECR and Docker working together, we have to authenticate Docker to Amazon's ECR. First, collect the region and aws_account_id, then use the command below to authenticate Docker to ECR:
$ aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${aws_account_id}.dkr.ecr.${region}.amazonaws.com
You should see a Login Succeeded message once the above command is run with the correct values from your AWS account.
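If you are unsure of either value, both can be looked up with the AWS CLI (assuming it is already configured with your credentials):
aws sts get-caller-identity --query Account --output text   # prints your aws_account_id
aws configure get region                                    # prints your default region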
Using Terraform to create an image repository
Once authenticated, we can either use the AWS CLI or Terraform to create the repository. The latter is the more interesting option. With Terraform, we can simply create an image_repo.tf file with the lines below, which create a remote repository with a policy attached to it.
resource "aws_ecr_repository" "demo-repository" {
name = "demo-repo"
image_tag_mutability = "IMMUTABLE"
}
resource "aws_ecr_repository_policy" "demo-repo-policy" {
repository = aws_ecr_repository.demo-repository.name
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "adds full ecr access to the demo repository",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
]
}
]
}
EOF
}
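For reference, the AWS CLI route mentioned above would look roughly like the command below (a sketch only; the access policy would still need to be attached separately, for example with aws ecr set-repository-policy):
aws ecr create-repository --repository-name demo-repo --image-tag-mutability IMMUTABLE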
The aws_ecr_repository_policy resource defines permissions on this repository. The Principal attribute defines which IAM users can push images to this repository, and the Action attribute defines what actions (as the attribute name suggests) those users can perform on this particular repository.
Running Terraform's commands:
terraform init
terraform plan
terraform apply
(in that order) creates the repository with the defined policy configuration. Once the resource is created, we can verify it exists by viewing the repository in the AWS ECR dashboard.
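Alternatively, assuming the AWS CLI is configured, the repository can be verified from the terminal:
aws ecr describe-repositories --repository-names demo-repo --region ${region}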
With the image repository created, we can now push any images we need.
To view a list of all images on the machine, run this command:
docker images
with possible output like the below:
REPOSITORY                TAG             IMAGE ID        CREATED        SIZE
node                      current-slim    84zcb5q09aea    2 days ago     167MB
docker/getting-started    latest          4f02149ex038    5 days ago     26.8MB
github/super-linter       latest          8d4ed3426d51    2 weeks ago    1.94GB
alpine                    latest          a23bb4015296    3 weeks ago    5.57MB
rails_app                 latest          52f471ep4bb5    1 month ago    968MB
Choose an IMAGE ID and provide a tag name for this image (remember the ${aws_account_id}, ${region}, and ${repository-name}). The ${repository-name} can be found in the Terraform resource, under the name attribute.
docker tag ${image_id} ${aws_account_id}.dkr.ecr.${region}.amazonaws.com/${repository-name}:${image_tag}
After tagging the image, we can use Docker to push it to Amazon's container registry:
docker push ${aws_account_id}.dkr.ecr.${region}.amazonaws.com/${repository-name}
The following would be the output of a successful docker push to ECR:
The push refers to repository [${aws_account_id}.dkr.ecr.${region}.amazonaws.com/${repository-name}]
2c023ab93af2: Pushed
7015c8d0a3f5: Pushed
842b24e93f2c: Pushed
e4d7bc8584dd: Pushed
923ed18d2581: Pushed
9e3f39b108ca: Pushed
804abd5012d6: Pushed
da1749281c4c: Pushed
268417188696: Pushed
8ffc50431e46: Pushed
b4505242243c: Pushed
8700c6d5f108: Pushed
1e1890158369: Pushed
6270adb5794c: Pushed
${tag-name}: digest: sha256:08f00004d30e1cfa3ce47c800adb137f651edb6f0000000000bd31ddde97e6a6 size: 3245
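To double-check that the image actually landed in the repository (an optional step, assuming the AWS CLI is configured), you can list the images ECR now holds:
aws ecr describe-images --repository-name demo-repo --region ${region}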
The above steps can be repeated as many times as needed to push images to the remote repository. In conclusion, we have used Terraform to create an image repository in Amazon's Elastic Container Registry, authenticated Docker to ECR, and used Docker to push our image to the repository on AWS.
Using Terraform to create an ECS task definition, ECS service, and ECS cluster
As mentioned earlier, Amazon's Elastic Container Service, much like Kubernetes, helps manage containers. With ECS, you only have to define a few resources and ECS takes care of the rest: auto-scaling, using the load balancer, and deciding when to spin up new tasks depending on the traffic hitting any of the existing containers.
Below are the resources that need to be defined for ECS:
- Cluster
- Service
- Task Definition
Task Definition:
The ecs_task_definition is the most important unit in the ECS ecosystem. It contains the memory and CPU allocations, the container definitions, etc. A container definition holds the port mappings between the container and the host and, most importantly, the image from ECR.
Service:
This defines how many instances of the task_definition we want to run; we provide this with the desired_count attribute. Each running instance of a task_definition is called a Task. The service also requires a network configuration with subnet(s). The launch_type attribute of the service is crucial: only two types exist, FARGATE and EC2. Using FARGATE means you don't have to worry about managing a cluster and/or its services; FARGATE does that for you. With the EC2 launch type, you would be responsible for managing the cluster and its EC2 instances. This is why we use a launch_type of FARGATE for the aws_ecs_service resource.
Cluster:
This is the top-level component of ECS. A cluster can contain multiple ecs_services, with each service running multiple instances of the task_definition. Having a service with launch_type FARGATE means ECS manages cluster and service optimization and resource utilization for you. If one of the tasks within a cluster fails, ECS will automatically spin up a new task with the same CPU and memory allocation defined in the task_definition.
Using a single Terraform module, we can define all three resources, i.e. ecs_task_definition, ecs_service, and ecs_cluster, for Amazon's Elastic Container Service.
Create an ecs.tf file with these lines of code:
resource "aws_ecs_cluster" "demo-ecs-cluster" {
name = "ecs-cluster-for-demo"
}
resource "aws_ecs_service" "demo-ecs-service-two" {
name = "demo-app"
cluster = aws_ecs_cluster.demo-ecs-cluster.id
task_definition = aws_ecs_task_definition.demo-ecs-task-definition.arn
launch_type = "FARGATE"
network_configuration {
subnets = ["subnet-05t93f90b22ba76qx"]
assign_public_ip = true
}
desired_count = 1
}
resource "aws_ecs_task_definition" "demo-ecs-task-definition" {
family = "ecs-task-definition-demo"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
memory = "1024"
cpu = "512"
execution_role_arn = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
container_definitions = <<EOF
[
{
"name": "demo-container",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-repo:1.0",
"memory": 1024,
"cpu": 512,
"essential": true,
"entryPoint": ["/"],
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
}
]
EOF
}
Running these Terraform commands:
terraform init && terraform plan && terraform apply
(in that order) creates the three ECS resources. We can verify that the resources exist by viewing the ECS dashboard.
Once in the dashboard, we can view the service with its running tasks; each task has a public IP that we can use to access the running container.
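The same details can also be pulled from the terminal. Below is a rough sketch, assuming the AWS CLI is configured and using the cluster and service names from ecs.tf above; the ${task_arn} and ${eni_id} placeholders come from the output of the preceding commands:
# list the running tasks in the demo cluster
aws ecs list-tasks --cluster ecs-cluster-for-demo --service-name demo-app
# inspect a task to find the network interface (ENI) attached to it
aws ecs describe-tasks --cluster ecs-cluster-for-demo --tasks ${task_arn}
# look up that ENI to get the public IP assigned to the task
aws ec2 describe-network-interfaces --network-interface-ids ${eni_id} --query 'NetworkInterfaces[0].Association.PublicIp' --output text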