A look at AWS Fargate
This year at re:Invent, AWS announced the launch of Fargate, their container-as-a-service offering that runs Docker containers without you managing the underlying instances, akin to a managed EKS.
Terraform recently added support for it in the AWS provider, and since the provider split, new features like this land at a much faster pace.
Today, we will use Terraform to deploy a simple Docker image on a Fargate-backed ECS cluster.
Read more on Fargate at the official details page: https://aws.amazon.com/fargate/
At the time of writing, Fargate is only available in us-east-1 (US East, N. Virginia). You can follow updates on the AWS region table at https://aws.amazon.com/about-aws/global-infrastructure/regional-product-....
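Since the region matters here, the AWS provider should be pinned to us-east-1. A minimal sketch (credentials are assumed to come from your environment or the usual AWS credentials chain):
provider "aws" {
  region = "us-east-1"
}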
Architecture
This is an overview of what we're trying to deploy:
- A VPC with private and public subnets
- An ECS cluster in the private subnets, running our Docker container
- An ALB load-balancing requests to the ECS cluster
Note: The entire repository is available on our GitHub: https://github.com/Oxalide/terraform-fargate-example.
Writing the Terraform configuration
To achieve a minimal level of high availability, our ECS tasks need to run in at least two Availability Zones (AZs). The load balancer also needs at least two public subnets in different AZs.
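Throughout the configuration we reference a handful of variables (az_count, app_image, app_port, app_count, fargate_cpu, fargate_memory). A variables.tf along these lines backs them; the defaults below are illustrative values, not necessarily those of the repository:
variable "az_count" {
  description = "Number of availability zones to cover in a given AWS region"
  default     = "2"
}

variable "app_image" {
  description = "Docker image to run in the ECS cluster"
  default     = "nginx:latest"
}

variable "app_port" {
  description = "Port exposed by the Docker image to redirect traffic to"
  default     = 80
}

variable "app_count" {
  description = "Number of Docker containers to run"
  default     = 2
}

variable "fargate_cpu" {
  description = "Fargate task CPU units to provision (1 vCPU = 1024 CPU units)"
  default     = "256"
}

variable "fargate_memory" {
  description = "Fargate task memory to provision (in MiB)"
  default     = "512"
}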
Our networking configuration looks like this:
# Fetch AZs in the current region
data "aws_availability_zones" "available" {}

resource "aws_vpc" "main" {
  cidr_block = "172.17.0.0/16"
}

# Create var.az_count private subnets, each in a different AZ
resource "aws_subnet" "private" {
  count             = "${var.az_count}"
  cidr_block        = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}"
  availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
  vpc_id            = "${aws_vpc.main.id}"
}

# Create var.az_count public subnets, each in a different AZ
resource "aws_subnet" "public" {
  count                   = "${var.az_count}"
  cidr_block              = "${cidrsubnet(aws_vpc.main.cidr_block, 8, var.az_count + count.index)}"
  availability_zone       = "${data.aws_availability_zones.available.names[count.index]}"
  vpc_id                  = "${aws_vpc.main.id}"
  map_public_ip_on_launch = true
}

# IGW for the public subnet
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.main.id}"
}

# Route the public subnet traffic through the IGW
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.main.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.gw.id}"
}

# Create a NAT gateway with an EIP for each private subnet to get internet connectivity
resource "aws_eip" "gw" {
  count      = "${var.az_count}"
  vpc        = true
  depends_on = ["aws_internet_gateway.gw"]
}

resource "aws_nat_gateway" "gw" {
  count         = "${var.az_count}"
  subnet_id     = "${element(aws_subnet.public.*.id, count.index)}"
  allocation_id = "${element(aws_eip.gw.*.id, count.index)}"
}

# Create a new route table for the private subnets
# And make it route non-local traffic through the NAT gateway to the internet
resource "aws_route_table" "private" {
  count  = "${var.az_count}"
  vpc_id = "${aws_vpc.main.id}"

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${element(aws_nat_gateway.gw.*.id, count.index)}"
  }
}
# Explicitly associate the newly created route tables to the private subnets (so they don't default to the main route table)
resource "aws_route_table_association" "private" {
  count          = "${var.az_count}"
  subnet_id      = "${element(aws_subnet.private.*.id, count.index)}"
  route_table_id = "${element(aws_route_table.private.*.id, count.index)}"
}
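To make the cidrsubnet() calls above concrete: with az_count = 2 (an assumed value), the private subnets get 172.17.0.0/24 and 172.17.1.0/24, and the public subnets 172.17.2.0/24 and 172.17.3.0/24. You can check this with terraform console:
$ terraform console
> cidrsubnet("172.17.0.0/16", 8, 0)
172.17.0.0/24
> cidrsubnet("172.17.0.0/16", 8, 2)
172.17.2.0/24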
After setting up the network, we need to create a few security-related resources to ensure our application is properly shielded.
# ALB Security group
# This is the group you need to edit if you want to restrict access to your application
resource "aws_security_group" "lb" {
  name        = "tf-ecs-alb"
  description = "controls access to the ALB"
  vpc_id      = "${aws_vpc.main.id}"

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Traffic to the ECS Cluster should only come from the ALB
resource "aws_security_group" "ecs_tasks" {
  name        = "tf-ecs-tasks"
  description = "allow inbound access from the ALB only"
  vpc_id      = "${aws_vpc.main.id}"

  ingress {
    protocol        = "tcp"
    from_port       = "${var.app_port}"
    to_port         = "${var.app_port}"
    security_groups = ["${aws_security_group.lb.id}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
With this, we’re ready to deploy our ECS cluster.
We're using a very simple ALB setup: every request is forwarded to our ECS service.
In a production environment, it is highly recommended to listen for HTTPS on port 443 rather than HTTP on port 80. ACM can provision free certificates to terminate TLS at the ALB level.
resource "aws_alb" "main" {
name = "tf-ecs-chat"
subnets = ["${aws_subnet.public.*.id}"]
security_groups = ["${aws_security_group.lb.id}"]
}
resource "aws_alb_target_group" "app" {
name = "tf-ecs-chat"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.main.id}"
target_type = "ip"
}
# Redirect all traffic from the ALB to the target group
resource "aws_alb_listener" "front_end" {
load_balancer_arn = "${aws_alb.main.id}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.app.id}"
type = "forward"
}
}
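If you do terminate HTTPS at the ALB as recommended above, the listener would look roughly like this; var.certificate_arn is a hypothetical variable holding the ARN of an ACM certificate you have requested or imported:
resource "aws_alb_listener" "front_end_https" {
  load_balancer_arn = "${aws_alb.main.id}"
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"

  # Hypothetical variable: the ARN of an ACM certificate for your domain
  certificate_arn = "${var.certificate_arn}"

  default_action {
    target_group_arn = "${aws_alb_target_group.app.id}"
    type             = "forward"
  }
}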
Fargate requires specifying task-level CPU and memory parameters (which can differ from the container-level parameters). In our case, only one container runs in the task, so we can simply set both to the same values.
The available combinations are listed on the official documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definit...
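For instance, 256 CPU units (0.25 vCPU) can be paired with 512 MiB, 1 GiB, or 2 GiB of memory; to run a slightly bigger task you could override the illustrative defaults in a terraform.tfvars:
# terraform.tfvars -- example of another valid combination
fargate_cpu    = "512"
fargate_memory = "1024"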
resource "aws_ecs_cluster" "main" {
name = "tf-ecs-cluster"
}
resource "aws_ecs_task_definition" "app" {
family = "app"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = "${var.fargate_cpu}"
memory = "${var.fargate_memory}"
container_definitions = <<DEFINITION
[
{
"cpu": ${var.fargate_cpu},
"image": "${var.app_image}",
"memory": ${var.fargate_memory},
"name": "app",
"networkMode": "awsvpc",
"portMappings": [
{
"containerPort": ${var.app_port},
"hostPort": ${var.app_port}
}
]
}
]
DEFINITION
}
resource "aws_ecs_service" "main" {
name = "tf-ecs-service"
cluster = "${aws_ecs_cluster.main.id}"
task_definition = "${aws_ecs_task_definition.app.arn}"
desired_count = "${var.app_count}"
launch_type = "FARGATE"
network_configuration {
security_groups = ["${aws_security_group.ecs_tasks.id}"]
subnets = ["${aws_subnet.private.*.id}"]
}
load_balancer {
target_group_arn = "${aws_alb_target_group.app.id}"
container_name = "app"
container_port = "${var.app_port}"
}
depends_on = [
"aws_alb_listener.front_end",
]
}
Note that we do not specify an iam_role in our aws_ecs_service. AWS defaults to its service-linked role, aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS, which handles registration with the load balancer.
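To know where to point your browser once terraform apply finishes, it is handy to expose the ALB hostname as an output (the output name here is arbitrary):
output "alb_hostname" {
  value = "${aws_alb.main.dns_name}"
}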
Why do we need both a public and a private subnet?
In this example, we're using both public and private subnets.
Fargate can assign a public IP to the launched containers, so you could get away with only public subnets.
However, doing so means giving up control of the outgoing source IP: in our setup, every outgoing connection goes through a NAT gateway and uses its Elastic IP. It is up to you to choose between keeping control of the source IP (which makes IP-based authorizations easier to manage) and saving the cost of the NAT gateways.
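For reference, the public-subnet-only variant would assign public IPs to the tasks directly; a sketch of the service's network_configuration in that case (you would also drop the NAT gateways):
# Inside aws_ecs_service.main, replacing the network_configuration above
network_configuration {
  security_groups  = ["${aws_security_group.ecs_tasks.id}"]
  subnets          = ["${aws_subnet.public.*.id}"]
  assign_public_ip = true
}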
Author: Anthony Dong