I have deployed a Fargate cluster/service/task using Pulumi (Python).
"pulumi up" works well.
Now we want to keep all the pieces as they are, load balancer etc., and just update the Docker image of the running task.
How is this done?
Running "pulumi up" again creates a new cluster.
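A hedged sketch of one approach, assuming the image reference lives in the task definition's `container_definitions` JSON string: change only the `image` field and re-run `pulumi up`, and Pulumi should roll the task definition and service in place rather than recreating the cluster (a brand-new cluster usually indicates some resource name or replacement-forcing property changed). The container name `web` and image `myorg/web` below are placeholders:

```python
import json

def bump_image(container_definitions: str, name: str, new_image: str) -> str:
    """Return a copy of an ECS containerDefinitions JSON string with one
    container's image swapped out, leaving everything else untouched."""
    defs = json.loads(container_definitions)
    for d in defs:
        if d["name"] == name:
            d["image"] = new_image
    return json.dumps(defs)

# Only the image tag differs between deployments, so only the task
# definition (and the service pointing at it) should be updated.
old = json.dumps([{"name": "web", "image": "myorg/web:1.0", "essential": True}])
new = bump_image(old, "web", "myorg/web:1.1")
```

Feeding `new` into the TaskDefinition's `container_definitions` and running `pulumi up` again lets Pulumi diff just that property.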
I am using AWS ECS services and created one ECS cluster, in which there is one FargateService with 2 containers running behind it.
Two alarms have been created for the Fargate service with a threshold of 95.
Everything looks fine; now the testing part comes into the picture. I want to test the alarms' functionality.
Is there an easy way in AWS, using some AWS service or a manual script, to increase CPU and memory use so that I can test the alarm functionality?
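One simple option is to exec into (or temporarily deploy) a container that deliberately pins the CPU and holds memory until the alarms cross the 95% threshold; the `stress`/`stress-ng` tools do exactly this, or a few lines of Python. A hedged sketch, where the core count, memory size, and duration are placeholders to tune against your alarm period:

```python
import multiprocessing
import os
import time

def burn_cpu(seconds: float) -> None:
    """Busy-loop to keep one core at ~100% for the given duration."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def stress(cores: int, mem_mb: int, seconds: float) -> int:
    """Hold mem_mb of memory while `cores` workers burn CPU for
    `seconds`, then release everything. Returns the bytes held."""
    ballast = bytearray(mem_mb * 1024 * 1024)  # memory pressure
    workers = [multiprocessing.Process(target=burn_cpu, args=(seconds,))
               for _ in range(cores)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return len(ballast)

if __name__ == "__main__":
    # Run long enough for several CloudWatch datapoints to breach 95%.
    stress(cores=os.cpu_count() or 1, mem_mb=256, seconds=330)
```

Running this inside the Fargate task (e.g. via ECS Exec) should push the service's CPU/memory utilization metrics up and trip the alarms.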
I am using Terraform to create infrastructure in an AWS environment. Among many services, we are also creating AWS EKS using the terraform-aws-modules/eks/aws module. The EKS cluster is primarily used for spinning up dynamic containers to handle asynchronous job execution. Once a given task is completed, the container releases its resources and terminates.
What I have noticed is that the dead containers lie around on the EKS cluster forever. This results in too many dead containers just sitting on EKS and consuming storage. I came across a few blogs which mention that Kubernetes has a garbage collection process, but none describes how it can be configured via Terraform, or explicitly for AWS EKS.
Hence I am looking for a solution that helps specify a garbage collection policy for dead containers on AWS EKS. If this is not achievable via Terraform, I am OK with using kubectl against AWS EKS.
These two kubelet flags will cause the node to clean up docker images when the filesystem reaches those percentages. https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-lifecycle
--image-gc-high-threshold="85"
--image-gc-low-threshold="80"
But you also probably want to set --maximum-dead-containers 1 so that running multiple (same) images doesn't leave dead containers around.
In EKS you can add these flags to the UserData section of your EC2 instance/Autoscaling group.
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint ..... \
  --kubelet-extra-args '--image-gc-high-threshold=85 --image-gc-low-threshold=80 --maximum-dead-containers=1'
I have an ECS Fargate cluster up and running, and it has 1 service and 1 task definition attached to it.
The task definition already describes 2 container images.
Can I create a new service for another application and configure it with this existing ECS cluster?
If yes, will both services run simultaneously?
From the AWS documentation regarding Amazon ECS clusters:
An Amazon ECS cluster is a logical grouping of tasks or services. Your tasks and services are run on infrastructure that is registered to a cluster.
So I believe you should be able to run multiple services, each attached to its related task definition, in one ECS cluster.
Source Documentation - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html
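To make that concrete, a sketch with boto3 (the API call is left commented out; cluster, service, and task definition names are all placeholders) where only the shared `cluster` field ties the two independent services together:

```python
def service_request(cluster: str, service: str, task_def: str, count: int) -> dict:
    """Build the keyword arguments for one ECS create_service call."""
    return {
        "cluster": cluster,
        "serviceName": service,
        "taskDefinition": task_def,
        "desiredCount": count,
        "launchType": "FARGATE",
    }

# Two independent services sharing one existing cluster:
requests = [
    service_request("my-cluster", "app-a", "app-a-taskdef:1", 2),
    service_request("my-cluster", "app-b", "app-b-taskdef:1", 2),
]
# import boto3
# ecs = boto3.client("ecs")
# for r in requests:
#     ecs.create_service(**r)
```

Both services then schedule tasks on the same cluster and run side by side.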
We generally use Blue/Green and rolling deployment strategies to get Docker containers on ECS container instances deployed and updated.
Ansible's ECS modules allow implementing such deployment strategies with the modules below:
https://docs.ansible.com/ansible/latest/modules/ecs_taskdefinition_module.html
https://docs.ansible.com/ansible/latest/modules/ecs_task_module.html
https://docs.ansible.com/ansible/latest/modules/ecs_service_module.html
Does AWS CDK provide such constructs for implementing deployment strategies?
CDK supports higher-level constructs for ECS called "ECS patterns". One of them is ApplicationLoadBalancedFargateService, which allows you to define an ECS Fargate service behind an Application Load Balancer. Rolling update is supported out of the box in this case. You simply run cdk deploy with a newer Docker image and ECS will take care of the deployment. It will:
Start a new task with the new Docker image.
Wait for several successful health checks of the new deployment.
Start sending new traffic to the new task, while letting the existing connections gracefully finish on the old task.
Once all old connections are done, ECS will automatically stop the old task.
If your new task does not start or is not healthy, ECS will keep running the original task.
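The steps above can be sketched with the Python flavor of CDK (the stack, construct, and image names are placeholders); bumping the image tag and re-running cdk deploy is what triggers the rolling update:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_ecs_patterns as ecs_patterns

class WebStack(Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Bumping this tag and re-running `cdk deploy` performs the
        # rolling update described above: new task starts, passes
        # health checks, receives traffic, then the old task drains.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "WebService",
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("myorg/web:1.2.3"),
                container_port=80,
            ),
            desired_count=2,
            public_load_balancer=True,
        )

app = App()
WebStack(app, "WebStack")
app.synth()
```

With no cluster or VPC supplied, the construct creates defaults; in an existing setup you would pass your own `cluster=` instead.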
Regarding Blue/Green deployment, I think it's yet to be supported in CloudFormation. Once that's done, it can be implemented in CDK. If you can live without Blue/Green as IaC, you can define your CodeDeploy deployment manually.
Check this NPM plugin which helps with blue-green deployment using CDK.
https://www.npmjs.com/package/@cloudcomponents/cdk-blue-green-container-deployment
Blue/Green deployments are supported in CloudFormation now:
https://aws.amazon.com/about-aws/whats-new/2020/05/aws-cloudformation-now-supports-blue-green-deployments-for-amazon-ecs/
I don't think the CDK implementation is done yet.
I know it is not possible to run containers on multiple hosts using a single docker-compose file, and that this needs something like Docker Swarm.
Is there anything equivalent to this on AWS - ECS or Fargate?
Yes, you can do this on ECS using either an EC2-type or a Fargate-type cluster.
If you go with the ECS EC2-type model, you can opt for different task placement strategies to place your tasks on different nodes and achieve whatever model you want, like AZ spread, binpack, etc.
Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html
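For example, the AZ-spread and binpack strategies from that reference are just a list of strategy objects passed to run_task or create_service (the cluster/service names in the comment are placeholders):

```python
def spread_then_binpack() -> list[dict]:
    """Placement strategy: spread tasks across Availability Zones
    first, then binpack on memory within each zone."""
    return [
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ]

# Passed to run_task / create_service on an EC2-type cluster, e.g.:
# ecs.create_service(cluster="my-cluster", serviceName="web",
#                    taskDefinition="web:1", desiredCount=4,
#                    placementStrategy=spread_then_binpack())
```

This spreads tasks across hosts in different AZs, which is the multi-host behavior a docker-compose file alone cannot give you.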
If you opt for the Fargate ECS type, then you don't have to take care of the underlying EC2 nodes, as those are managed by AWS in this case.
Also, there is a big difference between docker-compose and Docker Swarm:
Docker Swarm is the orchestration layer that your use case requires as of now.