AWS CLI - Create New Revision of Task Definition - amazon-ecs

In AWS ECS with the UI, I can create a new revision of a task definition.
I go to Task Definitions -> Select my Task Definition -> Select my Revision -> Click Create new revision.
With AWS UI, the container definition properties are copied across from the old revision to the new revision.
With AWS CLI, how do I copy across the container definition from the old revision to the new revision? Is there a simple CLI command I can use without having to manually extract properties from the old definition to then create the new definition?
This is my AWS CLI solution so far:
I'm getting the image with:
aws ecr describe-images ...
And the container definition with:
aws ecs describe-task-definition ...
I'm then extracting the container definition properties, placing them in a JSON string $CONTAINER_DEFINITION, and then creating a new revision with:
aws ecs register-task-definition --family $TASK_DEFINITION --container-definitions $CONTAINER_DEFINITION
When I check the UI, the old revision's container definition properties are not copied across to the new revision's container definition.
I expected the container definition properties to be copied across from the old revision to the new revision, matching the behaviour of the AWS UI.

I am trying to do exactly the same - create a new revision of an existing task definition using an updated container version. I suspect your approach is registering an entirely new task definition, rather than creating an incremental revision of an existing one.
Update... managed to get this working using PowerShell and the AWS CLI. The PowerShell commands below read the task definition, edit the container image in the container definitions, then convert the container definitions to JSON and pass them back into the register command.
$taskDef = aws ecs describe-task-definition --task-definition <task definition name> --region=eu-west-1 | ConvertFrom-Json
$taskDef.taskDefinition.containerDefinitions[0].image = "<container>:<version>"
$containerDefinitions = $taskDef.taskDefinition.containerDefinitions | ConvertTo-Json -Depth 10
aws ecs register-task-definition --family "<task definition name>" --container-definitions $containerDefinitions --memory 8192
The trick to generating a revision (rather than a new task definition) appeared to be the family parameter, which is set to the existing task definition name. Not sure why it required the memory parameter, but it worked.
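For reference, a roughly equivalent sketch in plain Bash with jq (the family name, region, and image URI below are placeholders, and jq is assumed to be available):
# Read the current container definitions, swap the image, re-register under the same family.
NEW_CONTAINER_DEFS=$(aws ecs describe-task-definition \
  --task-definition my-task-family --region eu-west-1 \
  --query 'taskDefinition.containerDefinitions' \
  | jq '.[0].image = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:new-tag"')
aws ecs register-task-definition \
  --family my-task-family \
  --container-definitions "$NEW_CONTAINER_DEFS" \
  --memory 8192
Note that register-task-definition only includes what you explicitly pass: task-level settings such as cpu, memory, networkMode, requiresCompatibilities and executionRoleArn are not copied from the previous revision, which is likely why properties appeared to be missing in the original attempt.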

Related

How to add a new container to an already existing task definition using CloudFormation?

I have set up CI/CD using CodePipeline and CodeBuild to deploy my application to an ECS Fargate container. This is the setup for the frontend of my app.
For the backend I have another, identical CI/CD pipeline. But this time I want to deploy my backend to the same ECS Fargate container using CloudFormation. I know I have to update the task definition.
How can I update the already existing task definition, which currently only creates the frontend container of my app, so that it also creates the backend container, without affecting the frontend container?
Is there any workaround for this?
You can't do that. Task definitions are immutable. You can only create a new revision of a task definition and deploy the new revision; you can't change an existing revision. From the docs:
A task definition revision is a copy of the current task definition with the new parameter values replacing the existing ones. All parameters that you do not modify are in the new revision.
To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.
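Outside CloudFormation, the same principle can be sketched with the CLI: register a new revision that contains both containers (family, container names, and image URIs below are placeholders; a Fargate task definition would also need its task-level settings such as cpu, memory, networkMode and requiresCompatibilities passed through):
# Copy the current container definitions and append the backend container.
DEFS=$(aws ecs describe-task-definition --task-definition frontend-task \
  --query 'taskDefinition.containerDefinitions')
NEW_DEFS=$(echo "$DEFS" | jq '. + [{"name": "backend", "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:latest", "essential": true}]')
# Registering under the same family creates the next revision; the frontend container definition is untouched.
aws ecs register-task-definition --family frontend-task --container-definitions "$NEW_DEFS"
In CloudFormation the equivalent is to add the second container to the ContainerDefinitions list of the AWS::ECS::TaskDefinition resource and update the stack, which registers a new task definition rather than mutating the old one.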

How to cache resources that haven't changed rather than rebuild or delete?

I have a Pulumi repository set up for an AWS project such that I have a directory of services:
index.ts
services/
    user-service/
    recommendation-service/
    chat-service/
    convert-service/
Each service has its own Dockerfile and application code (i.e. a Node or Go microservice).
There is a Pulumi script in the root index.ts that currently scans the services directory for subdirectories whose names match the pattern *-service.
For each service directory, a Fargate-type ECS service is created.
These services are then added to their own target groups and attached to an Application Load Balancer using an ALB listener with path-based routing conditions, so that:
/user/* -> user-service
/recommendation/* -> recommendation-service
/chat/* -> chat-service
...etc
This is all working fine and dandy!!
The only issue is that I wish to build a git pipeline with incremental builds, meaning that if there is no diff to user-service, I do not want to build its Docker image or have Pulumi calculate a diff of AWS resources; I want to skip all of that without deleting the resource. It would be simple enough to check whether the files have been modified, either by using git to see what has changed since the last commit or by using a checksum (see the sketch after this question).
I can do that, but currently Pulumi will delete those resources if they are skipped in the "pulumi up" script.
I would like to do this without creating a separate stack for each service, as it is convenient to reproduce the entire environment by creating a single new stack for all resources.
I want those resources to stay as they are when there is no change, without Pulumi having to recreate them.
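A minimal sketch of the git-based change check mentioned above (the service path and image tag are placeholders, and this assumes the previous pipeline run built HEAD~1):
# Skip the Docker build when the service directory is unchanged since the last commit.
if git diff --quiet HEAD~1 -- services/user-service; then
  echo "user-service unchanged; skipping docker build"
else
  docker build -t user-service:latest services/user-service
fi
Note that this only avoids the image build: the Pulumi program must still declare the service's resources on every pulumi up, otherwise Pulumi treats the omitted resources as removed and deletes them, which is the behaviour described above.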

Kubernetes - Upgrading Kubernetes-cluster version through Terraform

I assume there are no stupid questions, so here is one that I could not find a direct answer to.
The situation
I currently have a Kubernetes cluster running 1.15.x on AKS, deployed and managed through Terraform. Azure recently announced that they would retire version 1.15 of Kubernetes on AKS, and I need to upgrade the cluster to 1.16 or later. Now, as I understand the situation, upgrading the cluster directly in Azure would have no consequences for the content of the cluster, i.e. nodes, pods, secrets and everything else currently on there, but I cannot find any proper answer to what would happen if I upgrade the cluster through Terraform.
Potential problems
So what could go wrong? In my mind, the worst outcome would be that the entire cluster would be destroyed and a new one would be created. No pods, no secrets, nothing. Since there is so little information out there, I am asking here to see if there is anyone with more experience with Terraform and Kubernetes who could potentially help me out.
To summarize:
Terraform versions
Terraform v0.12.17
+ provider.azuread v0.7.0
+ provider.azurerm v1.37.0
+ provider.random v2.2.1
What I'm doing
$ terraform init
// running terraform plan with the new Kubernetes version declared for AKS
$ terraform plan
//Following changes are announced by Terraform:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
#module.mycluster.azurerm_kubernetes_cluster.default will be updated in-place...
...
~ kubernetes_version = "1.15.5" -> "1.16.13"
...
Plan: 0 to add, 1 to change, 0 to destroy.
What I want to happen
Terraform will tell Azure to upgrade the existing AKS-service, not destroy before creating a new one. I assume that this will happen, as Terraform announces that it will "update in-place", instead of adding new and/or destroying existing clusters.
I found this question today and thought I'd add my experience as well. I made the following changes:
Changed the kubernetes_version under azurerm_kubernetes_cluster from "1.16.15" -> "1.17.16"
Changed the orchestrator_version under default_node_pool from "1.16.15" -> "1.17.16"
Increased the node_count under default_node_pool from 1 -> 2
A terraform plan showed that it was going to update in-place. I then performed a terraform apply which completed successfully. kubectl get nodes showed that an additional node was created, but both nodes in the pool were still on the old version. After further inspection in Azure Portal it was found that only the k8s cluster version was upgraded and not the version of the node pool. I then executed terraform plan again and again it showed that the orchestrator_version under default_node_pool was going to be updated in-place. I then executed terraform apply which then proceeded to upgrade the version of the node pool. It did that whole thing where it creates an additional node in the pool (with the new version) and sets the status to NodeSchedulable while setting the existing node in the pool to NodeNotSchedulable. The NodeNotSchedulable node is then replaced by a new node with the new k8s version and eventually set to NodeSchedulable. It did this for both nodes. Afterwards all nodes were upgraded without any noticeable downtime.
I'd say this shows that the Terraform method is non-destructive, even if there have at times been oversights in the upgrade process (but still non-destructive in this example): https://github.com/terraform-providers/terraform-provider-azurerm/issues/5541
If you need higher confidence for this change, then you could alternatively consider using the Azure-based upgrade method, refreshing the changes back into your state, and tweaking the code until a plan generation doesn't show anything intolerable. The two azurerm_kubernetes_cluster arguments dealing with version might be all you need to tweak.
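A rough sketch of that alternative flow (resource group, cluster name, and target version are placeholders):
# Upgrade the control plane (and, by default, the node pools) through Azure itself.
az aks upgrade --resource-group my-rg --name my-aks-cluster --kubernetes-version 1.16.13
# Pull the upgraded versions back into Terraform state, then adjust kubernetes_version /
# orchestrator_version in the code until the plan shows no unwanted changes.
terraform refresh
terraform plan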

How to handle ECS deploys in CodePipeline for changes in the task definition

I am deploying an ECS Fargate task with two containers: a reverse-proxy nginx container and a Python server. For each I have an ECR repository, and I have a CI/CD CodePipeline set up with
CodeCommit -> CodeBuild -> CodeDeploy
This flow works fine for simple code changes. But what if I want to add another container? I can certainly update my buildspec.yml to add the building of the container, but I also need to 1) update my task definition, and 2) assign this task definition to my service.
Questions:
1) If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
2) This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
Thanks!
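For concreteness, the manual CLI approach described above would look roughly like this (the JSON file path, cluster, service, and family names are placeholders):
# Register a new task definition revision from a rendered JSON file,
# then point the service at it.
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family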
The CodePipeline ECS Job Worker copies the Task Definition and updates the Image and ImageTag for the container specified in the imagedefinitions.json file, then updates the ECS Service with this new TaskDef. The job worker cannot add a new container to the task definition.
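For reference, imagedefinitions.json is typically generated in the CodeBuild post_build phase with something like the following (the container name and environment variables are placeholders):
# Write the container name and freshly pushed image URI for the ECS deploy action to pick up.
printf '[{"name":"nginx","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json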
If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
I don't think so. Is there a CloudWatch Events rule that triggers CodeDeploy in such a fashion?
This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
The ECS deploy job worker creates a new task definition revision every time a deployment occurs, so given that this is the official behaviour, I wouldn't consider it bad as such.
I would question why you need to add new containers to your task definition at runtime during deploys. Your task definition should generally be modified less often, and only the image:tag in it should be modified regularly - something the ECS Deploy action helps you achieve.

AWS ECS get placement constraint after task creation

I am trying to create a CI build step that will stop and re-run my tasks when my Docker containers have changed.
The task definition itself points at the latest tag in ECR, and so all I need is to stop-task and then run-task.
Two of the parameters in the API as well as the UI are PlacementConstraints and PlacementStrategy.
Is there any way to get these from the API after the task has been started, e.g. for a running task? describe-tasks doesn't seem to return this information.
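One partial workaround (a sketch with placeholder cluster and task IDs): constraints that are defined on the task definition itself can be read back by describing the task definition that the running task uses. Constraints or a placement strategy passed only at run-task time may not be recoverable this way.
# Find the task definition used by the running task...
TASK_DEF_ARN=$(aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
  --query 'tasks[0].taskDefinitionArn' --output text)
# ...and read the placement constraints declared on it.
aws ecs describe-task-definition --task-definition "$TASK_DEF_ARN" \
  --query 'taskDefinition.placementConstraints'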