How to add a new container to an existing task definition using CloudFormation? - aws-cloudformation

I have set up CI/CD using CodePipeline and CodeBuild that deploys my application to an ECS Fargate container. This is the setup for the frontend of my app.
For the backend I have another, identical CI/CD pipeline. But this time, I want to deploy my backend as a container in the same ECS Fargate task definition using CloudFormation. I know I have to update the task definition.
How can I update the existing task definition that we used for the frontend so that it also creates the backend container of my app, without affecting the frontend container?
Is there any workaround for this?

You can't do that. Task definitions are immutable. You can only create a new revision of a task definition and deploy that new revision; you can't change an existing revision. From the docs:
A task definition revision is a copy of the current task definition with the new parameter values replacing the existing ones. All parameters that you do not modify are in the new revision.
To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.
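For illustration, here is a minimal CLI sketch of that revise-and-redeploy flow (the family my-task, service my-service, cluster my-cluster and the backend image URI are placeholders, and jq is assumed to be available); in CloudFormation, changing the ContainerDefinitions of an AWS::ECS::TaskDefinition likewise results in a new task definition being registered on stack update:

# Pull the current revision, keeping only the fields that register-task-definition accepts
# (drop any that are null in your task definition).
aws ecs describe-task-definition --task-definition my-task \
  --query 'taskDefinition.{family: family, containerDefinitions: containerDefinitions, cpu: cpu, memory: memory, networkMode: networkMode, requiresCompatibilities: requiresCompatibilities, executionRoleArn: executionRoleArn}' \
  > taskdef.json

# Append the backend container next to the existing frontend container.
jq '.containerDefinitions += [{"name": "backend", "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/backend:latest", "essential": true}]' \
  taskdef.json > taskdef-new.json

# Register the result as a new revision of the same family...
aws ecs register-task-definition --cli-input-json file://taskdef-new.json

# ...and point the service at it (omitting the revision number picks up the latest revision).
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task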

Related

AWS CLI - Create New Revision of Task Definition

In AWS ECS with the UI, I can create a new revision of a task definition.
I go to Task Definitions -> Select my Task Definition -> Select my Revision -> Click Create new revision.
With AWS UI, the container definition properties are copied across from the old revision to the new revision.
With AWS CLI, how do I copy across the container definition from the old revision to the new revision? Is there a simple CLI command I can use without having to manually extract properties from the old definition to then create the new definition?
This is my AWS CLI solution so far:
I'm getting the image with:
aws ecr describe-images ...
And the container definition with:
aws ecs describe-task-definition ...
I'm then extracting the container definition properties, placing them in a JSON string $CONTAINER_DEFINITION and then creating a new revision with:
aws ecs register-task-definition --family $TASK_DEFINITION --container-definitions $CONTAINER_DEFINITION
When I check the UI, the old revision's container definition properties are not copied across to the new revision's container definition.
I expected the container definition properties to be copied across from the old revision to the new revision, as that would match the behaviour of the AWS UI.
I am trying to do exactly the same - create a new revision of an existing task definition using an updated container version. I suspect your approach is registering an entirely new task definition, rather than creating an incremental version of an existing one.
Update... I managed to get this working using PowerShell and the AWS CLI. The PS commands below read the task definition, edit the container image version in the container definitions, then convert the container defs to JSON and pass them back into the register command.
$taskDef = aws ecs describe-task-definition --task-definition <task definition name> --region=eu-west-1 | ConvertFrom-Json
$taskDef.taskDefinition.containerDefinitions[0].image = "<container>:<version>"
$containerDefinitions = $taskDef.taskDefinition.containerDefinitions | ConvertTo-Json -Depth 10
aws ecs register-task-definition --family "<task definition name>" --container-definitions $containerDefinitions --memory 8192
The trick to generating a revision (rather than a new task definition) appeared to be the family parameter, which is set to the existing task definition name. Not sure why it required the memory parameter, but it worked.
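For anyone doing the same from bash rather than PowerShell, a rough equivalent using jq (the task definition name, image and region are placeholders) might look like:

# Read the current revision and swap the image of the first container.
NEW_CONTAINER_DEFS=$(aws ecs describe-task-definition --task-definition <task definition name> --region eu-west-1 \
  | jq '.taskDefinition.containerDefinitions | .[0].image = "<container>:<version>"')

# Re-register under the same family to get the next revision number.
aws ecs register-task-definition \
  --family "<task definition name>" \
  --container-definitions "$NEW_CONTAINER_DEFS" \
  --memory 8192

As for the memory parameter: register-task-definition only keeps the parameters you pass it, and Fargate task definitions require task-level memory, which is presumably why it has to be supplied again.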

ADO YAML task for Docker Container Image Creation

I am trying to find out what the equivalent ADO YAML task is for the Classic "Task group: Docker Container Image Creation".
I tried the Docker and Docker Compose tasks, but neither of them has an argument to capture the environment the application package was built for.
Since the name of the task in the classic editor is "Task group: Docker Container Image Creation", what you see in the classic editor is probably a task group:
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task. You can choose to extract the parameters from the encapsulated tasks as configuration variables, and abstract the rest of the task information.
...
Task groups are a way to standardize and centrally manage deployment steps for all your applications. When you include a task group in your definitions, and then make a change centrally to the task group, the change is automatically reflected in all the definitions that use the task group. There is no need to change each one individually.
When a task group is created, the creator can define their own parameters and use them in one or more subtasks inside the task group.
To replicate this behavior in YAML pipelines, you need to examine the task group to understand what tasks it contains and then define a reusable template in YAML, which allows you to define reusable content, logic, and parameters.
Task groups are only available in classic pipelines (see here).
For YAML pipelines, you can set up a step template to reuse the same steps across different YAML pipelines.

How to handle ECS deploys in CodePipeline for changes in the task definition

I am deploying an ECS Fargate task with two containers: an nginx reverse proxy and a Python server. For each I have an ECR repository, and I have a CI/CD CodePipeline set up with
CodeCommit -> CodeBuild -> CodeDeploy
This flow works fine for simple code changes. But what if I want to add another container? I can certainly update my buildspec.yml to build the new container, but I also need to 1) update my task definition, and 2) assign this task definition to my service.
Questions:
1) If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
2) This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision, diff its JSON against the CodeCommit version, and only update if there is a difference?
Thanks!
CodePipeline's ECS job worker copies the task definition, updates the image and image tag for the container specified in the 'imagedefinitions.json' file, and then updates the ECS service with this new task definition. The job worker cannot add a new container to the task definition.
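In other words, the build stage only has to emit the new image URI. For the ECS (rolling) deploy action described above, a post_build command along these lines (the container name "web" and the REPOSITORY_URI/IMAGE_TAG variables are placeholders) writes the file the job worker consumes:

# In the buildspec's post_build phase: record which image the existing "web"
# container should now run; the deploy action copies the rest of the task
# definition unchanged into a new revision.
printf '[{"name":"web","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json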
If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
I don't think so. Is there a CloudWatch event rule that triggers CodeDeploy in such a fashion?
This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
The ECS deploy job worker creates a new task definition revision every time a deployment occurs, so given that this is the official behaviour, I wouldn't consider it bad as such.
I would question why you need to add new containers to your task definition at deploy time, though. In general, your task definition should change infrequently, and only the image:tag in it should be modified regularly - something the ECS Deploy action helps you achieve.

VSTS Task Groups: use variable in task title

(I'm using hosted VSTS.)
I have written a generic database deployment Task Group, and I would like the title to include the name of the database being provisioned.
E.g.: Deploy schema $(databaseName)
When I run an instance of the task, though, the variable substitution doesn't happen.
Is this a known issue or am I missing something?
Steps to reproduce
Create a task group named "Provision database"
Add a task of type Azure SQL Database Deployment
Set the name of said task to Deploy schema $(databaseName)
Create a Release which includes said task
Run the release
In the release log the task appears as:
##[section]Starting: Deploy schema $(databaseName)
The problem being that the databaseName variable is not replaced with its value.
Below is a screenshot of the relevant Azure SQL Database Deployment configuration.

AWS ECS get placement constraint after task creation

I am trying to create a CI build step that will stop and re-run my tasks when my Docker containers change.
The task definition itself points at the latest tag in ECR, so all I need to do is stop-task and then run-task.
Two of the parameters in the API as well as the UI are PlacementConstraints and PlacementStrategy.
Is there any way to get these from the API AFTER the task has been started, e.g. for a running task? describe-tasks doesn't seem to return this information.
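Since describe-tasks does not appear to expose them for a standalone task, one workaround is to keep the placement settings in the CI configuration itself and re-supply them on every run-task. A rough sketch of the stop-and-re-run step (cluster, family and placement values are placeholders):

# Find the running task for the family and stop it.
TASK_ARN=$(aws ecs list-tasks --cluster my-cluster --family my-task --query 'taskArns[0]' --output text)
aws ecs stop-task --cluster my-cluster --task "$TASK_ARN"

# Start a replacement, re-supplying the placement settings from CI config
# rather than reading them back from the stopped task.
aws ecs run-task --cluster my-cluster --task-definition my-task --count 1 \
  --placement-constraints type=distinctInstance \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone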