How to handle ECS deploys in CodePipeline for changes in the task definition

I am deploying an ECS Fargate task with two containers: 1 reverse proxy nginx and 1 python server. For each I have an ECR repository, and I have a CI/CD CodePipeline set up with
CodeCommit -> CodeBuild -> CodeDeploy
This flow works fine for simple code changes. But what if I want to add another container? I can certainly update my buildspec.yml to add the build of the new container, but I also need to 1) update my task definition, and 2) assign this task definition to my service.
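Concretely, what I have in mind for the CLI approach is something like this in my buildspec.yml (just a sketch; the cluster, service and family names are placeholders):

phases:
  post_build:
    commands:
      # register a new task definition revision from a JSON file kept in the repo
      - aws ecs register-task-definition --cli-input-json file://taskdef.json
      # point the service at the newest ACTIVE revision of that family
      - aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family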
Questions:
1) If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
2) This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the latest task revision, diff it against the JSON in the CodeCommit version, and only update if there is a difference?
Thanks!

CodePipeline's ECS job worker copies the task definition and updates the image and image tag for the container specified in the 'imagedefinitions.json' file, then updates the ECS service with this new task definition. The job worker cannot add a new container to the task definition.
If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
I don't think so. Is there a CloudWatch event rule that triggers CodeDeploy in such a fashion?
This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
The ECS deploy job worker creates a new task definition revision every time a deployment occurs, so given that this is the official behaviour, I wouldn't consider it bad as such.
I would question why you need to add new containers to your task definition at deploy time. In general your task definition should be modified less often, and only the image:tag in it should be modified regularly - something the ECS Deploy action helps you achieve.
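For reference, the usual pattern with the ECS deploy action is to have CodeBuild emit only an imagedefinitions.json artifact with the new image URIs, along these lines (a sketch, assuming the task definition has containers named nginx and server, and that $NGINX_URI/$SERVER_URI are set earlier in the build):

phases:
  post_build:
    commands:
      # one entry per container name in the task definition, pointing at the freshly pushed images
      - printf '[{"name":"nginx","imageUri":"%s"},{"name":"server","imageUri":"%s"}]' "$NGINX_URI" "$SERVER_URI" > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json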

Related

How to add a new container to an already existing task definition using CloudFormation?

I have set up CI/CD using CodePipeline and CodeBuild to deploy my application to an ECS Fargate container. This is the setup for the frontend of my app.
For the backend I have another, identical CI/CD pipeline. But this time, I want to deploy my backend to the same ECS Fargate container using CloudFormation. I know I have to update the task definition.
How can I update the already existing task definition that we have used for the frontend so that it also creates the backend container of my app, without affecting the frontend container?
Is there any workaround for this?
You can't do that. Task definitions are immutable. You can only create a new revision of a task definition and deploy the new revision. You can't change an existing revision. From the docs:
A task definition revision is a copy of the current task definition with the new parameter values replacing the existing ones. All parameters that you do not modify are in the new revision.
To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.
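If the task definition lives in CloudFormation, adding the backend container is just an edit to the same AWS::ECS::TaskDefinition resource; updating the stack registers a new revision (existing revisions stay untouched) and the service can then be pointed at it. A minimal sketch, with placeholder names, images and ports:

Resources:
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "512"
      Memory: "1024"
      ExecutionRoleArn: arn:aws:iam::111111111111:role/ecsTaskExecutionRole   # placeholder
      ContainerDefinitions:
        - Name: frontend
          Image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
          PortMappings:
            - ContainerPort: 80
        - Name: backend            # the newly added container, deployed as a new revision
          Image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/backend:latest
          PortMappings:
            - ContainerPort: 8080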

Should I update ecs service through CloudFormation or ecs directly

I want to create a CI/CD pipeline for deploying micro-services using AWS ECS.
Everything is fine up to the point where the new image is uploaded to ECR (new code being committed triggers a Docker image build, which is then pushed to ECR).
The next step is that I need to update the service with the new Docker image, and I have two options:
Update CloudFormation for ECS (which means designing one stack containing only the ECS infrastructure for each micro-service)
Update the ECS service directly via the update-service CLI
Which approach should I choose?
Updated:
At first I preferred option 1, as it has advantages like:
Rollback if the deployment fails
Avoiding dirty data (compared with updating the resource directly)
But the thing I'm concerned about is that one stack per ECS micro-service will create many stacks. Does this make the stacks too hard to manage?
Thanks all!
If you are using IaC such as CDK or CFN to manage resources, then it is always suggested to make updates to those resources via IaC. Making updates directly to the resources could cause your stack to drift and cause you trouble in the long term.
The best practice is to always use CloudFormation or CDK.
You can see version history to track changes. You can do auto rollbacks if there are any deployment issues.
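One way to keep option 1 manageable is to pass the image tag into the stack as a parameter, so a deploy is just a stack update with new parameter values rather than a template change. A rough sketch with placeholder names:

Parameters:
  ImageTag:
    Type: String
    Default: latest

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-service
      ContainerDefinitions:
        - Name: app
          Memory: 512
          Image: !Sub "111111111111.dkr.ecr.us-east-1.amazonaws.com/my-service:${ImageTag}"

The pipeline (or a CLI step) would then run something like aws cloudformation deploy --stack-name my-service --template-file template.yml --parameter-overrides ImageTag=<new tag>, and CloudFormation takes care of the new task definition revision, the service update, and rollback on failure.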

ADF deployment without making changes on trigger status

I am doing CI/CD on Azure Data Factory.
I do have a DEV instance and PROD instance of Azure Data Factory.
The deployment process is going smooth except one problem with the triggers.
I have around 20 triggers, of which 15 are in a running state and 5 have been stopped for a while in PROD.
Since the DEV ADF is a development instance and I do not want any of the triggers to run on a schedule there, the status of all triggers in DEV is set to stopped.
Currently the trigger status is changed during the DEV-to-PROD deployment by replacing the string '"runtimeState": "Stopped"' with '"runtimeState": "Started"' in the ARM template JSON file.
But this will start all of the triggers in production after the deployment, including the 5 triggers that are meant to stay stopped.
Is there any way to leave the trigger status untouched at deployment time in PROD, and only add newly created triggers to PROD without touching the existing triggers there?
You can add an Azure PowerShell Task to do this activity.
You can write a PowerShell script to Start/Stop trigger and place it in relevant pipeline.
In the case below, I have stopped all triggers before deployment; you can add a pre- or post-deployment step via PowerShell to achieve this.
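For example, a pre-deployment step along these lines (just a sketch; the service connection, resource group and factory names are placeholders):

steps:
- task: AzurePowerShell@5
  displayName: Stop ADF triggers before deployment
  inputs:
    azureSubscription: my-service-connection
    azurePowerShellVersion: LatestVersion
    ScriptType: InlineScript
    Inline: |
      # stop only the triggers that are currently running, so the same list can be restarted afterwards
      $triggers = Get-AzDataFactoryV2Trigger -ResourceGroupName "rg-prod" -DataFactoryName "adf-prod"
      $triggers | Where-Object { $_.RuntimeState -eq "Started" } | ForEach-Object {
        Stop-AzDataFactoryV2Trigger -ResourceGroupName "rg-prod" -DataFactoryName "adf-prod" -Name $_.Name -Force
      }

A matching post-deployment step would call Start-AzDataFactoryV2Trigger for that same list, leaving the triggers that were already stopped in PROD untouched.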

Specify order of pipelines and dependencies

I'm having a hard time getting a grasp on this to be honest.
Right now my lab project is as follows:
PR to master -> Triggers Pre-Build Pipeline as condition to merge the code ->
On merge Infrastructure pipe runs only if any changes happen in my Infrastructure folder ->
On merge I want to run my deploy pipeline to deploy my web app to Azure.
The pipes in question do the things they ought to, i.e.
Pre build builds, publishes artifact, runs Unit tests, validates ARM templates.
Infra pipe deploys the necessary infra for my web app such as ResourceGroup, App plan, app service, key vault.
Deploy pipe downloads the artifact produced in the pre-build pipe, deploys it to a staging slot and swaps it to the production slot.
What I can't seem to get to work is the pipeline chaining through dependencies, if changes happen to both infra and web app code in master I want the infra pipe to run first and the deploy pipe only if it succeeds.
If I merge only app code I want only the deploy pipe to run regardless if the infra pipe ran or not.
If I merge only infra code I want only the infra pipe to run.
If I merge both app and infra code I want both infra and deploy pipe to run in specific order.
I feel this shouldn't be all that hard to accomplish, but I've spent way too much time trying to solve this to no avail, anyone able to help? :)
Edit:
Hey, sorry @HughLin-MSFT, I've been trying to work around this a bit since we're trying to avoid running scripts left and right. :)
I saw you have Build Queuing planned in an upcoming release so for now I think we might have to wait for that.
If I were to merge my deploy and infra pipe, can I use:
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*
At stage level and somehow skip a stage instead?
I've seen multiple articles mention "Continue if skipped", but I can't find any information on how to actually skip a stage.
For the first and second cases, you just need to set Path filters in Triggers, so that the pipeline only triggers when a file at the specified path is changed. Please refer to this.
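In other words, each pipeline keeps its own trigger with a path filter, roughly like this (a sketch; the folder layout is an assumption):

# Infra pipe: runs only when something under Infrastructure/ changes
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*

# Deploy pipe: runs only when app code outside Infrastructure/ changes
trigger:
  branches:
    include:
    - master
  paths:
    exclude:
    - Infrastructure/*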
For the third case, you can try to add two agent jobs in the infra pipe, add the Trigger Azure DevOps Pipeline task to the second agent job to trigger the deploy pipe, and then set Only when all previous jobs have succeeded in the Run this job drop-down box for job2. In addition, you need to add a PowerShell task before the Trigger Azure DevOps Pipeline task, and use a script to detect whether there is app code: run job2 if there is, and cancel job2 if not.
Update:
First, you can create a new pipeline and create a variable: changedcode.
Use the Builds - Get REST API to get the commit, then get the changed code folder with the Commits - Get Changes REST API.
Assign the changed code folder name as the value of the changedcode variable.
Set custom conditions for the agent jobs. In the Infra job, if the changedcode variable value is Infra, run the Infra job. In the Infra job, use the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task to trigger the Infra pipeline. The same is true for the Deploy job; the only difference is the custom condition expression.
Here is a sample structure in yaml:
variables:
  changedcode: ""

jobs:
- job:
  steps:
  - powershell: |
      # Get the changed code folder with the rest api
- job: Infra
  condition: contains(variables['changedcode'], 'Infra')
  steps:
  - powershell: |
      # queue Infra pipeline with the rest api or the Trigger Azure DevOps Pipeline task
- job: Deploy
  condition: and(contains(variables['changedcode'], 'deploy'), ....)
  steps:
  - powershell: |
      # queue Deploy pipeline with the rest api or the Trigger Azure DevOps Pipeline task
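Note that for a condition in one job to see a value set by an earlier job, the variable has to be published as an output variable and read through the dependencies syntax; a sketch of that wiring (step and folder names are assumptions):

jobs:
- job: GetChanges
  steps:
  - powershell: |
      # hypothetical detection logic; $changed would come from the Commits - Get Changes REST API
      $changed = "Infrastructure"
      Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]$changed"
    name: detect
- job: Infra
  dependsOn: GetChanges
  condition: contains(dependencies.GetChanges.outputs['detect.changedcode'], 'Infrastructure')
  steps:
  - powershell: Write-Host "queue the Infra pipeline here"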

AWS ECS get placement constraint after task creation

I am trying to create a CI build step that will stop and re-run my tasks when my Docker containers change.
The definition itself points at the latest tag in ECR, so all I need is to stop-task and then run-task.
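The flow I have in mind for the build step is roughly this (a sketch; the cluster and task definition names are placeholders, and the placement settings are exactly the part I'm missing):

phases:
  build:
    commands:
      # $RUNNING_TASK_ARN is a placeholder for the task that is currently running
      - aws ecs stop-task --cluster my-cluster --task "$RUNNING_TASK_ARN"
      # run-task accepts --placement-constraints and --placement-strategy,
      # but I need a way to read them back from the task that was just stopped
      - aws ecs run-task --cluster my-cluster --task-definition my-task-family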
Two of the parameters in the API as well as the UI are PlacementConstraints and PlacementStrategy.
Is there any way to get these from the API AFTER the task has been started? e.g. get them for a running task. describe-tasks doesn't seem to return this information.