CloudFormation - ECS service: how to manage pipeline-deployed image updates without stack conflicts

I'm attempting to write a CloudFormation template to fully define all resources required for an ECS service, including...
CodeCommit repository for the nodejs code
CodePipeline to manage builds
ECR Repository
ECS Task Definition
ECS Service
ALB Target Group
ALB Listener Rule
I've managed to get all of this working. The stack builds fine. However, I'm not sure how to correctly handle updates.
The container in the task definition in the template requires an image to be defined. However, the actual application image won't exist until after the code is first built by the pipeline.
I had an idea that I might be able to work around this issue by defining some kind of placeholder image ("amazon/amazon-ecs-sample", for example) just to allow the stack to build. This image would then be replaced by CodeBuild when the pipeline first runs.
This part also works fine.
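For illustration, the relevant part of the template looks something like this (an abridged sketch; the resource name, container name, and port are placeholders):

    TaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Family: my-app
        ContainerDefinitions:
          - Name: my-app
            # Placeholder image, only there so the stack can build before
            # the pipeline has pushed a real application image to ECR.
            Image: amazon/amazon-ecs-sample
            Memory: 512
            PortMappings:
              - ContainerPort: 80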
The issues occur when I attempt to update the task definition in the CloudFormation template, for example to add environment variables. When I re-run the stack, it replaces my application image in the container definition with the original placeholder image from the template.
This is logical enough, as CloudFormation obviously assumes the image in the template is the correct one to use.
I'm just trying to figure out the best way to handle this.
Essentially I'd like to find some way to tell CloudFormation to just use whatever image is defined in the most recent revision of the task definition when creating new revisions, rather than replacing it with the original template property.
Is what I'm trying to do actually possible with pure CloudFormation, or will I need to use a custom resource or something similar?
Ideally I'd like to keep extra stack dependencies to a minimum.
One possibility I had thought of would be to use a fixed tag for the container definition image, which won't actually exist when the CloudFormation stack first builds, but which will exist after the first CodePipeline build.
For example
image: [my_ecr_base_uri]/[my_app_name]:latest
I can then have my pipeline push a new revision with this tag. However, I prefer to define task definition revisions with specific version tags, like so...
image: [my_ecr_base_uri]/[my_app_name]:v1.0.1-[git-sha]
... as this makes it very easy to see exactly what version of the application is currently running, and to revert revisions easily if needed.
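For context, the pipeline's build step tags and pushes along these lines (a buildspec-style sketch; ECR_BASE_URI and APP_NAME are assumed environment variables):

    # Short commit SHA from the variable CodeBuild exposes for the build.
    GIT_SHA=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c1-7)
    IMAGE="$ECR_BASE_URI/$APP_NAME:v1.0.1-$GIT_SHA"
    # Log in to ECR, then build and push the versioned image.
    aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_BASE_URI"
    docker build -t "$IMAGE" .
    docker push "$IMAGE"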

Your problem is that you're putting too many things into this CloudFormation template. Your template could include the CodeCommit repository and the CodePipeline, but the other things should be outputs from your pipeline. Remember: your pipeline will have a build and a deploy stage. The build stage can "build" another CloudFormation template that is executed in the deploy stage. During this deploy stage, your pipeline will construct the ECS service, tasks, ALB, etc.
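A minimal sketch of that split, assuming the deploy stage is a CloudFormation action fed by the build artifacts (the file names, parameter name, and environment variables are all illustrative). The build stage pushes the image and writes its URI into a template configuration file; the deploy stage then creates or updates the service stack with that parameter, so the deployed image can never drift back to a placeholder:

    version: 0.2
    phases:
      build:
        commands:
          # ECR docker login omitted for brevity.
          - IMAGE_URI="$ECR_BASE_URI/$APP_NAME:v1.0.1-$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c1-7)"
          - docker build -t "$IMAGE_URI" .
          - docker push "$IMAGE_URI"
          # The pipeline's CloudFormation deploy action reads this file as its
          # template configuration and passes ImageUri as a stack parameter.
          - printf '{"Parameters":{"ImageUri":"%s"}}' "$IMAGE_URI" > template-config.json
    artifacts:
      files:
        - service-template.yml
        - template-config.json

The service-template.yml would declare ImageUri as a parameter and reference it in the task definition's container image, so every deploy uses exactly the image the pipeline just built.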

Related

Is there a way to deploy all the CloudFormation templates from a GitLab repository, using a GitLab pipeline, into a single stack in AWS?

I'm looking for an option to pick up all the templates from the repository without hardcoding the YAML template file names, so that if new templates are added in the future, the pipeline automatically picks them all up and deploys them as a single stack in the AWS environment, without any modification to the gitlab-ci.yml/pipeline file.
I tried using the deploy CLI command. It deploys all the templates, but each deploy then updates the stack and deletes the previous resources one by one, so only the last template's resources exist after the pipeline execution completes.
Is there an option to do this?
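If it helps, the behaviour described matches a loop like this (a guess at the setup; the stack name is illustrative), where every template is deployed to the same stack name, so each deploy rewrites the whole stack to match only the current template:

    deploy:
      script:
        - |
          for t in templates/*.yml; do
            # Each iteration replaces the stack's contents with this one
            # template, deleting the resources from the previous iteration.
            aws cloudformation deploy --template-file "$t" --stack-name my-single-stack
          done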

Best practice to deploy multiple stacks in CloudFormation using CodePipeline

I have a repository in CodeCommit with 3 branches: dev, stage, and prod. In this repository there are multiple versioned stacks, for example:
root/
--task-1
----template.yml
----src
------index.js
------package.json
--task-2
--task-3
--task-....
--buildspec.yml
Every folder contains a different template.yml and a src folder with the code for that specific Lambda. The buildspec.yml contains the commands to enter every task folder, install the required Node packages, and run the SAM or CloudFormation commands to create or update the stack (see the sketch below).
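As a sketch, that buildspec looks something like this (simplified; packaging the Lambda code for upload is omitted):

    version: 0.2
    phases:
      build:
        commands:
          - |
            # Every stack is deployed on every push, whether it changed or not.
            for dir in task-*/; do
              stack="${dir%/}"
              (cd "$dir/src" && npm install --production)
              aws cloudformation deploy \
                --template-file "$dir/template.yml" \
                --stack-name "$stack" \
                --capabilities CAPABILITY_IAM
            done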
When a new commit is pushed to origin, this triggers the pipeline, executes all the commands in buildspec.yml, and creates/updates all the stacks, even when only one stack has changed in the code. Hence the question: are there better solutions for handling multiple stacks in one repository with one pipeline?
One idea is to create one repository and one pipeline per stack; this way every stack is updated independently of the others. But then, if there are 20 stacks, 20 repositories and 20 pipelines are required.
I would like to know the best practice for handling multiple stacks in the same repository with one pipeline, avoiding deploying all the stacks when just one has been updated; in other words, updating only the stacks that changed in CodeCommit.
Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function to evaluate changes to the repository and run the appropriate pipeline.
This can be done using a Lambda function and EventBridge when a commit happens; more details: https://aws.amazon.com/es/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/
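A minimal sketch of such a Lambda in Python with boto3, assuming an EventBridge rule on CodeCommit referenceUpdated events for pushes to an existing branch, and a hand-maintained folder-to-pipeline mapping (both assumptions):

    import boto3

    codecommit = boto3.client("codecommit")
    codepipeline = boto3.client("codepipeline")

    # Hypothetical mapping from top-level folder to the pipeline deploying it.
    PIPELINES = {
        "task-1": "task-1-pipeline",
        "task-2": "task-2-pipeline",
    }

    def handler(event, context):
        detail = event["detail"]
        # Diff the pushed commit against its predecessor to find changed paths.
        # (A full implementation would paginate over get_differences.)
        diffs = codecommit.get_differences(
            repositoryName=detail["repositoryName"],
            beforeCommitSpecifier=detail["oldCommitId"],
            afterCommitSpecifier=detail["commitId"],
        )["differences"]
        changed_folders = {
            (d.get("afterBlob") or d.get("beforeBlob"))["path"].split("/")[0]
            for d in diffs
        }
        # Start only the pipelines whose folder actually changed.
        for folder in changed_folders:
            if folder in PIPELINES:
                codepipeline.start_pipeline_execution(name=PIPELINES[folder])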

Should an ARM template be run on every deployment?

When using a template to deploy infrastructure, is it expected to run your ARM template on every deployment, or are you supposed to run the ARM template once to set up the infrastructure, and create another pipeline that deploys to the infrastructure that was set up by ARM?
Run ARM -> once, deploy build artifacts -> repeat
Run ARM then deploy build artifacts -> repeat
Depends how you want to set up your test environments. In my system I deploy each branch to a new test environment, instead of using a single resource instance as the "test" instance and deploying to that. So I do run ARM template deployments as part of the deployment pipeline. I place the deployment scripts and ARM templates for a microservice in the same repository as the code. This gives the coherence I am looking for, as infra, backend, and frontend all live together in one repository for a microservice.
Wanted to throw in an opinion for the other side. I highly recommend rerunning your ARM infrastructure deployments every release, or at least setting up a scheduled deployment. The reason being: yes, it may take a little more time, or a few extra minutes depending on your resources. However, in larger organizations, and in particular in lower environments where developers or others may have at least Contributor access, there is the risk of drift. By rerunning the ARM templates for each deployment you guarantee the state matches your template, without having to add or set up any policy logic.
Plus, I'd say it's the ultimate confidence in your infrastructure as code: you are 100% confident your template is rerunnable.
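As a sketch, rerunning the template is just one extra step ahead of the artifact deployment in the release pipeline (Azure Pipelines YAML; the service connection, resource group, and paths are illustrative):

    steps:
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: Deploy ARM template on every release
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'my-service-connection'
        subscriptionId: '$(subscriptionId)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: 'my-rg'
        location: 'West Europe'
        templateLocation: 'Linked artifact'
        csmFile: 'infra/azuredeploy.json'
        # Incremental re-asserts everything in the template; Complete would
        # additionally delete resources that have drifted in.
        deploymentMode: 'Incremental'
    - script: echo "deploy build artifacts to the provisioned resources here"
      displayName: Deploy build artifacts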
Well, there is no single answer to this one, but in my book it doesn't make sense to run the ARM template if there are no changes to it. You should have a separate repo for IaC code, or a separate build for the ARM template.
From my point of view, whether to re-run the ARM template depends on whether your project's infrastructure and configuration have been updated.
If the structure and configuration of the project you build have not changed, you do not need to run the ARM template again; you can deploy the build artifacts directly to the same resources.
On the other hand, if your project requires new resources or parameters, you can update or create resources by editing the template configuration file (generally a JSON file). This allows the deployed environment to meet the needs of your project.
In short, there is no absolute answer to this topic; it depends only on your needs.

Updating ECS task definition image using Codeship

I'm trying to determine the best way to generate a new AWS ECS task definition JSON file, or update an existing one, using Codeship's codeship/aws-deployment image.
I don't want to rely on the implicit :latest tag within my task definition and wish to use the custom image tag generated in a previous step which pushes to AWS ECR.
Is a custom bash or python script to pull the current task definition, increment the version and replace the image string the only way to accomplish this or is there an existing CLI tool I'm glossing over?
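For reference, the script approach described would be something like this with the AWS CLI and jq (the task family and image variable are placeholders; registering the definition bumps the revision automatically):

    NEW_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:$CI_COMMIT_ID"
    # Fetch the current task definition, swap in the new image, strip the
    # read-only fields that register-task-definition rejects, and register
    # the result as the next revision.
    aws ecs describe-task-definition --task-definition my-app \
        --query taskDefinition |
      jq --arg IMG "$NEW_IMAGE" '
        .containerDefinitions[0].image = $IMG
        | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
              .compatibilities, .registeredAt, .registeredBy)' \
      > new-task-def.json
    aws ecs register-task-definition --cli-input-json file://new-task-def.json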

Unreleasing (undeploying) applications in VSTS?

I have a project with N git repos, each representing a static website (N varies). For every git repo there exists a build definition that creates an nginx Docker image on Azure Container Registry. These N build definitions are linked to N release definitions that deploy each image to k8s (also on Azure). Overall, CI/CD works fine, and after the releases have succeeded for the first time, I see a list of environments, each representing a website that is now online.
What I cannot do with VSTS CI/CD, though, is declare how these environments are torn down. In GitLab CI (which I used before), there exists a concept of stopping an environment, and although this is just a stage in .gitlab-ci.yml, running it literally removes an environment from the list of deployed ones.
Stopping an environment can be useful when deleting autodeployable feature branches (aka Review Apps). In my case, I'd like to do this when an already shared static website needs to be removed.
VSTS does not seem to have a concept of unreleasing something that has already been released, and I'm wondering what the best workaround could be. I have tried these two options so far:
Create N new release definition pipelines, which call kubectl delete ... for the corresponding static website. That does not make things clear at all, because an environment called k8s prod (website-42) in one pipeline is not the same one as in another (otherwise, I could see whether web → cloud or web × cloud was called last).
Define a new environment called production (delete) in the same release definition and trigger it manually.
In this case 'deploy' sits a bit closer to 'undeploy', but it's hard to figure out what ran last (in the example above, you can only guess that re-releasing my k8s resources happened after I deleted them; you need to look at the time on the cards, which is a pretty poor indication).
What else could work for deleting / undeploying released applications?
VSTS does not have a "stop environment" feature (auto-deleting the deployed things in the environment) in Release Management. But you can achieve the same thing with a VSTS YAML build.
So besides the two workarounds you shared, you can also stop the environment with a VSTS YAML build (similar to the mechanism in GitLab).
For a YAML CI build, you just need to commit a file ending with .vsts-ci.yml, and in that file you can specify the tasks to delete the deployed app.
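For example, a .vsts-ci.yml along these lines could serve as the teardown (the manifest path is illustrative, and kubectl is assumed to be configured for the target cluster):

    queue: 'Hosted Linux Preview'
    steps:
    - script: kubectl delete -f k8s/website-42.yml
      displayName: Tear down the deployed website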