Best practice for deploying multiple stacks in CloudFormation using CodePipeline

I have a repository in CodeCommit with three branches: dev, stage, and prod. The repository versions multiple stacks, for example:
root/
--task-1
----template.yml
----src
------index.js
------package.json
--task-2
--task-3
--task-....
--buildspec.yml
Every folder contains a different template.yml and its src folder with the code for that specific Lambda function. The buildspec.yml contains the commands to enter every task folder, install the required node packages, and run the SAM or CloudFormation commands to create or update the stack.
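For illustration, a minimal Python sketch of the kind of per-folder deploy loop the buildspec drives; the folder pattern, stack naming convention, and SAM flags are assumptions, not the repository's exact commands:

```python
# deploy_all.py - sketch of the per-folder deploy loop described above.
# Assumes each task folder holds a template.yml plus a src/ directory,
# and that stacks are named after their folders (an assumption).
import subprocess
from pathlib import Path

ROOT = Path(__file__).parent

for task_dir in sorted(ROOT.glob("task-*")):
    src_dir = task_dir / "src"
    # Install the Lambda's node packages.
    subprocess.run(["npm", "install"], cwd=src_dir, check=True)
    # Build and deploy this folder's stack with SAM.
    subprocess.run(["sam", "build"], cwd=task_dir, check=True)
    subprocess.run(
        ["sam", "deploy",
         "--stack-name", task_dir.name,
         "--resolve-s3",
         "--capabilities", "CAPABILITY_IAM",
         "--no-confirm-changeset",
         "--no-fail-on-empty-changeset"],
        cwd=task_dir,
        check=True,
    )
```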
When a new commit is pushed to origin, the pipeline is triggered, all the commands in buildspec.yml run, and every stack is created or updated even when only one stack has changed in the code. Hence the question: is there a better way to handle multiple stacks in one repository and one pipeline?
One idea is to create one repository and one pipeline per stack, so that every stack is updated independently of the others, but then 20 stacks would require 20 repositories and 20 pipelines.
I would like to know the best practice for handling multiple stacks in the same repository with a single pipeline, avoiding deploying all the stacks when only one has been updated; in other words, updating only the stacks that changed in CodeCommit.

Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function to evaluate changes to the repository and run the appropriate pipeline.
This can be solved with a Lambda function and an EventBridge rule that fires when a commit happens; more details: https://aws.amazon.com/es/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/
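A minimal sketch of such a Lambda, assuming one pipeline per task folder and a naming convention like `<folder>-pipeline` (both assumptions). It diffs the pushed commit against its parent via the CodeCommit API and starts only the pipelines whose folders changed:

```python
# Sketch of a Lambda invoked by an EventBridge rule on
# "CodeCommit Repository State Change" (referenceUpdated) events.
import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")


def handler(event, context):
    detail = event["detail"]
    repo = detail["repositoryName"]
    commit_id = detail["commitId"]

    # Diff the pushed commit against its first parent.
    parents = codecommit.get_commit(
        repositoryName=repo, commitId=commit_id
    )["commit"]["parents"]
    if not parents:
        return  # initial commit, nothing to compare against

    # Pagination omitted for brevity.
    diff = codecommit.get_differences(
        repositoryName=repo,
        beforeCommitSpecifier=parents[0],
        afterCommitSpecifier=commit_id,
    )

    # Collect the top-level folders (task-1, task-2, ...) that changed.
    changed_folders = set()
    for d in diff["differences"]:
        blob = d.get("afterBlob") or d.get("beforeBlob")
        path = blob["path"]
        if "/" in path:
            changed_folders.add(path.split("/")[0])

    # Start one pipeline per changed folder (assumed naming convention).
    for folder in changed_folders:
        codepipeline.start_pipeline_execution(name=f"{folder}-pipeline")
```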

Related

Is there a way to deploy all the CloudFormation templates from a GitLab repository, using a GitLab pipeline, into a single stack in AWS?

I'm looking for a way to pick up all the templates from the repository without hardcoding the YAML template files, so that if new templates are added in the future the pipeline automatically picks them all up, deploys them, and creates a single stack in the AWS environment, without any modification to the gitlab-ci.yml/pipeline file.
I tried using the deploy CLI command: it deploys all the templates, but each deploy then updates the same stack and starts deleting the previous resources one by one, so only the resources of the last template remain after the pipeline execution completes.
Is there a way to do this?
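For what it's worth, one way to avoid hardcoding template names and still end up with a single stack is to generate a root template of nested stacks from whatever templates exist in the repo. A hedged sketch; the bucket name, file layout, and parameter-free child templates are all assumptions, and the child templates must already be available in S3 (e.g. uploaded by an earlier pipeline step or via `aws cloudformation package`):

```python
# Sketch: scan the repo for CloudFormation templates and emit a root
# template that nests them all, so a single deploy creates one stack.
import json
from pathlib import Path

TEMPLATE_BUCKET = "my-template-bucket"  # assumption: templates synced here
REPO_ROOT = Path(".")

resources = {}
for tpl in sorted(REPO_ROOT.glob("**/template.yml")):
    # Derive a stable, alphanumeric logical ID from the template path.
    logical_id = "".join(ch for ch in tpl.as_posix().title() if ch.isalnum())
    resources[logical_id] = {
        "Type": "AWS::CloudFormation::Stack",
        "Properties": {
            "TemplateURL": (
                f"https://{TEMPLATE_BUCKET}.s3.amazonaws.com/{tpl.as_posix()}"
            )
        },
    }

root_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": resources,
}

# The pipeline then deploys root.json once, e.g.:
#   aws cloudformation deploy --template-file root.json --stack-name all-tasks
Path("root.json").write_text(json.dumps(root_template, indent=2))
```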

Release pipeline using several build pipelines?

I'm facing the following issue: I have one git repo with a Node.js application. The application is divided into several components, namely: server, client, microserviceA, and microserviceB. There is also a directory named shared with some shared code used by all the other components.
I have a pipeline for each of the components that only runs tests, triggered on pull requests to master. Each pipeline only runs when the PR contains changes relevant to it, e.g. server-ci runs only when there were changes in the server component, etc.
Now, on merge to master, I would like to build the components and deploy them to a staging server. Currently what I have is as follows: for each component (besides shared) I have another build pipeline (<component>-build) which, on merge to master, builds the corresponding component (again depending on the changes made, as above). I have one Release pipeline that takes all these build pipelines as artifacts and deploys them to the staging server. The good thing about this is that a merge to master that includes only changes in client will build only client and not all the rest of the components.
The problem is that a merge to master containing changes to several components will start more than one build pipeline, and each of them will trigger the Release pipeline. This is not desirable.
A possible solution I thought about was to have only one build pipeline that runs on merge to master, but then I'd have to build ALL the components on every merge, which is inefficient.
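That trade-off may be softer than it looks: a single master-merge pipeline can still skip unchanged components if it works out the affected components itself. A rough sketch, assuming the component directory names below and that the build agent has enough git history to diff against the previous commit (both assumptions):

```python
# Sketch: derive the set of components to build from the files changed
# by the merge commit, so one pipeline can still build selectively.
import subprocess

COMPONENTS = {"server", "client", "microserviceA", "microserviceB"}


def changed_components() -> set[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    changed = {path.split("/")[0] for path in out.splitlines() if "/" in path}
    # A change in shared/ affects everything.
    if "shared" in changed:
        return set(COMPONENTS)
    return changed & COMPONENTS


if __name__ == "__main__":
    for component in sorted(changed_components()):
        print(component)  # feed this list into per-component build steps
```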
What is the best way to deal with such a situation?
In the release stage settings you can configure the number of parallel deployments to be 1.

"Templates in other repositories may not specify a repository source when referencing another template" when trying advanced pipeline

I am trying to find a solution for a rather complicated Azure DevOps build pipeline.
I have 3 repositories (let's call them RepoA, RepoB and RepoC), each of which contains templates for jobs that need to run for builds within that repository.
I also have a Build repository which contains templates for tasks that are used by the jobs in all 3 of the main repositories.
However, I have run into a major problem when trying to build a unified pipeline that can run jobs across the different repositories. While I can implement a pipeline with repository resources pointing to RepoA, RepoB and RepoC, when I add a template job from any of those repos using the templateJob.yaml#RepoA syntax, I receive an error due to the commonTaskTemplate.yaml#build reference within the job template in RepoA.
Templates in other repositories may not specify a repository source when referencing another template.
Is there any clean workaround for this issue? I could implement the task templates as custom tasks in an Azure DevOps extension, but that would make it much harder to change those shared tasks. I could also re-implement the shared tasks in every branch of every repo, but that makes a maintenance nightmare, as the tasks would be nearly impossible to update.
Every alternative I can come up with makes it quite difficult to update the common tasks, and it seems like there should be an easier way to do this; I can't imagine that needing common tasks across repositories, together with a common pipeline that builds across repositories, is that uncommon.
Any other ideas on how to work around this limitation?

CloudFormation - ECS service. How to manage pipeline-deployed image updates without stack conflicts

I'm attempting to write a CloudFormation template to fully define all the resources required for an ECS service, including...
CodeCommit repository for the Node.js code
CodePipeline to manage builds
ECR Repository
ECS Task Definition
ECS Service
ALB Target Group
ALB Listener Rule
I've managed to get all of this working. The stack builds fine. However, I'm not sure how to correctly handle updates.
The container in the task definition in the template requires an image to be defined. However, the actual application image won't exist until after the code is first built by the pipeline.
I had an idea that I might be able to work around this issue by defining some kind of placeholder image, "amazon/amazon-ecs-sample" for example, just to allow the stack to build. This image would then be replaced by CodeBuild when the pipeline first runs.
This part also works fine.
The issues occur when I attempt to update the task definition in the CloudFormation template, for example by adding environment variables. When I re-run the stack, it replaces my application image in the container definition with the original placeholder image from the template.
This is logical enough, as CloudFormation obviously assumes the image in the template is the correct one to use.
I'm just trying to figure out the best way to handle this.
Essentially I'd like to find some way to tell CloudFormation to just use whatever image is defined in the most recent revision of the task definition when creating new revisions, rather than replacing it with the original template property.
Is what I'm trying to do actually possible with pure CloudFormation, or will I need to use a custom resource or something similar?
Ideally I'd like to keep extra stack dependencies to a minimum.
One possibility I had thought of would be to use a fixed tag for the container definition image, which won't actually exist when the CloudFormation stack first builds, but which will exist after the first CodePipeline build.
For example
image: [my_ecr_base_uri]/[my_app_name]:latest
I can then have my pipeline push a new revision with this tag. However, I prefer to define task definition revisions with specific version tags, like so ...
image: [my_ecr_base_uri]/[my_app_name]:v1.0.1-[git-sha]
... as this makes it very easy to see exactly what version of the application is currently running, and to revert revisions easily if needed.
Your problem is that you're putting too many things into this CloudFormation template. Your template could include the CodeCommit repository and the CodePipeline, but the other things should be outputs of your pipeline. Remember: your pipeline will have a build and a deploy stage. The build stage can "build" another CloudFormation template that is executed in the deploy stage. During this deploy stage, your pipeline will construct the ECS service, task definition, ALB, etc.
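A small sketch of the build-stage half of that idea, assuming the service template takes an `ImageUri` parameter and the deploy stage is a CodePipeline CloudFormation action fed by a template configuration file (the file name, parameter name, and most environment variables are assumptions):

```python
# Sketch: in the build stage, after pushing the image to ECR, write the
# template configuration file that the CloudFormation deploy action will
# read, so the task definition always gets the freshly built tag.
import json
import os

ecr_base_uri = os.environ["ECR_BASE_URI"]   # assumed, e.g. 123456789012.dkr.ecr.eu-west-1.amazonaws.com
app_name = os.environ["APP_NAME"]           # assumed
version = os.environ["APP_VERSION"]         # assumed, e.g. v1.0.1
git_sha = os.environ["CODEBUILD_RESOLVED_SOURCE_VERSION"][:7]  # provided by CodeBuild

image_uri = f"{ecr_base_uri}/{app_name}:{version}-{git_sha}"

config = {
    "Parameters": {
        # Consumed by the service template's task definition.
        "ImageUri": image_uri,
    }
}

# Emitted as a build artifact and referenced by the deploy action's
# "template configuration" setting.
with open("service-config.json", "w") as f:
    json.dump(config, f, indent=2)

print(f"Wrote service-config.json with image {image_uri}")
```

This keeps the versioned `v1.0.1-[git-sha]` tag convention from the question: the tag is computed once per build and flows into every new stack update, so CloudFormation never falls back to the placeholder image.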

How to set up 2 different Jenkins jobs linked with 2 different repos in one Jenkins installation?

I have Job1, which is linked to a GitHub repo; when I push code, it builds in its own workspace (space1).
I want to add a second job (Job2) that will be linked with a different GitHub repo and will build the code in a different workspace (space2).
Notice: 2 different jobs building different code from different repos (both master branches) in different workspaces.
Is it possible with vanilla Jenkins or will I need any extra plugin?
I have researched Pipeline (link1, link2) a little, but I'm trying to figure out whether it covers my use case.
EDIT:
I have set up the communication between the second job and GitHub, but for the build to succeed it needs an SSH key, and Jenkins provides only one slot for configuring the SSH key.
Also, I have added a second workspace.