We are using AWS Amplify for a React/GraphQL app with production and staging environments.
After failing to push new GraphQL fields, we found we have multiple stacks (some nested), some titled "An auto-generated nested stack" and some in UPDATE_ROLLBACK_FAILED status (stage already exists in stack arn:aws:cloudformation).
We need to understand what created these stacks (I guess Amplify CLI code generation or push), how to identify the stack we need, and whether we can delete the others.
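For reference, a minimal sketch of how the stacks can be traced (assuming the default Amplify naming, where each environment's root stack is called something like amplify-<appname>-<envname>-<id>; the stack names below are placeholders): CloudFormation reports a nested stack's parent and root, which shows which Amplify environment it belongs to.
aws cloudformation describe-stacks --stack-name <nested-stack-name> \
  --query "Stacks[0].[RootId,ParentId,StackStatus]"
aws cloudformation list-stack-resources --stack-name amplify-<appname>-<envname>-<id>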
Related
I have a repository in CodeCommit with three branches: dev, stage, and prod. In this repository there are multiple stacks versioned, for example:
root/
--task-1
----template.yml
----src
------index.js
------package.json
--task-2
--task-3
--task-....
--buildspec.yml
Every task folder contains its own template.yml and an src folder with the code for that specific Lambda function; buildspec.yml contains the commands to enter each task folder, install the required Node packages, and run the SAM or CloudFormation commands to create or update the stack.
When a new commit is pushed to origin, it triggers the pipeline, which executes all the commands in buildspec.yml and creates/updates all the stacks, even when only one stack has changed in the code. Hence the question: are there better solutions for handling multiple stacks in one repository with one pipeline?
One idea is to create one repository and one pipeline for each stack, so that every stack is updated independently of the others, but then 20 stacks would require 20 repositories and 20 pipelines.
I would like to know the best practice for handling multiple stacks in the same repository with one pipeline while avoiding deploying all the stacks when just one stack has been updated, i.e. updating only the stacks that were changed in CodeCommit.
Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function to evaluate changes to the repository and run the appropriate pipeline.
It could be fixed using a Lambda function and an EventBridge rule triggered when a commit happens; more details: https://aws.amazon.com/es/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/
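A different, simpler option than the Lambda/EventBridge rule above (a sketch only, assuming the build clones the repo with enough history for a diff; the folder layout and deploy flags are placeholders) is to do the change detection inside buildspec.yml itself and only deploy the task folders that changed:
# Hypothetical build-phase snippet for buildspec.yml
CHANGED_DIRS=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)
for dir in $CHANGED_DIRS; do
  if [ -f "$dir/template.yml" ]; then
    echo "Deploying changed stack: $dir"
    (cd "$dir" && npm install --prefix src && \
      sam deploy --template-file template.yml --stack-name "$dir" \
        --capabilities CAPABILITY_IAM --resolve-s3)
  fi
done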
I would like to set up one GitHub repo which will hold all backend Google Cloud Functions. But I have two questions:
How can I set it up so that GCP knows that there are multiple Cloud Functions to be deployed?
If I only change the code for one Cloud Function, how can I set things up so that GCP will only deploy the modified Cloud Function and NOT redeploy the unchanged ones?
When saving your Cloud Functions to a GitHub repo, just make sure your .gitignore file has the proper settings: exclude node_modules and other folders and files you don't want to commit.
Usually all Cloud Functions are deployed through a single index.js file, so you need to make sure you have that file and all the files you import into it.
If you want to deploy a single function you can use this command:
firebase deploy --only functions:your_function_name
If you want a more structured solution, you can read this article to learn more about it.
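As a small usage note, the same --only flag accepts a comma-separated list, so a deploy step can be limited to exactly the functions that changed (the function names here are placeholders):
firebase deploy --only functions:funcA,functions:funcB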
As I understand it, you would like to store the Cloud Functions code in GitHub, and I assume you would like to trigger deployment on some action (push or pull request). At the same time, you would like only the modified Cloud Functions to be redeployed, leaving the others untouched.
Every Cloud Function in your set is (re)deployed separately; for each of them you might need to provide plenty of specific parameters, and those parameters differ between functions. Details of those parameters are described on the gcloud functions deploy command documentation page.
Most likely the --entry-point (the name of the Cloud Function as defined in the source code) will be different for each of them. Thus, you might have some kind of "loop" through all Cloud Functions for deployment with different parameters (including the name, entry point, etc.).
Such a "loop" (or "set") may be implemented using Cloud Build, Terraform, both of them together, or some other tool.
An example of how to deploy only modified Cloud Functions is provided in the "How can I deploy google cloud functions in CI/CD without redeploying unchanged" SO question/answer. That example can be extended to an arbitrary number of Cloud Functions. If you don't want to use Terraform, a similar mechanism (based on the same idea) can be implemented using pure Cloud Build.
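As an illustration of the "loop" idea (a sketch only; the function names, runtime, region, and trigger flags are assumptions that will differ per project):
# Hypothetical deploy loop; each function gets its own entry point and flags
for fn in funcA funcB funcC; do
  gcloud functions deploy "$fn" \
    --entry-point "$fn" \
    --runtime nodejs18 \
    --trigger-http \
    --region us-central1 \
    --source .
done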
I'm attempting to write a CloudFormation template to fully define all resources required for an ECS service, including...
CodeCommit repository for the nodejs code
CodePipeline to manage builds
ECR Repository
ECS Task Definition
ECS Service
ALB Target Group
ALB Listener Rule
I've managed to get all of this working. The stack builds fine. However, I'm not sure how to correctly handle updates.
The container in the task definition in the template requires an image to be defined. However, the actual application image won't exist until after the code is first built by the pipeline.
I had an idea that I might be able to work around this issue by defining some kind of placeholder image ("amazon/amazon-ecs-sample", for example) just to allow the stack to build. This image would be replaced by CodeBuild when the pipeline first runs.
This part also works fine.
The issues occur when I attempt to update the task definition in the CloudFormation template, for example by adding environment variables. When I re-run the stack, it replaces my application image in the container definition with the original placeholder image from the template.
This is logical enough, as CloudFormation obviously assumes the image in the template is the correct one to use.
I'm just trying to figure out the best way to handle this.
Essentially I'd like to find some way to tell CloudFormation to just use whatever image is defined in the most recent revision of the task definition when creating new revisions, rather than replacing it with the original template property.
Is what I'm trying to do actually possible with pure CloudFormation, or will I need to use a custom resource or something similar?
Ideally I'd like to keep extra stack dependencies to a minimum.
One possibility I had thought of would be to use a fixed tag for the container definition image, which won't actually exist when the CloudFormation stack first builds, but which will exist after the first CodePipeline build.
For example
image: [my_ecr_base_uri]/[my_app_name]:latest
I can then have my pipeline push a new revision with this tag. However, I prefer to define task definition revisions with specific version tags, like so ...
image: [my_ecr_base_uri]/[my_app_name]:v1.0.1-[git-sha]
... as this makes it very easy to see exactly what version of the application is currently running, and to revert revisions easily if needed.
Your problem is that you're putting too many things into this CloudFormation template. Your template could include the CodeCommit repository and the CodePipeline, but the other things should be outputs from your pipeline. Remember: your pipeline will have a build and a deploy stage. The build stage can "build" another CloudFormation template that is executed in the deploy stage. During this deploy stage, your pipeline will construct the ECS service, tasks, ALB, etc.
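A sketch of that split, assuming the deploy-stage template exposes an image parameter (the parameter name ImageUri, stack name, template file, and tag format below are placeholders rather than a confirmed setup): the build stage pushes the image and then drives the deploy with the concrete tag, so CloudFormation never falls back to the placeholder image.
# Hypothetical build-stage commands, run after the docker push
IMAGE_URI="<my_ecr_base_uri>/<my_app_name>:v1.0.1-$(git rev-parse --short HEAD)"
aws cloudformation deploy \
  --stack-name my-ecs-service \
  --template-file service-template.yml \
  --parameter-overrides ImageUri="$IMAGE_URI" \
  --capabilities CAPABILITY_IAM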
I need help to better understand how to create a complete CI/CD pipeline with Azure DevOps for APIM (Azure API Management). I have already explored the tools and read the docs:
https://github.com/Azure/azure-api-management-devops-resource-kit
But I still have questions. My scenario:
An APIM dev instance with APIs, operations, and other settings created, as well as an APIM prod instance that is created but empty.
I ran the extract tool and got the templates (not all of them; I still need the master/linked templates), and this is where my doubt sits. I also already have two repos (dev and prod).
How can I create the master template, and how will my changes from the dev environment be automatically applied to prod?
I didn't use the policyXMLBaseUrl parameter and I'm not sure what path to insert there, although it seems #miaojiang inserted a folder from Azure Storage.
After some searching and attempts I deployed APIs and operations from one environment to another, but we still don't have a fully automated scenario where I make a change in an instance and it is automatically available. It is necessary to edit policies and definitions directly in the repositories or run the extract tool again.
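For the "automatically applied to prod" part, a minimal sketch of what a release step could look like, assuming the extracted ARM templates are committed to the repo (the resource group, folder, template, and parameters file names below are placeholders):
# Hypothetical pipeline step deploying the extracted APIM templates to the prod instance
az deployment group create \
  --resource-group my-apim-prod-rg \
  --template-file apim-prod/master.template.json \
  --parameters @apim-prod/parameters.prod.json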
I'm using Bitbucket Pipelines to do CD for a Serverless app. I want to use as few "build minutes" as possible for each deployment. The lifecycle of the serverless deploy command, when using AWS as the backing, seems to be:
Push the package to CloudFormation. (~60 seconds)
Sit around watching the logs from CloudFormation until the deployment finishes. (~20-30 minutes)
Because of the huge time difference, I don't want to do step two. So my question is simple: how do I deploy a serverless app such that it only does step one and returns success or failure based on whether or not CloudFormation successfully accepted the new package?
I've looked at the docs for serverless deploy and I can't see any options to enable that. Also, there seem to be AWS specific options in the serverless deploy command already, so maybe this is an option that the serverless team will consider if there is no other way to do this.
N.B. As for "how will you know if CloudFormation fails?": I would rather set up notifications to come from CloudFormation directly. The build can just have the responsibility of pushing to CloudFormation.
I don't think you can do it with serverless deploy. You can try the serverless package command, which will store the package in the .serverless folder, or you can specify the path using --package. Packaging will create a CloudFormation template file, e.g. cloudformation-template-update-stack.json. You can then call the CreateStack API action to create the stack. It will return the stack ID without waiting for all the resources to be created.
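A rough sketch of that flow, under the assumption that the stack name, bucket, and artifact paths below are placeholders (for an already-existing stack the update-stack call applies, and the packaged function artifacts still have to be uploaded to the deployment bucket the template references):
serverless package --package ./build
# Upload the function artifact to the S3 location the template expects (bucket/key are placeholders)
aws s3 cp ./build/my-service.zip s3://<deployment-bucket>/<artifact-key>
# Returns the stack ID immediately, without waiting for resources to finish creating/updating
aws cloudformation update-stack \
  --stack-name my-service-dev \
  --template-body file://build/cloudformation-template-update-stack.json \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM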