CloudFormation "Submitted information does not contain changes" when updating task definition image version - aws-cloudformation

If my CloudFormation template is like this:
myServiceName:
  Type: "AWS::ECS::Service"
  Properties:
    ServiceName: "myServiceName"
    TaskDefinition: !Ref myTaskName
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.1"
And I update the task definition to 1.1.2:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"
Then running a CloudFormation update command gives me this error:
Submitted information does not contain changes.
Is it just not possible to update the task definition to point to a new image in ECR without changing the service?

All the documentation I've read says that this error comes up when you don't change any properties of your resources, so CloudFormation doesn't see any resource as changed and therefore won't redeploy.
But you are changing a property, and yet it's still happening, which is weird. I haven't been able to find any record of such behavior.
Debugging suggestion: try adding an arbitrary new property to your resource, e.g. a tag field, as sketched below. If the stack then updates successfully, it means the changed Image somehow doesn't trigger an update on its own, and the workaround would be to always change something else too. If it still doesn't update, then I suspect something is going wrong elsewhere in your process and you're not actually uploading your changed template at all.
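A minimal sketch of that workaround, assuming your task definition otherwise matches the template above; the DeployedAt tag is a hypothetical marker whose only job is to force a diff:
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"
    Tags:
      # Hypothetical marker; bump the value on every deploy so CloudFormation
      # always sees at least one changed property.
      - Key: DeployedAt
        Value: "2024-06-01"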

I found the following in the CloudFormation User Guide that may help.
Troubleshooting CloudFormation - No updates to perform
I encountered this issue when adding a DeletionPolicy attribute (which is an attribute, not a property, so it doesn't count as a change by itself). According to the documentation, adding or changing the Metadata of a resource will cause CloudFormation to accept certain changes.
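A sketch of what that looks like, reusing the task definition from the question; the Revision key under Metadata is hypothetical and exists only to be bumped alongside otherwise-ignored changes:
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  DeletionPolicy: Retain
  Metadata:
    # Hypothetical key; incrementing it gives CloudFormation a change to accept.
    Revision: 2
  Properties:
    ...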

Related

Custom resource outputs not found

I've written a custom resource in Go using cloudformation-cli-go-plugin; it fails when I try to use it in a stack with:
Unable to retrieve Guid attribute for MyCo::CloudFormation::Workloads, with error message NotFound guid not found.
The stack:
AWSTemplateFormatVersion: 2010-09-09
Description: Sample MyCo Workloads Template
Resources:
  Resource1:
    Type: 'MyCo::CloudFormation::Workloads'
    Properties:
      APIKey: ""
      AccountID: ""
      Workload: >-
        workload: {entityGuids: "", name: "CloudFormationTest-Create"}
Outputs:
  CustomResourceAttribute1:
    Value: !GetAtt Resource1.Guid
If I remove the Outputs stanza the stack runs successfully and I can see the created resource.
Running with SAM locally, I've verified that Guid is in fact always returned. FWIW, the resource passes all of the contract tests; Guid is the primaryIdentifier and is listed in readOnlyProperties.
I've tried several variations of the !GetAtt definition, all of which fail with schema errors, so it appears that CloudFormation is aware of the format of the resource's properties.
Suggestions and/or pointers would be appreciated.
The issue here is Read failing because CloudFormation behaves differently than the contract tests do. The contract tests do not follow the CloudFormation model rules; they are more permissive. There are a number of differences in how the contract tests and CloudFormation behave, so passing the contract tests does not guarantee CloudFormation compatibility. Two examples:
- The contract tests allow returning a temporary primaryIdentifier that can change between handler.InProgress and handler.Success.
- The contract tests pass the entire model to all events; CloudFormation only passes the primaryIdentifier to Read and Delete.
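To illustrate the second difference (a sketch of the input models, not the actual wire format; the Guid value is a placeholder), a Read handler that relies on any field other than the primaryIdentifier will pass the contract tests but break under CloudFormation:
# What the contract tests pass to Read: the full model
APIKey: ""
AccountID: ""
Workload: "..."
Guid: "example-guid"
# What CloudFormation passes to Read: only the primaryIdentifier
Guid: "example-guid"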

Does skip_deploy_on_missing_secrets work in static web app pipeline?

I would like to only build my static web app and not deploy it. I saw there is an env setting skip_deploy_on_missing_secrets, but after setting that in the pipeline it just gets ignored and the pipeline fails with an error saying the deployment token is not set. How exactly should I use this env setting? Does it actually work?
There's not much info on the internet about this parameter. However, at least the Dapr docs suggest that it should work, and I doubt they'd put it in their docs if it didn't (here).
However, I had problems getting it working as well.
One thing to notice there is that the Dapr docs actually show a GitHub Action, and those work a little differently than Azure CI/CD YAML pipelines, which I was using.
Finally I stumbled upon this comment on a similar issue on GitHub, which hints that this magic undocumented parameter should be passed as an environment variable. I was passing it as an input. Maybe GitHub Actions forwards these params to envs automatically?
So I tried setting it as ENV and it worked!
- task: AzureStaticWebApp@0
  inputs:
    app_location: ...blahblahblah
    ....
    #skip_deploy_on_missing_secrets: true
    # ABOVE: this one is documented in a few places, but it's expected to be an ENV var!
    # see https://github.com/Azure/static-web-apps/issues/679
  env:
    SKIP_DEPLOY_ON_MISSING_SECRETS: true

Updating a CloudFormation stack with a Cognito pool claims that we're adding attributes when we're not

Starting on Nov 7, 2018, we began getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well as the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with deployment. For now we are deploying without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of CloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd set up the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the Lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note, we're setting this trigger before the Lambda even exists. The trigger just needs an ARN, and CloudFormation doesn't seem to care that the function isn't there yet. Then we run sls deploy, which creates the actual Lambda function, and everything works fine.
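For reference, a sketch of the matching Serverless Framework definition; the service name (main) and handler path are assumptions, but the key point is that Serverless deploys functions under the name <service>-<stage>-<functionName>, which is exactly what the hard-coded ARN above relies on:
service: main  # assumed; produces main-<stage>-onCognitoCustomMessage
provider:
  name: aws
  runtime: nodejs18.x  # hypothetical runtime
functions:
  onCognitoCustomMessage:
    handler: handlers/cognito.customMessage  # hypothetical handler path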
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix the error? I don't actually know. CloudFormation seems to be fine with this modification but not okay with modifying the same file later in our process. But it works.

Concourse CI - S3 trigger not firing. How often does it check?

I've got a Concourse job that uses the appearance of a file in an Amazon S3 bucket as the trigger for a suite of tests, using this resource: https://github.com/concourse/s3-resource. The problem is, the job does not fire when the file appears. When I trigger the job manually, it does see the file and starts the test suite.
The YAML config looks like this:
resources:
- name: s3-trigger-file
  type: s3
  source:
    bucket: my-bucket-name
    regexp: qabot_request_(.*).json
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}

jobs:
- name: my-job
  public: true
  plan:
  - get: s3-trigger-file
    trigger: true
When I click on the trigger itself in the Concourse UI, I see what looks like a running monitor.
As I said, the job isn't firing when the file appears, but a manual trigger does verify the S3 input is found.
How can I debug why the automatic trigger isn't firing? Also, how much latency is expected for the s3 resource to detect a new file has appeared?
Concourse 3.4. Thanks!
The capturing group in your regexp must capture a semver-compliant version.
See the documentation:
The version extracted from this pattern is used to version the resource. Semantic versions, or just numbers, are supported. Accordingly, full regular expressions are supported, to specify the capture groups.
Your capturing group is currently making the captured "version" quote2, which is neither a semantic version nor a plain number, so the resource never detects any versions to trigger on. You should probably delete the pipeline and regenerate it with a modified regexp (e.g. qabot_request_quote(\d+).json) so that only the numeric part is captured.
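As for the latency question: by default Concourse polls each resource for new versions roughly once per minute; the interval can be tuned per resource with check_every. A sketch of the corrected resource with that default made explicit (other values as in your config):
- name: s3-trigger-file
  type: s3
  check_every: 1m  # default polling interval; adjust if you need faster or slower checks
  source:
    bucket: my-bucket-name
    regexp: qabot_request_quote(\d+).json  # capture only the number so it parses as a version
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}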

Why does Concourse `get` a resource after `put`ing it?

When I configure the following pipeline:
resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/concourse/static-golang
- name: my-image
  type: docker-image
  source:
    repository: concourse/static-golang
    username: {{username}}
    password: {{password}}

jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?
Every put implies a get of the version that was created. There are a few reasons for this:
The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts.
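For instance, a sketch of a plan where the implicit get matters; the smoke-test step is hypothetical, but it runs inside the very image the put just created:
jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
  - task: smoke-test   # hypothetical follow-up step
    image: my-image    # uses the version fetched by put's implicit get
    config:
      platform: linux
      run:
        path: echo
        args: ["image works"]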
There are some side benefits to doing this as well. For one, it immediately warms the cache on one worker, so it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect.
In this particular case, as it's the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away, since in most cases the side benefits make it worth avoiding the secondary question ("why do only SOME put steps imply a get?").
It also cannot be disabled, as we resist adding knobs that you'd want to turn one day and then have to go back and turn off once you actually need the default behavior again.
Docs: https://concourse-ci.org/put-step.html