Custom resource outputs not found - aws-cloudformation

I've written a custom resource in Go using cloudformation-cli-go-plugin. It fails when I try to use it in a stack, with:
Unable to retrieve Guid attribute for MyCo::CloudFormation::Workloads, with error message NotFound guid not found.
The stack:
AWSTemplateFormatVersion: 2010-09-09
Description: Sample MyCo Workloads Template
Resources:
  Resource1:
    Type: 'MyCo::CloudFormation::Workloads'
    Properties:
      APIKey: ""
      AccountID: ""
      Workload: >-
        workload: {entityGuids: "", name: "CloudFormationTest-Create"}
Outputs:
  CustomResourceAttribute1:
    Value: !GetAtt Resource1.Guid
If I remove the Outputs stanza the stack runs successfully and I can see the created resource.
Running with SAM locally, I've verified that Guid is in fact always returned. FWIW, the resource passes all of the contract tests, Guid is the primaryIdentifier, and it is listed in readOnlyProperties.
I've tried several variations of the !GetAtt definition, all of which fail with schema errors, so it appears CloudFormation is aware of the format of the resource's properties.
Suggestions and/or pointers would be appreciated.

The issue here is the Read handler failing because CloudFormation behaves differently than the contract tests. The contract tests do not follow the CloudFormation model rules; they are more permissive.
There are a number of differences in how the contract tests and CloudFormation behave, so passing the contract tests does not guarantee CloudFormation compatibility. Two examples:
The contract tests allow returning a temporary primaryIdentifier that can change between handler.InProgress and handler.Success.
The contract tests pass the entire model to all events, whereas CloudFormation only passes the primaryIdentifier to Read and Delete, as illustrated below.
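To make that second difference concrete, here is a rough sketch of the only state a Read handler can expect from CloudFormation in this scenario (the Guid value is hypothetical; every other property from the model is simply absent):

# Hypothetical desired state that CloudFormation hands to Read:
Guid: "0a1b2c3d-example"
# APIKey, AccountID, and Workload are NOT included, so Read must be able
# to rehydrate the full model from the Guid alone.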

Related

CloudFormation submitted information does not contain changes when updating task definition image version

If my CloudFormation template is like this:
myServiceName:
  Type: "AWS::ECS::Service"
  Properties:
    ServiceName: "myServiceName"
    TaskDefinition: !Ref myTaskName
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.1"
And I update the image tag in the task definition to 1.1.2:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"
Then trying to run a CloudFormation update command gives me this error:
Submitted information does not contain changes.
Is it just not possible to update the task definition to point to a new image in ECR without changing the service?
All the documentation I've read says that this error comes up when you don't change any properties of your resources, so CloudFormation doesn't see any resources as changed and therefore won't redeploy.
But you are changing a property, and yet it's still happening, which is weird. I haven't been able to find any record of such behavior.
Debugging suggestion: try adding an arbitrary new property to your resource, e.g. a tag field, as sketched below. If it updates successfully, it means that for some reason the changed Image doesn't trigger an update, and the fix would be to always change something else too. If it still doesn't update, then I suspect something is going wrong somewhere else in your process and you're not actually uploading your changed template at all.
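A minimal sketch of that experiment (the tag key and value are hypothetical; bump the value whenever you change the image so CloudFormation always sees a property change):

myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    Tags:
      # Hypothetical marker tag, changed on every deploy purely to force
      # CloudFormation to register an update alongside the new image tag.
      - Key: DeployStamp
        Value: "build-42"
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"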
I found the following in the CloudFormation User Guide that may help.
Troubleshooting CloudFormation - No updates to perform
I encountered this issue when adding a DeletionPolicy attribute (which is an attribute, not a property). According to the documentation, adding or changing metadata will cause CloudFormation to accept certain changes.

How to pass multiple secret nicknames in the ecs annotation

I am using the SCDF UI to launch pods on an ECS cluster, and I am passing all the required parameters from the task itself. My application uses two secrets, one for Oracle and one for Mongo.
I need to pass both secret nicknames in the job annotation.
I tried the approaches below, but none of them worked.
a. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: 'MONGO_NICKNAME,ORACLE_NICKNAME'
b. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: "MONGO_NICKNAME,ORACLE_NICKNAME"
c. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: 'MONGO_NICKNAME+ORACLE_NICKNAME'
Please suggest how to do this using an annotation.
I had the same issue, trying to fetch two different secrets, and I used the following annotation:
ecs.o2c.secretsnicknames: "${SECRET1}, ${SECRET2}"
It should work. You only need to define the variables in the parameters section, for example:
parameters:
  - name: SECRET1
    required: true
  - name: SECRET2
    required: true
For your example, provided you have defined MONGO_NICKNAME and ORACLE_NICKNAME in the parameters section, you should have the following syntax:
ecs.o2c.secretsnicknames: "${MONGO_NICKNAME},${ORACLE_NICKNAME}"
If you somehow inject the strings directly, you should have:
ecs.o2c.secretsnicknames: "MongoNicknameString, OracleNicknameString"
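For reference, this is roughly where the annotation ends up on the rendered Job object once the deployer property is applied (the surrounding manifest fields are illustrative, not taken from the question):

apiVersion: batch/v1
kind: Job
metadata:
  name: app-test  # hypothetical job name
  annotations:
    # The comma-separated nicknames land here as a single annotation value.
    ecs.o2c.secretsnicknames: "MONGO_NICKNAME,ORACLE_NICKNAME"
spec:
  template:
    spec:
      containers:
        - name: app-test  # hypothetical container
          image: app-test:latest
      restartPolicy: Never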

Access to output data from stack

I am creating a REST API using CloudFormation. In another CloudFormation stack, I would like to have access to values in the Outputs section (the invoke URL) of that CloudFormation template.
Is this possible, and if so, how?
You can export your outputs. Exporting makes them accessible to other stacks.
From the AWS Docs:
To export a stack's output value, use the Export field in the Outputs section of the stack's template. To import those values, use the Fn::ImportValue function in the template for the other stacks.
The following exports an API Gateway Id.
Description: API for interacting with API resources
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
Outputs:
  MyApiId:
    Description: 'Id of the API'
    Value: !Ref MyApi
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
  MyApiRootResourceId:
    Description: 'Id of the root resource on the API'
    Value: !GetAtt MyApi.RootResourceId
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource'
The Export piece of the Output is the important part here. If you provide the Export, then other stacks can consume it.
Now, in another file, I can import that MyApiId value by using the Fn::ImportValue intrinsic function, importing the exported name. I can also import its root resource and consume both of these values when creating a child API resource.
From the AWS Docs:
The intrinsic function Fn::ImportValue returns the value of an output exported by another stack. You typically use this function to create cross-stack references.
Description: Resource endpoints for interacting with the API
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource'}
      PathPart: foobar
      RestApiId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'}
These are two completely different .yaml files that can be deployed as two independent stacks, but now they depend on each other. If you try to delete the MyApi API Gateway stack before deleting the MyResource stack, the CloudFormation delete operation will fail. You must delete the dependencies first.
One thing to keep in mind is that in some cases you might want the flexibility to delete the root resource without worrying about dependencies. The delete operation could in some cases be done without any side effects. For instance, deleting an SNS topic won't break a Lambda; it just prevents it from running. There's no reason to delete the Lambda just to re-deploy a new SNS topic. In that scenario I use naming conventions and tie things together that way instead of using exports. For example, the above AWS::ApiGateway::Resource can be tied to an environment-specific API Gateway based on the naming convention.
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource'}
      PathPart: foobar
      RestApiId: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
With this, there's no need to worry about export/import as long as the last half of the resource name is the same across all environments. The environment can change via the TargetEnvironment parameter, so this can be reused across dev, test, and prod.
One caveat to this approach is that naming conventions only work when you want to access something that can be referenced by name. If you need a property, such as the RootResourceId in this example, or an EC2 instance size, an EBS volume size, etc., then you can't just use a naming convention; you'll need to export the value and import it. In the example above I could replace the RestApiId import usage with a naming convention, but I could not replace the ParentId with a convention; I had to perform an import.
I use a mix of both in my templates. You'll find when it makes sense to use one approach over the other as you build experience.

Concourse: What is the difference between "Resource Types" and "Resource"?

When developing a pipeline, I can't understand the difference between "Resource Types" and "Resources".
According to the documentation, the resource type is there only to provide the type of the resource and to check for tags, like in the example below:
---
resource_types:
- name: rss
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest

resources:
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php

jobs:
- name: announce
  plan:
  - get: booklit-releases
    trigger: true
Why do we need both of them? Isn't it enough just to use resources?
I'm also new to this project, so please correct me if I'm wrong.
I think of it in terms of containers:
A resource type is an image; we need to configure the repository and tag in its source so that Concourse can locate and download it.
A resource is a container that is an instance of that image and can be used in jobs while the pipeline is running. The source we configure holds the common parameters that are passed on stdin to the check, in, and out scripts when the resource is used in a get or put step.
I think it's a little similar to the comparison between a class and an object.
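To illustrate the class/object analogy, here is a sketch that instantiates two resources from the same rss resource type (the second feed's name and URL are hypothetical):

resource_types:
- name: rss                 # the "class": defines how check/in/out behave
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest

resources:
# two "objects" of the same rss type, each with its own source configuration
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php
- name: other-feed          # hypothetical second resource
  type: rss
  source:
    url: http://example.com/rssfeed.php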

AWS SES template from Cloudformation / serverless

I'm using the serverless framework to deploy my AWS stacks, and I’m trying to add an AWS SES template to my resources.
However, I keep getting “unrecognized type” from CloudFormation for AWS::SES::Template.
This is definitely a defined CloudFormation resource type, so I don't know what's going on. I've seen identical snippets describing SES templates that supposedly work, but for me they don't. Any ideas what could be causing this?
The section in my serverless.yml looks like this:
resources:
  Resources:
    EmailNotificationTemplate:
      Type: AWS::SES::Template
      Properties:
        Template:
          TemplateName: "test"
          TextPart: "body text"
          SubjectPart: "subject"
Turns out this was due to SES not being available in the region I was using. It would have been nice to get an error message such as "type not supported in region", but instead the generic "unrecognized type" was raised.
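If you hit the same thing, one fix is to pin the stack to a region where SES is available. A minimal sketch of the relevant serverless.yml section (assuming us-east-1 works for your account; any SES-enabled region will do):

provider:
  name: aws
  # SES (and therefore AWS::SES::Template) is only available in some regions;
  # deploy the stack to one of them.
  region: us-east-1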