CloudFormation Multiline Substitution in YAML. Is it possible? - aws-cloudformation

I am trying to do a multiline substitution in a CFT and it's just not happening. The error I am getting is:
An error occurred (ValidationError) when calling the ValidateTemplate operation: Template format error: unsupported structure.
which is quite nondescript. I have the CFT plugin for IntelliJ and it isn't giving me any syntax errors. Is such a thing supported? The problem line is the Fn::Sub.
According to this documentation it is.
Here is the sample I am working with. I have the whole CFT working with hardcoded values, but I would like it to work with values imported from the CFT that created the parts of the stack I am trying to watch.
Code:
AWSTemplateFormatVersion: 2010-09-09
Description: "Per ticket: CLOUD-1284"
Parameters:
  LogGroupName:
    Type: String
    Default: "ct/dev-logs"
    AllowedValues: ["ct/dev-logs","ct/prod-logs"]
    Description: Enter CloudWatch Logs log group name. Default is ct/dev-logs
  Email:
    Type: String
    Description: Email address to notify when an API activity has triggered an alarm
    Default: cloudops#
Resources:
  PolicyUpdates:
    Type: AWS::Logs::MetricFilter
    Properties:
      FilterPattern:
        Fn::Sub:
          - >-
            { ($.eventSource = iam.amazonaws.com) &&
            (($.eventName = Update*) || ($.eventName = Attach*) || ($.eventName = Delete*) || ($.eventName = Detach*) ||($.eventName = Put*)) &&
            (($.requestParameters.roleName = ${Ec2Role}) || ($.requestParameters.roleName = ${RdsRole})) }
          - Ec2Role: !ImportValue infra-Ec2IamRole
          - RdsRole: !ImportValue infra-RdsIamRole
      LogGroupName: !Ref LogGroupName
      MetricTransformations:
        - MetricValue: 1
          MetricNamespace: SpecialMetrics
          MetricName: PolicyUpdateMetrics
  PolicyUpdatesAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: " Policies have have been updated"
      AlarmActions:
        - Ref: AlarmNotificationTopic
      MetricName: PolicyUpdateMetrics
      Namespace: SpecialMetrics
      Statistic: Sum
      Period: 10
      EvaluationPeriods: 1
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      TreatMissingData: notBreaching
  S3BucketPolicyUpdates:
    Type: AWS::Logs::MetricFilter
    Properties:
      FilterPattern: >-
        { ($.eventSource = s3.amazonaws.com) && (($.eventName = Put*) || ($.eventName = Delete*)) &&
        (($.requestParameters.bucketName = assets-us-east-1) || ($.requestParameters.bucketName = logs-us-east-1)) }
      LogGroupName: !Ref LogGroupName
      MetricTransformations:
        - MetricValue: 1
          MetricNamespace: SpecialMetrics
          MetricName: S3BucketPolicyUpdateMetric
  S3BucketPolicyUpdatesAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: " S3 bucket security settings have been updated"
      AlarmActions:
        - Ref: AlarmNotificationTopic
      MetricName: S3BucketPolicyUpdateMetric
      Namespace: SpecialMetrics
      Statistic: Sum
      Period: 10
      EvaluationPeriods: 1
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      TreatMissingData: notBreaching
  AlarmNotificationTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: !Ref Email
          Protocol: email

Yes, but you just need to fix the syntax.
The fix:
Here's a simplified version of your code showing corrected syntax:
---
AWSTemplateFormatVersion: 2010-09-09
Description: Test Stack
Resources:
  PolicyUpdates:
    Type: AWS::Logs::MetricFilter
    Properties:
      FilterPattern:
        Fn::Sub:
          - >-
            { ($.eventSource = iam.amazonaws.com) &&
            (($.eventName = Update*) || ($.eventName = Attach*) || ($.eventName = Delete*) || ($.eventName = Detach*) ||($.eventName = Put*)) &&
            (($.requestParameters.roleName = ${Ec2Role}) || ($.requestParameters.roleName = ${RdsRole})) }
          - {Ec2Role: MyEc2Role, RdsRole: MyRdsRole}
      LogGroupName: !Ref LogGroup
      MetricTransformations:
        - MetricValue: 1
          MetricNamespace: SpecialMetrics
          MetricName: PolicyUpdateMetrics
  LogGroup:
    Type: AWS::Logs::LogGroup
On creating that stack the following Metric Filter is created:
▶ aws logs describe-metric-filters --query 'metricFilters[].filterPattern'
[
"{ ($.eventSource = iam.amazonaws.com) && (($.eventName = Update*) || ($.eventName = Attach*) || ($.eventName = Delete*) || ($.eventName = Detach*) ||($.eventName = Put*)) && (($.requestParameters.roleName = MyEc2Role) || ($.requestParameters.roleName = MyRdsRole)) }"
]
Thus, you would need to change your Fn::Sub to:
FilterPattern:
  Fn::Sub:
    - >-
      { ($.eventSource = iam.amazonaws.com) &&
      (($.eventName = Update*) || ($.eventName = Attach*) || ($.eventName = Delete*) || ($.eventName = Detach*) ||($.eventName = Put*)) &&
      (($.requestParameters.roleName = ${Ec2Role}) || ($.requestParameters.roleName = ${RdsRole})) }
    - {Ec2Role: !ImportValue infra-Ec2IamRole, RdsRole: !ImportValue infra-RdsIamRole}
How to get better error messages:
The first thing I did was run cloudformation validate-template:
▶ aws cloudformation validate-template --template-body file://cloudformation.yml
An error occurred (ValidationError) when calling the ValidateTemplate operation:
Template format error: YAML not well-formed. (line 23, column 45)
Since it's a YAML formatting issue, the yamllint utility usually provides more information:
▶ yamllint cloudformation.yml
cloudformation.yml
  23:45     error    syntax error: could not find expected ':'
Going into the vim editor and issuing the command:
:cal cursor(23,45)
takes me to line 23, column 45, where I find the beginning of the string ${Ec2Role}.
The first problem I see is that the indenting is wrong. That's actually the cause of that message.
Indenting lines 21-23 by 2 more spaces makes the template valid YAML. Then I got a more helpful response from cloudformation validate-template:
▶ aws cloudformation validate-template --template-body file://cloudformation.yml
An error occurred (ValidationError) when calling the ValidateTemplate operation:
Template error: One or more Fn::Sub intrinsic functions don't specify expected
arguments. Specify a string as first argument, and an optional second argument
to specify a mapping of values to replace in the string
At this point, it can be seen from the documentation that the call to Fn::Sub is syntactically wrong.
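For reference, here is a minimal sketch (using values from the template above) of the two argument shapes the Fn::Sub documentation allows: either a plain string, or a two-element list whose second element is a single mapping of variable names to values, not one list item per variable.
# Form 1: a single string (variables may be template parameters,
# resource logical IDs, or pseudo parameters)
AlarmDescription: !Sub "Policy updates detected in ${LogGroupName}"

# Form 2: a two-element list - the template string, then ONE mapping of
# substitution variables
FilterPattern:
  Fn::Sub:
    - '{ ($.requestParameters.roleName = ${Ec2Role}) || ($.requestParameters.roleName = ${RdsRole}) }'
    - Ec2Role: !ImportValue infra-Ec2IamRole
      RdsRole: !ImportValue infra-RdsIamRole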

Related

Google Cloud Deployment, invalid_argument

I'm trying to create a Cloud SQL instance through the Deployment Manager API. When I create it directly from a YAML file it is created successfully, but when I create the instance from a jinja/python file I get an error as below:
code: RESOURCE_ERROR
location: /deployments/olpr/resources/test
message: '{"ResourceType":"sqladmin.v1beta4.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/project_id/instances","httpMethod":"POST"}}'
Is there any way I can see what the invalid argument is so that I can fix it? Any suggestions would be appreciated.
The resource is as below:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]
YAML file:
resources:
- name: he
  type: sqladmin.v1beta4.instance
  properties:
    region: europe-west1
    zone: europe-west1-b
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    databaseVersion: SQLSERVER_2017_EXPRESS
    serviceAccountEmailAddress: user@project_id.iam.gserviceaccount.com
    rootPassword: mypass
    settings:
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      ipConfiguration:
        ipv4Enabled: false
        privateNetwork: vpc
      kind: sql#settings
      machineType: db-custom-1-3840
You're not supplying a region in the Python version. Try adding 'region': 'europe-west1' to the properties.
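For illustration, a sketch of the corrected Python resource with the region added; every other value is copied from the question:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'region': 'europe-west1',   # added: the region that contains the zone below
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]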

How to dynamically create Resource (UserPool) name by concatenating parameter value and string in AWS CloudFormation YAML template?

I am trying to create an AWS CloudFormation template using YAML. I add a UserPool resource as follows. The user pool name and resource name should be derived from a parameter value, i.e., if the value of the parameter paramUserPoolName is 'Sample', then:
UserPoolName = Sample
UserPool resource name = SampleUserPool, i.e., the concatenated value of 'paramUserPoolName + UserPool'
Parameters:
  paramUserPoolName:
    Type: String
Resources:
  <I need 'paramUserPoolName + UserPool' here>:
    Type: 'AWS::Cognito::UserPool'
    Properties: {
      "UserPoolName": paramUserPoolName
    }
How can I dynamically create a resource ID in a CloudFormation template?
PS: The following worked:
Resources:
  SampleUserPool:
    Type: 'AWS::Cognito::UserPool'
    Properties:
      UserPoolName: !Sub ${paramUserPoolName}UserPool
Use !Sub for that. You can also use !Join, but !Sub is easier.
Parameters:
  paramUserPoolName:
    Type: String
Resources:
  SampleUserPool:
    Type: 'AWS::Cognito::UserPool'
    Properties:
      UserPoolName: !Sub ${paramUserPoolName}UserPool
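For comparison, a sketch of the !Join alternative mentioned above, joining the parameter value and the literal suffix with an empty delimiter:
UserPoolName: !Join ['', [!Ref paramUserPoolName, 'UserPool']]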

Cannot deploy aws sam stack due to Handler not found error

I am having issues deploying a Lambda with a handler in a nested directory using SAM.
I perform the following steps:
Package:
sam package --template template.yaml --output-template-file packaged.yaml --s3-bucket
This creates a packaged.yaml that I use in the next step.
Deploy:
aws cloudformation deploy --template-file /Users/localuser/Do/learn-sam/dynamo-stream-lambda/packaged.yaml --stack-name barkingstack
ERROR
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [PublishNewBark] is invalid. Missing required property 'Handler'.
CloudFormation/SAM template
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Globals:
  Function:
    Runtime: nodejs8.10
    Timeout: 300
Resources:
  PublishNewBark:
    Type: AWS::Serverless::Function
    FunctionName: publishNewBark
    CodeUri: .
    Handler: src/index.handler
    Role: "<ROLE_ARN>"
    Description: Reads from the DynamoDB Stream and publishes to an SNS topic
    Events:
      - ReceiveBark:
          Type: DynamoDB
          Stream: !GetAtt BarkTable.StreamArn
          StartingPosition: TRIM_HORIZON
          BatchSize: 1
  BarkTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: BarkTable
      KeySchema:
        - KeyType: HASH
          AttributeName: id
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      ProvisionedThroughput:
        WriteCapacityUnits: 5
        ReadCapacityUnits: 5
  WooferTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: wooferTopic
      TopicName: wooferTopic
      Subscription:
        - Endpoint: <my_email>
          Protocol: email
DIRECTORY STRUCTURE
root_directory/
  events/ (for sample events)
  policies/ (for IAM Role to be created for the lambda using CLI)
  src/index.js
  package.json
  node_modules
  template.yaml
HANDLER CODE
async function handler (event, context) {
  console.log(JSON.stringify(event, null, 2))
  return {}
}

module.exports = {handler}
I believe you have to put everything except the resource type under "Properties".
Your function declaration should be:
PublishNewBark:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: publishNewBark
    CodeUri: .
    Handler: src/index.handler
    Role: "<ROLE_ARN>"
    Description: Reads from the DynamoDB Stream and publishes to an SNS topic
    Events:
      - ReceiveBark:
          Type: DynamoDB
          Stream: !GetAtt BarkTable.StreamArn
          StartingPosition: TRIM_HORIZON
          BatchSize: 1
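As an aside, the SAM CLI can usually surface this kind of schema problem locally, before you package and deploy; assuming the SAM CLI is installed, something like the following should complain about the function definition:
▶ sam validate --template template.yaml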

YAML Error: could not determine a constructor for the tag

This is very similar to questions/44786412, but mine appears to be triggered by YAML safe_load(). I'm using the ruamel library and YamlReader to glue a bunch of CloudFormation pieces together into a single, merged template. Is bang notation just not proper YAML?
Outputs:
  Vpc:
    Value: !Ref vpc
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"
No problem with these
Outputs:
  Vpc:
    Value:
      Ref: vpc
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-Vpc"

Resources:
  vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock:
        Fn::FindInMap: [ CidrBlock, !Ref "AWS::Region", Vpc ]
Part 2: how do I get load() to leave what's to the right of the 'Fn::Select:' alone?
FromPort:
  Fn::Select: [ 0, Fn::FindInMap: [ Service, https, Ports ] ]
gets converted to this, which CloudFormation now doesn't like:
FromPort:
  Fn::Select: [0, {Fn::FindInMap: [Service, https, Ports]}]
If I unroll the statement fully then there are no problems. I guess the shorthand is just problematic.
FromPort:
  Fn::Select:
    - 0
    - Fn::FindInMap: [Service, ssh, Ports]
Your "bang notation" is proper YAML, normally this is called a tag. If you want to use the safe_load() with those you'll have to provide constructors for the !Ref and !Sub tags, e.g. using:
ruamel.yaml.add_constructor(u'!Ref', your_ref_constructor, constructor=ruamel.yaml.SafeConstructor)
where for both tags you should expect to handle scalars a value. and not the more common mapping.
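For illustration, a minimal sketch building on the add_constructor() call above, assuming that loading !Ref into a plain dict (the long-form Ref: mapping) is acceptable; your_ref_constructor is the hypothetical name used above:
import ruamel.yaml

def your_ref_constructor(constructor, node):
    # !Ref is used with a scalar (e.g. "!Ref vpc"); load it as the long form
    return {'Ref': constructor.construct_scalar(node)}

ruamel.yaml.add_constructor(u'!Ref', your_ref_constructor,
                            constructor=ruamel.yaml.SafeConstructor)

data = ruamel.yaml.safe_load('Value: !Ref vpc')
# data should now be {'Value': {'Ref': 'vpc'}}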
I recommend you use the RoundTripLoader instead of the SafeLoader; that will preserve order, comments, etc. as well. The RoundTripLoader is a subclass of the SafeLoader.
If you are using ruamel.yaml>=0.15.33, which supports round-tripping scalars, you can do (using the new ruamel.yaml API):
import sys
from ruamel.yaml import YAML

yaml = YAML()
yaml.preserve_quotes = True
data = yaml.load("""\
Outputs:
  Vpc:
    Value: !Ref: vpc                       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
""")
yaml.dump(data, sys.stdout)
to get:
Outputs:
  Vpc:
    Value: !Ref: vpc                       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
In older 0.15.X versions, you'll have to specify the classes for the scalar objects yourself. This is cumbersome if you have many objects, but it allows for additional functionality:
import sys
from ruamel.yaml import YAML


class Ref:
    yaml_tag = u'!Ref:'

    def __init__(self, value, style=None):
        self.value = value
        self.style = style

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_scalar(cls.yaml_tag,
                                            u'{.value}'.format(node), node.style)

    @classmethod
    def from_yaml(cls, constructor, node):
        return cls(node.value, node.style)

    def __iadd__(self, v):
        self.value += str(v)
        return self


class Sub:
    yaml_tag = u'!Sub'

    def __init__(self, value, style=None):
        self.value = value
        self.style = style

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_scalar(cls.yaml_tag,
                                            u'{.value}'.format(node), node.style)

    @classmethod
    def from_yaml(cls, constructor, node):
        return cls(node.value, node.style)


yaml = YAML(typ='rt')
yaml.register_class(Ref)
yaml.register_class(Sub)
data = yaml.load("""\
Outputs:
  Vpc:
    Value: !Ref: vpc                       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
""")
data['Outputs']['Vpc']['Value'] += '123'
yaml.dump(data, sys.stdout)
which gives:
Outputs:
  Vpc:
    Value: !Ref: vpc123                    # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag

!ImportValue in Serverless Framework not working

I'm attempting to export a DynamoDB StreamArn from a stack created in CloudFormation, then reference the export using !ImportValue in the serverless.yml.
But I'm getting this error message:
unknown tag !<!ImportValue> in "/codebuild/output/src/serverless.yml"
The CloudFormation template and serverless.yml are defined below. Any help appreciated.
StackA.yml
AWSTemplateFormatVersion: 2010-09-09
Description: Resources for the registration site
Resources:
  ClientTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: client
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 2
        WriteCapacityUnits: 2
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
Outputs:
  ClientTableStreamArn:
    Description: The ARN for My ClientTable Stream
    Value: !GetAtt ClientTable.StreamArn
    Export:
      Name: my-client-table-stream-arn
serverless.yml
service: my-service
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeStream
        - dynamodb:GetRecords
        - dynamodb:GetShardIterator
        - dynamodb:ListStreams
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/client
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: !ImportValue my-client-table-stream-arn
          batchSize: 1
Solved by using ${cf:stackName.outputKey}
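For reference, a sketch of that variable syntax in serverless.yml, assuming the exporting stack above was deployed under the hypothetical stack name client-table-stack; note that ${cf:...} references the stack's output key (ClientTableStreamArn), not the export name:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: ${cf:client-table-stack.ClientTableStreamArn}
          batchSize: 1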
I struggled with this as well, and what did the trick for me was:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn:
            !ImportValue my-client-table-stream-arn
          batchSize: 1
Note that the intrinsic function !ImportValue is on a new line and indented; otherwise the whole event is ignored when cloudformation-template-update-stack.json is generated.
It appears that you're using the !ImportValue shorthand for CloudFormation YAML. My understanding is that when CloudFormation parses the YAML, !ImportValue is simply an alias for Fn::ImportValue. According to the Serverless Function documentation, it appears that they should support the Fn::ImportValue form of imports.
Based on the documentation for Fn::ImportValue, you should be able to reference your export like:
- stream:
    type: dynamodb
    arn: {"Fn::ImportValue": "my-client-table-stream-arn"}
    batchSize: 1
Hope that helps solve your issue.
I couldn't find it clearly documented anywhere, but what seemed to resolve the issue for me is: the variables which need to be exposed/exported in the outputs must have an "Export" property with a "Name" sub-property.
In serverless.ts
resources: {
  Resources: resources["Resources"],
  Outputs: {
    // For the event bus
    EventBusName: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
      },
      Value: {
        Ref: "UNIQUE_EVENTBUS_NAME",
      },
    },
    // For something like SQS, or anything else, it would be the same
    IDVerifyQueueARN: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_SQS_NAME",
      },
      Value: { "Fn::GetAtt": ["UNIQUE_SQS_NAME", "Arn"] },
    },
  },
}
Once this is deployed, you can check whether the exports are present by running the following in the terminal (using your associated AWS credentials):
aws cloudformation list-exports
Then there should be a Name property in a list:
{
  "ExportingStackId": "***",
  "Name": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
  "Value": "***"
}
And then if the above is successful, you can reference it with "Fn::ImportValue" like so, e.g.:
"Resource": {
"Fn::ImportValue": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
}
