Set IAM Role Description - aws-cloudformation

In the AWS Console there is an option to set a description for an IAM Role. How do you do this with CloudFormation? The documentation does not state how to do this. So far I have tried:
Resources:
  MyRole:
    Type: "AWS::IAM::Role"
    Properties:
      Description: My Description
Resulting error: No actual CF error, but this description does not show in the Console
Resources:
  MyRole:
    Type: "AWS::IAM::Role"
    Description: My Description
    Properties:
      .....
Resulting error: "Encountered unsupported property Description"
Resources:
  MyRole:
    Type: "AWS::IAM::Role"
    Properties:
      Tags:
        - Key: Description
          Value: My Description
Resulting error: "Encountered unsupported property Tags"
Resources:
  MyRole:
    Type: "AWS::IAM::Role"
    Tags:
      - Key: Description
        Value: My Description
    Properties:
      .....
Resulting error: "Encountered unsupported property Tags"

Update November 2019:
The Description field is now supported in CloudFormation.
Properties:
  AssumeRolePolicyDocument: Json
  Description: String  # <--- Here
  ManagedPolicyArns:
    - String
  MaxSessionDuration: Integer
  Path: String
  PermissionsBoundary: String
  Policies:
    - Policy
  RoleName: String
  Tags:
    - Tag
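A minimal sketch of a role using the now-supported Description property (the role name, description value, and trust policy below are placeholders):

```yaml
Resources:
  MyRole:
    Type: "AWS::IAM::Role"
    Properties:
      Description: My Description
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: sts:AssumeRole
```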

Tested and can confirm John Rotenstein's answer remains the best option as of 29/Mar/2019. Sometimes updates can sneak in without making it into the documentation, but not in this case, unfortunately.
(Would have preferred to put this as a comment, however the reputation requirement is a pain)
Edit:
July 2019 - Still no update; however, this can be done through the SDK

Seems like it has been added as of September 3rd, 2019. Check this part of the AWS CloudFormation docs.

AWS sam template extraneous key that should be valid

Trying to create a dataset and template in a Bitbucket pipeline using AWS SAM, and it returns the error
Model validation failed (#: extraneous key [DataSourceArn] is not permitted
I've tried
xxxxDataset:
  Type: AWS::QuickSight::DataSet
  Properties:
    AwsAccountId: !Ref "AWS::AccountId"
    Name: !Sub 'xxxxDataset${PlatformParameter}'
    ImportMode: 'DIRECT_QUERY'
    PhysicalTableMap:
      RelationalTable:
        Catalog: 'AwsDataCatalog'
        DataSourceArn: !Sub 'arn:aws:quicksight:${RegionParameter}:${AWS::AccountId}:datasource/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
and
xxxxDataset:
  Type: AWS::QuickSight::DataSet
  Properties:
    AwsAccountId: !Ref "AWS::AccountId"
    Name: !Sub 'xxxxDataset${PlatformParameter}'
    ImportMode: 'DIRECT_QUERY'
    S3Source:
      DataSourceArn: !Sub 'arn:aws:quicksight:${RegionParameter}:${AWS::AccountId}:datasource/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
both give the same error, while DataSourceArn is a valid key according to the documentation. I'm referring to the CloudFormation doc https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-quicksight-dataset.html but there may be differences with AWS SAM, for which I haven't found QuickSight documentation...
Thanks, any help appreciated
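One thing that may be worth checking (a sketch based on the CloudFormation schema, not a confirmed SAM fix): PhysicalTableMap is a map keyed by an arbitrary physical-table ID, with RelationalTable nested one level below that key, and RelationalTable also expects Name and InputColumns. MyTableId, my_table, and the id column below are hypothetical:

```yaml
xxxxDataset:
  Type: AWS::QuickSight::DataSet
  Properties:
    AwsAccountId: !Ref "AWS::AccountId"
    Name: !Sub 'xxxxDataset${PlatformParameter}'
    ImportMode: 'DIRECT_QUERY'
    PhysicalTableMap:
      MyTableId:               # arbitrary key identifying this physical table
        RelationalTable:
          Catalog: 'AwsDataCatalog'
          DataSourceArn: !Sub 'arn:aws:quicksight:${RegionParameter}:${AWS::AccountId}:datasource/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
          Name: 'my_table'     # hypothetical source table name
          InputColumns:
            - Name: 'id'
              Type: STRING
```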

Using CloudFormation Fn::Transform to include OpenAPI schema into EventSchema

I have been trying to create Schema in EventSchema registry using CloudFormation template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Test
Resources:
  SchemaRegistry:
    Type: AWS::EventSchemas::Registry
    Properties:
      RegistryName: macak
  Schema1:
    Type: AWS::EventSchemas::Schema
    Properties:
      RegistryName: !Ref SchemaRegistry
      Type: OpenApi3
      Content:
        'Fn::Transform':
          Name: AWS::Include
          Parameters:
            Location: ./schema.openapi.json
But I am getting the following error during sam deploy:
CREATE_FAILED          AWS::EventSchemas::Schema   Schema1     Property validation failure: [Value of property {/Content} does not match type {String}]
ROLLBACK_IN_PROGRESS   AWS::CloudFormation::Stack  macak-test  The following resource(s) failed to create: [Schema1]. Rollback requested by user.
Is it possible to include file contents in an EventSchema without using some pre-processing like jinja or handlebars? I hoped this approach would work, but I am not able to make it work.
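As far as I know, AWS::Include expects an Amazon S3 location and splices in the parsed JSON/YAML as structured data, while Content must be a plain String, which would explain the type-mismatch error. One workaround sketch, assuming the schema is small enough to keep in the template, is to inline it as a YAML block scalar so Content really is a literal string (the schema body below is a placeholder):

```yaml
Schema1:
  Type: AWS::EventSchemas::Schema
  Properties:
    RegistryName: !Ref SchemaRegistry
    Type: OpenApi3
    Content: |
      {
        "openapi": "3.0.0",
        "info": { "title": "Test", "version": "1.0.0" },
        "paths": {}
      }
```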

How to do a major version upgrade for Aurora Postgres in CloudFormation with custom Parameter Groups?

Issue
I am trying to do a major version upgrade from Aurora Postgres 10.14 to 11.9 using Cloudformation. My template creates a DBCluster, a DBInstance, a DBClusterParameterGroup and a DBParameterGroup.
The problem is that when I try to update the stack and change the EngineVersion property for DBCluster from 10.14 to 11.9 and change the Family property for DBClusterParameterGroup and DBParameterGroup from aurora-postgresql10 to aurora-postgresql11, I get this error in CloudFormation:
Error
The following parameters are not defined for the specified group: enable_partitionwise_aggregate, enable_parallel_append, enable_partition_pruning, vacuum_cleanup_index_scale_factor, pg_bigm.last_update, apg_enable_semijoin_push_down, parallel_leader_participation, pg_bigm.enable_recheck, pg_bigm.gin_key_limit, max_parallel_maintenance_workers, pg_bigm.similarity_limit, enable_parallel_hash, enable_partitionwise_join (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue)
I think this is because, according to the AWS Documentation for RDS Parameter Groups:
The DB cluster parameter group family can't be changed when updating a DB cluster parameter group
So even though I am trying to update the Family property for DBClusterParameterGroup and DBParameterGroup, CloudFormation simply ignores that, and so it tries to apply the aurora-postgresql10 parameter group family to a database now trying to run Aurora Postgres 11.9
What I've tried
Updating the Description property for DBClusterParameterGroup and DBParameterGroup to include a reference to the pEngineVersion parameter, as the AWS Documentation says that will trigger a Replacement, but it does not actually do this, and so I get the same error
Manually adding the parameters listed in the error to DBParameterGroup before running update. Got error "Unmodifiable DB Parameter: pg_bigm.last_update"
The only workaround I have found is clunky:
1. Manually update the database version in the console from 10.14 to 11.9, and change the DB cluster parameter group and Parameter group both to default.aurora-postgresql11 as well
2. Comment out the code for DBClusterParameterGroup and DBParameterGroup and update the stack with the updated EngineVersion 11.9 for DBCluster
3. Uncomment the code for DBClusterParameterGroup and DBParameterGroup and update the stack again with the correct Family property aurora-postgresql11 on DBClusterParameterGroup and DBParameterGroup. Now the database is updated, it is using the custom parameter groups, and the stack is not drifting
Code
Parameters:
  pEngineVersion:
    Type: String
    #currently '10.14'
    #trying to change to '11.9'
  pFamily:
    Type: String
    #currently 'aurora-postgresql10'
    #trying to change to 'aurora-postgresql11'
Resources:
  DBClusterParamGroup:
    Type: AWS::RDS::DBClusterParameterGroup
    Properties:
      Description: !Sub 'AuroraDBClusterParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        application_name: "App name"
        log_statement: all
        log_min_duration_statement: 0
  DBParamGroup:
    Type: AWS::RDS::DBParameterGroup
    Properties:
      Description: !Sub 'AuroraDBParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        max_connections: 1000
  AuroraDBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      EngineVersion: !Ref pEngineVersion
      Engine: aurora-postgresql
      DBClusterParameterGroupName: !Ref 'DBClusterParamGroup'
      #snipping unnecessary code#
  AuroraDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-postgresql
      DBParameterGroupName: !Ref 'DBParamGroup'
      DBClusterIdentifier: !Ref 'AuroraDBCluster'
      #snipping unnecessary code#
Any help would be very appreciated
Solution
This method worked for me (I've modified some things slightly to post in a public forum). EDIT: I have updated this to use the Blue/Green solution now that I have had the chance to test it.
Parameters:
  [...]
  DBEngineVersion:
    Type: String
    Description: "Choices are engine versions from Postgres 9/10/11, only newest and legacy deployed in Prod versions"
    Default: 11.7
    AllowedValues:
      - 9.6.18
      - 10.7
      - 10.14
      - 11.7
  MajorVersionUpgrade:
    Type: String
    Description: Swap this between 'Blue' or 'Green' if we are doing a Major version upgrade
    Default: Blue
    AllowedValues:
      - Blue
      - Green
Conditions:
  BlueDeployment: !Equals [!Ref MajorVersionUpgrade, "Blue"]
  GreenDeployment: !Equals [!Ref MajorVersionUpgrade, "Green"]
Mappings:
  EngineToPGFamily:
    "9.6.18":
      PGFamily: aurora-postgresql9.6
    "10.7":
      PGFamily: aurora-postgresql10
    "10.14":
      PGFamily: aurora-postgresql10
    "11.7":
      PGFamily: aurora-postgresql11
Resources:
  ##### Keep parameters of both versions in sync ####
  DBClusterParameterGroupBlue:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Condition: BlueDeployment
    Properties:
      Description: !Sub "Postgres Cluster Parameter Group for ${InfrastructureStackName}"
      Family: !FindInMap ['EngineToPGFamily', !Ref 'DBEngineVersion', 'PGFamily']
      Parameters:
        client_encoding: UTF8
        idle_in_transaction_session_timeout: 60000
  DBClusterParameterGroupGreen:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Condition: GreenDeployment
    Properties:
      Description: !Sub "Postgres Cluster Parameter Group for ${InfrastructureStackName}"
      Family: !FindInMap ['EngineToPGFamily', !Ref 'DBEngineVersion', 'PGFamily']
      Parameters:
        client_encoding: UTF8
        idle_in_transaction_session_timeout: 60000
  DBCluster:
    Type: "AWS::RDS::DBCluster"
    Properties:
      DBClusterIdentifier: !Sub "${AWS::StackName}-cluster"
      DBClusterParameterGroupName: !If [GreenDeployment, !Ref DBClusterParameterGroupGreen, !Ref DBClusterParameterGroupBlue]
      Engine: aurora-postgresql
      [...]
  DBInstance1:
    Type: "AWS::RDS::DBInstance"
    Properties:
      DBClusterIdentifier: !Ref DBCluster
      EngineVersion: !Ref DBEngineVersion
      Engine: aurora-postgresql
      [...]
  DBInstance2:
    Type: "AWS::RDS::DBInstance"
    Properties:
      DBClusterIdentifier: !Ref DBCluster
      EngineVersion: !Ref DBEngineVersion
      Engine: aurora-postgresql
      [...]
Usage and Methodology
This method will create a new ClusterParameterGroup that will be used during the upgrade (when doing a Major version upgrade), and clean up the original group when the upgrade is done. The advantage to this solution is that there is no version hardcoding in the group and the same template can be used for continuous lifecycle updates without additional changes needed (aside from desired Parameters updates).
Initial Setup
Initially, leave MajorVersionUpgrade set to Blue and the conditions will ensure that only the Blue ParameterGroup is created using the proper DB family.
Performing a Major Version Upgrade
Update the stack and set the DBEngineVersion to the new major version and MajorVersionUpgrade to Green if it is currently Blue (or Blue if it is currently Green). CloudFormation will:
Start by creating the new resources (which is the new parameter group referencing the new version)
Then it will update the Cluster and point to that new group
Finally, it will delete the old ParameterGroup (referencing the old version) during the cleanup phase, since the Blue/Green Condition for that group no longer evaluates to true.
Important notes
The Parameters in both parameter groups must be manually kept in sync
Obviously, update the DBEngineVersion AllowedValues list with versions as needed; just be sure to update the EngineToPGFamily Mappings to have a valid Postgres family value for that Engine version
Special care must be taken when implementing Parameters that are new to a DB Major version and don't exist in an older version; I have not tested this
You are correct that you cannot change the family on a DB Parameter Group that already has been created. That is why you get the error about the missing parameters, which isn't a great error response by AWS given the root cause.
Since DB Parameter Groups can be replaced without having to replace the DB Instance, you should change the Logical ID of the DB Parameter Group so it is detected as a new resource by CloudFormation, and removes the old v10 DB Parameter Group.
Example using your code:
Parameters:
  pEngineVersion:
    Type: String
    Default: '11.9'
    #currently '10.14'
    #trying to change to '11.9'
  pFamily:
    Type: String
    Default: 'aurora-postgresql11'
    #currently 'aurora-postgresql10'
    #trying to change to 'aurora-postgresql11'
Resources:
  DBClusterParamGroup11:
    Type: AWS::RDS::DBClusterParameterGroup
    Properties:
      Description: !Sub 'AuroraDBClusterParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        application_name: "App name"
        log_statement: all
        log_min_duration_statement: 0
  DBParamGroup11:
    Type: AWS::RDS::DBParameterGroup
    Properties:
      Description: !Sub 'AuroraDBParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        max_connections: 1000
  AuroraDBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      EngineVersion: !Ref pEngineVersion
      Engine: aurora-postgresql
      DBClusterParameterGroupName: !Ref 'DBClusterParamGroup11'
      #snipping unnecessary code#
  AuroraDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-postgresql
      DBParameterGroupName: !Ref 'DBParamGroup11'
      DBClusterIdentifier: !Ref 'AuroraDBCluster'
      #snipping unnecessary code#
Notice how I renamed DBParamGroup to DBParamGroup11 and DBClusterParamGroup to DBClusterParamGroup11.

Swagger Editor - Additional Properties Error

I'm just starting to use Swagger Editor/OpenAPI 3 spec, and so far it's not going well. I have installed and am running Swagger Editor v3.15.2 on my local machine.
This is the yaml I have so far:
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Test
paths:
  /object:
    post:
      summary: Create an object
      operationId: createObject
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/Object"
    responses:
      '201':
        description: Created
components:
  schemas:
    Object:
      required:
        - name
        - description
      properties:
        name:
          type: string
        description:
          type: string
And it's displaying this error:
Errors
Resolver error
e is undefined
Structural error at paths./object
should NOT have additional properties
additionalProperty: responses
Jump to line 6
Structural error at paths./object.post
should have required property 'responses'
missingProperty: responses
Jump to line 7
I have ensured that I am using two spaces for all the indents. When I copied the yaml from the editor and put it into Notepad++ it looked fine. I also pasted it into another editor and noticed that it only used line feeds and not carriage returns. I updated it to use both and still get the same error.
I have looked at other questions with the same problem but none of the solutions worked for me. So, not sure what I'm doing wrong. Any guidance is greatly appreciated.
You have a small indentation problem.
Add one indentation level to
responses:
  '201':
    description: Created
So that you then have:
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Test
paths:
  /object:
    post:
      summary: Create an object
      operationId: createObject
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/Object"
      responses:
        '201':
          description: Created
components:
  schemas:
    Object:
      required:
        - name
        - description
      properties:
        name:
          type: string
        description:
          type: string

Serverless CloudFormation template error instance of Fn::GetAtt references undefined resource

I'm trying to set up a new repo and I keep getting the error
The CloudFormation template is invalid: Template error: instance of Fn::GetAtt
references undefined resource uatLambdaRole
in my uat stage, however the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one with only the stage name change.
Here is a screenshot of the directory the resource files are in
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all three places. The three places where the SQS name must match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn
resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue ended up being that I got the "resource" and "Resource" keys in serverless.yml backwards.
Correct:
resources: # <-- lowercase "r" first
  Resources: # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here, the actual reference to my Resources file
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it could then deploy.
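With that fix applied, the variables and the file reference line up like this (paths taken from the original snippet):

```yaml
custom:
  stage: ${opt:stage, self:provider.stage}

resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}
```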
Indentation Issue
In general, when YAML isn't working, I start by checking the indentation.
I hit this issue too; in my case, one of my resources was indented too much, which put the resource in the wrong node/object. The resources should be two indents in, as they live in the resources sub-node Resources.
For more info on this, see the YAML docs
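As a sketch of what that looks like (the resource names here are hypothetical):

```yaml
# Wrong: MyTopic is indented under MyQueue, so it is parsed as an unknown
# property of MyQueue rather than as a resource of its own
resources:
  Resources:
    MyQueue:
      Type: "AWS::SQS::Queue"
      MyTopic:
        Type: "AWS::SNS::Topic"

# Right: MyQueue and MyTopic are siblings directly under Resources
resources:
  Resources:
    MyQueue:
      Type: "AWS::SQS::Queue"
    MyTopic:
      Type: "AWS::SNS::Topic"
```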