For ECS deployment group, ec2TagFilters can not be specified - aws-cloudformation

When creating an AWS::CodeDeploy::DeploymentGroup in CloudFormation, I get this error, even though I have no EC2TagFilters in my template:
For ECS deployment group, ec2TagFilters can not be specified (Service: AmazonCodeDeploy; Status Code: 400; Error Code: InvalidEC2TagException; Request ID: af5c3f68-6033-4df0-9f6f-ecd064ad6b7b; Proxy: null)
CodeDeploymentGroupDev:
  Type: AWS::CodeDeploy::DeploymentGroup
  DependsOn:
    - CodeDeployApplication
  Properties:
    ApplicationName: !Ref ApplicationName
    DeploymentConfigName: CodeDeployDefault.AllAtOnce
    DeploymentGroupName: !Sub "${ApplicationName}-Dev"
    DeploymentStyle:
      DeploymentType: IN_PLACE
    OnPremisesTagSet:
      OnPremisesTagSetList:
        - OnPremisesTagGroup:
            - Key: !Ref OnPremisesTagKey
              Type: KEY_AND_VALUE
              Value: !Ref OnPremisesTagValue
    ServiceRoleArn: !Sub 'arn:aws:iam::${AWS::AccountId}:role/CodeDeployServiceRole'
Is AWS::CodeDeploy::DeploymentGroup not implemented correctly in CloudFormation?

I can see that you are using OnPremisesTagSet, which means you should have already registered your on-premises instances with CodeDeploy; see https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-on-premises.html.
If you are using EC2 instances instead, you need to use Ec2TagSet rather than OnPremisesTagSet.
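For reference, a minimal sketch of the EC2 equivalent, using hypothetical EC2TagKey and EC2TagValue parameters (not in the original template):
    Ec2TagSet:
      Ec2TagSetList:
        - Ec2TagGroup:
            - Key: !Ref EC2TagKey      # hypothetical parameter, for illustration only
              Type: KEY_AND_VALUE
              Value: !Ref EC2TagValue  # hypothetical parameter, for illustration only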

This occurred because my corresponding AWS::CodeDeploy::Application was not configured correctly (not shown in the original post, but included below). I was attempting an on-premises deploy using a registered agent. I had ECS as the compute platform, but it should have been Server:
CodeDeployApplication:
  Type: AWS::CodeDeploy::Application
  Properties:
    ApplicationName: !Ref ApplicationName
    ComputePlatform: Server  # Allowed values: ECS | Lambda | Server
It worked after I fixed that.

Related

AWS SAM template extraneous key that should be valid

I'm trying to create a dataset and template in a Bitbucket pipeline using AWS SAM, and it returns the error
Model validation failed (#: extraneous key [DataSourceArn] is not permitted
I've tried:
xxxxDataset:
  Type: AWS::QuickSight::DataSet
  Properties:
    AwsAccountId: !Ref "AWS::AccountId"
    Name: !Sub 'xxxxDataset${PlatformParameter}'
    ImportMode: 'DIRECT_QUERY'
    PhysicalTableMap:
      RelationalTable:
        Catalog: 'AwsDataCatalog'
        DataSourceArn: !Sub 'arn:aws:quicksight:${RegionParameter}:${AWS::AccountId}:datasource/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
and
xxxxDataset:
  Type: AWS::QuickSight::DataSet
  Properties:
    AwsAccountId: !Ref "AWS::AccountId"
    Name: !Sub 'xxxxDataset${PlatformParameter}'
    ImportMode: 'DIRECT_QUERY'
    S3Source:
      DataSourceArn: !Sub 'arn:aws:quicksight:${RegionParameter}:${AWS::AccountId}:datasource/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
Both give the same error, even though DataSourceArn is a valid key according to the documentation. I'm referring to the CloudFormation doc https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-quicksight-dataset.html, but there may be differences with AWS SAM, for which I haven't found QuickSight documentation...
Thanks, any help is appreciated.

Why does cloudformation give "Invalid method response 200" error, but manual deployment work? (AWS API Gateway Websocket)

I am getting this error when I deploy a simple websocket mock route.
Execution failed due to configuration error: Output mapping refers to an invalid method response: 200
First of all, I'm a little confused about what "method response" means, since in the WebSocket API the terminology used is Route Response and Integration Response. I'm guessing this is referring to the Route Response.
The resources I have are:
Websocket API
Stage
Deployment
$connect route
$connect integration with mock (default maps to {"statusCode": 200})
$connect integration response (just passes the integration through)
$connect route response
The funny part is: to fix this, all I have to do is go to the console and click deploy API. I don't have to change any configuration. But that is not a good solution for me, as I want to run this on a CI/CD pipeline.
I'm guessing the problem is with the Route Response, as that is not configurable from the console. So something must be going on behind the scenes during console deployment, which I am missing during cloudformation deployment. Any ideas how to solve this?
Here's my CloudFormation template.
Resources:
  testWsApiBackendWsApi40DF2EE8:
    Type: AWS::ApiGatewayV2::Api
    Properties:
      Name: testWsApi
      ProtocolType: WEBSOCKET
      RouteSelectionExpression: $request.body.action
  testWsApiApiDeployment423ACBB9:
    Type: AWS::ApiGatewayV2::Deployment
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
    DependsOn:
      - MockWithAuthAwsStackwsMockRoute04DB7577
  testWsApiApiStageF40CAAE0:
    Type: AWS::ApiGatewayV2::Stage
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      StageName: production
      DeploymentId:
        Fn::GetAtt:
          - testWsApiApiDeployment423ACBB9
          - DeploymentId
  MockWithAuthAwsStackwsMockRoute04DB7577:
    Type: AWS::ApiGatewayV2::Route
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      RouteKey: $connect
      Target:
        Fn::Join:
          - ""
          - - integrations/
            - Ref: MockWithAuthAwsStackwsMockIntegration36E7A460
  MockWithAuthAwsStackwsMockIntegration36E7A460:
    Type: AWS::ApiGatewayV2::Integration
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      IntegrationType: MOCK
      PassthroughBehavior: WHEN_NO_TEMPLATES
      RequestTemplates:
        $default: '{"statusCode":200}'
      TemplateSelectionExpression: \$default
  MockWithAuthAwsStackwsMockRouteResponseAEE0B8ED:
    Type: AWS::ApiGatewayV2::RouteResponse
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      RouteId:
        Ref: MockWithAuthAwsStackwsMockRoute04DB7577
      RouteResponseKey: $default
  MockWithAuthAwsStackwsMockIntegrationResponse85928773:
    Type: AWS::ApiGatewayV2::IntegrationResponse
    Properties:
      ApiId:
        Fn::GetAtt:
          - testWsApiBackendWsApi40DF2EE8
          - ApiId
      IntegrationId:
        Ref: MockWithAuthAwsStackwsMockIntegration36E7A460
      IntegrationResponseKey: $default
      TemplateSelectionExpression: \$default
P.S. I am actually using AWS CDK; the above template is the result of cdk synth. Let me know if you want to see the CDK code.
The reason manual deployment from the console works, while deploying via CloudFormation occasionally causes errors, is the order in which these resources are created. In the console, this is the order followed:
Routes, Integrations and Responses are created
They are associated with a stage
They are deployed to the specified stage
When using CloudFormation, the order of creation of these resources gets mixed up, resulting in them not being wired up properly. It seems that you have wired up the deployment to depend on the route being created first. You also need to make sure that the stage is created before the deployment. For this you could add an explicit DependsOn attribute, or an implicit reference to the stage within the deployment, perhaps in its StageName property as !Ref StageResource.
Or you could save the trouble and add AutoDeploy: true to your stage (autoDeploy in CDK), which will take care of the linking and ordering on its own; see the sketch below.
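A minimal sketch of that last option, assuming the same logical IDs as the synthesized template above; with auto-deploy the explicit Deployment resource and DeploymentId reference can be dropped:
  testWsApiApiStageF40CAAE0:
    Type: AWS::ApiGatewayV2::Stage
    Properties:
      ApiId: !Ref testWsApiBackendWsApi40DF2EE8
      StageName: production
      AutoDeploy: true  # API Gateway redeploys this stage automatically when routes/integrations change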

How to do a major version upgrade for Aurora Postgres in CloudFormation with custom Parameter Groups?

Issue
I am trying to do a major version upgrade from Aurora Postgres 10.14 to 11.9 using CloudFormation. My template creates a DBCluster, a DBInstance, a DBClusterParameterGroup and a DBParameterGroup.
The problem is that when I update the stack, changing the EngineVersion property of the DBCluster from 10.14 to 11.9 and the Family property of the DBClusterParameterGroup and DBParameterGroup from aurora-postgresql10 to aurora-postgresql11, I get this error in CloudFormation:
Error
The following parameters are not defined for the specified group: enable_partitionwise_aggregate, enable_parallel_append, enable_partition_pruning, vacuum_cleanup_index_scale_factor, pg_bigm.last_update, apg_enable_semijoin_push_down, parallel_leader_participation, pg_bigm.enable_recheck, pg_bigm.gin_key_limit, max_parallel_maintenance_workers, pg_bigm.similarity_limit, enable_parallel_hash, enable_partitionwise_join (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue)
I think this is because, according to the AWS Documentation for RDS Parameter Groups:
The DB cluster parameter group family can't be changed when updating a DB cluster parameter group
So even though I am trying to update the Family property of DBClusterParameterGroup and DBParameterGroup, CloudFormation simply ignores that, and ends up trying to apply the aurora-postgresql10 parameter group family to a database now trying to run Aurora Postgres 11.9.
What I've tried
Updating the Description property for DBClusterParameterGroup and DBParameterGroup to include a reference to the pEngineVersion parameter, since the AWS documentation says that will trigger a replacement. It does not actually do this, so I get the same error.
Manually adding the parameters listed in the error to DBParameterGroup before running update. Got error "Unmodifiable DB Parameter: pg_bigm.last_update"
The only workaround I have found is clunky:
Manually update the database version in the console from 10.14 to 11.9, and change the DB cluster parameter group and Parameter group both to default.aurora-postgresql11 as well
Comment out the code for DBClusterParameterGroup and DBParameterGroup and update the stack with the updated EngineVersion 11.9 for DBCluster
Uncomment out the code for DBClusterParameterGroup and DBParameterGroup and update the stack again with the correct Family property aurora-postgresql11 on DBClusterParameterGroup and DBParameterGroup. Now the database is updated, it is using the custom parameter groups, and the stack is not drifting
Code
Parameters:
  pEngineVersion:
    Type: String
    #currently '10.14'
    #trying to change to '11.9'
  pFamily:
    Type: String
    #currently 'aurora-postgresql10'
    #trying to change to 'aurora-postgresql11'
Resources:
  DBClusterParamGroup:
    Type: AWS::RDS::DBClusterParameterGroup
    Properties:
      Description: !Sub 'AuroraDBClusterParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        application_name: "App name"
        log_statement: all
        log_min_duration_statement: 0
  DBParamGroup:
    Type: AWS::RDS::DBParameterGroup
    Properties:
      Description: !Sub 'AuroraDBParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        max_connections: 1000
  AuroraDBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      EngineVersion: !Ref pEngineVersion
      Engine: aurora-postgresql
      DBClusterParameterGroupName: !Ref 'DBClusterParamGroup'
      # snipping unnecessary code
  AuroraDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-postgresql
      DBParameterGroupName: !Ref 'DBParamGroup'
      DBClusterIdentifier: !Ref 'AuroraDBCluster'
      # snipping unnecessary code
Any help would be much appreciated.
Solution
This method worked for me (I've modified some things slightly to post in a public forum). EDIT: I have updated this to use the Blue/Green solution now that I have had the chance to test it.
Parameters:
  [...]
  DBEngineVersion:
    Type: String
    Description: "Choices are engine versions from Postgres 9/10/11, only newest and legacy deployed in Prod versions"
    Default: 11.7
    AllowedValues:
      - 9.6.18
      - 10.7
      - 10.14
      - 11.7
  MajorVersionUpgrade:
    Type: String
    Description: Swap this between 'Blue' or 'Green' if we are doing a Major version upgrade
    Default: Blue
    AllowedValues:
      - Blue
      - Green
Conditions:
  BlueDeployment: !Equals [!Ref MajorVersionUpgrade, "Blue"]
  GreenDeployment: !Equals [!Ref MajorVersionUpgrade, "Green"]
Mappings:
  EngineToPGFamily:
    "9.6.18":
      PGFamily: aurora-postgresql9.6
    "10.7":
      PGFamily: aurora-postgresql10
    "10.14":
      PGFamily: aurora-postgresql10
    "11.7":
      PGFamily: aurora-postgresql11
Resources:
  ##### Keep parameters of both versions in sync ####
  DBClusterParameterGroupBlue:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Condition: BlueDeployment
    Properties:
      Description: !Sub "Postgres Cluster Parameter Group for ${InfrastructureStackName}"
      Family: !FindInMap ['EngineToPGFamily', !Ref 'DBEngineVersion', 'PGFamily']
      Parameters:
        client_encoding: UTF8
        idle_in_transaction_session_timeout: 60000
  DBClusterParameterGroupGreen:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Condition: GreenDeployment
    Properties:
      Description: !Sub "Postgres Cluster Parameter Group for ${InfrastructureStackName}"
      Family: !FindInMap ['EngineToPGFamily', !Ref 'DBEngineVersion', 'PGFamily']
      Parameters:
        client_encoding: UTF8
        idle_in_transaction_session_timeout: 60000
  DBCluster:
    Type: "AWS::RDS::DBCluster"
    Properties:
      DBClusterIdentifier: !Sub "${AWS::StackName}-cluster"
      DBClusterParameterGroupName: !If [GreenDeployment, !Ref DBClusterParameterGroupGreen, !Ref DBClusterParameterGroupBlue]
      Engine: aurora-postgresql
      [...]
  DBInstance1:
    Type: "AWS::RDS::DBInstance"
    Properties:
      DBClusterIdentifier: !Ref DBCluster
      EngineVersion: !Ref DBEngineVersion
      Engine: aurora-postgresql
      [...]
  DBInstance2:
    Type: "AWS::RDS::DBInstance"
    Properties:
      DBClusterIdentifier: !Ref DBCluster
      EngineVersion: !Ref DBEngineVersion
      Engine: aurora-postgresql
      [...]
Usage and Methodology
This method will create a new cluster parameter group that is used during a major version upgrade, and clean up the original group when the upgrade is done. The advantage of this solution is that there is no version hardcoding in the group, and the same template can be used for continuous lifecycle updates without additional changes (aside from the desired Parameters updates).
Initial Setup
Initially, leave MajorVersionUpgrade set to Blue; the conditions will ensure that only the Blue parameter group is created, using the proper DB family.
Performing a Major Version Upgrade
Update the stack and set the DBEngineVersion to the new major version and MajorVersionUpgrade to Green if it is currently Blue (or Blue if it is currently Green). CloudFormation will:
Start by creating the new resources (in this case, the new parameter group referencing the new version)
Then update the Cluster to point to that new group
Finally, delete the old parameter group (referencing the old version) during the cleanup phase, since the Blue/Green Condition for that group no longer evaluates to true
Important notes
The Parameters in both parameter groups must be kept in sync manually
Obviously, update the DBEngineVersion AllowedValues list with versions as needed; just be sure to update the EngineToPGFamily Mappings to have a valid Postgres family value for each engine version
Special care must be taken when implementing Parameters that are new to a DB major version and don't exist in an older version; I have not tested this
You are correct that you cannot change the family on a DB Parameter Group that has already been created. That is why you get the error about the missing parameters, which isn't a great error response from AWS given the root cause.
Since DB Parameter Groups can be replaced without having to replace the DB Instance, you should change the Logical ID of the DB Parameter Group so that CloudFormation detects it as a new resource and removes the old v10 DB Parameter Group.
Example using your code:
Parameters:
  pEngineVersion:
    Type: String
    Default: '11.9'
    #currently '10.14'
    #trying to change to '11.9'
  pFamily:
    Type: String
    Default: 'aurora-postgresql11'
    #currently 'aurora-postgresql10'
    #trying to change to 'aurora-postgresql11'
Resources:
  DBClusterParamGroup11:
    Type: AWS::RDS::DBClusterParameterGroup
    Properties:
      Description: !Sub 'AuroraDBClusterParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        application_name: "App name"
        log_statement: all
        log_min_duration_statement: 0
  DBParamGroup11:
    Type: AWS::RDS::DBParameterGroup
    Properties:
      Description: !Sub 'AuroraDBParamGroup-${AWS::Region}'
      Family: !Ref pFamily
      Parameters:
        max_connections: 1000
  AuroraDBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      EngineVersion: !Ref pEngineVersion
      Engine: aurora-postgresql
      DBClusterParameterGroupName: !Ref 'DBClusterParamGroup11'
      # snipping unnecessary code
  AuroraDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-postgresql
      DBParameterGroupName: !Ref 'DBParamGroup11'
      DBClusterIdentifier: !Ref 'AuroraDBCluster'
      # snipping unnecessary code
Notice how I renamed DBParamGroup to DBParamGroup11 and DBClusterParamGroup to DBClusterParamGroup11.

Get attribute of EC2 created via LaunchConfiguration

I would like to get the PrivateIp attribute of EC2 instances that I create via LaunchConfiguration.
I need that attribute so that I can assign a type A DNS record to the instance for other purposes.
Here is my code:
Resources:
  webLaunchConfig:
    Type: 'AWS::AutoScaling::LaunchConfiguration'
    Properties:
      ImageId: !Ref webEc2AMI
      InstanceType: !Ref ec2WebInstanceType
      SecurityGroups: !Ref webEc2SG
      UserData:
        'Fn::Base64': !Sub >
          #!/bin/bash -xe
          apt update -y
  dnsWebServerName:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: !Ref hostedZoneId
      Comment: DNS name for my db server.
      Name: !Ref dnsWebServerNamePar
      Type: A
      TTL: '900'
      ResourceRecords:
        - !GetAtt webLaunchConfig.PrivateIp
... and when I try to launch it I get this error:
Template contains errors.: Template error: resource webLaunchConfig
does not support attribute type PrivateIp in Fn::GetAtt
... indicating that what I am trying to do is not supported, though there must be a way to achieve this.
Do you know how to do it, or a workaround for this?
Sadly you can't do this. AWS::AutoScaling::LaunchConfiguration is only a blueprint for instances to be launched, so it does not provide information such as an instance's PrivateIp. To get the PrivateIp you have to actually launch an instance.
To do so you have to use AWS::EC2::Instance. However, AWS::EC2::Instance does not support launching from AWS::AutoScaling::LaunchConfiguration. So either change your LaunchConfiguration into a LaunchTemplate, or just create the instance directly with AWS::EC2::Instance rather than any template.
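A minimal sketch of the second option, using hypothetical webLaunchTemplate and webInstance logical IDs (names chosen for illustration) alongside the existing parameters:
  webLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref webEc2AMI
        InstanceType: !Ref ec2WebInstanceType
  webInstance:
    Type: AWS::EC2::Instance
    Properties:
      LaunchTemplate:
        LaunchTemplateId: !Ref webLaunchTemplate
        Version: !GetAtt webLaunchTemplate.LatestVersionNumber
  dnsWebServerName:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref hostedZoneId
      Name: !Ref dnsWebServerNamePar
      Type: A
      TTL: '900'
      ResourceRecords:
        - !GetAtt webInstance.PrivateIp  # works here because the instance actually exists
Note that instances launched by an Auto Scaling group still can't be referenced this way; only a standalone AWS::EC2::Instance exposes PrivateIp to Fn::GetAtt.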

Logging for public hosted zone Route53

I'm trying to set up logging for a public hosted zone on Route53 AWS. The template looks like this:
Resources:
  HostedZonePublic1:
    Type: AWS::Route53::HostedZone
    Properties:
      HostedZoneConfig:
        Comment: !Join ['', ['Hosted zone for ', !Ref 'DomainNamePublic']]
      Name: !Ref DomainNamePublic
      QueryLoggingConfig:
        CloudWatchLogsLogGroupArn: !GetAtt Route531LogGroup.Arn
  Route531LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: Route531-AWSLogGroup
      RetentionInDays: 7
But when I try to launch the stack I'm getting the following message:
The ARN for the CloudWatch Logs log group is invalid. (Service: AmazonRoute53; Status Code: 400; Error Code: InvalidInput; Request ID: 6c02db60-ef62-11e8-bce8-d14210c1b0cd)
Does anybody have an idea what could be wrong with this setup?
Thanks, A
I encountered the same issue. The CloudWatch logs log group needs to be created in a specific region to be valid.
See the following:
You must create the log group in the us-east-1 region.
You must use the same AWS account to create the log group and the hosted zone that you want to configure query logging for.
When you create log groups for query logging, we recommend that you use a consistent prefix.
You can find the full documentation in the Route 53 query logging guide.
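For example, a minimal sketch assuming the stack (or at least the log group) is created in us-east-1 and using the recommended /aws/route53/ name prefix:
  Route531LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      # must live in us-east-1; /aws/route53/ is the recommended prefix for query logging groups
      LogGroupName: !Sub '/aws/route53/${DomainNamePublic}'
      RetentionInDays: 7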