Use the Iops property in CloudFormation when StorageType is io1 and do not use it when StorageType is gp2 - aws-cloudformation

I created a template to provision RDS using CloudFormation. When creating an RDS instance there are two storage options, io1 and gp2: with gp2 we do not need to define Iops, but with io1 we do.
io1 works for me, but with gp2 the stack fails with this error:
Encountered non numeric value for property Iops
Snippet of my template:
StorageType:
  Description: choose the storage type gp2 for 'General purpose SSD' & io1 for 'IOPS SSD'.
  Default: "gp2"
  Type: String
  AllowedValues: ["gp2", "io1"]

Conditions:
  iops: !Equals [!Ref StorageType, "io1"]
  gp2: !Equals [!Ref StorageType, "gp2"]

DB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBInstanceIdentifier: !Sub ${AWS::StackName}
    DBName: !Ref 'DBName'
    AllocatedStorage: !Ref 'Storage'
    DBInstanceClass: !Ref 'InstanceType'
    StorageType: !Ref StorageType
    Iops: !If [iops, !Ref iops, "AWS::NoValue"]
    StorageEncrypted: !Ref Encryption
    Engine: postgres
    EngineVersion: 11.2
    Port: !Ref PortNo
    MasterUsername: !Ref DBUser
    MasterUserPassword: !Ref DBPassword
    DBSubnetGroupName: !Ref RDSSubnetGroup
    VPCSecurityGroups: [!Ref SecurityGroup]
    DBParameterGroupName: !Ref RDSParamGroup
    MultiAZ: !Ref MultiAz
    PubliclyAccessible: !Ref PublicAccessibelity
Thank you in advance

Correct syntax would be:
Iops: !If [iops, !Ref iops, !Ref "AWS::NoValue"]
Documentation:
Condition Functions
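For reference, here is a minimal sketch of the whole pattern. It assumes a separate numeric parameter (ProvisionedIops below, which does not exist in the original template) supplies the IOPS value when io1 is selected, since !Ref resolves parameters and resources rather than condition names; the other values are placeholders for illustration only:

Parameters:
  StorageType:
    Type: String
    Default: gp2
    AllowedValues: [gp2, io1]
  ProvisionedIops:
    Type: Number
    Default: 1000        # hypothetical parameter, not in the original template
  DBPassword:
    Type: String
    NoEcho: true
Conditions:
  UseIops: !Equals [!Ref StorageType, io1]
Resources:
  DB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '100'
      MasterUsername: postgres
      MasterUserPassword: !Ref DBPassword
      StorageType: !Ref StorageType
      # Iops is passed only when io1 is selected; AWS::NoValue removes the property for gp2
      Iops: !If [UseIops, !Ref ProvisionedIops, !Ref 'AWS::NoValue']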

How do I set DeletionPolicy and UpdateReplacePolicy on AWS::RDS::DBCluster?

The AWS docs for DeletionPolicy mention a default policy for DB Clusters:
For AWS::RDS::DBCluster resources, the default policy is Snapshot
But when I try to set it to Delete I get the following error:
#: extraneous key [DeletionPolicy] is not permitted
Is there a way to change the DeletionPolicy and/or the UpdateReplacePolicy for DB Clusters?
DBCluster CloudFormation template (part of a Serverless template):
auroraCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    AvailabilityZones:
      - eu-north-1a
      - eu-north-1b
      - eu-north-1c
    DatabaseName:
      publisher
    DeletionPolicy: Delete,
    DBClusterIdentifier: ${self:service}-db-cluster-${sls:stage}
    DBSubnetGroupName: !Ref subnetGroup
    DeletionProtection: !If [isProd, 'true', 'false']
    Engine: aurora-postgresql
    Port: 5432
    EngineVersion: 14.3
    KmsKeyId: !Ref kmsKey
    ManageMasterUserPassword: 'true'
    MasterUsername: postgres
    MasterUserSecret:
      SecretArn: !Ref secretRds
    ServerlessV2ScalingConfiguration:
      MaxCapacity: 2
      MinCapacity: 1
    StorageEncrypted: 'true'
    VpcSecurityGroupIds:
      - !Ref securityGroupDb
DeletionPolicy goes directly under the resource name, at the same level as Type and the Properties object, not inside Properties. As such:
auroraCluster:
  Type: AWS::RDS::DBCluster
  DeletionPolicy: Delete
  Properties:
    ...
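The same placement applies to UpdateReplacePolicy, which the question also asks about; both are resource attributes rather than properties. A minimal sketch (only the attribute placement matters here; the rest mirrors the question's cluster):

auroraCluster:
  Type: AWS::RDS::DBCluster
  DeletionPolicy: Delete          # resource attribute, sibling of Type and Properties
  UpdateReplacePolicy: Delete     # same level; Snapshot and Retain are also valid values
  Properties:
    Engine: aurora-postgresql
    ...                           # remaining properties as in the question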

RDS does not support creating a DB instance

I am trying to create a Postgres RDS instance using CloudFormation, but it gives me this error:
RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.t2.micro, Engine=postgres, EngineVersion=13.3, LicenseModel=postgresql-license. For supported combinations of instance class and database engine version, see the documentation. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: cdb3ewcd-17ef-404c-adc5-fcd04a590553; Proxy: null).
I have tried changing the instance type and EngineVersion, but I get the same error. Any help would be appreciated.
myDBEC2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Frontend Access
    VpcId: !Ref Ec2Vpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 5432
        ToPort: 5432
        CidrIp: 10.0.0.0/16
myDBParamGroup:
  Type: AWS::RDS::DBParameterGroup
  Properties:
    Description: Database Parameter Group + pg_stat_statements
    Family: postgres13
    Parameters:
      shared_preload_libraries: pg_stat_statements
myDBSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: DB Private Subnet
    SubnetIds:
      - !Ref Ec2SubnetPrivate1
      - !Ref Ec2SubnetPrivate2
pgDB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBName: !Ref 'DBName'
    AllocatedStorage: !Ref DBAllocatedStorage
    DBInstanceClass: !Ref 'DBInstanceClass'
    Engine: postgres
    EngineVersion: '13.3'
    MasterUsername: !Ref 'DBUser'
    MasterUserPassword: !Ref 'DBPassword'
    MultiAZ: !Ref 'MultiAZ'
    Tags:
      - Key: Name
        Value: Master Database
    DBSubnetGroupName: !Ref myDBSubnetGroup
    DBParameterGroupName: !Ref myDBParamGroup
    VPCSecurityGroups:
      - Fn::GetAtt:
          - myDBEC2SecurityGroup
          - GroupId
You have to find an instance class that supports the postgres engine with version 13.3 in your region. To do this you can use the following command:
aws rds describe-orderable-db-instance-options --engine postgres --engine-version 13.3 --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" --output text
which gives (t2 is not supported at all; a template snippet using one of these classes follows the list):
db.m5.12xlarge
db.m5.16xlarge
db.m5.24xlarge
db.m5.2xlarge
db.m5.4xlarge
db.m5.8xlarge
db.m5.large
db.m5.xlarge
db.m6g.12xlarge
db.m6g.16xlarge
db.m6g.2xlarge
db.m6g.4xlarge
db.m6g.8xlarge
db.m6g.large
db.m6g.xlarge
db.r5.12xlarge
db.r5.16xlarge
db.r5.24xlarge
db.r5.2xlarge
db.r5.4xlarge
db.r5.8xlarge
db.r5b.12xlarge
db.r5b.16xlarge
db.r5b.24xlarge
db.r5b.2xlarge
db.r5b.4xlarge
db.r5b.8xlarge
db.r5b.large
db.r5b.xlarge
db.r5.large
db.r5.xlarge
db.r6g.12xlarge
db.r6g.16xlarge
db.r6g.2xlarge
db.r6g.4xlarge
db.r6g.8xlarge
db.r6g.large
db.r6g.xlarge
db.t3.2xlarge
db.t3.large
db.t3.medium
db.t3.micro
db.t3.small
db.t3.xlarge
db.t4g.2xlarge
db.t4g.large
db.t4g.medium
db.t4g.micro
db.t4g.small
db.t4g.xlarge
db.x2g.12xlarge
db.x2g.16xlarge
db.x2g.2xlarge
db.x2g.4xlarge
db.x2g.8xlarge
db.x2g.large
db.x2g.xlarge
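As a hedged illustration, picking one of the supported classes from the list above (db.t3.micro here, hardcoded instead of the question's DBInstanceClass parameter), the DB instance from the question would become:

pgDB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBName: !Ref 'DBName'
    AllocatedStorage: !Ref DBAllocatedStorage
    DBInstanceClass: db.t3.micro    # a class from the supported list, instead of db.t2.micro
    Engine: postgres
    EngineVersion: '13.3'
    MasterUsername: !Ref 'DBUser'
    MasterUserPassword: !Ref 'DBPassword'
    # ...the remaining properties stay as in the question's pgDB resource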

Trying to add additional EBS volumes to MarkLogic Cluster CloudFormation Template

New to YAML and CloudFormation. We are trying to use MarkLogic's template to deploy a clustered MarkLogic DB in our own VPC. We have the cluster working, but now we would like to mount an additional volume to save backups to.
We added additional volumes:
MarklogicVolume1root:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSize
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Aroot
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
  Metadata:
    'AWS::CloudFormation::Designer':
      id: c81032f7-b0ec-47ca-a236-e24d57b49ae3
MarklogicVolume1data:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSizeData
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Adata
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
MarklogicVolume1backup:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSizeBackup
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Abackup
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
We updated the block device mappings within the Launch Configuration and the User Data script:
LaunchConfig1:
  Type: 'AWS::AutoScaling::LaunchConfiguration'
  DependsOn:
    - InstanceSecurityGroup
  Properties:
    BlockDeviceMappings:
      - DeviceName: !Ref MarklogicVolume1root
        NoDevice: true
        Ebs: {}
      - DeviceName: !Ref MarklogicVolume1data
        NoDevice: true
        Ebs: {}
      - DeviceName: !Ref MarklogicVolume1backup
        NoDevice: true
        Ebs: {}
    KeyName: !Ref KeyName
    ImageId: !If [EssentialEnterprise, !FindInMap [LicenseRegion2AMI,!Ref 'AWS::Region',"Enterprise"], !FindInMap [LicenseRegion2AMI, !Ref 'AWS::Region', "BYOL"]]
    UserData: !Base64
      'Fn::Join':
        - ''
        - - MARKLOGIC_CLUSTER_NAME=
          - !Ref MarkLogicDDBTable
          - |+

          - MARKLOGIC_EBS_VOLUME1=
          - !Ref MarklogicVolume1root
          - ',:'
          - !Ref VolumeSize
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
          - MARKLOGIC_EBS_VOLUME2=
          - !Ref MarklogicVolume1data
          - ',:'
          - !Ref VolumeSizeData
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
          - MARKLOGIC_EBS_VOLUME3=
          - !Ref MarklogicVolume1backup
          - ',:'
          - !Ref VolumeSizeBackup
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
            MARKLOGIC_NODE_NAME=NodeA#
          - MARKLOGIC_ADMIN_USERNAME=
          - !Ref AdminUser
          - |+

          - MARKLOGIC_ADMIN_PASSWORD=
          - !Ref AdminPass
          - |+

          - |
            MARKLOGIC_CLUSTER_MASTER=1
          - MARKLOGIC_LICENSEE=
          - !Ref Licensee
          - |+

          - MARKLOGIC_LICENSE_KEY=
          - !Ref LicenseKey
          - |+

          - MARKLOGIC_LOG_SNS=
          - !Ref LogSNS
          - |+

          - !If
            - UseVolumeEncryption
            - !Join
              - ''
              - - 'MARKLOGIC_EBS_KEY='
                - !If
                  - HasCustomEBSKey
                  - !Ref VolumeEncryptionKey
                  - 'default'
            - ''
We are able to deploy the additional volumes, but they are not mounting. This also interrupts the final configuration of the EC2 instances, because they then fail their load balancer health checks. Any help or insight is greatly appreciated!
The documentation talks about the steps needed to add an EBS volume to an instance.
Creating an EBS Volume and Attaching it to an Instance
The brief overview is that you need to:
Create the Volume
Attach the Volume (as /dev/sdf or higher)
Run init-volumes-from-system
Once that is done, you may also want to manually check the DynamoDB table associated with the CF template and ensure the entries have the updated volume information from the host. A minimal CloudFormation sketch of the create-and-attach steps is shown below.
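For illustration only, here is a hedged sketch of those steps expressed as standalone CloudFormation resources, assuming a single EC2 instance rather than the Auto Scaling Group used by the MarkLogic template (BackupVolume and MarkLogicHost are made-up names); init-volumes-from-system still has to be run on the host afterwards:

BackupVolume:
  Type: AWS::EC2::Volume
  Properties:
    AvailabilityZone: !GetAtt MarkLogicHost.AvailabilityZone   # MarkLogicHost is a hypothetical AWS::EC2::Instance
    Size: 200
    VolumeType: gp2
BackupVolumeAttachment:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    InstanceId: !Ref MarkLogicHost
    VolumeId: !Ref BackupVolume
    Device: /dev/sdf               # /dev/sdf or higher, per the steps above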
Michael's answer describes how to add an EBS volume after a stack is created; this is different from the question, which is how to define a CF template that pre-defines the volumes. If a volume is created but not mounted, I recommend looking into the system logs, both the MarkLogic logs and /var/log/messages, for indications. If that does not provide enough information to resolve it, you should open a support ticket for assistance.

CloudFormation doesn't receive signal from cfn-signal in non-default VPC

I have a CloudFormation template with a LaunchTemplate & ASG.
When cfn-init completes the deployment, cfn-signal should send a signal with the result to CloudFormation.
From /var/log/cfn-init.log I see that the signal has been sent, and from /var/log/cfn-wire.log I see that it has been received successfully, but CloudFormation doesn't receive it and fails the stack on timeout.
Relevant piece of CloudFormation code:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  VPC:
    Type: AWS::EC2::VPC::Id
    Default: "vpc-f98e0683"
  Subnet1:
    Type: String
    Default: "subnet-da88f186"
  KeyName:
    Type: String
    Default: "test-aws6-virginia"
  AMI:
    Type: AWS::EC2::Image::Id
    Default: "ami-07b4156579ea1d7ba" #Ubuntu 16.04
  InstanceType:
    Type: String
    Default: "t2.micro"
  Az1:
    Type: AWS::EC2::AvailabilityZone::Name
    Default: "us-east-1a"
Resources:
  SecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupName: "SecurityGroup"
      GroupDescription: "Security Group"
      VpcId: !Ref VPC
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
  InstanceRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: "InstanceRole"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "ec2.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AdministratorAccess"
  InstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
        - !Ref InstanceRole
  NetworkInterface:
    Type: "AWS::EC2::NetworkInterface"
    Properties:
      GroupSet:
        - !Ref SecurityGroup
      SubnetId: !Ref Subnet1
      Tags:
        - Key: Name
          Value: "NetworkInterface"
  ZabbixLaunchTemplate:
    Type: "AWS::EC2::LaunchTemplate"
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          Zabbix:
            - 00-ZabbixInstall
        00-ZabbixInstall:
          commands:
            download:
              command: "wget https://repo.zabbix.com/zabbix/4.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.0-2+xenial_all.deb && dpkg -i zabbix-release_4.0-2+xenial_all.deb"
            update:
              command: "apt update"
            install:
              command: "apt -y install zabbix-server-pgsql zabbix-frontend-php php-pgsql zabbix-agent"
          services:
            sysvinit:
              zabbix-server:
                enabled: "true"
                ensureRunning: "true"
              zabbix-agent:
                enabled: "true"
                ensureRunning: "true"
              apache2:
                enabled: "true"
                ensureRunning: "true"
    Properties:
      LaunchTemplateName: "ZabbixLaunchTemplate"
      LaunchTemplateData:
        TagSpecifications:
          - ResourceType: "instance"
            Tags:
              - Key: Name
                Value: "Instance"
          - ResourceType: volume
            Tags:
              - Key: Name
                Value: "Instance"
        DisableApiTermination: false
        KeyName: !Ref KeyName
        ImageId: !Ref AMI
        InstanceType: !Ref InstanceType
        IamInstanceProfile:
          Name: !Ref InstanceProfile
        NetworkInterfaces:
          - NetworkInterfaceId: !Ref NetworkInterface
            DeviceIndex: 0
        UserData:
          Fn::Base64:
            !Join
              - ''
              - - |
                  #!/bin/bash
                - |
                - apt-get update -y && apt-get install python-pip -y && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
                - |+

                - |
                - "cfn-init --verbose"
                - " --stack "
                - !Ref "AWS::StackName"
                - " --resource ZabbixLaunchTemplate"
                - " --configsets Zabbix"
                - " --region "
                - !Ref "AWS::Region"
                - |+

                - |
                - "cfn-signal --exit-code $?"
                - " --stack "
                - !Ref "AWS::StackName"
                - " --resource ZabbixASG"
                - " --region "
                - !Ref "AWS::Region"
                - |+

  ZabbixASG:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      AutoScalingGroupName: "ZabbixASG"
      DesiredCapacity: "1"
      MaxSize: "1"
      MinSize: "1"
      HealthCheckType: "EC2"
      LaunchTemplate:
        LaunchTemplateId: !Ref ZabbixLaunchTemplate
        Version: !GetAtt ZabbixLaunchTemplate.LatestVersionNumber
      AvailabilityZones:
        - !Ref Az1
    CreationPolicy:
      ResourceSignal:
        Timeout: PT15M
It fails only when it's deployed in a non-default VPC, e.g. it doesn't work if the VPC is created from this template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  VpcCIDR:
    Type: String
    Default: "172.29.0.0/16"
  Subnet1CIDR:
    Type: String
    Default: "172.29.1.0/24"
  Subnet2CIDR:
    Type: String
    Default: "172.29.2.0/24"
  Az1:
    Type: String
    Default: "us-west-2a"
  Az2:
    Type: String
    Default: "us-west-2c"
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsHostnames: true
      EnableDnsSupport: true
      InstanceTenancy: default
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  Subnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref Subnet1CIDR
      AvailabilityZone: !Ref Az1
      MapPublicIpOnLaunch: true
  Subnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref Subnet2CIDR
      AvailabilityZone: !Ref Az2
      MapPublicIpOnLaunch: true
  Subnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref Subnet1
  Subnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref Subnet2
  Route:
    Type: AWS::EC2::Route
    Properties:
      DestinationCidrBlock: "0.0.0.0/0"
      GatewayId: !Ref InternetGateway
      RouteTableId: !Ref RouteTable
Outputs:
  VpcId:
    Value:
      !Ref VPC
  Subnet1Id:
    Value:
      !Ref Subnet1
  Subnet2Id:
    Value:
      !Ref Subnet2
The behaviour is the same on both Ubuntu 16.04 and Amazon Linux 2.
Any ideas why, and how to fix it?
This one has me stumped!
I have managed to reproduce your results both in a VPC created with your supplied template, and also in a VPC created by the VPC wizard.
In such cases, CloudFormation does not recognize the completion of the ASG. When I tried sending the cfn-signal manually, it responded with:
$ cfn-signal --exit-code 0 --stack s7 --resource ZabbixASG --region us-west-2
2019-06-20 23:13:24,571 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-west-2.amazonaws.com
2019-06-20 23:13:24,571 [DEBUG] Signaling resource ZabbixASG in stack s7 with unique ID i-07d2be90dc51c509a and status SUCCESS
ValidationError: Signal with ID i-07d2be90dc51c509a for resource ZabbixASG already exists. Signals may only be updated with a FAILURE status.
This indicates that the service has already received the signal, so it was sent correctly. However, the status of the ASG remains at "Resource creation Initiated".
Why the result would vary when using a Default VPC, I have no idea! There is no communication difference that would impact such a signal.
The only thing I can suggest is contacting AWS Support and asking them to help debug.

AWS CloudFormation - AWS::ElasticLoadBalancingV2::LoadBalancer - SecurityGroups

Is there a way in CFN templates to add specific Security Groups to an ALB depending on a parameter?
I have a situation where two security groups are added to the ALB:
ALB:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    ...
    SecurityGroups:
      - !Ref 'SecurityGroup1'
      - !Ref 'SecurityGroup2'
Now there is a SecurityGroup3 that I would like to add only if some parameter has a specific value. Let's say if the parameter add_sg3 equals yes, then the third SG is added to the ALB. I usually use !If in similar situations, but here there are more than 2 SGs. Any advice would be welcome. Thanks!
You can achieve that using a Condition and the AWS::NoValue pseudo parameter. Below is a complete example:
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: ["dev", "prod"]
  VpcId:
    Type: 'AWS::EC2::VPC::Id'
  Subnet1:
    Type: 'AWS::EC2::Subnet::Id'
  Subnet2:
    Type: 'AWS::EC2::Subnet::Id'
Conditions:
  MyTest: !Equals ["dev", !Ref Environment]
Resources:
  ALB:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      SecurityGroups:
        - !Ref SecurityGroup1
        - !If [ MyTest, !Ref SecurityGroup2, !Ref 'AWS::NoValue' ]
      Subnets:
        - !Ref Subnet1
        - !Ref Subnet2
  SecurityGroup1:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: 'Group 1'
      VpcId: !Ref VpcId
  SecurityGroup2:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: 'Group 2'
      VpcId: !Ref VpcId
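Adapted to the question's add_sg3 scenario, a hedged sketch (CloudFormation logical IDs must be alphanumeric, so add_sg3 is written as AddSg3 here; the security group names mirror the question):

Parameters:
  AddSg3:
    Type: String
    Default: 'no'
    AllowedValues: ['yes', 'no']
Conditions:
  AttachSg3: !Equals [!Ref AddSg3, 'yes']
Resources:
  ALB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      SecurityGroups:
        - !Ref 'SecurityGroup1'
        - !Ref 'SecurityGroup2'
        # The third group is attached only when AddSg3 is 'yes'
        - !If [AttachSg3, !Ref 'SecurityGroup3', !Ref 'AWS::NoValue']
      # ...Subnets and the other ALB properties as before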