Create Aurora Cluster with Babelfish enabled using CloudFormation - aws-cloudformation

Is there any way to create a Multi-AZ Aurora RDS cluster with Babelfish enabled using CloudFormation? It can be created using the [console or CLI][1], but I want to create it with CloudFormation, and I am not finding anything relevant among the available properties below.
DBCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    AssociatedRoles:
      - AssociatedRoles
    AvailabilityZones:
      - AvailabilityZones
    BacktrackWindow: Number
    BackupRetentionPeriod: Number
    CopyTagsToSnapshot: false
    DBClusterIdentifier: "String"
    DBClusterParameterGroupName: "String"
    DBSubnetGroupName: "String"
    DatabaseName: "String"
    DeletionProtection: false
    EnableCloudwatchLogsExports:
      - EnableCloudwatchLogsExports
    EnableHttpEndpoint: false
    EnableIAMDatabaseAuthentication: false
    Engine: "String" # Required
    EngineMode: "String"
    EngineVersion: "String"
    GlobalClusterIdentifier: "String"
    KmsKeyId: "String"
    MasterUserPassword: "String"
    MasterUsername: "String"
    Port: Number
    PreferredBackupWindow: "String"
    PreferredMaintenanceWindow: "String"
    ReplicationSourceIdentifier: "String"
    RestoreType: "String"
    ScalingConfiguration:
      AutoPause: false
      MaxCapacity: Number
      MinCapacity: Number
      SecondsUntilAutoPause: Number
    SnapshotIdentifier: "String"
    SourceDBClusterIdentifier: "String"
    SourceRegion: "String"
    StorageEncrypted: false
    Tags:
      - Tags
    UseLatestRestorableTime: false
    VpcSecurityGroupIds:
      - VpcSecurityGroupIds
There is nothing at the instance level either:
DBInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: "String"
    AllowMajorVersionUpgrade: false
    AssociatedRoles:
      - AssociatedRoles
    AutoMinorVersionUpgrade: false
    AvailabilityZone: "String"
    BackupRetentionPeriod: Number
    CACertificateIdentifier: "String"
    CharacterSetName: "String"
    CopyTagsToSnapshot: false
    DBClusterIdentifier: "String"
    DBInstanceClass: "String" # Required
    DBInstanceIdentifier: "String"
    DBName: "String"
    DBParameterGroupName: "String"
    DBSecurityGroups:
      - DBSecurityGroups
    DBSnapshotIdentifier: "String"
    DBSubnetGroupName: "String"
    DeleteAutomatedBackups: false
    DeletionProtection: false
    Domain: "String"
    DomainIAMRoleName: "String"
    EnableCloudwatchLogsExports:
      - EnableCloudwatchLogsExports
    EnableIAMDatabaseAuthentication: false
    EnablePerformanceInsights: false
    Engine: "String"
    EngineVersion: "String"
    Iops: Number
    KmsKeyId: "String"
    LicenseModel: "String"
    MasterUserPassword: "String"
    MasterUsername: "String"
    MaxAllocatedStorage: Number
    MonitoringInterval: Number
    MonitoringRoleArn: "String"
    MultiAZ: false
    OptionGroupName: "String"
    PerformanceInsightsKMSKeyId: "String"
    PerformanceInsightsRetentionPeriod: Number
    Port: "String"
    PreferredBackupWindow: "String"
    PreferredMaintenanceWindow: "String"
    ProcessorFeatures:
      - ProcessorFeatures
    PromotionTier: Number
    PubliclyAccessible: false
    SourceDBInstanceIdentifier: "String"
    SourceRegion: "String"
    StorageEncrypted: false
    StorageType: "String"
    Tags:
      - Tags
    Timezone: "String"
    UseDefaultProcessorFeatures: false
    VPCSecurityGroups:
      - VPCSecurityGroups
Thank you guys for your time and help.
[1]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/babelfish-create.html

Well, I got the answer to this question elsewhere, and it is the same as what #SilentSteel mentioned in his comment. You have to create a new DB cluster parameter group with the Babelfish option set to 'on' and assign that parameter group to the cluster. Note that Babelfish is only supported on Aurora PostgreSQL 13 and later versions.
RDSClusterParameterGroupforBF:
  Type: AWS::RDS::DBClusterParameterGroup
  Properties:
    Description: "Parameter group for adding Babelfish support in Aurora PostgreSQL" # Required
    Family: "aurora-postgresql13" # Required
    Parameters:
      rds.babelfish_status: 'on'
Then, in the cluster, you assign this parameter group:
RDSCluster:
  Type: 'AWS::RDS::DBCluster'
  Properties:
    MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref MyRDSSecret, ':SecretString:username}}' ]]
    MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref MyRDSSecret, ':SecretString:password}}' ]]
    DBClusterIdentifier: aurora-postgresql-cluster
    Engine: aurora-postgresql
    EngineVersion: '13.6'
    EngineMode: provisioned
    DBClusterParameterGroupName: !Ref RDSClusterParameterGroupforBF
    DBSubnetGroupName: !Ref AuroraDBSubnetGroup1
    DatabaseName: Sample
    Port: '5432'
    VpcSecurityGroupIds:
      - Ref: DatabaseSecurityGroup
    EnableCloudwatchLogsExports:
      - postgresql
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Aurora DB Cluster1
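To actually get a usable endpoint you still need at least one DB instance that joins this cluster. A minimal sketch, assuming the RDSCluster and AuroraDBSubnetGroup1 resources above (the instance class is only an example, pick one supported for Aurora PostgreSQL in your region):
AuroraDBInstance1:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: aurora-postgresql
    DBClusterIdentifier: !Ref RDSCluster
    DBInstanceClass: db.r6g.large # example class, not from the original answer
    DBSubnetGroupName: !Ref AuroraDBSubnetGroup1
    PubliclyAccessible: false
Once the stack is up, you connect with a SQL Server client to the cluster endpoint on the Babelfish TDS port (1433 by default) rather than the PostgreSQL port 5432.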


CloudFormation: substitute variable in map key

I have a role defined like this:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  AWSAccountId:
    Type: String
  OidcProvider:
    Type: String
  AppNamespace:
    Type: String
  AppServiceAccountName:
    Type: String
Resources:
  CloudWatchRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Federated:
                - !Join ["", [ "arn:aws:iam::", !Ref AWSAccountId, ":oidc-provider/", !Ref OidcProvider ] ]
            Action:
              - "sts:AssumeRoleWithWebIdentity"
            Condition:
              StringEquals:
                !Sub ${OidcProvider}:sub: "system:serviceaccount:${AppNamespace}:${AppServiceAccountName}"
My challenge is how to substitute parameters in the StringEquals section. Everything works in the Federated block, but in the StringEquals block I couldn't get !Join or !Sub to work.
With the code as is, I get this error message:
An error occurred (ValidationError) when calling the CreateStack operation:
Template format error[/Resources/CloudWatchRole/Properties/AssumeRolePolicyDocument/
Statement/0/Condition/StringEquals] map keys must be strings; received a map instead
So, I guess my issue is how to substitute variables in the keys of a map. UserData didn't help either.
Your problem is with Federated, not StringEquals. The Federated value needs to be a string, but you defined it as a map. Please remove the - before !Join.
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  AWSAccountId:
    Type: String
  OidcProvider:
    Type: String
  AppNamespace:
    Type: String
  AppServiceAccountName:
    Type: String
Resources:
  CloudWatchRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument: !Sub
        - |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Federated": "${IamOidcProviderArn}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                  "StringEquals": {
                    "${OidcProvider}:sub": "system:serviceaccount:${AppNamespace}:${AppServiceAccountName}"
                  }
                }
              }
            ]
          }
        - IamOidcProviderArn: !Join
            - ''
            - - 'arn:aws:iam::'
              - !Ref AWSAccountId
              - ':oidc-provider/'
              - !Ref OidcProvider
          OidcProvider: !Ref OidcProvider
          AppNamespace: !Ref AppNamespace
          AppServiceAccountName: !Ref AppServiceAccountName
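For reference, the "remove the - before !Join" part of the advice, applied to the original YAML form of the role, would look like the fragment below; the StringEquals key itself still needs the JSON-string trick above, because (as the error message shows) the map key cannot be an intrinsic function:
            Principal:
              Federated: !Join ["", [ "arn:aws:iam::", !Ref AWSAccountId, ":oidc-provider/", !Ref OidcProvider ] ]
The same ARN could also be built a little more compactly with !Sub 'arn:aws:iam::${AWSAccountId}:oidc-provider/${OidcProvider}' instead of !Join.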

Two HTTP Methods for one AWS API Gateway Resource

There is this wicked post about configuring an API Gateway method for CORS through CloudFormation, and I'm giving it a go. I want to create the following endpoint with two methods, "options" and "post":
/image/submit
Here is my CF template snippet:
ApiDefault:
  Type: "AWS::ApiGateway::RestApi"
  Properties:
    Name: "Stash-Default"
    FailOnWarnings: true
ApiDefaultDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn:
    - "ApiMethodImageSubmitPost"
    - "ApiMethodImageSubmitOption"
  Properties:
    RestApiId: !Ref "ApiDefault"
    StageName: "v1"
ApiResourceImage:
  Type: "AWS::ApiGateway::Resource"
  Properties:
    ParentId: !GetAtt ["ApiDefault", "RootResourceId"]
    PathPart: "image"
    RestApiId: !Ref "ApiDefault"
ApiResourceImageSubmit:
  Type: "AWS::ApiGateway::Resource"
  Properties:
    ParentId: !Ref "ApiResourceImage"
    PathPart: "submit"
    RestApiId: !Ref "ApiDefault"
ApiMethodImageSubmitPost:
  Type: "AWS::ApiGateway::Method"
  Properties:
    HttpMethod: "POST"
    AuthorizationType: "NONE"
    MethodResponses:
      - StatusCode: "200"
    Integration:
      IntegrationHttpMethod: "POST"
      Type: "AWS_PROXY"
      IntegrationResponses:
        - StatusCode: "200"
      Credentials: !GetAtt [ "ExecuteApiMethodImageSubmit", "Arn" ]
      Uri: !Sub
        - "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${lambdaArn}/invocations"
        - lambdaArn: !GetAtt [ "ImageReceive", "Arn" ]
    RestApiId: !Ref "ApiDefault"
    ResourceId: !Ref "ApiResourceImageSubmit"
ApiMethodImageSubmitOption:
  Type: "AWS::ApiGateway::Method"
  Properties:
    HttpMethod: "OPTIONS"
    AuthorizationType: "NONE"
    Integration:
      Type: "MOCK"
      IntegrationResponses:
        - StatusCode: "200"
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'POST,OPTIONS'"
            method.response.header.Access-Control-Allow-Origin: "'*'"
    MethodResponses:
      - StatusCode: "200"
        ResponseModels:
          application/json: "Empty"
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: false
          method.response.header.Access-Control-Allow-Methods: false
          method.response.header.Access-Control-Allow-Origin: false
    RestApiId: !Ref "ApiDefault"
    ResourceId: !Ref "ApiResourceImageSubmit"
It bombs with the following error for ApiMethodImageSubmitPost:
Method already exists for this resource (Service: AmazonApiGateway;
Status Code: 409; Error Code: ConflictException; Request ID:
454cf46a-b434-4626-bd4b-b6d4fe21142c)
Can you create two http-methods for a single API resource in this fashion? I'm not having a ton of luck with AWS' docs on this one.

Using Exported values

I am able to export the keys using this CloudFormation template...
https://github.com/shantanuo/cloudformation/blob/master/restricted.template.txt
But how do I import the saved keys directly into the "UserData" section of another template? I tried this, but it does not work...
aws-ec2-assign-elastic-ip --access-key !Ref {"Fn::ImportValue" : "accessKey" } --secret-key --valid-ips 35.174.198.170
The rest of the template (without access and secret key reference) is working as expected.
https://github.com/shantanuo/cloudformation/blob/master/security.template2.txt
So, if this is your template that does the export (sorry, this one is in YAML):
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  License: Apache-2.0
Description: 'AWS CloudFormation Sample Template'
Parameters:
  NewUsername:
    NoEcho: 'false'
    Type: String
    Description: New account username
    MinLength: '1'
    MaxLength: '41'
    ConstraintDescription: the username must be between 1 and 41 characters
  Password:
    NoEcho: 'true'
    Type: String
    Description: New account password
    MinLength: '1'
    MaxLength: '41'
    ConstraintDescription: the password must be between 1 and 41 characters
Resources:
  CFNUser:
    Type: AWS::IAM::User
    Properties:
      LoginProfile:
        Password: !Ref 'Password'
      UserName: !Ref 'NewUsername'
  CFNAdminGroup:
    Type: AWS::IAM::Group
  Admins:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref 'CFNAdminGroup'
      Users: [!Ref 'CFNUser']
  CFNAdminPolicies:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: CFNAdmins
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: '*'
            Resource: '*'
            Condition:
              StringEquals:
                aws:RequestedRegion:
                  - ap-south-1
                  - us-east-1
      Groups: [!Ref 'CFNAdminGroup']
  CFNKeys:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref 'CFNUser'
Outputs:
  AccessKey:
    Value: !Ref 'CFNKeys'
    Description: AWSAccessKeyId of new user
    Export:
      Name: 'accessKey'
  SecretKey:
    Value: !GetAtt [CFNKeys, SecretAccessKey]
    Description: AWSSecretAccessKey of new user
    Export:
      Name: 'secretKey'
Then here is an example of how you would import those values into UserData in the importing CloudFormation template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Test instance stack",
  "Parameters": {
    "KeyName": {
      "Description": "The EC2 Key Pair to allow SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName"
    },
    "BaseImage": {
      "Description": "The AMI to use for machines.",
      "Type": "String"
    },
    "VPCID": {
      "Description": "ID of the VPC",
      "Type": "String"
    },
    "SubnetID": {
      "Description": "ID of the subnet",
      "Type": "String"
    }
  },
  "Resources": {
    "InstanceSecGrp": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Instance Security Group",
        "SecurityGroupIngress": [{
          "IpProtocol": "-1",
          "CidrIp": "0.0.0.0/0"
        }],
        "SecurityGroupEgress": [{
          "IpProtocol": "-1",
          "CidrIp": "0.0.0.0/0"
        }],
        "VpcId": {
          "Ref": "VPCID"
        }
      }
    },
    "SingleInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "KeyName": {
          "Ref": "KeyName"
        },
        "ImageId": {
          "Ref": "BaseImage"
        },
        "InstanceType": "t2.micro",
        "Monitoring": "false",
        "BlockDeviceMappings": [{
          "DeviceName": "/dev/xvda",
          "Ebs": {
            "VolumeSize": "20",
            "VolumeType": "gp2"
          }
        }],
        "NetworkInterfaces": [{
          "GroupSet": [{
            "Ref": "InstanceSecGrp"
          }],
          "AssociatePublicIpAddress": "true",
          "DeviceIndex": "0",
          "DeleteOnTermination": "true",
          "SubnetId": {
            "Ref": "SubnetID"
          }
        }],
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": ["", [
              "#!/bin/bash -xe\n",
              "yum install httpd -y\n",
              "sudo sh -c \"echo ",
              { "Fn::ImportValue" : "secretKey" },
              " >> /home/ec2-user/mysecret.txt\" \n",
              "sudo sh -c \"echo ",
              { "Fn::ImportValue" : "accessKey" },
              " >> /home/ec2-user/myaccesskey.txt\" \n"
            ]]
          }
        }
      }
    }
  }
}
In this example I am just echoing the value of the import into a file. If you SSH onto the SingleInstance and check the rendered script at /var/lib/cloud/instance/scripts/part-001, you will see what the user data script looks like on the server itself. In my case the contents of that file are (the key values aren't real):
#!/bin/bash -xe
yum install httpd -y
sudo sh -c "echo hAc7/TJA123143235ASFFgKWkKSjIC4 >> /home/ec2-user/mysecret.txt"
sudo sh -c "echo AKIAQ123456789123D >> /home/ec2-user/myaccesskey.txt"
Using this as a starting point you can do whatever you need to with the import value.
I've tested all of this with the exact scripts above and it all works.
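As a side note (not part of the tested scripts above), if the importing template is written in YAML, the same UserData can be expressed with Fn::Sub plus Fn::ImportValue instead of the Fn::Join; a sketch, assuming the same 'accessKey' and 'secretKey' exports:
        UserData:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              yum install httpd -y
              # user data runs as root, so plain echo is enough here
              echo "${SecretKeyValue}" >> /home/ec2-user/mysecret.txt
              echo "${AccessKeyValue}" >> /home/ec2-user/myaccesskey.txt
            - SecretKeyValue:
                Fn::ImportValue: secretKey
              AccessKeyValue:
                Fn::ImportValue: accessKey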
What is suggested in the comments seems to be correct. I can refer directly to the export name (e.g. 'accessKey' in this case) using Fn::ImportValue!
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  License: Apache-2.0
Description: 'AWS CloudFormation Sample Template'
Resources:
  CFNUser:
    Type: AWS::IAM::User
Outputs:
  AccessKey:
    Value:
      Fn::ImportValue: accessKey
    Description: AWSAccessKeyId of new user
For example, the above template will return the value of accessKey if it has already been exported by some other template.

How to define an ECR Lifecycle Policy with CloudFormation

In order to limit the number of images in a repository, I'd like to define a Lifecycle policy. Since all the stack is defined with CloudFormation, I'd like to define this policy too.
For example, my policy could be "keep only the most recent 8 images, no matter if tagged or not".
The solution was pretty easy, but since I could not find any example or similar questions (ECR is not mainstream, I know), let me post here the easy solution that I found, which simply requires inserting the policy as JSON into the CloudFormation definition:
MyRepository:
  Type: AWS::ECR::Repository
  Properties:
    LifecyclePolicy:
      LifecyclePolicyText: |
        {
          "rules": [
            {
              "rulePriority": 1,
              "description": "Only keep 8 images",
              "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 8
              },
              "action": { "type": "expire" }
            }
          ]
        }
Of course this is very simplistic, but it's the starting point that I was looking for.
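For comparison, a slightly less simplistic sketch of the same LifecyclePolicyText that keeps the last 10 images tagged with a 'release-' prefix and expires untagged images after 14 days (the prefix, counts, and priorities are only illustrative, not from the original answer):
    LifecyclePolicyText: |
      {
        "rules": [
          {
            "rulePriority": 1,
            "description": "Keep only the 10 most recent release-* images",
            "selection": {
              "tagStatus": "tagged",
              "tagPrefixList": ["release-"],
              "countType": "imageCountMoreThan",
              "countNumber": 10
            },
            "action": { "type": "expire" }
          },
          {
            "rulePriority": 2,
            "description": "Expire untagged images older than 14 days",
            "selection": {
              "tagStatus": "untagged",
              "countType": "sinceImagePushed",
              "countUnit": "days",
              "countNumber": 14
            },
            "action": { "type": "expire" }
          }
        ]
      }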
You can also pass the policy text in as a parameter and stringify the policy in your parameters.json.
It would look something like this:
template.yml
Parameters:
  lifecyclePolicyText:
    Description: Lifecycle policy content (JSON) - the policy content, the prefixes for the microservices, and the kind of policy (CountMoreThan).
    Type: String
  repositoryName:
    Description: ECR repository name to which we will apply the lifecycle policies.
    Type: String
  registryId:
    Description: AWS account identification number (12 digits)
    Type: String
    Default: xxxxx
Resources:
  Repository:
    Type: AWS::ECR::Repository
    Properties:
      LifecyclePolicy:
        LifecyclePolicyText: !Ref lifecyclePolicyText
        RegistryId: !Ref registryId
      RepositoryName: !Ref repositoryName
Outputs:
  Arn:
    Value: !GetAtt Repository.Arn
parameters.json
[
  {
    "ParameterKey": "lifecyclePolicyText",
    "ParameterValue": "{\"rules\":[{\"rulePriority\":1,\"description\":\"Only keep 8 images\",\"selection\":{\"tagStatus\":\"any\",\"countType\":\"imageCountMoreThan\",\"countNumber\":8},\"action\":{\"type\":\"expire\"}}]}"
  },
  {
    "ParameterKey": "repositoryName",
    "ParameterValue": "xxxx"
  }
]
The | block scalar indicator will allow you to add the policy text inline:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ECRRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: "images"
      LifecyclePolicy:
        LifecyclePolicyText: |
          {
            "rules": [
              {
                "rulePriority": 2,
                "description": "Keep only one untagged image, expire all others",
                "selection": {
                  "tagStatus": "untagged",
                  "countType": "imageCountMoreThan",
                  "countNumber": 1
                },
                "action": {
                  "type": "expire"
                }
              }
            ]
          }
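If parts of an inline policy like this need to vary per environment, the block scalar can also be combined with !Sub; a sketch, assuming a hypothetical Number parameter named MaxImageCount:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  MaxImageCount:
    Type: Number
    Default: 8
Resources:
  ECRRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: "images"
      LifecyclePolicy:
        LifecyclePolicyText: !Sub |
          {
            "rules": [
              {
                "rulePriority": 1,
                "description": "Only keep ${MaxImageCount} images",
                "selection": {
                  "tagStatus": "any",
                  "countType": "imageCountMoreThan",
                  "countNumber": ${MaxImageCount}
                },
                "action": { "type": "expire" }
              }
            ]
          }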

Create AWS CloudFormation EC2 t2.micro Template

I would like to create a CloudFormation template whose instance type is "t2.micro". However, I couldn't find any example for this instance type. An EC2 instance of type "t2.micro" needs a VPC, etc.
Thanks.
You can make use of the following template snippet. It creates a VPC, a subnet, and a security group, launches the instance into them, and outputs the EC2 instance's public DNS name.
Note: I haven't tested this template!
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "CloudFormation template for creating an ec2 instance",
  "Parameters": {
    "KeyName": {
      "Description": "Key Pair name",
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Default": "my_keypair_name"
    },
    "InstanceType": {
      "Description": "Select one of the possible instance types",
      "Type": "String",
      "Default": "t2.micro",
      "AllowedValues": ["t2.micro", "t2.small", "t2.medium"]
    }
  },
  "Resources": {
    "VPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": {
        "CidrBlock": "10.0.0.0/16",
        "EnableDnsHostnames": "true"
      }
    },
    "Subnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {"Ref": "VPC"},
        "CidrBlock": "10.0.0.0/24",
        "AvailabilityZone": "us-east-1a"
      }
    },
    "SecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "My security group",
        "VpcId": {"Ref": "VPC"},
        "SecurityGroupIngress": [{
          "CidrIp": "0.0.0.0/0",
          "FromPort": 22,
          "IpProtocol": "tcp",
          "ToPort": 22
        }]
      }
    },
    "Server": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-123456",
        "InstanceType": {"Ref": "InstanceType"},
        "KeyName": {"Ref": "KeyName"},
        "SecurityGroupIds": [{"Ref": "SecurityGroup"}],
        "SubnetId": {"Ref": "Subnet"}
      }
    }
  },
  "Outputs": {
    "PublicName": {
      "Value": {"Fn::GetAtt": ["Server", "PublicDnsName"]},
      "Description": "Public name (connect via SSH)"
    }
  }
}
For more information see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
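One caveat with a brand-new VPC like the one above: without an internet gateway and a route, the instance gets no public DNS name and is not reachable over SSH. A minimal sketch of the missing pieces, assuming the VPC and Subnet logical IDs above (shown in YAML for brevity; the same resources can be written in JSON):
  VPCGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref VPCGateway
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  DefaultRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref VPCGateway
  SubnetRouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet
      RouteTableId: !Ref PublicRouteTable
The subnet would also need MapPublicIpOnLaunch set to true (or the instance an explicitly associated public IP) for SSH from the internet to work.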
The stack below allows you to create an EC2 instance. Just specify parameters like the VPC ID, subnet ID, security group ID, instance type, and AMI ID; the instance type defaults to t2.micro. This is successfully tested and running without errors. Kindly comment if you have any queries.
Here it is:
{"Description": "CloudFormation template for creating an ec2 instance",
"Parameters": {
"KeyName": {
"Description": "Key Pair name",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default": "xxx-xxx"
},
"VPC": {
"Type": "AWS::EC2::VPC::Id",
"Default":"givevpcid"
},
"Subnet":{
"Type": "AWS::EC2::Subnet::Id",
"Default": "givesubnetid"
},
"InstanceType": {
"Description": "Select one of the possible instance types",
"Type": "String",
"Default": "t2.micro",
"AllowedValues": ["t2.micro", "t2.small", "t2.medium"]
},
"SecurityGroup":{
"Type": "AWS::EC2::SecurityGroup::Id",
"Default" : "givesecuritygroupid",
"AllowedValues": ["sg-xxxxx", "sg-yyy", "sg-zzz"]
}
},
"Resources":{
"Server": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-098789xxxxxxxxx",
"InstanceType": {"Ref": "InstanceType"},
"KeyName": {"Ref": "KeyName"},
"SecurityGroupIds": [{"Ref": "SecurityGroup"}],
"SubnetId": {"Ref": "Subnet"}
}
}
},
"Outputs": {
"PublicName": {
"Value": {"Fn::GetAtt": ["Server", "PublicDnsName"]},
"Description": "Public name (connect via SSH)"
}
}
}
You can use my template below, which works fine.
single-instance.yml
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: Create
  an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based
  on the region in which the stack is run. This example creates an EC2 security
  group for the instance to give you SSH access. **WARNING** This template
  creates an Amazon EC2 instance. You will be billed for the AWS resources used
  if you create a stack from this template.
Parameters:
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: 'AWS::EC2::KeyPair::KeyName'
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  InstanceType:
    Description: WebServer EC2 instance type
    Type: String
    Default: t2.micro
    ConstraintDescription: must be a valid EC2 instance type.
  SSHLocation:
    Description: The IP address range that can be used to SSH to the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Mappings:
  AWSInstanceType2Arch:
    t2.micro:
      Arch: HVM64
  AWSInstanceType2NATArch:
    t2.micro:
      Arch: NATHVM64
  AWSRegionArch2AMI:
    us-east-1:
      HVM64: ami-0080e4c5bc078760e
      HVMG2: ami-0aeb704d503081ea6
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      InstanceType: !Ref InstanceType
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: !Ref KeyName
      ImageId: !FindInMap
        - AWSRegionArch2AMI
        - !Ref 'AWS::Region'
        - !FindInMap
          - AWSInstanceType2Arch
          - !Ref InstanceType
          - Arch
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access via port 22
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref SSHLocation
Outputs:
  InstanceId:
    Description: InstanceId of the newly created EC2 instance
    Value: !Ref EC2Instance
  AZ:
    Description: Availability Zone of the newly created EC2 instance
    Value: !GetAtt
      - EC2Instance
      - AvailabilityZone
  PublicDNS:
    Description: Public DNSName of the newly created EC2 instance
    Value: !GetAtt
      - EC2Instance
      - PublicDnsName
  PublicIP:
    Description: Public IP address of the newly created EC2 instance
    Value: !GetAtt
      - EC2Instance
      - PublicIp
Then run the following command:
aws cloudformation create-stack --template-body file://single-instance.yml --stack-name single-instance --parameters ParameterKey=KeyName,ParameterValue=sample ParameterKey=InstanceType,ParameterValue=t2.micro
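As a side note, instead of hard-coding an AMI ID or maintaining a region-to-AMI mapping as in the templates above, the ImageId can be resolved from AWS's public SSM parameter for the latest Amazon Linux 2 AMI; a sketch under that assumption:
Parameters:
  LatestAmiId:
    # public SSM parameter that resolves to the latest Amazon Linux 2 AMI in the stack's region
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: !Ref LatestAmiId
CloudFormation resolves the parameter when the stack is created or updated, so the same template works in any region without editing a mapping.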