I am able to export the keys using this cloudformation template...
https://github.com/shantanuo/cloudformation/blob/master/restricted.template.txt
But how do I import the saved keys directly into the "UserData" section of another template? I tried this, but it does not work:
aws-ec2-assign-elastic-ip --access-key !Ref {"Fn::ImportValue" : "accessKey" } --secret-key --valid-ips 35.174.198.170
The rest of the template (without access and secret key reference) is working as expected.
https://github.com/shantanuo/cloudformation/blob/master/security.template2.txt
So, if this is your template that does the export (sorry, this one is in YAML):
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
License: Apache-2.0
Description: 'AWS CloudFormation Sample Template'
Parameters:
NewUsername:
NoEcho: 'false'
Type: String
Description: New account username
MinLength: '1'
MaxLength: '41'
ConstraintDescription: the username must be between 1 and 41 characters
Password:
NoEcho: 'true'
Type: String
Description: New account password
MinLength: '1'
MaxLength: '41'
ConstraintDescription: the password must be between 1 and 41 characters
Resources:
CFNUser:
Type: AWS::IAM::User
Properties:
LoginProfile:
Password: !Ref 'Password'
UserName : !Ref 'NewUsername'
CFNAdminGroup:
Type: AWS::IAM::Group
Admins:
Type: AWS::IAM::UserToGroupAddition
Properties:
GroupName: !Ref 'CFNAdminGroup'
Users: [!Ref 'CFNUser']
CFNAdminPolicies:
Type: AWS::IAM::Policy
Properties:
PolicyName: CFNAdmins
PolicyDocument:
Statement:
- Effect: Allow
Action: '*'
Resource: '*'
Condition:
StringEquals:
aws:RequestedRegion:
- ap-south-1
- us-east-1
Groups: [!Ref 'CFNAdminGroup']
CFNKeys:
Type: AWS::IAM::AccessKey
Properties:
UserName: !Ref 'CFNUser'
Outputs:
AccessKey:
Value: !Ref 'CFNKeys'
Description: AWSAccessKeyId of new user
Export:
Name: 'accessKey'
SecretKey:
Value: !GetAtt [CFNKeys, SecretAccessKey]
Description: AWSSecretAccessKey of new user
Export:
Name: 'secretKey'
Then here is an example of how you would import those values into UserData in the importing CloudFormation template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Test instance stack",
"Parameters": {
"KeyName": {
"Description": "The EC2 Key Pair to allow SSH access to the instance",
"Type": "AWS::EC2::KeyPair::KeyName"
},
"BaseImage": {
"Description": "The AMI to use for machines.",
"Type": "String"
},
"VPCID": {
"Description": "ID of the VPC",
"Type": "String"
},
"SubnetID": {
"Description": "ID of the subnet",
"Type": "String"
}
},
"Resources": {
"InstanceSecGrp": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Instance Security Group",
"SecurityGroupIngress": [{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}],
"SecurityGroupEgress": [{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}],
"VpcId": {
"Ref": "VPCID"
}
}
},
"SingleInstance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"KeyName": {
"Ref": "KeyName"
},
"ImageId": {
"Ref": "BaseImage"
},
"InstanceType": "t2.micro",
"Monitoring": "false",
"BlockDeviceMappings": [{
"DeviceName": "/dev/xvda",
"Ebs": {
"VolumeSize": "20",
"VolumeType": "gp2"
}
}],
"NetworkInterfaces": [{
"GroupSet": [{
"Ref": "InstanceSecGrp"
}],
"AssociatePublicIpAddress": "true",
"DeviceIndex": "0",
"DeleteOnTermination": "true",
"SubnetId": {
"Ref": "SubnetID"
}
}],
"UserData": {
"Fn::Base64": {
"Fn::Join": ["", [
"#!/bin/bash -xe\n",
"yum install httpd -y\n",
"sudo sh -c \"echo ",
{ "Fn::ImportValue" : "secretKey" },
" >> /home/ec2-user/mysecret.txt\" \n",
"sudo sh -c \"echo ",
{ "Fn::ImportValue" : "accessKey" },
" >> /home/ec2-user/myaccesskey.txt\" \n"
]]
}
}
}
}
}
}
In this example I am just echoing the value of the import into a file. If you SSH onto SingleInstance and look at /var/lib/cloud/instance/scripts/part-001, you will see what the rendered user data script looks like on the server itself. In my case the contents of that file are (the key values aren't real):
#!/bin/bash -xe
yum install httpd -y
sudo sh -c "echo hAc7/TJA123143235ASFFgKWkKSjIC4 >> /home/ec2-user/mysecret.txt"
sudo sh -c "echo AKIAQ123456789123D >> /home/ec2-user/myaccesskey.txt"
Using this as a starting point you can do whatever you need to with the import value.
I've tested all of this with the exact scripts above and it all works.
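Going one step further (this part is only a sketch, not one of the tested scripts above), the same pattern could build the aws-ec2-assign-elastic-ip call from the question inside UserData, assuming the tool is already installed on the instance and accepts the keys via --access-key/--secret-key as shown in the question:
"UserData": {
"Fn::Base64": {
"Fn::Join": ["", [
"#!/bin/bash -xe\n",
"aws-ec2-assign-elastic-ip --access-key ",
{ "Fn::ImportValue": "accessKey" },
" --secret-key ",
{ "Fn::ImportValue": "secretKey" },
" --valid-ips 35.174.198.170\n"
]]
}
}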
What is suggested in the comments seems to be correct. I can directly refer to the export name (e.g. 'accessKey' in this case) using Fn::ImportValue!
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
License: Apache-2.0
Description: 'AWS CloudFormation Sample Template'
Resources:
CFNUser:
Type: AWS::IAM::User
Outputs:
AccessKey:
Value:
Fn::ImportValue: accessKey
Description: AWSAccessKeyId of new user
For example, the above template will return the value of accessKey if it has already been exported by some other template.
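To confirm that the export is visible to other stacks in the same account and region, you can also list the exports from the CLI, for example:
aws cloudformation list-exports --query "Exports[?Name=='accessKey']"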
Whenever I try to install Ory Kratos with Helm on Kubernetes, it doesn't work.
Here is my values.yaml file
kratos:
config:
dsn: postgres://admin:Strongpassword#10.43.90.243:5432/postgres_db
secrets:
cookie:
- randomsecret
cipher:
- randomsecret
default:
- randomsecret
identity:
default_schema_id: default
schemas:
- id: default
url: file:///etc/config/identity.default.schema.json
courier:
smtp:
connection_uri: smtps://username:password#smtp.gmail.com
selfservice:
default_browser_return_url: http://127.0.0.1:4455/
automigration:
enabled: true
identitySchemas:
'identity.default.schema.json': |
{
"$id": "https://schemas.ory.sh/presets/kratos/identity.email.schema.json",
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Person",
"type": "object",
"properties": {
"traits": {
"type": "object",
"properties": {
"email": {
"type": "string",
"format": "email",
"title": "E-Mail",
"ory.sh/kratos": {
"credentials": {
"password": {
"identifier": true
}
},
"recovery": {
"via": "email"
},
"verification": {
"via": "email"
}
}
}
},
"required": [
"email"
],
"additionalProperties": false
}
}
}
I run the command helm install kratos -f values.yaml ory/kratos. It pauses for a while and then outputs:
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
It then creates one job that repeatedly spawns kratos-automigrate pods; each pod crashes within a couple of minutes with status "Error", and a new one is created.
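For reference, this is how I inspect the crashing pods (standard kubectl commands; I am assuming the job is named kratos-automigrate, as the pod names suggest, and the pod name changes on every retry):
kubectl get pods
kubectl logs job/kratos-automigrate
kubectl describe pod <kratos-automigrate-pod-name>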
I am currently attempting to have an AWS::Serverless::HttpApi integrate with a group of AWS::Serverless::Function resources. The goal is to define these resources within a SAM template, and define the actual API using a swagger file.
I have my SAM template defined as so:
Resources:
apiPing:
Type: AWS::Serverless::Function
Properties:
Description: 'Ping'
CodeUri: ../bin/cmd-api-ping.zip
Handler: cmd-api-ping
Runtime: go1.x
Role:
Fn::GetAtt: apiLambdaRole.Arn
Events:
PingEvent:
Type: HttpApi
Properties:
ApiId: !Ref api
Path: /ping
Method: post
api:
Type: AWS::Serverless::HttpApi
Properties:
StageName: prod
DefinitionBody:
Fn::Transform:
Name: AWS::Include
Parameters:
Location: swagger.yaml
AccessLogSettings:
DestinationArn: !GetAtt accessLogs.Arn
Format: $context.requestId
And my swagger file:
openapi: 3.0.1
info:
title: 'API'
version: 2019-10-13
paths:
/ping:
post:
summary: 'invoke ping'
operationId: 'apiPing'
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/PingRequest'
required: true
responses:
'200':
description: 'Successful'
content:
application/json:
schema:
$ref: '#/components/schemas/PongResponse'
x-amazon-apigateway-integration:
httpMethod: "POST"
type: aws_proxy
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${apiPing.Arn}/invocations
responses:
default:
statusCode: "200"
contentHandling: "CONVERT_TO_TEXT"
passthroughBehavior: "when_no_match"
components:
schemas:
PingRequest:
description: 'a ping request'
type: object
properties:
ping:
description: 'some text'
type: string
PongResponse:
description: 'a pong response'
type: object
properties:
pong:
description: 'some text'
type: string
This template deploys without any errors, however there is no integration attached to the /ping POST route.
The transformed template in CloudFormation does show a loaded swagger file:
"api": {
"Type": "AWS::ApiGatewayV2::Api",
"Properties": {
"Body": {
"info": {
"version": 1570924800000,
"title": "API"
},
"paths": {
"/ping": {
"post": {
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PingRequest"
}
}
},
"required": true
},
"x-amazon-apigateway-integration": {
"contentHandling": "CONVERT_TO_TEXT",
"responses": {
"default": {
"statusCode": "200"
}
},
"uri": {
"Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${apiPing.Arn}/invocations"
},
"httpMethod": "POST",
"passthroughBehavior": "when_no_match",
"type": "aws_proxy"
},
"summary": "invoke ping",
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PongResponse"
}
}
},
"description": "Successful"
}
},
"operationId": "apiPing"
}
}
},
"openapi": "3.0.1",
"components": {
"schemas": {
"PingRequest": {
"type": "object",
"description": "a ping request",
"properties": {
"ping": {
"type": "string",
"description": "some text"
}
}
},
"PongResponse": {
"type": "object",
"description": "a pong response",
"properties": {
"pong": {
"type": "string",
"description": "some text"
}
}
}
}
},
"tags": [
{
"name": "httpapi:createdBy",
"x-amazon-apigateway-tag-value": "SAM"
}
]
}
}
}
I'm trying to understand what I may need to change or add in order to attach the integration to the HTTP API. I can't find any clear explanation in the AWS documentation.
I have managed to resolve this. AWS::Serverless::HttpApi creates an AWS::ApiGatewayV2::Api resource, which requires a different integration format than the earlier (REST) API Gateway.
x-amazon-apigateway-integration has a payloadFormatVersion key. Despite documentation suggesting both 1.0 and 2.0 are supported, it seems 2.0 must be used here. As such, my x-amazon-apigateway-integration has become the following (I did clean it up a bit):
x-amazon-apigateway-integration:
payloadFormatVersion: "2.0"
httpMethod: "POST"
type: "aws_proxy"
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${apiPing.Arn}/invocations
responses:
default:
statusCode: "200"
connectionType: "INTERNET"
And with this, integration is applied upon deployment.
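If needed, the presence of the integration can be double-checked after deployment with the CLI (the API id here is a placeholder):
aws apigatewayv2 get-integrations --api-id <api-id>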
I have created a JPS file using the documentation at https://docs.jelastic.com/application-manifest, but there is no clear documentation on using PostgreSQL.
Jelastic JPS Node:
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxxx",
"user": "xxx",
"dump": "xxx.sql"
}
}
Error while configuring the environment:
"data": {
"result": 11005,
"source": "marketplace",
"error": "database query error: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=10.101.3.225)(port=3306)(type=master) : Connection refused (Connection refused)"
}
I have provided the whole JPS file content here. The error occurs when importing the database; everything else in the configs object works fine.
{
"jpsVersion": "0.1",
"jpsType": "install",
"application": {
"id": "xxx",
"name": "xxx",
"version": "0.0.1",
"logo": "http://example.com/img/logo.png",
"type": "php",
"homepage": "http://example.com/",
"description": {
"en": "xxx"
},
"env": {
"topology": {
"ha": false,
"engine": "php7.2",
"ssl": false,
"nodes": [
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "nginxphp"
},
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "postgres9"
}
]
},
"upload": [
{
"nodeType": "nginxphp",
"sourcePath": "https://example.com/xxx.conf",
"destPath": "${SERVER_CONF_D}/xxx.conf"
}
],
"deployments": [
{
"archive": "https://example.com/xxx.zip",
"name": "xxx.zip",
"context": "ROOT"
}
],
"configs": [
{
"nodeType": "nginxphp",
"restart": true,
"path": "${SERVER_CONF_D}/xxx.conf",
"replacements": [
{
"pattern":"/usr/share/nginx/html",
"replacement":"${SERVER_WEBROOT}"
}
]
},
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxx",
"user": "xxx",
"dump": "https://example.com/xxx.sql"
}
}, {
"restart": false,
"nodeType": "nginxphp",
"path": "${SERVER_WEBROOT}/ROOT/server/php/config.inc.php",
"replacements": [{
"replacement": "${nodes.postgres9.address}",
"pattern": "localhost"
}, {
"replacement": "${nodes.postgres9.database.password}",
"pattern": "xxx"
}
]
}
]
},
"success": {
"text": "Installation completed. username: admin and password: xxx"
}
}
}
Since actions are disabled for PostgreSQL so far (the action is executed only for mysql5, mariadb, and mariadb10 containers), we've improved your manifest based on the recent updates. YAML was used because it is clearer to read and understand:
jpsVersion: 0.1
jpsType: install
name: xxx
version: 0.0.1
logo: http://example.com/img/logo.png
engine: php7.2
nodes:
- cloudlets: 16
nodeType: nginxphp
- cloudlets: 16
nodeType: postgres9
onInstall:
- upload [nginxphp]:
sourcePath: https://example.com/xxx.conf
destPath: ${SERVER_CONF_D}/xxx.conf
- deploy:
archive: https://example.com/xxx.zip
name: xxx.zip
context: ROOT
- replaceInFile [nginxphp]:
path: ${SERVER_CONF_D}/xxx.conf
replacements:
- pattern: /usr/share/nginx/html
replacement: ${SERVER_WEBROOT}
- restartNodes [nginxphp]
- replaceInFile [nginxphp]:
path: ${SERVER_WEBROOT}/ROOT/server/php/config.inc.php
replacements:
- pattern: localhost
replacement: ${nodes.postgres9.address}
- pattern: xxx
replacement: ${nodes.postgres9.password}
- cmd [postgres9]: printf "PGPASSWORD=${nodes.postgres9.password};\nexport PGPASSWORD;\npsql postgres webadmin -c \"CREATE DATABASE Jelastic;\"\n" > /tmp/createDb
- cmd [postgres9]: chmod +x /tmp/createDb && /tmp/createDb
success: Installation completed. username admin and password xxx
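If the SQL dump from the original manifest still needs to be loaded, two more steps could be appended under onInstall in the same style (only a sketch; the dump URL and database name are the placeholders from the question):
- cmd [postgres9]: curl -fsSL https://example.com/xxx.sql -o /tmp/xxx.sql
- cmd [postgres9]: PGPASSWORD=${nodes.postgres9.password} psql xxx webadmin -f /tmp/xxx.sql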
Please note that you can debug every action in the /console tab.
I am trying to run a CloudFormation stack to create an EMR cluster and connect a load balancer to the master node of that EMR cluster.
To generate the EMR cluster I use the following code in my stack:
"EmrCluster": {
"Type": "AWS::EMR::Cluster",
"DependsOn": "nat0a30a49f0c2913dc4",
"Properties": {
"Name": "EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Hive"
},
{
"Name": "Hadoop"
},
{
"Name": "Livy"
},
{
"Name": "Hue"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"AdditionalMasterSecurityGroups": [
{
"Ref": "sgMaster"
}
],
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "subnet03ead9bfe3352d90b"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m3.xlarge",
"Name": "Master",
},
"CoreInstanceGroup": {
"InstanceCount": 2,
"InstanceType": "m3.xlarge",
"Name": "Core",
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"JobFlowRole": "EMR_EC2_DefaultRole",
"ServiceRole": "EMR_DefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
},
In order to connect the load balancer to the master node of the above cluster, I need the instance ID of the master node to be added in the following code:
"loadBalancer" : {
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties": {
"Subnets" : [ {"Ref": "subnet04a1fe03c3f3c42fd"}, {"Ref" : "subnet03ead9bfe3352d90b"} ]
}
},
"TargetGroup" : {
"Type" : "AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties" : {
"HealthCheckIntervalSeconds": 30,
"HealthCheckProtocol": "HTTP",
"HealthCheckTimeoutSeconds": 10,
"HealthyThresholdCount": 5,
"Matcher" : {
"HttpCode" : "200"
},
"Name": "MyTargets",
"Port": 8890,
"Protocol": "HTTP",
"Targets": [
{ "Id": {"Ref" : "MasternodeID"}, "Port": 8890 }
],
"UnhealthyThresholdCount": 3,
"VpcId": {"Ref" : "vpc07164705742b384fb"}
}
}
How can I find and pass the instance ID of the master node of the EMR cluster as "MasternodeID" in the target group? In simple words, how can I get the instance ID of the master node of an EMR cluster that is created in the same stack?
As the master public DNS is available in CloudFormation, I fetched the IP from that DNS name, and this works for me:
EMRMasterTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
DependsOn: EMRELB
Properties:
Name: emr-master-target-group
Protocol: HTTP
Port: 8088
VpcId: !Ref VPCId
HealthCheckPort: 8088
HealthCheckProtocol: HTTP
HealthCheckPath: /cluster
TargetType: ip
Targets:
- Id: !Join
- '.'
- - !Select [1, !Split ['-', !GetAtt EMRCluster.MasterPublicDNS]]
- !Select [2, !Split ['-', !GetAtt EMRCluster.MasterPublicDNS]]
- !Select [3, !Split ['-', !GetAtt EMRCluster.MasterPublicDNS]]
- !Select [0, !Split ['.', !Select [4, !Split ['-', !GetAtt EMRCluster.MasterPublicDNS]]]]
Port: 8088
Tags:
- Key: Name
Value: emr-master-target-group
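The snippet above registers the master by IP. If the actual instance ID is ever needed, AWS::EMR::Cluster does not expose it via GetAtt; one option is to look it up after stack creation, for example with the EMR CLI (a sketch only, the cluster id is a placeholder):
aws emr list-instances --cluster-id <cluster-id> --instance-group-types MASTER --query "Instances[0].Ec2InstanceId" --output text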
I would like to create a CloudFormation template whose instance type is "t2.micro". However, I couldn't find any example for this instance type. An EC2 instance of type "t2.micro" needs a VPC, etc.
Thanks.
You can make use of the following template snippet.
It will output the EC2 instance Public DNS name.
Note: I haven't tested this template!
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "CloudFormation template for creating an ec2 instance",
"Parameters": {
"KeyName": {
"Description": "Key Pair name",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default": "my_keypair_name"
},
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties":{
"CidrBlock": "10.0.0.0/16",
"EnableDnsHostnames": "true"
}
},
"Subnet":{
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {"Ref": "VPC"},
"CidrBlock": "10.0.0.0/24",
"AvailabilityZone": "us-east-1a"
}
},
"InstanceType": {
"Description": "Select one of the possible instance types",
"Type": "String",
"Default": "t2.micro",
"AllowedValues": ["t2.micro", "t2.small", "t2.medium"]
}
},
"Resources":{
"SecurityGroup":{
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "My security group",
"VpcId": {"Ref": "VPC"},
"SecurityGroupIngress": [{
"CidrIp": "0.0.0.0/0",
"FromPort": 22,
"IpProtocol": "tcp",
"ToPort": 22
}]
}
},
"Server": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-123456",
"InstanceType": {"Ref": "InstanceType"},
"KeyName": {"Ref": "KeyName"},
"SecurityGroupIds": [{"Ref": "SecurityGroup"}],
"SubnetId": {"Ref": "Subnet"}
}
}
},
"Outputs": {
"PublicName": {
"Value": {"Fn::GetAtt": ["Server", "PublicDnsName"]},
"Description": "Public name (connect via SSH)"
}
}
}
For more information see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
The stack below creates an EC2 instance; just specify parameters such as the VPC ID, subnet ID, security group ID, and instance type, plus the AMI ID. The instance type is t2.micro by default. This has been successfully tested and runs without errors; kindly comment if you have any queries. Here is the template:
{"Description": "CloudFormation template for creating an ec2 instance",
"Parameters": {
"KeyName": {
"Description": "Key Pair name",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default": "xxx-xxx"
},
"VPC": {
"Type": "AWS::EC2::VPC::Id",
"Default":"givevpcid"
},
"Subnet":{
"Type": "AWS::EC2::Subnet::Id",
"Default": "givesubnetid"
},
"InstanceType": {
"Description": "Select one of the possible instance types",
"Type": "String",
"Default": "t2.micro",
"AllowedValues": ["t2.micro", "t2.small", "t2.medium"]
},
"SecurityGroup":{
"Type": "AWS::EC2::SecurityGroup::Id",
"Default" : "givesecuritygroupid",
"AllowedValues": ["sg-xxxxx", "sg-yyy", "sg-zzz"]
}
},
"Resources":{
"Server": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-098789xxxxxxxxx",
"InstanceType": {"Ref": "InstanceType"},
"KeyName": {"Ref": "KeyName"},
"SecurityGroupIds": [{"Ref": "SecurityGroup"}],
"SubnetId": {"Ref": "Subnet"}
}
}
},
"Outputs": {
"PublicName": {
"Value": {"Fn::GetAtt": ["Server", "PublicDnsName"]},
"Description": "Public name (connect via SSH)"
}
}
}
You can use the following template, which works fine for me.
single-instance.yml
AWSTemplateFormatVersion: 2010-09-09
Description: >-
AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: Create
an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based
on the region in which the stack is run. This example creates an EC2 security
group for the instance to give you SSH access. **WARNING** This template
creates an Amazon EC2 instance. You will be billed for the AWS resources used
if you create a stack from this template.
Parameters:
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: 'AWS::EC2::KeyPair::KeyName'
ConstraintDescription: must be the name of an existing EC2 KeyPair.
InstanceType:
Description: WebServer EC2 instance type
Type: String
Default: t2.micro
ConstraintDescription: must be a valid EC2 instance type.
SSHLocation:
Description: The IP address range that can be used to SSH to the EC2 instances
Type: String
MinLength: '9'
MaxLength: '18'
Default: 0.0.0.0/0
AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Mappings:
AWSInstanceType2Arch:
t2.micro:
Arch: HVM64
AWSInstanceType2NATArch:
t2.micro:
Arch: NATHVM64
AWSRegionArch2AMI:
us-east-1:
HVM64: ami-0080e4c5bc078760e
HVMG2: ami-0aeb704d503081ea6
Resources:
EC2Instance:
Type: 'AWS::EC2::Instance'
Properties:
InstanceType: !Ref InstanceType
SecurityGroups:
- !Ref InstanceSecurityGroup
KeyName: !Ref KeyName
ImageId: !FindInMap
- AWSRegionArch2AMI
- !Ref 'AWS::Region'
- !FindInMap
- AWSInstanceType2Arch
- !Ref InstanceType
- Arch
InstanceSecurityGroup:
Type: 'AWS::EC2::SecurityGroup'
Properties:
GroupDescription: Enable SSH access via port 22
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: !Ref SSHLocation
Outputs:
InstanceId:
Description: InstanceId of the newly created EC2 instance
Value: !Ref EC2Instance
AZ:
Description: Availability Zone of the newly created EC2 instance
Value: !GetAtt
- EC2Instance
- AvailabilityZone
PublicDNS:
Description: Public DNSName of the newly created EC2 instance
Value: !GetAtt
- EC2Instance
- PublicDnsName
PublicIP:
Description: Public IP address of the newly created EC2 instance
Value: !GetAtt
- EC2Instance
- PublicIp
Then run the following command:
aws cloudformation create-stack --template-body file://single-instance.yml --stack-name single-instance --parameters ParameterKey=KeyName,ParameterValue=sample ParameterKey=InstanceType,ParameterValue=t2.micro
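Once the stack reaches CREATE_COMPLETE, the outputs (for example the public DNS name) can be read back, e.g.:
aws cloudformation describe-stacks --stack-name single-instance --query "Stacks[0].Outputs"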