AWS CloudFormation: Nested Sub with Dynamic References using {{resolve}} causes error and doesn't execute resolve to get value from Parameter Store - aws-cloudformation

I am trying to use an AWS CloudFormation template to create an EC2 instance with some user data generated using dynamic references and a cross-stack reference in the template. There is a parameter stored in AWS Systems Manager Parameter Store with Name: /MyCustomParameter and Value: Test1.
The idea is to pass a parameter to the template stack (Stack A) which refers to another CloudFormation stack (Stack B). Stack B exports a variable with the reference "StackB::ParameterStoreName". Stack A uses Fn::ImportValue: 'StackB::ParameterStoreName' to get its value, so that it can then be used with the dynamic references method to fetch the value from AWS SSM Parameter Store via {{resolve:ssm:/MyCustomParameter:1}} and pass it into the UserData field in the template. I am facing difficulties while trying to use a nested Fn::Sub: function with this use case.
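For context, the export on the Stack B side presumably looks something like this (a sketch inferred from the description above; the output's logical ID is made up, and whether the leading slash of the parameter name belongs in the exported value or in Stack A's resolve string has to be consistent):
Outputs:
  ParameterStoreName:
    Description: Name of the parameter in SSM Parameter Store
    Value: MyCustomParameter
    Export:
      Name: 'StackB::ParameterStoreName'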
I tried removing the | pipe and using double quotes with escaped newline characters, but that doesn't work.
I also tried using a different type of resource and its properties, where it worked. Below is an example of the code that worked.
Resources:
  TestBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName:
        Fn::Sub:
          - '${SSMParameterValue}-12345'
          - SSMParameterValue:
              Fn::Sub:
                - '{{resolve:ssm:${SSMParameterName}:1}}'
                - SSMParameterName:
                    Fn::ImportValue:
                      !Sub '${CustomStack}::ParameterStoreName'
Below is an extract of the current code I have:
Parameters:
  CustomStack:
    Type: "String"
    Default: "StackB"
Resources:
  MyCustomInstance:
    Type: 'AWS::EC2::Instance'
    Properties:
      UserData:
        Fn::Base64:
          Fn::Sub:
            - |
              #!/bin/bash -e
              #
              # Bootstrap and join the cluster
              /etc/eks/bootstrap.sh --b64-cluster-ca '${SSMParameterValue}' --apiserver-endpoint '${Endpoint}' '${ClusterName}'
            - SSMParameterValue:
                Fn::Sub:
                  - '{{resolve:ssm:/${SSMParameterName}:1}}'
                  - SSMParameterName:
                      Fn::ImportValue:
                        !Sub '${CustomStack}::ParameterStoreName'
              Endpoint:
                Fn::ImportValue:
                  !Sub '${CustomStack}::Endpoint'
              ClusterName:
                Fn::ImportValue:
                  !Sub '${CustomStack}::ClusterStackName'
Current Output:
#!/bin/bash -e
#
# Bootstrap and join the cluster
/etc/eks/bootstrap.sh --b64-cluster-ca `{{resolve:ssm:MyCustomParameter:1}}` --apiserver-endpoint 'https://04F1597P0HJ11FQ54K0YFM9P19.gr7.us-east-1.eks.amazonaws.com' 'eks-cluster-1'
Expected Output:
#!/bin/bash -e
#
# Bootstrap and join the cluster
/etc/eks/bootstrap.sh --b64-cluster-ca `Test1` --apiserver-endpoint 'https://04F1597P0HJ11FQ54K0YFM9P19.gr7.us-east-1.eks.amazonaws.com' 'eks-cluster-1'

I think it is because the resolve is inside the Base64, maybe...? When CloudFormation processes that line it just sees a block of Base64 and not the {{resolve:...}} code. The "resolves" get processed in a later pass than the !Functions, because they can't be resolved until the stack is actually being provisioned.
To work around it, I added a temporary SSM parameter:
eksCAtmp:
  Type: "AWS::SSM::Parameter"
  Properties:
    Type: String
    Value:
      Fn::Join:
        - ''
        - - '{{resolve:ssm:'
          - Fn::ImportValue:
              !Sub "${ClusterName}-EksCA"
          - ':1}}'
That imports the original SSM parameter and gets rid of the requirement to "import" and resolve it again. So now you can use !GetAtt eksCAtmp.Value
eg:
UserData: !Base64
  "Fn::Sub":
    - |
      #!/bin/bash
      set -o xtrace
      /etc/eks/bootstrap.sh ${ClusterName} --b64-cluster-ca ${CA} --apiserver-endpoint ${endpoint} --kubelet-extra-args '--read-only-port=10255'
      /opt/aws/bin/cfn-signal --exit-code $? \
          --stack ${AWS::StackName} \
          --resource NodeGroup \
          --region ${AWS::Region}
    - endpoint:
        Fn::ImportValue:
          !Sub "${ClusterName}-EksEndpoint"
      CA: !GetAtt eksCAtmp.Value
(Of course if they allowed cross stack exports to be more than 1024 characters, we wouldn't need this for firing up EKS on a private network.)

You can write it like below:
UserData:
  Fn::Base64:
    Fn::Sub:
      - |
        #!/bin/bash -e
        #
        # Bootstrap and join the cluster
        export SSMParameterValue=$(aws --region ${AWS::Region} ssm get-parameters --names ${SSMParameterName} --query 'Parameters[0].Value' --output text)
        /etc/eks/bootstrap.sh --b64-cluster-ca \`$SSMParameterValue\` --apiserver-endpoint '${Endpoint}' '${ClusterName}'
      - SSMParameterName:
          Fn::ImportValue:
            !Sub '${CustomStack}::ParameterStoreName'
        Endpoint:
          Fn::ImportValue:
            !Sub '${CustomStack}::Endpoint'
        ClusterName:
          Fn::ImportValue:
            !Sub '${CustomStack}::ClusterStackName'
Don't forget that your EC2 instance role needs the ssm:GetParameters permission.
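For reference, a minimal sketch of what that permission could look like on the instance role (the NodeInstanceRole logical ID and the policy name are hypothetical; the parameter ARN assumes the /MyCustomParameter parameter from the question):
NodeInstanceRoleSSMPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: allow-ssm-get-parameters
    Roles:
      - !Ref NodeInstanceRole   # hypothetical logical ID of the EC2 instance role
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - ssm:GetParameters
          Resource: !Sub arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/MyCustomParameter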

Related

How to call a resource from one yaml template to another yaml template using cloudformation

I need some guidance on CloudFormation templates.
I have a stack called test1.yaml, where I created an IAM role called S3Role.
Now I have another stack called test2.yaml, where I created a managed policy to attach to an existing IAM role.
Now I want to reference the S3Role from test1.yaml in the managed policy in test2.yaml.
Can anyone help me with the script?
Obviously, due to the lack of details in your question, it's not possible to provide an exact script. But I can provide general pseudo-code.
test1.yaml
You will have to export the S3Role Arn or Name
Resources:
  S3Role:
    Type: AWS::IAM::Role
    <rest of role definition>
Outputs:
  RoleArn:
    Value: !GetAtt S3Role.Arn
    Export:
      Name: RoleArn
test2.yml
You will have to import the role's exported Arn (or name) from test1.yaml:
Resources:
  SomeResource:
    Properties:
      Role: !ImportValue RoleArn
Hope this helps.
You need to export the role from stack 1 and then import it in stack 2:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
Providing the complete script for cross-referencing an AWS resource in a CloudFormation template.
test1.yaml has an IAM role (logical ID: IAMRole) which we export through the Outputs block. Also notice that the indentation of the Outputs block is the same as that of the Resources block.
The Outputs block serves many purposes. From the AWS documentation:
The optional Outputs section declares output values that you can import into other stacks (to create cross-stack references), return in response (to describe stack calls), or view on the AWS CloudFormation console.
test1.yaml
Resources:
  IAMRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: TrustPolicy
            Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
      Path: /
      RoleName: IAMRole
Outputs:
  ExportIAMRole:
    Description: Export the IAMRole to use in test2.yaml
    Value: !Ref IAMRole
    Export:
      Name: IAMRole
In test2.yaml we import the value by referencing the name we gave it under Export in the Outputs block.
test2.yaml
Resources:
  IAMPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: IAMPolicy
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - kms:ListAliases
              - kms:Encrypt
              - kms:Decrypt
            Resource: "*"
      Roles:
        - !ImportValue IAMRole

store the values to a variable after executing the template

This CloudFormation template is working as expected:
https://github.com/shantanuo/cloudformation/blob/master/updated/esbck.yml
But how do I output the ARN of the IAM role that it creates?
To add to Marcin's answer: if you export the output, it becomes available for use in other CloudFormation templates deployed in the same AWS account (in the same region).
Add an export to the output:
Outputs:
  RoleArn:
    Value: !GetAtt EsSnapshotRole.Arn
    Export:
      Name: EsSnapshotRoleArn
Once this is done, you can use the Fn::ImportValue intrinsic function in other templates
# some-other-template.yml
Resources:
  SomeResourceRequiringRoleArn:
    Type: AWS::SomeService::SomeResource
    Properties:
      IamRoleArn: !ImportValue EsSnapshotRoleArn
You have to add an Outputs section:
Outputs:
  RoleArn:
    Value: !GetAtt EsSnapshotRole.Arn

CloudFormation: Return ARN of Subnet

Is there another way to get the ARN of a created Subnet resource (AWS::EC2::Subnet) via the Fn::GetAtt intrinsic function? The Subnet resource only returns AvailabilityZone, Ipv6CidrBlocks, NetworkAclAssociationId, and VpcId.
Documentation: https://docs.aws.amazon.com/en_pv/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#aws-resource-ec2-subnet-return-values
The ARN of a subnet is in the format arn:aws:ec2:REGION:ACCOUNT_ID:subnet/SUBNET_ID, so by using the intrinsic function Fn::Join you can generate the ARN of the subnet yourself.
Example: arn:aws:ec2:ap-southeast-1:767022272945:subnet/subnet-0d42d2235s3a2531d
!Join
  - ''
  - - 'arn:aws:ec2:'
    - !Ref 'AWS::Region'
    - ':'
    - !Ref 'AWS::AccountId'
    - ':subnet/'
    - Fn::ImportValue:
        Fn::Sub: VPC-SubnetId
A simpler solution, as noted in the comment by @sigpwned, is to just use !Sub.
Even if the subnet you're referencing isn't local to your template, you can still pass it in as a parameter to the template, or import it as in the original answer if it is available to the stack.
!Sub "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${Subnet}"

Use Application Autoscaling Group with ELB Healthchecks

Has anybody succeeded in using an Application Auto Scaling group with an ELB health check? It replaces the instances over and over. Is there a way to prevent that?
My template looks like this:
Resources:
  ECSAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones:
        - Fn::Select:
            - '0'
            - Fn::GetAZs:
                Ref: AWS::Region
        - Fn::Select:
            - '1'
            - Fn::GetAZs:
                Ref: AWS::Region
        - Fn::Select:
            - '2'
            - Fn::GetAZs:
                Ref: AWS::Region
      VPCZoneIdentifier:
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet1
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet2
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet3
      HealthCheckGracePeriod: !Ref ASGHealthCheckGracePeriod
      HealthCheckType: !Ref ASGHealthCheckType
      LaunchTemplate:
        LaunchTemplateId: !Ref ECSLaunchTemplate
        Version: 1
      MetricsCollection:
        - Granularity: 1Minute
      ServiceLinkedRoleARN:
        !Sub arn:aws:iam::${AWS::AccountId}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling
      DesiredCapacity: !Ref ASGDesiredCapacity
      MinSize: !Ref ASGMinSize
      MaxSize: !Ref ASGMaxSize
      TargetGroupARNs:
        - Fn::ImportValue: !Sub ${EnvironmentName}-WebTGARN
        - Fn::ImportValue: !Sub ${EnvironmentName}-DataTGARN
        - Fn::ImportValue: !Sub ${EnvironmentName}-GeneratorTGARN
      TerminationPolicies:
        - OldestInstance
The launch template looks like this:
ECSLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateName: ECSLaunchtemplate
    LaunchTemplateData:
      ImageId: !FindInMap [AWSRegionToAMI, !Ref "AWS::Region", AMI]
      InstanceType: !Ref InstanceType
      SecurityGroupIds:
        - Fn::ImportValue: !Sub ${EnvironmentName}-ECSInstancesSecurityGroupID
      IamInstanceProfile:
        Arn:
          Fn::ImportValue:
            !Sub ${EnvironmentName}-ecsInstanceProfileARN
      Monitoring:
        Enabled: true
      CreditSpecification:
        CpuCredits: standard
      TagSpecifications:
        - ResourceType: instance
          Tags:
            - Key: "keyname1"
              Value: "value1"
      KeyName:
        Fn::ImportValue:
          !Sub ${EnvironmentName}-ECSKeyPairName
      UserData:
        "Fn::Base64": !Sub
          - |
            #!/bin/bash
            yum update -y
            yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
            yum update -y aws-cfn-bootstrap hibagent
            /opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchTemplate --region ${AWS::Region}
            /opt/aws/bin/cfn-signal -e $? --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSAutoScalingGroup
            /usr/bin/enable-ec2-spot-hibernation
            echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config
            PATH=$PATH:/usr/local/bin
          - ECSCluster:
              Fn::ImportValue:
                !Sub ${EnvironmentName}-ECSClusterName
The load balancer config looks like this:
ApplicationLoadBalancerInternet:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: !Sub ${EnvironmentName}-${Project}-ALB-Internet
    IpAddressType: !Ref ELBIpAddressType
    Type: !Ref ELBType
    Scheme: internet-facing
    Subnets:
      - Fn::ImportValue:
          !Sub ${EnvironmentName}-PublicSubnet1
      - Fn::ImportValue:
          !Sub ${EnvironmentName}-PublicSubnet2
      - Fn::ImportValue:
          !Sub ${EnvironmentName}-PublicSubnet3
    SecurityGroups:
      - Fn::ImportValue:
          !Sub ${EnvironmentName}-ALBInternetSecurityGroupID
As said, it's working fine with EC2 health checks, but when I switch to ELB health checks the instances are drained and the ASG spins up new instances.
Merci A
I would troubleshoot it like this:
Delete this stack.
Edit your template and change the ASG health-check type to ELB (for now).
Create a new stack either from the CLI or the console. I recommend the CLI since you might have to recreate it and it's far simpler/quicker than the console. The most important step is to enable the "Disable Rollback" feature when the stack fails; otherwise, you won't be able to find out the reason for the failure.
I believe you will also be creating some IAM resources as a part of this template, so an example CLI command would be this for your quick reference:
aws cloudformation create-stack --stack-name Name-of-your-stack --template-body file://template.json --tags Key=Name,Value=Your_Tag_Value --profile default --region region --capabilities CAPABILITY_NAMED_IAM --disable-rollback
For more information on the requirement of CAPABILITY_NAMED_IAM, see this SO answer.
Now, when you create the stack, it's still going to fail, but now we can troubleshoot it. The reason we kept the health-check type as ELB in step 2 is that we actually want the ASG to replace the instances on failed health checks, so we can find out the reason in the ASG's "Activity History" tab in the console.
Chances are high that you will see a message far more meaningful than the one returned by CloudFormation.
Now that you have that error message, change the health-check type of the ASG to EC2 from the console, because we do not want the ASG to start a loop of "launch and terminate" for EC2 instances.
Now, log in to your EC2 instance and look in the access logs for the hits from your ELB health check. In httpd, a successful health check gets an HTTP 408.
Also please note that if the ELB health-check type is TCP:80, then make sure there isn't any port conflict on your server, and if you have selected HTTP:80, make sure you have specified a path/file as your ping target.
Since your script has some user data as well, please also review /var/log/cfn-init.log and other entries for any error messages. A simple option would be: grep error /var/log/*
Now, at this point, you just have to make sure the ELB health check succeeds and the instance is "in service" behind the ELB. The most important step is to document all the troubleshooting steps, because you never know which step out of the many you tried actually fixed the health check.
Once you are able to find the cause, just put it in the template and you should be good to go. I have seen many templates going wrong at Step 8.
Also, do not forget to change the ASG health check back to ELB once again.
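For reference, the HTTP health-check settings mentioned above (ping path, port) live on the target group rather than on the ASG. A minimal sketch of such a target group, where the logical ID, VPC export name, port, and /health path are assumptions and not taken from the question's stacks:
WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !ImportValue MyVpcId            # hypothetical export name
    Port: 80
    Protocol: HTTP
    HealthCheckProtocol: HTTP
    HealthCheckPath: /health               # the "path/file" ping target mentioned above
    HealthCheckPort: traffic-port
    HealthCheckIntervalSeconds: 30
    HealthCheckTimeoutSeconds: 5
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 5
    Matcher:
      HttpCode: 200-299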

Output a list in cloud formation

I have a parameter:
ClusterSubnets:
  Description: Subnets where cluster will reside. Typically private. Use multiples, each in a different AZ for HA.
  ConstraintDescription: comma separated list of valid Subnet IDs
  Type: List<AWS::EC2::Subnet::Id>
I'm trying to output this:
ClusterSubnets:
  Description: Subnets used by cluster
  Value: !Ref ClusterSubnets
  Export:
    Name: !Sub "${AWS::StackName}-ClusterSubnets"
But I get this error: Template format error: The Value field of every Outputs member must evaluate to a String.
How can I export a list?
You need to join the elements of the list into a string. Try something like this:
ClusterSubnets:
  Description: Subnets used by cluster
  Value: !Join
    - ','
    - !Ref ClusterSubnets
  Export:
    Name: !Sub "${AWS::StackName}-ClusterSubnets"
Here is the relevant AWS documentation.
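If a consuming stack later needs the subnets back as a list (for example for a VPCZoneIdentifier), a common companion pattern is to split the imported string again. A sketch, assuming the exporting stack's name is supplied through a hypothetical NetworkStackName parameter:
VPCZoneIdentifier: !Split
  - ','
  - Fn::ImportValue: !Sub "${NetworkStackName}-ClusterSubnets"
The export stays a plain string, which satisfies the Outputs constraint, and Fn::Split turns it back into the list that subnet-typed properties expect.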