Cannot Delete Amazon ECS Cluster using CloudFormation

I am using the following CloudFormation template to create an ECS cluster:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'AWS Cloudformation Template to create the Infrastructure'
Resources:
  ECSCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: 'Blog-iac-test-1'
  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles: [!Ref 'EC2Role']
  ECSAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier:
        - subnet-****
      LaunchConfigurationName: !Ref 'ECSAutoscalingLC'
      MinSize: '1'
      MaxSize: '2'
      DesiredCapacity: '1'
  ECSAutoscalingLC:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: true
      ImageId: 'ami-b743bed1'
      SecurityGroups:
        - sg-****
      InstanceType: 't2.micro'
      IamInstanceProfile: !Ref 'EC2InstanceProfile'
      KeyName: 'test'
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          echo ECS_CLUSTER=Blog-iac-test-1 >> /etc/ecs/ecs.config
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ec2.amazonaws.com]
            Action: ['sts:AssumeRole']
      Path: /
  ECSServicePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: "root"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: ['ecs:*', 'logs:*', 'ecr:*', 's3:*']
            Resource: '*'
      Roles: [!Ref 'EC2Role']
The stack is created successfully, but while destroying it I get the following error:
The Cluster cannot be deleted while Container Instances are active or draining.
I was able to delete the stack earlier; this issue only started occurring recently.
What could be a workaround to avoid this issue? Do I need to add some dependencies?

As mentioned in this AWS documentation link, have you tried deregistering the instances as well?
Deregister Container Instances: Before you can delete a cluster, you must deregister the container instances inside that cluster. For each container instance inside your cluster, follow the procedures in Deregister a Container Instance to deregister it.
Alternatively, you can use the following AWS CLI command to deregister your container instances. Be sure to substitute the Region, cluster name, and container instance ID for each container instance that you are deregistering.
aws ecs deregister-container-instance --cluster default --container-instance container_instance_id --region us-west-2 --force
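Another angle on the "Should I add some dependencies?" part of the question: the template only references the cluster name as a literal string inside UserData, so CloudFormation sees no dependency between the Auto Scaling group and the cluster and may delete the cluster first, while instances are still registered. A hedged sketch (my addition, not from the answer above) of making that dependency explicit, so the ASG and its instances are torn down before the cluster:

```yaml
# Sketch: an explicit DependsOn makes CloudFormation delete the ASG
# (terminating its instances) before the ECS cluster, since deletion
# happens in reverse dependency order.
ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  DependsOn: ECSCluster
  Properties:
    VPCZoneIdentifier:
      - subnet-****            # placeholder from the question
    LaunchConfigurationName: !Ref ECSAutoscalingLC
    MinSize: '1'
    MaxSize: '2'
    DesiredCapacity: '1'
```

Note that instances may still take a short while to deregister after termination, so the deregistration steps above remain the reliable fallback.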

Related

Cloudformation Template - IAM Roles and Lambda Resource

I want to create a CloudFormation StackSet with resources like IAM roles and Lambda functions in different regions. When I tried to deploy these resources, the deployment failed because IAM roles are global: the StackSet tries to create the role again in the second region, and the whole StackSet fails.
Is there any way I can tell the StackSet to deploy global resources in one region and resources like Lambda in all the other regions?
Sadly there is not. You have to split your template so that global resources are created as normal regional stacks.
I went through many resources and finally found a solution. If we simply split the template into StackSets, dependent resources break, because CloudFormation creates resources in parallel; i.e., the Lambda will try to deploy before the global role is created and will fail because the role it requires is not yet available.
Hence we can add a condition to each of the global resources, like below:
Conditions:
  RegionCheck: !Equals
    - !Ref "AWS::Region"
    - us-east-1
And add the condition in the Resources section, as below:
Resources:
  GlobalRolelambda:
    Type: 'AWS::IAM::Role'
    Condition: RegionCheck
    Properties:
      RoleName: !Ref LambdaExecutionRole
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/ReadOnlyAccess'
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Policies:
        - PolicyName: lambda-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'glue:GetConnections'
                  - 'mediastore:ListContainers'
                  - 'mediastore:GetContainerPolicy'
                Resource: '*'
But after doing this, the problem would still exist: if you add the Lambda resource with a DependsOn attribute, the role gets created in one region but not in the second, so the Lambda fails to create in the second region. We need to add a wait condition to the template to handle this, as below:
Conditions:
  CreateLambdaRole: !Equals [ !Ref LambdaRoleName, 'false' ]
  CreateLamdaRoleRegion: !And
    - !Condition RegionCheck
    - !Condition CreateLambdaRole
and add the resources below after the role resource:
  CreateRoleWaitHandle:
    Condition: CreateLamdaRoleRegion
    DependsOn: GlobalRolelambda
    Type: "AWS::CloudFormation::WaitConditionHandle"
  # added, since DependsOn: !If is not possible; triggered via WaitCondition when CreateLamdaRoleRegion is false
  WaitHandle:
    Type: "AWS::CloudFormation::WaitConditionHandle"
  # added, since DependsOn: !If is not possible
  WaitCondition:
    Type: "AWS::CloudFormation::WaitCondition"
    Properties:
      Handle: !If [CreateLamdaRoleRegion, !Ref CreateRoleWaitHandle, !Ref WaitHandle]
      Timeout: "1"
      Count: 0
And now refer to this in the Lambda resource:
  lambdaProcessorFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: Lambda-processor
      Description: ''
      Handler: index.handler
      Role:
        Fn::Sub: 'arn:aws:iam::${AWS::AccountId}:role/LambdaExecutionRole'
      Runtime: python3.6
      Timeout: 600
      MemorySize: 1024
      Code:
        S3Bucket: !Ref SourceBucketName
        S3Key: !Ref SourceBucketKey
    DependsOn: WaitCondition
Refer to the source links below, which might help:
https://garbe.io/blog/2017/07/17/cloudformation-hacks/
CloudFormation, apply Condition on DependsOn

Why does this SQSQueuePolicy fail to create in AWS CloudFormation?

I've created the following CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Description: Creates all resources necessary to send SES emails & track bounces/complaints through AWS
Resources:
  IAMUser:
    Type: 'AWS::IAM::User'
    Properties:
      UserName: iam-ses-sqs
  SQSQueue:
    Type: 'AWS::SQS::Queue'
    Properties:
      QueueName: ses-queue
  SNSTopic:
    Type: 'AWS::SNS::Topic'
    Properties:
      TopicName: sns-notifications
  IAMUserPolicy:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: IAM_Send_SES_Email
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action:
              - 'SES:SendEmail'
              - 'SES:SendRawEmail'
            Resource: 'arn:aws:ses:*:*:identity/*'
      Users:
        - !Ref IAMUser
  SQSQueuePolicy:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      Queues:
        - !Ref SQSQueue
      PolicyDocument:
        Statement:
          - Action:
              - 'SQS:ReceiveMessage'
              - 'SQS:DeleteMessage'
              - 'SQS:GetQueueAttributes'
            Effect: Allow
            Resource: !Ref SQSQueue
            Principal:
              AWS:
                - !Ref IAMUser
  SNSTopicSubscription:
    Type: 'AWS::SNS::Subscription'
    Properties:
      Protocol: SQS
      Endpoint: !GetAtt
        - SQSQueue
        - Arn
      TopicArn: !Ref SNSTopic
I'd like to allow IAMUser to perform the SQS ReceiveMessage, DeleteMessage, and GetQueueAttributes actions on the SQSQueue resource. SQSQueue should also be subscribed to the SNSTopic.
When creating a stack using this template in CloudFormation, the SQSQueue, SNSTopic, SNSTopicSubscription, IAMUser, and IAMUserPolicy all create with no problem, in that order. However, the SQSQueuePolicy fails to create and generates the error message:
Invalid value for the parameter Policy. (Service: AmazonSQS; Status Code: 400; Error Code: InvalidAttributeValue; Request ID: {request id})
Why is this failing, and how should I modify the template to ensure that all resources and their associated policies/subscriptions are created successfully?
I found two problems in your CloudFormation template.
The first, as Marcin said, is that the resource reference must be the queue ARN, not the queue URL:
  Resource: !GetAtt SQSQueue.Arn
The second is that your AWS principal references your IAM user, but !Ref on an AWS::IAM::User returns the user name; use the account ID instead:
  Principal:
    AWS:
      - !Ref 'AWS::AccountId'
That said, I was able to successfully create the CloudFormation stack in my account with this template:
AWSTemplateFormatVersion: 2010-09-09
Description: Creates all resources necessary to send SES emails & track bounces/complaints through AWS
Resources:
  IAMUser:
    Type: 'AWS::IAM::User'
    Properties:
      UserName: iam-ses-sqs
  SQSQueue:
    Type: 'AWS::SQS::Queue'
    Properties:
      QueueName: ses-queue
  SQSQueuePolicy:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      Queues:
        - !Ref SQSQueue
      PolicyDocument:
        Statement:
          - Action:
              - 'SQS:ReceiveMessage'
              - 'SQS:DeleteMessage'
              - 'SQS:GetQueueAttributes'
            Effect: Allow
            Resource: !GetAtt SQSQueue.Arn
            Principal:
              AWS:
                - !Ref 'AWS::AccountId'
The following returns the queue URL, not the ARN:
  Resource: !Ref SQSQueue
But you need the queue ARN in the policy:
  Resource: !GetAtt SQSQueue.Arn
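To make the difference visible at a glance, here is a minimal Outputs fragment (illustrative, not part of the original templates) that surfaces both values side by side:

```yaml
Outputs:
  QueueUrl:
    Value: !Ref SQSQueue         # Ref on AWS::SQS::Queue returns the queue URL
  QueueArn:
    Value: !GetAtt SQSQueue.Arn  # GetAtt .Arn returns the ARN that policies expect
```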

Cloudformation build stuck at "create in progress" - trying to add it to use specific SG

The last item I added to this template was an attempt to have it use a particular security group; I did not want it to create a new one. The validate check comes back OK, but apparently my code is still not correct. Other than this, the template was working fine.
I have tried everything I can think of. There is no error when it finally times out, other than "internal error", so I am at a loss here.
Parameters:
  VPC:
    Description: Testing using this VPC
    Type: String
    Default: vpc-02765
  SecGroup:
    Description: Name of security group
    Type: AWS::EC2::SecurityGroup
  KeyName:
    Description: Name of an existing EC2 key pair for SSH access to the EC2 instance.
    Type: AWS::EC2::KeyPair::KeyName
  InstanceType:
    Description: EC2 instance type.
    Type: String
    Default: t2.micro
...
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref 'InstanceType'
      SubnetId: subnet-08b
      KeyName: !Ref 'KeyName'
      SecurityGroupIds:
        - !Ref SecGroup
      ImageId: !FindInMap
        - AWSRegionArch2AMI
        - !Ref 'AWS::Region'
        - HVM64
All I am trying to do is use the items I listed in the template: the VPC and the security group. The last time this worked was when the template still contained the code that builds a new SG. I then changed my mind and want to use an existing SG, so somewhere I messed up.
This works in my templates; note that the parameter type is AWS::EC2::SecurityGroup::Id (the ID of an existing security group), not AWS::EC2::SecurityGroup:
Parameters:
  SecGroup:
    Type: AWS::EC2::SecurityGroup::Id
...
Resources:
  MyInstance:
    Properties:
      SecurityGroupIds:
        - !Ref SecGroup
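For context, a minimal self-contained sketch of the same pattern (the subnet parameter and AMI ID are placeholders I added; the question hard-codes the subnet and looks up the AMI from a mapping):

```yaml
Parameters:
  SecGroup:
    Type: AWS::EC2::SecurityGroup::Id   # pick an existing SG at stack-creation time
  SubnetId:
    Type: AWS::EC2::Subnet::Id          # stands in for the hard-coded subnet-08b
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxxxxxx         # placeholder AMI for your region
      InstanceType: t2.micro
      SubnetId: !Ref SubnetId
      SecurityGroupIds:
        - !Ref SecGroup                 # existing SG; no new SG resource is created
```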

Use Application Autoscaling Group with ELB Healthchecks

Has anybody succeeded in using an Auto Scaling group with an ELB health check? It replaces the instances over and over. Is there a way to prevent that?
My template looks like this:
Resources:
  ECSAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones:
        - Fn::Select:
            - '0'
            - Fn::GetAZs:
                Ref: AWS::Region
        - Fn::Select:
            - '1'
            - Fn::GetAZs:
                Ref: AWS::Region
        - Fn::Select:
            - '2'
            - Fn::GetAZs:
                Ref: AWS::Region
      VPCZoneIdentifier:
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet1
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet2
        - Fn::ImportValue: !Sub ${EnvironmentName}-PrivateEC2Subnet3
      HealthCheckGracePeriod: !Ref ASGHealthCheckGracePeriod
      HealthCheckType: !Ref ASGHealthCheckType
      LaunchTemplate:
        LaunchTemplateId: !Ref ECSLaunchTemplate
        Version: 1
      MetricsCollection:
        - Granularity: 1Minute
      ServiceLinkedRoleARN:
        !Sub arn:aws:iam::${AWS::AccountId}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling
      DesiredCapacity: !Ref ASGDesiredCapacity
      MinSize: !Ref ASGMinSize
      MaxSize: !Ref ASGMaxSize
      TargetGroupARNs:
        - Fn::ImportValue: !Sub ${EnvironmentName}-WebTGARN
        - Fn::ImportValue: !Sub ${EnvironmentName}-DataTGARN
        - Fn::ImportValue: !Sub ${EnvironmentName}-GeneratorTGARN
      TerminationPolicies:
        - OldestInstance
The launch template looks like this:
  ECSLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: ECSLaunchtemplate
      LaunchTemplateData:
        ImageId: !FindInMap [AWSRegionToAMI, !Ref "AWS::Region", AMI]
        InstanceType: !Ref InstanceType
        SecurityGroupIds:
          - Fn::ImportValue: !Sub ${EnvironmentName}-ECSInstancesSecurityGroupID
        IamInstanceProfile:
          Arn:
            Fn::ImportValue:
              !Sub ${EnvironmentName}-ecsInstanceProfileARN
        Monitoring:
          Enabled: true
        CreditSpecification:
          CpuCredits: standard
        TagSpecifications:
          - ResourceType: instance
            Tags:
              - Key: "keyname1"
                Value: "value1"
        KeyName:
          Fn::ImportValue:
            !Sub ${EnvironmentName}-ECSKeyPairName
        UserData:
          "Fn::Base64": !Sub
            - |
              #!/bin/bash
              yum update -y
              yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
              yum update -y aws-cfn-bootstrap hibagent
              /opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchTemplate --region ${AWS::Region}
              /opt/aws/bin/cfn-signal -e $? --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSAutoScalingGroup
              /usr/bin/enable-ec2-spot-hibernation
              echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config
              PATH=$PATH:/usr/local/bin
            - ECSCluster:
                Fn::ImportValue:
                  !Sub ${EnvironmentName}-ECSClusterName
The load balancer config looks like this:
  ApplicationLoadBalancerInternet:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Sub ${EnvironmentName}-${Project}-ALB-Internet
      IpAddressType: !Ref ELBIpAddressType
      Type: !Ref ELBType
      Scheme: internet-facing
      Subnets:
        - Fn::ImportValue:
            !Sub ${EnvironmentName}-PublicSubnet1
        - Fn::ImportValue:
            !Sub ${EnvironmentName}-PublicSubnet2
        - Fn::ImportValue:
            !Sub ${EnvironmentName}-PublicSubnet3
      SecurityGroups:
        - Fn::ImportValue:
            !Sub ${EnvironmentName}-ALBInternetSecurityGroupID
As said, it works fine with EC2 health checks, but when I switch to ELB health checks the instances are drained and the ASG spins up a new instance.
Merci, A
I would troubleshoot it like this:
1. Delete this stack.
2. Edit your template and keep the ASG health-check type set to ELB (for now).
3. Create a new stack, either from the CLI or the console. I recommend the CLI, since you might have to recreate the stack and it is far simpler and quicker than the console. The most important step is to enable the "disable rollback" feature for when the stack fails; otherwise you won't be able to find out the reason for the failure.
4. Since you are also creating some IAM resources as part of this template, an example CLI command for quick reference would be:
aws cloudformation create-stack --stack-name Name-of-your-stack --template-body file://template.json --tags Key=Name,Value=Your_Tag_Value --profile default --region region --capabilities CAPABILITY_NAMED_IAM --disable-rollback
For more information on the requirement of CAPABILITY_NAMED_IAM, see this SO answer.
5. The stack is still going to fail, but now we can troubleshoot it. The reason we kept the health-check type at ELB in step 2 is that we actually want the ASG to replace instances on failed health checks, so we can find the reason in the ASG's "Activity History" tab in the console. Chances are high that you will see a message far more meaningful than the one returned by CloudFormation.
6. Now that you have that error message, change the ASG health-check type to EC2 from the console, because we do not want the ASG to start a launch-and-terminate loop for the EC2 instances.
7. Log in to your EC2 instance and look in the access logs for hits from the ELB health check. In httpd, TCP health checks typically show up as HTTP 408 entries (the ELB opens a connection without sending a request), while a successful HTTP health check logs a 200.
8. Also note: if the ELB health-check type is TCP:80, make sure there is no port conflict on your server; if you have selected HTTP:80, make sure you have specified a valid path/file as your ping target.
9. Since your user data runs a script as well, review /var/log/cfn-init.log and the other logs for any error messages. A simple option is grep -i error /var/log/*.
10. At this point you just have to get the ELB health check passing and the instance "in service" behind the ELB. Document every troubleshooting step as you go, because you never know which of the many things you tried actually fixed the health check.
Once you find the cause, put the fix in the template and you should be good to go. I have seen many templates going wrong at step 8. Also, do not forget to change the ASG health check back to ELB once again.
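One setting worth double-checking once the root cause is found: with HealthCheckType: ELB, a grace period shorter than the instance's boot-plus-bootstrap time will by itself cause endless replacement, because the ELB marks the instance unhealthy before cfn-init and the ECS agent have finished. A hedged sketch (values are illustrative, not taken from the question's parameters):

```yaml
ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    HealthCheckType: ELB
    # Give the user-data script and ECS agent time to finish before
    # ELB health checks count against the instance; tune to your
    # measured boot + application start time.
    HealthCheckGracePeriod: 600
    MinSize: '1'
    MaxSize: '2'
    DesiredCapacity: '1'
    VPCZoneIdentifier:
      - subnet-xxxxxxxx   # placeholder
```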

cross-referencing cloudformation not working

I have created a policy template and outputted the ARN:
Resources:
  # CodeBuild policies
  CodeBuildServiceRolePolicy1:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: 'This service role enables AWS CodePipeline to interact with other AWS services, including AWS CodeBuild, on your behalf'
      Path: "/"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Resource: "*"
            Effect: "Allow"
            Action:
              ...
Outputs:
  StackName:
    Value: !Ref AWS::StackName
  CodeBuildServiceRolePolicy:
    Description: The ARN of the ManagedPolicy1
    Value: !Ref CodeBuildServiceRolePolicy1
    Export:
      Name: !Sub '${EnvironmentName}-CodeBuildServiceRolePolicy1'
Now I want to import this policy into a template with roles:
  # CodeBuild service role
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub ${EnvironmentName}-CodeBuildRole
      AssumeRolePolicyDocument:
        Statement:
          - Action: ["sts:AssumeRole"]
            Effect: Allow
            Principal:
              Service: [codebuild.amazonaws.com]
        Version: "2012-10-17"
      Path: /
      Policies:
        - PolicyDocument:
            Fn::ImportValue:
              !Sub '${EnvironmentName}-CodeBuildServiceRolePolicy1'
But this fails and I'm getting an error. What is wrong?
Merci in advance,
A
Have you tried referencing the managed policy you created in your first stack using the !Ref function?
The CF for the policy:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  CodeBuildServiceRolePolicy1:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Path: "/"
      PolicyDocument:
        ...
Outputs:
  CodeBuildServiceRolePolicy:
    Value: !Ref CodeBuildServiceRolePolicy1   # Ref on a ManagedPolicy returns its ARN
The CF for the role:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  PolicyName:
    Type: String   # pass the managed policy's ARN here
Resources:
  CodeBuildRole:
    Type: "AWS::IAM::Role"
    Properties:
      Path: "/"
      ManagedPolicyArns:   # a managed-policy ARN is attached here, not under Policies
        - !Ref PolicyName
Also check out the docs for CloudFormation IAM and CloudFormation functions.
The solution is to use the resource type AWS::IAM::ManagedPolicy instead of AWS::IAM::Policy.
If you use AWS::IAM::ManagedPolicy, you can export the policy ARN like this:
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy1
and import it into another template with Fn::ImportValue.
Using AWS::IAM::Policy only lets you create inline policies, which cannot be referenced from other templates.
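Putting the two halves together, a hedged sketch of the cross-stack wiring (the export name mirrors the question's ${EnvironmentName}-CodeBuildServiceRolePolicy1): the exporting stack outputs the managed policy's ARN, and the consuming role attaches that ARN via ManagedPolicyArns rather than trying to import a PolicyDocument:

```yaml
# Consuming template: attach the exported managed-policy ARN to the role.
Resources:
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: [codebuild.amazonaws.com]
            Action: ["sts:AssumeRole"]
      ManagedPolicyArns:
        # Fn::ImportValue resolves the ARN exported by the policy stack.
        - Fn::ImportValue: !Sub '${EnvironmentName}-CodeBuildServiceRolePolicy1'
```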