I want to create an RDS instance through CloudFormation. Below is an excerpt from the CloudFormation template.
MasterDB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBName: !Ref DBName
    DBInstanceIdentifier: !Ref DBName
    BackupRetentionPeriod: !Ref DBBackupRetentionPeriod
    AllocatedStorage: !Ref DBAllocatedStorage
    DBInstanceClass: "db.t3.medium"
    DBSubnetGroupName: !Ref DBSubnetGroup
    Engine: MySQL
    AvailabilityZone: !Ref DBAvailabilityZone
    EngineVersion: "5.7.30"
    MasterUsername: !Sub "{{resolve:ssm:/DB/USER:1}}"
    MasterUserPassword: !Sub "{{resolve:ssm-secure:/DB/PASSWORD:1}}"
    MultiAZ: !Ref MultiAZ
    EnablePerformanceInsights: 'true'
    DeletionProtection: 'true'
    DBParameterGroupName: !Ref RDSDBParameterGroup
The RDS instance is created without a problem, but I've noticed that Performance Insights and deletion protection are not enabled on the new instance.
Could you advise why these options don't take effect?
I verified your template in my sandbox account in us-east-1, and both Performance Insights and deletion protection were enabled as expected.
The template I used was as follows (I mostly just commented out the references not shown in your question):
Resources:
  MasterDB:
    Type: AWS::RDS::DBInstance
    Properties:
      #DBName: !Ref DBName
      #DBInstanceIdentifier: !Ref DBName
      BackupRetentionPeriod: 0
      AllocatedStorage: 20
      DBInstanceClass: "db.t3.medium"
      #DBSubnetGroupName: !Ref DBSubnetGroup
      Engine: MySQL
      #AvailabilityZone: !Ref DBAvailabilityZone
      EngineVersion: "5.7.30"
      MasterUsername: root
      MasterUserPassword: fsdf45454
      MultiAZ: false
      EnablePerformanceInsights: 'true'
      DeletionProtection: 'true'
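If the flags still come up disabled on your side, it may be worth ruling out a parameter or stale-stack issue rather than the template itself. For completeness, both settings also accept plain YAML booleans, and Performance Insights has an optional retention setting; a minimal sketch (the 7-day value is the free tier, to the best of my knowledge, so verify it against the current docs):

MasterDB:
  Type: AWS::RDS::DBInstance
  Properties:
    # ...same properties as above...
    EnablePerformanceInsights: true
    PerformanceInsightsRetentionPeriod: 7   # days; 7 is the free tier
    DeletionProtection: true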
I am trying to define an ECS cluster deployment using CloudFormation. So far I have been successful with defining and executing the template.
I decided to externalize the environment variables for the container by using the EnvironmentFiles property in the AWS::ECS::TaskDefinition resource.
I think I'm using the correct syntax according to the documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html
However, running the template in CloudFormation generates an error telling me that the keys I'm using for the EnvironmentFiles definition are not permitted.
The strangest part is that the stack update seems to complete successfully, and I can see the property when I look at the task definition in the console. Is this an error I should ignore, or is there a more correct way to define this property?
CloudFormation snippet:
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Ref 'ServiceName'
    Cpu: !Ref 'ContainerCpu'
    Memory: !Ref 'ContainerMemory'
    NetworkMode: awsvpc
    RequiresCompatibilities:
      - FARGATE
    ExecutionRoleArn: !Ref 'ECSTaskExecutionRole'
    TaskRoleArn:
      Fn::If:
        - 'HasCustomRole'
        - !Ref 'Role'
        - !Ref "AWS::NoValue"
    ContainerDefinitions:
      - Name: !Ref 'ServiceName'
        Cpu: !Ref 'ContainerCpu'
        Memory: !Ref 'ContainerMemory'
        Image: !Ref 'ImageUrl'
        EnvironmentFiles:
          - value: !Ref EnvFile
            type: s3
        PortMappings:
          - ContainerPort: !Ref 'ContainerPort'
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Ref ApplicationLogGroup
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: !Sub ${AWS::StackName}-ecs-service
Reported error:
Resource template validation failed for resource TaskDefinition as the template has invalid properties.
Please refer to the resource documentation to fix the template.
Properties validation failed for resource TaskDefinition with message:
#/ContainerDefinitions/0/EnvironmentFiles/0: extraneous key [type] is not permitted
#/ContainerDefinitions/0/EnvironmentFiles/0: extraneous key [value] is not permitted
OK, I'm answering this to close it. After trying several things I realized that the value and type properties were in lower case, and CloudFormation requires property names to start with an uppercase letter. Making this change removed the error:
EnvironmentFiles:
  - Value: !Ref EnvFile
    Type: s3
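One follow-up worth noting: !Ref EnvFile must resolve to the S3 object ARN of the .env file, and per the ECS documentation the task execution role also needs read access to that object. A hedged sketch of the extra policy statements for the execution role (the bucket and key names are placeholders of my own):

- PolicyName: allow-env-file-read
  PolicyDocument:
    Statement:
      - Effect: Allow
        Action: 's3:GetObject'
        Resource: 'arn:aws:s3:::my-config-bucket/app.env'   # hypothetical bucket/key
      - Effect: Allow
        Action: 's3:GetBucketLocation'
        Resource: 'arn:aws:s3:::my-config-bucket'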
I am having trouble deploying a Fargate cluster; it is failing on the Docker image pull with the error "CannotPullContainerError". I am creating the stack with CloudFormation, which is not optional. The full stack is created, but it fails when trying to start the task with the above error.
I have attached the CloudFormation stack file, which might highlight the problem, and I have double-checked that the subnet has a route to a NAT gateway (below). I also SSH'ed into an instance in the same subnet, which was able to route externally. I am wondering if I have not correctly placed the pieces required, i.e. the service and load balancer are in the private subnet, or should I not be placing the internal load balancer in the same subnet?
This subnet is the one that currently has the placement, but all three subnets in the file have the same NAT settings.
subnet routable (subnet-34b92250)
* 0.0.0.0/0 -> nat-05a00385366da527a
cheers in advance.
YAML CloudFormation script:
AWSTemplateFormatVersion: 2010-09-09
Description: Cloudformation stack for the new GRPC endpoints within existing vpc/subnets and using fargate
Parameters:
  StackName:
    Type: String
    Default: cf-core-ci-grpc
    Description: The name of the parent Fargate networking stack that you created. Necessary
  vpcId:
    Type: String
    Default: vpc-0d499a68
    Description: The name of the parent Fargate networking stack that you created. Necessary
Resources:
  CoreGrcpInstanceSecurityGroupOpenWeb:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupName: sgg-core-ci-grpc-ingress
      GroupDescription: Allow http to client host
      VpcId: !Ref vpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
  LoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    DependsOn:
      - CoreGrcpInstanceSecurityGroupOpenWeb
    Properties:
      Name: lb-core-ci-int-grpc
      Scheme: internal
      Subnets:
        # pub
        # - subnet-f13995a8
        # - subnet-f13995a8
        # - subnet-f13995a8
        # pri
        - subnet-34b92250
        - subnet-82d85af4
        - subnet-ca379b93
      LoadBalancerAttributes:
        - Key: idle_timeout.timeout_seconds
          Value: '50'
      SecurityGroups:
        - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
  TargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    DependsOn:
      - LoadBalancer
    Properties:
      Name: tg-core-ci-grpc
      Port: 3000
      TargetType: ip
      Protocol: HTTP
      HealthCheckIntervalSeconds: 30
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 10
      HealthyThresholdCount: 4
      Matcher:
        HttpCode: '200'
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: '20'
      UnhealthyThresholdCount: 3
      VpcId: !Ref vpcId
  LoadBalancerListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    DependsOn:
      - TargetGroup
    Properties:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
  EcsCluster:
    Type: 'AWS::ECS::Cluster'
    DependsOn:
      - LoadBalancerListener
    Properties:
      ClusterName: ecs-core-ci-grpc
  EcsTaskRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                # - ecs.amazonaws.com
                - ecs-tasks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: iam-policy-ecs-task-core-ci-grpc
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'ecr:**'
                Resource: '*'
  CoreGrcpTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    DependsOn:
      - EcsCluster
      - EcsTaskRole
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref EcsTaskRole
      Cpu: '1024'
      Memory: '2048'
      ContainerDefinitions:
        - Name: container-core-ci-grpc
          Image: 'nginx:latest'
          Cpu: '256'
          Memory: '1024'
          PortMappings:
            - ContainerPort: '80'
              HostPort: '80'
          Essential: 'true'
  EcsService:
    Type: 'AWS::ECS::Service'
    DependsOn:
      - CoreGrcpTaskDefinition
    Properties:
      Cluster: !Ref EcsCluster
      LaunchType: FARGATE
      DesiredCount: '1'
      DeploymentConfiguration:
        MaximumPercent: 150
        MinimumHealthyPercent: 0
      LoadBalancers:
        - ContainerName: container-core-ci-grpc
          ContainerPort: '80'
          TargetGroupArn: !Ref TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          SecurityGroups:
            - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
          Subnets:
            - subnet-34b92250
            - subnet-82d85af4
            - subnet-ca379b93
      TaskDefinition: !Ref CoreGrcpTaskDefinition
Unfortunately, AWS Fargate only supports images hosted in ECR or in public repositories on Docker Hub; it does not support private repositories hosted on Docker Hub. For more info see https://forums.aws.amazon.com/thread.jspa?threadID=268415
We faced the same problem using AWS Fargate a couple of months back. You have only two options right now:
Migrate your images to Amazon ECR.
Use AWS Batch with a custom AMI, where the custom AMI is built with Docker Hub credentials in the ECS config (which we are using right now).
Edit: As mentioned by Christopher Thomas in the comments, ECS Fargate now supports pulling images from Docker Hub private repositories. More info on how to set it up can be found here.
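For reference, in CloudFormation that private-registry authentication hangs off the container definition's RepositoryCredentials property; a minimal sketch, assuming a Secrets Manager secret holding the Docker Hub username and password (the image name and secret ARN are placeholders of my own):

ContainerDefinitions:
  - Name: container-core-ci-grpc
    Image: 'myorg/private-image:latest'   # hypothetical private Docker Hub image
    RepositoryCredentials:
      CredentialsParameter: 'arn:aws:secretsmanager:us-east-1:111111111111:secret:dockerhub-creds'   # placeholder ARN

The task execution role also needs secretsmanager:GetSecretValue on that secret for the pull to succeed.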
Define this policy on your ECR repository and attach the IAM role to your task:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "new statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::99999999999:role/ecsEventsRole"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }
  ]
}
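If the repository itself is managed in CloudFormation, the same policy can be attached through the RepositoryPolicyText property; a hedged YAML sketch (the repository name is a placeholder of my own, and the role ARN mirrors the example above):

MyRepository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: 'core-ci-grpc'   # hypothetical name
    RepositoryPolicyText:
      Version: '2008-10-17'
      Statement:
        - Sid: 'AllowPull'
          Effect: 'Allow'
          Principal:
            AWS: 'arn:aws:iam::99999999999:role/ecsEventsRole'
          Action:
            - 'ecr:GetDownloadUrlForLayer'
            - 'ecr:BatchGetImage'
            - 'ecr:BatchCheckLayerAvailability'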
I have created a policy template and exported the ARN as an output:
Resources:
  # Codebuild Policies
  CodeBuildServiceRolePolicy1:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: 'This service role enables AWS CodePipeline to interact with other AWS services, including AWS CodeBuild, on your behalf'
      Path: "/"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Resource: "*"
            Effect: "Allow"
            Action:
              ...
Outputs:
  StackName:
    Value: !Ref AWS::StackName
  CodeBuildServiceRolePolicy:
    Description: The ARN of the ManagedPolicy1
    Value: !Ref CodeBuildServiceRolePolicy1
    Export:
      Name: !Sub '${EnvironmentName}-CodeBuildServiceRolePolicy1'
Now I want to import this policy into a template with roles:
  # Codebuilding service role
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub ${EnvironmentName}-CodeBuildRole
      AssumeRolePolicyDocument:
        Statement:
          - Action: ["sts:AssumeRole"]
            Effect: Allow
            Principal:
              Service: [codebuild.amazonaws.com]
        Version: "2012-10-17"
      Path: /
      Policies:
        - PolicyDocument:
            Fn::ImportValue:
              !Sub ${EnvironmentName}-CodeBuildServiceRolePolicy1'
But this fails and I'm getting an error. What is wrong?
Thanks in advance.
Have you tried to reference the Managed Policy you created with your first stack, using the !Ref function?
The CF for the policy:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  CodeBuildServiceRolePolicy1:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Path: "/"
      PolicyDocument:
        ...
Outputs:
  CodeBuildServiceRolePolicy:
    # for AWS::IAM::ManagedPolicy, !Ref returns the policy ARN
    Value: !Ref CodeBuildServiceRolePolicy1
The CF for the role:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  PolicyName:
    Type: String
Resources:
  CodeBuildRole:
    Type: "AWS::IAM::Role"
    Properties:
      Path: "/"
      AssumeRolePolicyDocument:
        ...
      # managed policy ARNs attach via ManagedPolicyArns;
      # Policies only accepts inline policy documents
      ManagedPolicyArns:
        - !Ref PolicyName
Also check out the docs for CloudFormation IAM and the CloudFormation functions.
The solution is to use the resource type AWS::IAM::ManagedPolicy instead of AWS::IAM::Policy.
If you use AWS::IAM::ManagedPolicy you can export the policy ARN like this:
CodeBuildServiceRolePolicy:
  Description: ARN of the managed policy
  Value: !Ref CodeBuildServiceRolePolicy
and import it into another template with Fn::ImportValue (or Fn::GetAtt on a nested stack's outputs).
Using AWS::IAM::Policy only lets you create inline policies, which cannot be referenced from other templates.
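Putting the two halves together, the consuming template attaches the imported ARN through ManagedPolicyArns; a minimal sketch, assuming the export name from the first stack:

CodeBuildRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: [codebuild.amazonaws.com]
          Action: ['sts:AssumeRole']
    ManagedPolicyArns:
      # long-form Fn::ImportValue is needed here, since !ImportValue cannot wrap !Sub
      - Fn::ImportValue: !Sub '${EnvironmentName}-CodeBuildServiceRolePolicy1'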
I get the error "Value of property NetworkInterfaces must be a list of objects" when referring to a network interface in a CloudFormation template.
Here is the relevant section:
MyAppNetworkInterface:
  Type: AWS::EC2::NetworkInterface
  Properties:
    SubnetId: !Ref SubnetPrivate
MyApp:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: t2.medium
    NetworkInterfaces:
      - !Ref MyAppNetworkInterface
You can actually refer to the network interface directly from the EC2 instance, but the syntax is slightly different:
MyAppNetworkInterface:
  Type: AWS::EC2::NetworkInterface
  Properties:
    SubnetId: !Ref SubnetPrivate
MyApp:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: t2.medium
    NetworkInterfaces:
      - NetworkInterfaceId: !Ref MyAppNetworkInterface
        DeviceIndex: 0
(see: http://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-network-interface.html#cfn-awsec2networkinterface-templateexamples)
You can't do it that way. Instead, create the two resources independently, then connect them with a network interface attachment resource:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-network-interface-attachment.html
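A minimal sketch of that attachment approach (the attachment resource name is mine; the instance keeps its primary ENI at device index 0, so the extra interface attaches at index 1):

MyAppNetworkInterface:
  Type: AWS::EC2::NetworkInterface
  Properties:
    SubnetId: !Ref SubnetPrivate
MyApp:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: t2.medium
    SubnetId: !Ref SubnetPrivate   # primary ENI comes from the subnet
MyAppEniAttachment:
  Type: AWS::EC2::NetworkInterfaceAttachment
  Properties:
    InstanceId: !Ref MyApp
    NetworkInterfaceId: !Ref MyAppNetworkInterface
    DeviceIndex: '1'   # 0 is the primary interface created at launch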
I am using the following CloudFormation template to create ECS Cluster.
AWSTemplateFormatVersion: '2010-09-09'
Description: 'AWS Cloudformation Template to create the Infrastructure'
Resources:
  ECSCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: 'Blog-iac-test-1'
  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles: [!Ref 'EC2Role']
  ECSAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier:
        - subnet-****
      LaunchConfigurationName: !Ref 'ECSAutoscalingLC'
      MinSize: '1'
      MaxSize: '2'
      DesiredCapacity: '1'
  ECSAutoscalingLC:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: true
      ImageId: 'ami-b743bed1'
      SecurityGroups:
        - sg-****
      InstanceType: 't2.micro'
      IamInstanceProfile: !Ref 'EC2InstanceProfile'
      KeyName: 'test'
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          echo ECS_CLUSTER=Blog-iac-test-1 >> /etc/ecs/ecs.config
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ec2.amazonaws.com]
            Action: ['sts:AssumeRole']
      Path: /
  ECSServicePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: "root"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: ['ecs:*', 'logs:*', 'ecr:*', 's3:*']
            Resource: '*'
      Roles: [!Ref 'EC2Role']
The stack is created successfully, but while destroying it I get the following error:
The Cluster cannot be deleted while Container Instances are active or draining.
I was able to delete the stack earlier; this issue started to occur recently.
What could be a workaround for this issue? Do I need to add some dependencies?
Have you tried deregistering the container instances as well, as described in this AWS documentation link?
Deregister container instances: before you can delete a cluster, you must deregister the container instances inside that cluster. For each container instance inside your cluster, follow the procedures in Deregister a Container Instance to deregister it.
Alternatively, you can use the following AWS CLI command to deregister your container instances. Be sure to substitute the Region, cluster name, and container instance ID for each container instance that you are deregistering.
aws ecs deregister-container-instance --cluster default --container-instance container_instance_id --region us-west-2 --force
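On the CloudFormation side, a common workaround (a judgment call rather than a guaranteed fix) is to make the Auto Scaling group explicitly depend on the cluster, so that on stack deletion the instances are terminated before CloudFormation attempts to delete the cluster; a sketch against the template above:

ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  DependsOn: ECSCluster   # deletion runs in reverse: instances go first, then the cluster
  Properties:
    VPCZoneIdentifier:
      - subnet-****
    LaunchConfigurationName: !Ref 'ECSAutoscalingLC'
    MinSize: '1'
    MaxSize: '2'
    DesiredCapacity: '1'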