I am trying to create a new role and a new policy, and attach the policy to that role, all in the same template, but I am getting this error:
Error:
Missing required field Principal(Service:AmazonIdentityManagement;
Status Code: 400;
Error Code: MalformedPolicyDocument;
Proxy: null)
Resources:
  lambdaFullPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action: "*"
            Resource: "*"
  LambdaFullRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version : '2012-10-17'
        Statement :
          - Effect : Allow
          - Principal :
              service :
                - lambda.amazonaws.com
          - Action :
              - sts: AssumeRole
      ManagedPolicyArns:
        - !Ref lambdaFullPolicy
    DependsOn:
      - lambdaFullPolicy
#------------------------------output -----------------------#
Outputs:
  PolicyFullLambda:
    Description: table
    Value: !Ref lambdaFullPolicy
    Export:
      Name:
        "Fn::Sub": "${AWS::StackName}-PolicyFullLambda"
  RollFullLambda:
    Value: !Ref LambdaFullRole
    Export:
      Name:
        "Fn::Sub": "${AWS::StackName}-RollFullLambda"
There is an extra space in sts: AssumeRole; it should read sts:AssumeRole. This is not a YAML key/value pair but a string literal that AWS uses for the Action section of the role creation/update. Also note what the error message itself is pointing at: Effect, Principal, and Action must all live in a single item of the Statement list, and Service must be capitalized. As written, each leading dash starts a new statement, so the first statement has no Principal, which is exactly the "Missing required field Principal" that CloudFormation reports.
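Put together, a corrected trust policy for the role above would look like this (a minimal sketch of just the parts that change):

LambdaFullRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow           # Effect, Principal and Action in one statement item
          Principal:
            Service:              # "Service" must be capitalized
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole      # no space after "sts:"
    ManagedPolicyArns:
      - !Ref lambdaFullPolicy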
I'm trying to compose a CloudFormation template to deploy a serverless system consisting of several Lambdas. In my case, the Lambda resource descriptions share most of their properties; the only differences are the filename and the handler function.
How can I define something like a common set of parameters in my template?
This boilerplate is awful:
LambdaCreateUser:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket:
        Ref: BucketForLambdas
      S3Key: create_user.zip
    Handler: create_user.lambda_handler
    Runtime: python3.7
    Role:
      Fn::GetAtt: [ LambdaRole , "Arn" ]
    Environment:
      Variables: { "EnvTable": !Ref EnvironmentTable, "UsersTable": !Ref UsersTable }
LambdaDeleteUser:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket:
        Ref: BucketForLambdas
      S3Key: delete_user.zip
    Handler: delete_user.lambda_handler
    Runtime: python3.7
    Role:
      Fn::GetAtt: [ LambdaRole , "Arn" ]
    Environment:
      Variables: { "EnvTable": !Ref EnvironmentTable, "UsersTable": !Ref UsersTable }
What you're looking for is AWS SAM, which is a layer of syntactic sugar on top of CloudFormation. A basic representation of your template with AWS SAM would look like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Globals:
  Function:
    Runtime: python3.7
    Environment:
      Variables:
        EnvTable: !Ref EnvironmentTable
        UsersTable: !Ref UsersTable
Resources:
  LambdaCreateUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:                        # AWS::Serverless::Function uses CodeUri, not Code
        Bucket: !Ref BucketForLambdas
        Key: create_user.zip
      Handler: create_user.lambda_handler
      Role: !GetAtt LambdaRole.Arn
  LambdaDeleteUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Ref BucketForLambdas
        Key: delete_user.zip
      Handler: delete_user.lambda_handler
      Role: !GetAtt LambdaRole.Arn
But that's not the end. You can replace the code definition with a path to your code or even inline code and use sam build and sam package to build and upload your artifacts. You can also probably replace the role definition with SAM policy templates for further reduction of boilerplate code.
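For example, a hypothetical variant of LambdaCreateUser that points at a local source directory and uses a SAM policy template instead of the hand-rolled role might look like this (a sketch; the ./src/create_user path and the DynamoDB-style policy are assumptions about your project layout and permissions):

LambdaCreateUser:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./src/create_user/       # local path; sam build / sam package zip and upload it
    Handler: create_user.lambda_handler
    Policies:
      - DynamoDBCrudPolicy:           # SAM policy template instead of a custom role
          TableName: !Ref UsersTable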
I am trying to add the resource ARN of the policy created in the first resource to the second resource, but I am unable to refer to the ARN of resource one from resource two. I tried !Ref and Fn::GetAtt; neither works. Is there any workaround for this?
Below is the CloudFormation template I am trying to execute.
Parameters:
  AccountID:
    Type: 'String'
    Default: "123465646"
    Description: "account id where the resources will be created"
Resources:
  ssmPolicyEc2Manage:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: "This policy will be attached to EC2 running ssm agent"
      Path: "/"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Action: "iam:PassRole"
            Resource:
              - !Join [ "", [ "arn:aws:iam::", !Ref AccountID, ":role/ssm_role_policy" ] ]
  snsPolicyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect : "Allow"
            Principal:
              Service :
                - "ssm.amazonaws.com"
                - "ec2.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      ManagedPolicyArns:
        - "HERE INCLUDE THE RESOURCE ARN CREATED FROM THE PREVIOUS RESOURCE I.E,ssmPolicyEc2Manage "
    DependsOn: ssmPolicyEc2Manage
Thank you for your help.
!Ref ssmPolicyEc2Manage will return the ARN of the resource; for AWS::IAM::ManagedPolicy, Ref returns the policy ARN. You can take a look at the documentation on the resource's return values for more detail.
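Applied to the template in the question, that looks like this:

ManagedPolicyArns:
  - !Ref ssmPolicyEc2Manage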
I'm attempting to set up an AWS CloudFormation configuration using CodePipeline and GitHub.
I've failed both with my own example project and with the tutorial: Create a GitHub Pipeline with AWS CloudFormation.
All resources are created, but in CodePipeline I continuously get the following error during the initial "Source" stage.
Could not fetch the contents of the repository from GitHub.
Note that this is not a permissions error; it's something else that I haven't been able to find anywhere on Google so far.
GitHub can be made to work if I stop using CloudFormation and create the CodePipeline through the console, but for my purposes I need to use CloudFormation; I need to stick to a template.
Here is the CloudFormation template, copied from the tutorial:
Parameters:
  BranchName:
    Description: GitHub branch name
    Type: String
    Default: master
  RepositoryName:
    Description: GitHub repository name
    Type: String
    Default: test
  GitHubOwner:
    Type: String
  GitHubSecret:
    Type: String
    NoEcho: true
  GitHubOAuthToken:
    Type: String
    NoEcho: true
  ApplicationName:
    Description: CodeDeploy application name
    Type: String
    Default: DemoApplication
  BetaFleet:
    Description: Fleet configured in CodeDeploy
    Type: String
    Default: DemoFleet
Resources:
  CodePipelineArtifactStoreBucket:
    Type: "AWS::S3::Bucket"
  CodePipelineArtifactStoreBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref CodePipelineArtifactStoreBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: DenyUnEncryptedObjectUploads
            Effect: Deny
            Principal: "*"
            Action: "s3:PutObject"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              StringNotEquals:
                "s3:x-amz-server-side-encryption": "aws:kms"
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              Bool:
                "aws:SecureTransport": false
  AppPipelineWebhook:
    Type: "AWS::CodePipeline::Webhook"
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: !Ref GitHubSecret
      Filters:
        - JsonPath: $.ref
          MatchEquals: "refs/heads/{Branch}"
      TargetPipeline: !Ref AppPipeline
      TargetAction: SourceAction
      Name: AppPipelineWebhook
      TargetPipelineVersion: !GetAtt
        - AppPipeline
        - Version
      RegisterWithThirdParty: true
  AppPipeline:
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      Name: github-events-pipeline
      RoleArn: !GetAtt
        - CodePipelineServiceRole
        - Arn
      Stages:
        - Name: Source
          Actions:
            - Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: !Ref GitHubOwner
                Repo: !Ref RepositoryName
                Branch: !Ref BranchName
                OAuthToken: !Ref GitHubOAuthToken
                PollForSourceChanges: false
              RunOrder: 1
        - Name: Beta
          Actions:
            - Name: BetaAction
              InputArtifacts:
                - Name: SourceOutput
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              Configuration:
                ApplicationName: !Ref ApplicationName
                DeploymentGroupName: !Ref BetaFleet
              RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref CodePipelineArtifactStoreBucket
  CodePipelineServiceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
            Action: "sts:AssumeRole"
      Path: /
      Policies:
        - PolicyName: AWS-CodePipeline-Service-3
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "codecommit:CancelUploadArchive"
                  - "codecommit:GetBranch"
                  - "codecommit:GetCommit"
                  - "codecommit:GetUploadArchiveStatus"
                  - "codecommit:UploadArchive"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codedeploy:CreateDeployment"
                  - "codedeploy:GetApplicationRevision"
                  - "codedeploy:GetDeployment"
                  - "codedeploy:GetDeploymentConfig"
                  - "codedeploy:RegisterApplicationRevision"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codebuild:BatchGetBuilds"
                  - "codebuild:StartBuild"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "devicefarm:ListProjects"
                  - "devicefarm:ListDevicePools"
                  - "devicefarm:GetRun"
                  - "devicefarm:GetUpload"
                  - "devicefarm:CreateUpload"
                  - "devicefarm:ScheduleRun"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "lambda:InvokeFunction"
                  - "lambda:ListFunctions"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "iam:PassRole"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "elasticbeanstalk:*"
                  - "ec2:*"
                  - "elasticloadbalancing:*"
                  - "autoscaling:*"
                  - "cloudwatch:*"
                  - "s3:*"
                  - "sns:*"
                  - "cloudformation:*"
                  - "rds:*"
                  - "sqs:*"
                  - "ecs:*"
                Resource: "*"
I have taken the following steps:
- provided the GitHub organization, repo and branch
- set up a personal access token on GitHub with access to repo:all and admin:repo_hook and supplied it to the GitHubOAuthToken template parameter
- set up a random string and provided it as the GitHubSecret
- tried not including a GitHubSecret, as in many other examples
- verified that AWS CodePipeline for my region is listed in GitHub Applications under "Authorized OAuth Applications"
In an attempt to start from a clean slate, I've also done the following:
- cleared all GitHub webhooks before starting, using aws codepipeline list-webhooks and aws codepipeline delete-webhook --name
- added a new personal access token
- tried multiple repos and branches
Any ideas how I can get GitHub to work with CloudFormation & CodePipeline?
Found the solution: the GitHub organization name is case-sensitive.
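In other words, the value supplied for the GitHubOwner parameter, which feeds the source action's Owner field, must match the organization's capitalization on GitHub exactly (the names in the comment below are placeholders):

Configuration:
  Owner: !Ref GitHubOwner       # parameter value must be e.g. MyOrgName, not myorgname
  Repo: !Ref RepositoryName
  Branch: !Ref BranchName
  OAuthToken: !Ref GitHubOAuthToken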
I need to add suspended processes to CloudFormation.
I tried adding a SuspendedProcesses property.
ASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 2
    LaunchConfigurationName: !Ref LaunchConfigurationName
    SuspendedProcesses:
      - ReplaceUnhealthy
However, I receive an error that it's an unsupported property.
You can create a Lambda function to modify the ASG as it is being created, invoked via a CustomResource. This also needs an IAM role, since the Lambda function needs a reference to one as part of its definition.
credit to https://gist.github.com/atward/9573b9fbd3bfd6c453158c28356bec05 for most of this:
ASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 2
    LaunchConfigurationName: !Ref LaunchConfigurationName
AsgProcessModificationRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Action:
            - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
    Policies:
      - PolicyName: AsgProcessModification
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - autoscaling:ResumeProcesses
                - autoscaling:SuspendProcesses
              Resource: "*"
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
AsgProcessModifierFunction:
  Type: AWS::Lambda::Function
  Properties:
    Description: Modifies ASG processes during CF stack creation
    Code:
      ZipFile: |
        import cfnresponse
        import boto3

        def handler(event, context):
            props = event['ResourceProperties']
            client = boto3.client('autoscaling')
            try:
                # Suspend the processes passed in by the custom resource
                client.suspend_processes(
                    AutoScalingGroupName=props['AutoScalingGroupName'],
                    ScalingProcesses=props['ScalingProcesses'])
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception as e:
                cfnresponse.send(event, context, cfnresponse.FAILED, {})
    Handler: index.handler
    Role:
      Fn::GetAtt:
        - AsgProcessModificationRole
        - Arn
    Runtime: python2.7
ModifyAsg:
  Type: AWS::CloudFormation::CustomResource
  Version: 1
  Properties:
    ServiceToken:
      Fn::GetAtt:
        - AsgProcessModifierFunction
        - Arn
    AutoScalingGroupName:
      Ref: ASG
    ScalingProcesses:
      - ReplaceUnhealthy
You can add an UpdatePolicy attribute to your Auto Scaling group to control this (the listed processes are suspended while a rolling update is in progress).
AWS has some documentation on this here:
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
Here is a sample adding in SuspendProcesses:
ASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      SuspendProcesses:
        - "ReplaceUnhealthy"
  Properties:
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 2
    LaunchConfigurationName: !Ref LaunchConfigurationName
Full information on using the UpdatePolicy attribute is available here:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-rollingupdate-maxbatchsize
If you are using the AWS CDK, the following should work:
const cdk = require('@aws-cdk/core');
const ec2 = require('@aws-cdk/aws-ec2');
const autoscaling = require('@aws-cdk/aws-autoscaling');
const custom_resource = require('@aws-cdk/custom-resources');

// Registers a custom resource that suspends the ASG's scaling processes on
// stack creation and resumes them on stack deletion.
function stopAsgScaling(stack, asgName) {
  return new custom_resource.AwsCustomResource(stack, 'MyAwsCustomResource', {
    policy: custom_resource.AwsCustomResourcePolicy.fromSdkCalls({
      resources: custom_resource.AwsCustomResourcePolicy.ANY_RESOURCE
    }),
    onCreate: {
      service: 'AutoScaling',
      action: 'suspendProcesses',
      parameters: {
        AutoScalingGroupName: asgName,
      },
      physicalResourceId: custom_resource.PhysicalResourceId.of(
        'InvokeLambdaResourceId1234'),
    },
    onDelete: {
      service: 'AutoScaling',
      action: 'resumeProcesses',
      parameters: {
        AutoScalingGroupName: asgName,
      },
      physicalResourceId: custom_resource.PhysicalResourceId.of(
        'InvokeLambdaResourceId1234'),
    },
  });
}

class MainStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const autoScalingGroupName = 'my-asg';
    const myAsg = new autoscaling.AutoScalingGroup(this, autoScalingGroupName, {
      autoScalingGroupName: autoScalingGroupName,
      // AutoScalingGroup also requires vpc, instanceType and machineImage;
      // these minimal values are placeholders.
      vpc: new ec2.Vpc(this, 'Vpc'),
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: new ec2.AmazonLinuxImage(),
    });

    // Suspend scaling only after the ASG itself exists.
    const acr = stopAsgScaling(this, autoScalingGroupName);
    acr.node.addDependency(myAsg);
  }
}