Serverless Framework - Configure Cognito User Pool to send emails through SES

I'm able to create a Cognito User Pool with Serverless Framework. Unfortunately, the email verification after a new user registers is being sent using Cognito's email delivery system, which is quite limited. I know I can go into the console and change the option to use Amazon SES instead, but how do I do this in Serverless Framework?
service: cognito

provider:
  name: aws
  runtime: nodejs12.x
  region: us-west-2
  stage: prod
  memorySize: 128
  timeout: 5
  endpointType: regional

resources:
  Resources:
    # Creates a role that allows Cognito to send SNS messages
    SNSRole:
      Type: "AWS::IAM::Role"
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Principal:
                Service:
                  - "cognito-idp.amazonaws.com"
              Action:
                - "sts:AssumeRole"
        Policies:
          - PolicyName: "CognitoSNSPolicy"
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: "Allow"
                  Action: "sns:publish"
                  Resource: "*"
    # Creates a user pool in Cognito for your app to auth against
    UserPool:
      Type: AWS::Cognito::UserPool
      DeletionPolicy: Retain
      Properties:
        UserPoolName: MyUserPool
        AutoVerifiedAttributes:
          - email
        Policies:
          PasswordPolicy:
            MinimumLength: 8
            RequireLowercase: true
            RequireNumbers: true
            RequireSymbols: false
            RequireUppercase: true
        UsernameAttributes:
          - email
    # Creates a User Pool Client to be used by the identity pool
    UserPoolClient:
      Type: "AWS::Cognito::UserPoolClient"
      Properties:
        ClientName: !Sub ${AuthName}-client
        GenerateSecret: false
        UserPoolId: !Ref UserPool
    # Creates a federated identity pool
    IdentityPool:
      Type: "AWS::Cognito::IdentityPool"
      Properties:
        IdentityPoolName: !Sub ${AuthName}Identity
        AllowUnauthenticatedIdentities: true
        CognitoIdentityProviders:
          - ClientId: !Ref UserPoolClient
            ProviderName: !GetAtt UserPool.ProviderName
    # Creates a role for unauthorized access to AWS resources. Very limited access.
    # Only allows users in the previously created identity pool.
    CognitoUnAuthorizedRole:
      Type: "AWS::IAM::Role"
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Principal:
                Federated: "cognito-identity.amazonaws.com"
              Action:
                - "sts:AssumeRoleWithWebIdentity"
              Condition:
                StringEquals:
                  "cognito-identity.amazonaws.com:aud": !Ref IdentityPool
                "ForAnyValue:StringLike":
                  "cognito-identity.amazonaws.com:amr": unauthenticated
        Policies:
          - PolicyName: "CognitoUnauthorizedPolicy"
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: "Allow"
                  Action:
                    - "mobileanalytics:PutEvents"
                    - "cognito-sync:*"
                  Resource: "*"
    # Creates a role for authorized access to AWS resources. Control what your users can access.
    # This example only allows Lambda invocation.
    # Only allows users in the previously created identity pool.
    CognitoAuthorizedRole:
      Type: "AWS::IAM::Role"
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Principal:
                Federated: "cognito-identity.amazonaws.com"
              Action:
                - "sts:AssumeRoleWithWebIdentity"
              Condition:
                StringEquals:
                  "cognito-identity.amazonaws.com:aud": !Ref IdentityPool
                "ForAnyValue:StringLike":
                  "cognito-identity.amazonaws.com:amr": authenticated
        Policies:
          - PolicyName: "CognitoAuthorizedPolicy"
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: "Allow"
                  Action:
                    - "mobileanalytics:PutEvents"
                    - "cognito-sync:*"
                    - "cognito-identity:*"
                  Resource: "*"
                - Effect: "Allow"
                  Action:
                    - "lambda:InvokeFunction"
                  Resource: "*"
    # Assigns the roles to the identity pool
    IdentityPoolRoleMapping:
      Type: "AWS::Cognito::IdentityPoolRoleAttachment"
      Properties:
        IdentityPoolId: !Ref IdentityPool
        Roles:
          authenticated: !GetAtt CognitoAuthorizedRole.Arn
          unauthenticated: !GetAtt CognitoUnAuthorizedRole.Arn
  Outputs:
    UserPoolId:
      Value: !Ref UserPool
      Export:
        Name: "UserPool::Id"
    UserPoolClientId:
      Value: !Ref UserPoolClient
      Export:
        Name: "UserPoolClient::Id"
    IdentityPoolId:
      Value: !Ref IdentityPool
      Export:
        Name: "IdentityPool::Id"

Use the EmailConfiguration property in your user pool:

UserPool:
  Type: AWS::Cognito::UserPool
  DeletionPolicy: Retain
  Properties:
    ...
    EmailConfiguration:
      EmailSendingAccount: DEVELOPER
      ReplyToEmailAddress: # email address
      SourceArn: # ARN of the verified email address identity in SES

See the CloudFormation AWS::Cognito::UserPool documentation for more details.
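
For illustration, a filled-in block might look like the sketch below; the addresses, account ID, and region are placeholders, and the SourceArn must point to an SES email identity you have verified:

UserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    # ...other properties as above...
    EmailConfiguration:
      EmailSendingAccount: DEVELOPER  # send via SES instead of Cognito's default mailer
      ReplyToEmailAddress: support@example.com  # placeholder address
      SourceArn: arn:aws:ses:us-west-2:123456789012:identity/no-reply@example.com  # placeholder verified SES identity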

Related

AWS Batch CloudFormation - “CannotPullContainerError”

I have a CloudFormation template for an AWS Batch POC with 6 resources:
3 AWS::IAM::Role
1 AWS::Batch::ComputeEnvironment
1 AWS::Batch::JobQueue
1 AWS::Batch::JobDefinition
The AWS::IAM::Role resources all have the policy "arn:aws:iam::aws:policy/AdministratorAccess" (in order to avoid permission issues).
The roles are used:
1 in the AWS::Batch::ComputeEnvironment
2 in the AWS::Batch::JobDefinition
But even with the policy "arn:aws:iam::aws:policy/AdministratorAccess" I get "CannotPullContainerError: Error response from daemon: Get https://********.dkr.ecr.eu-west-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" when I run a job.
Disclaimer: everything is FARGATE (compute environment and job), not EC2.
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates a POC AWS Batch environment.
Parameters:
  Environment:
    Type: String
    Description: 'Environment Name'
    Default: TEST
  Subnets:
    Type: List<AWS::EC2::Subnet::Id>
    Description: 'List of Subnets to boot into'
  ImageName:
    Type: String
    Description: 'Name and tag of Process Container Image'
    Default: 'upload:6.0.0'
Resources:
  BatchServiceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchServiceRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'batch.amazonaws.com'
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchContainerRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchContainerRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service:
                - 'ecs-tasks.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchJobRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchJobRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'ecs-tasks.amazonaws.com'
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchCompute:
    Type: "AWS::Batch::ComputeEnvironment"
    Properties:
      ComputeEnvironmentName: DemoContentInput
      ComputeResources:
        MaxvCpus: 256
        SecurityGroupIds:
          - sg-0b33333333333333
        Subnets: !Ref Subnets
        Type: FARGATE
      ServiceRole: !Ref BatchServiceRole
      State: ENABLED
      Type: Managed
  Queue:
    Type: "AWS::Batch::JobQueue"
    DependsOn: BatchCompute
    Properties:
      ComputeEnvironmentOrder:
        - ComputeEnvironment: DemoContentInput
          Order: 1
      Priority: 1
      State: "ENABLED"
      JobQueueName: DemoContentInput
  ContentInputJob:
    Type: "AWS::Batch::JobDefinition"
    Properties:
      Type: Container
      ContainerProperties:
        Command:
          - -v
          - process
          - new-file
          - -o
          - s3://contents/{content_id}/{content_id}.mp4
        Environment:
          - Name: SECRETS
            Value: !Join [ ':', [ '{{resolve:secretsmanager:common.secrets:SecretString:aws_access_key_id}}', '{{resolve:secretsmanager:common.secrets:SecretString:aws_secret_access_key}}' ] ]
          - Name: APPLICATION
            Value: upload
          - Name: API_KEY
            Value: '{{resolve:secretsmanager:common.secrets:SecretString:fluzo.api_key}}'
          - Name: CLIENT
            Value: upload-container
          - Name: ENVIRONMENT
            Value: !Ref Environment
          - Name: SETTINGS
            Value: !Join [ ':', [ '{{resolve:secretsmanager:common.secrets:SecretString:aws_access_key_id}}', '{{resolve:secretsmanager:common.secrets:SecretString:aws_secret_access_key}}', 'upload-container' ] ]
        ExecutionRoleArn: 'arn:aws:iam::**********:role/DemoBatchJobRole'
        Image: !Join ['', [!Ref 'AWS::AccountId', '.dkr.ecr.', !Ref 'AWS::Region', '.amazonaws.com/', !Ref ImageName ] ]
        JobRoleArn: !Ref BatchContainerRole
        ResourceRequirements:
          - Type: VCPU
            Value: 1
          - Type: MEMORY
            Value: 2048
      JobDefinitionName: DemoContentInput
      PlatformCapabilities:
        - FARGATE
      RetryStrategy:
        Attempts: 1
      Timeout:
        AttemptDurationSeconds: 600
In the AWS::Batch::JobDefinition ContainerProperties I hardcoded the ExecutionRoleArn, because if I write !Ref BatchJobRole I get an error (!Ref on an IAM role returns the role name, not the ARN; !GetAtt BatchJobRole.Arn would be needed). But that's not the goal of this question.
The question is how to avoid "CannotPullContainerError: Error response from daemon: Get https://********.dkr.ecr.eu-west-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" when I run a job.
It sounds like you can't reach the internet from inside your subnets.
Make sure:
There is an internet gateway attached to your VPC (create one if there isn't -- even if you are only using a NAT gateway for egress).
The route table associated with your subnets has a default route (0.0.0.0/0) to an internet gateway, or to a NAT gateway with an attached Elastic IP.
An attached security group has rules allowing outbound internet traffic (0.0.0.0/0) for your ports and protocols (e.g. 80/HTTP, 443/HTTPS).
The network access control list (network ACL) associated with the subnets has rules allowing both outbound and inbound traffic to the internet.
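
For reference, a minimal CloudFormation sketch of the routing pieces might look like this; the Vpc and PublicSubnet references are assumed to already exist in your template, and all logical names are illustrative:

InternetGateway:
  Type: AWS::EC2::InternetGateway
GatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    VpcId: !Ref Vpc  # assumed existing VPC resource or parameter
    InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref Vpc
DefaultRoute:
  Type: AWS::EC2::Route
  DependsOn: GatewayAttachment  # the gateway must be attached before the route can point at it
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway
SubnetRouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    SubnetId: !Ref PublicSubnet  # assumed existing subnet resource or parameter
    RouteTableId: !Ref PublicRouteTable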
References:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-connect-internet-gateway/

Is it possible to create Kubernetes pods, services, replication controllers, etc. on AWS CloudFormation?

Does AWS CloudFormation support creating Kubernetes pods, services, replication controllers, etc., or is setting up the EKS cluster and worker nodes and using kubectl to create the resources the only way?
Not out of the box, but you can if you use a custom resource type backed by a Lambda function in CloudFormation.
The AWS EKS Quick Start has an example:
AWSTemplateFormatVersion: "2010-09-09"
Description: "deploy an example workload into an existing kubernetes cluster (qs-1p817r5f9)"
Parameters:
KubeConfigPath:
Type: String
KubeConfigKmsContext:
Type: String
Default: "EKSQuickStart"
KubeClusterName:
Type: String
NodeInstanceProfile:
Type: String
QSS3BucketName:
AllowedPattern: ^[0-9a-zA-Z]+([0-9a-zA-Z-]*[0-9a-zA-Z])*$
ConstraintDescription: Quick Start bucket name can include numbers, lowercase
letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen
(-).
Default: aws-quickstart
Description: S3 bucket name for the Quick Start assets. This string can include
numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start
or end with a hyphen (-).
Type: String
QSS3KeyPrefix:
AllowedPattern: ^[0-9a-zA-Z-/.]*$
ConstraintDescription: Quick Start key prefix can include numbers, lowercase letters,
uppercase letters, hyphens (-), dots(.) and forward slash (/).
Default: quickstart-amazon-eks/
Description: S3 key prefix for the Quick Start assets. Quick Start key prefix
can include numbers, lowercase letters, uppercase letters, hyphens (-), dots(.) and
forward slash (/).
Type: String
QSS3BucketRegion:
Default: 'us-east-1'
Description: The AWS Region where the Quick Start S3 bucket (QSS3BucketName) is
hosted. When using your own bucket, you must specify this value.
Type: String
LambdaZipsBucketName:
Description: 'OPTIONAL: Bucket Name where the lambda zip files should be placed,
if left blank a bucket will be created.'
Type: String
Default: ''
K8sSubnetIds:
Type: List<AWS::EC2::Subnet::Id>
VPCID:
Type: AWS::EC2::VPC::Id
ControlPlaneSecurityGroup:
Type: AWS::EC2::SecurityGroup::Id
Conditions:
CreateLambdaZipsBucket: !Equals
- !Ref 'LambdaZipsBucketName'
- ''
UsingDefaultBucket: !Equals [!Ref QSS3BucketName, 'aws-quickstart']
Resources:
WorkloadStack:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: !Sub
- 'https://${S3Bucket}.s3.${S3Region}.${AWS::URLSuffix}/${QSS3KeyPrefix}templates/example-workload.template.yaml'
- S3Region: !If [UsingDefaultBucket, !Ref 'AWS::Region', !Ref QSS3BucketRegion]
S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName]
Parameters:
KubeManifestLambdaArn: !GetAtt KubeManifestLambda.Arn
HelmLambdaArn: !GetAtt HelmLambda.Arn
KubeConfigPath: !Ref KubeConfigPath
KubeConfigKmsContext: !Ref KubeConfigKmsContext
KubeClusterName: !Ref KubeClusterName
NodeInstanceProfile: !Ref NodeInstanceProfile
CopyZips:
Type: Custom::CopyZips
Properties:
ServiceToken: !GetAtt 'CopyZipsFunction.Arn'
DestBucket: !Ref LambdaZipsBucketName
SourceBucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName]
Prefix: !Ref 'QSS3KeyPrefix'
Objects:
- functions/packages/Helm/lambda.zip
- functions/packages/DeleteBucketContents/lambda.zip
- functions/packages/KubeManifest/lambda.zip
- functions/packages/LambdaEniCleanup/lambda.zip
VPCLambdaCleanup:
Type: Custom::LambdaCleanup
Properties:
ServiceToken: !GetAtt VPCLambdaCleanupLambdaFunction.Arn
Region: !Ref "AWS::Region"
LambdaFunctionNames:
- !Ref KubeManifestLambda
VPCLambdaCleanupLambdaFunction:
DependsOn: CopyZips
Type: "AWS::Lambda::Function"
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 128
Role: !GetAtt LambdaCleanUpFunctionRole.Arn
Runtime: python3.7
Timeout: 900
Code:
S3Bucket: !Ref LambdaZipsBucketName
S3Key: !Sub '${QSS3KeyPrefix}functions/packages/LambdaEniCleanup/lambda.zip'
HelmLambda:
DependsOn: CopyZips
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 128
Role: !GetAtt ManifestRole.Arn
Runtime: python3.6
Timeout: 900
Code:
S3Bucket: !Ref LambdaZipsBucketName
S3Key: !Sub '${QSS3KeyPrefix}functions/packages/Helm/lambda.zip'
VpcConfig:
SecurityGroupIds: [ !Ref EKSLambdaSecurityGroup ]
SubnetIds: !Ref K8sSubnetIds
KubeManifestLambda:
DependsOn: CopyZips
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 128
Role: !GetAtt ManifestRole.Arn
Runtime: python3.6
Timeout: 900
Code:
S3Bucket: !Ref LambdaZipsBucketName
S3Key: !Sub '${QSS3KeyPrefix}functions/packages/KubeManifest/lambda.zip'
VpcConfig:
SecurityGroupIds: [ !Ref EKSLambdaSecurityGroup ]
SubnetIds: !Ref K8sSubnetIds
DeleteBucketContentsLambda:
DependsOn: CopyZips
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 128
Role: !GetAtt DeleteBucketContentsRole.Arn
Runtime: python3.7
Timeout: 900
Code:
S3Bucket: !Ref LambdaZipsBucketName
S3Key: !Sub '${QSS3KeyPrefix}functions/packages/DeleteBucketContents/lambda.zip'
CopyZipsFunction:
Type: AWS::Lambda::Function
Properties:
Description: Copies objects from a source S3 bucket to a destination
Handler: index.handler
Runtime: python3.7
Role: !GetAtt CopyZipsRole.Arn
Timeout: 900
Code:
ZipFile: |
import json
import logging
import threading
import boto3
import cfnresponse
def copy_objects(source_bucket, dest_bucket, prefix, objects):
s3 = boto3.client('s3')
for o in objects:
key = prefix + o
copy_source = {
'Bucket': source_bucket,
'Key': key
}
print('copy_source: %s' % copy_source)
print('dest_bucket = %s'%dest_bucket)
print('key = %s' %key)
s3.copy_object(CopySource=copy_source, Bucket=dest_bucket,
Key=key)
def delete_objects(bucket, prefix, objects):
s3 = boto3.client('s3')
objects = {'Objects': [{'Key': prefix + o} for o in objects]}
s3.delete_objects(Bucket=bucket, Delete=objects)
def timeout(event, context):
logging.error('Execution is about to time out, sending failure response to CloudFormation')
cfnresponse.send(event, context, cfnresponse.FAILED, {}, physical_resource_id)
def handler(event, context):
physical_resource_id = None
if "PhysicalResourceId" in event.keys():
physical_resource_id = event["PhysicalResourceId"]
# make sure we send a failure to CloudFormation if the function is going to timeout
timer = threading.Timer((context.get_remaining_time_in_millis()
/ 1000.00) - 0.5, timeout, args=[event, context])
timer.start()
print('Received event: %s' % json.dumps(event))
status = cfnresponse.SUCCESS
try:
source_bucket = event['ResourceProperties']['SourceBucket']
dest_bucket = event['ResourceProperties']['DestBucket']
prefix = event['ResourceProperties']['Prefix']
objects = event['ResourceProperties']['Objects']
if event['RequestType'] == 'Delete':
delete_objects(dest_bucket, prefix, objects)
else:
copy_objects(source_bucket, dest_bucket, prefix, objects)
except Exception as e:
logging.error('Exception: %s' % e, exc_info=True)
status = cfnresponse.FAILED
finally:
timer.cancel()
cfnresponse.send(event, context, status, {}, physical_resource_id)
LambdaZipsBucket:
Type: AWS::S3::Bucket
Condition: CreateLambdaZipsBucket
LambdaCleanUpFunctionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
Service: [lambda.amazonaws.com]
Version: '2012-10-17'
Path: /
Policies:
- PolicyName: LambdaRole
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:logs:*:*:*"
- Action:
- 'ec2:*'
Effect: Allow
Resource: "*"
DeleteBucketContentsRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- !Sub 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
Policies:
- PolicyName: deletebucketcontents
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: s3:*
Resource:
- !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}/*'
- !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}'
CopyZipsRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- !Su 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
Policies:
- PolicyName: lambda-copier
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: s3:GetObject
Resource: !Sub
- 'arn:${AWS::Partition}:s3:::${S3Bucket}/${QSS3KeyPrefix}*'
- S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName]
- Effect: Allow
Action:
- s3:PutObject
- s3:DeleteObject
Resource: !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}/${QSS3KeyPrefix}*'
ManifestRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: eksStackPolicy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: s3:GetObject
Resource: !Sub
- "arn:${AWS::Partition}:s3:::${BucketName}/*"
- S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName]
- Effect: Allow
Action:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
- ec2:CreateNetworkInterface
- ec2:DescribeNetworkInterfaces
- ec2:DeleteNetworkInterface
Resource:
- "*"
- Action: "kms:decrypt"
Effect: Allow
Resource: "*"
- Action: "s3:GetObject"
Effect: Allow
Resource: "*"
EKSLambdaSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security group for lambda to communicate with cluster API
VpcId: !Ref VPCID
ClusterControlPlaneSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
Description: Allow lambda to communicate with the cluster API Server
GroupId: !Ref ControlPlaneSecurityGroup
SourceSecurityGroupId: !Ref EKSLambdaSecurityGroup
IpProtocol: tcp
ToPort: 443
FromPort: 443
It works by creating Lambda-backed custom resources, KubeManifestLambda and HelmLambda, which have kubectl and helm installed respectively, both configured with a role that allows them to access the EKS Kubernetes cluster.
These custom resources can then be used to deploy k8s manifests and helm charts with custom values, as in this example:
KubeManifestExample:
  Type: "Custom::KubeManifest"
  Version: '1.0'
  Properties:
    # The lambda function that executes the manifest against the cluster. This is created in one of the parent stacks
    ServiceToken: !Ref KubeManifestLambdaArn
    # S3 path to the encrypted config file eg. s3://my-bucket/kube/config.encrypted
    KubeConfigPath: !Ref KubeConfigPath
    # context for KMS to use when decrypting the file
    KubeConfigKmsContext: !Ref KubeConfigKmsContext
    # Kubernetes manifest
    Manifest:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        # If name is not specified it will be automatically generated,
        # and can be retrieved with !GetAtt LogicalID.name
        #
        # name: test
        #
        # if namespace is not specified, the "default" namespace will be used
        namespace: kube-system
      data:
        # Examples of consuming outputs of the HelmExample resource below. This creates an implicit dependency, so
        # this resource will only launch once the HelmExample resource has completed successfully.
        ServiceCatalogReleaseName: !Ref HelmExample
        ServiceCatalogKubernetesServiceName: !GetAtt HelmExample.Service0
This even lets you reference other CloudFormation resources, such as RDS instances, that are created as part of the workload.
You can use CloudFormation to create the EKS cluster and worker nodes, but you have to use kubectl for any operation on the cluster, like creating services, pods, deployments, etc. -- you can't use CloudFormation for that.
If you use the CDK, you can use cluster.add_helm_chart() or the HelmChart class. It will create a Lambda behind the scenes.
Or you can create a Lambda directly with https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.lambda_layer_kubectl/README.html

CloudTrail log using CloudFormation template

In CloudTrail, I can select the existing log group CloudTrail/DefaultLogGroup under the CloudWatch Logs section. Is it possible to complete this step using a CloudFormation template?
Assuming you are creating the log group with CloudFormation as well:
LogGroup: # A new log group
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 365 # optional
CloudTrailLogsRole: # A role for your trail
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
      Version: '2012-10-17'
CloudTrailLogsPolicy: # The policy for your role
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action:
            - logs:PutLogEvents
            - logs:CreateLogStream
          Effect: Allow
          Resource:
            Fn::GetAtt:
              - LogGroup
              - Arn
      Version: '2012-10-17'
    PolicyName: DefaultPolicy
    Roles:
      - Ref: CloudTrailLogsRole
CloudTrail: # The trail
  Type: AWS::CloudTrail::Trail
  Properties:
    IsLogging: true
    CloudWatchLogsLogGroupArn:
      Fn::GetAtt:
        - LogGroup
        - Arn
    CloudWatchLogsRoleArn:
      Fn::GetAtt:
        - CloudTrailLogsRole
        - Arn
  DependsOn:
    - CloudTrailLogsPolicy
    - CloudTrailLogsRole
If using an existing log group:
CloudTrailLogsRole: # A role for your trail
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
      Version: '2012-10-17'
CloudTrailLogsPolicy: # The policy for your role
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action:
            - logs:PutLogEvents
            - logs:CreateLogStream
          Effect: Allow
          Resource: <your existing log group arn here>
      Version: '2012-10-17'
    PolicyName: DefaultPolicy
    Roles:
      - Ref: CloudTrailLogsRole
CloudTrail: # The trail
  Type: AWS::CloudTrail::Trail
  Properties:
    IsLogging: true
    CloudWatchLogsLogGroupArn: <your existing log group arn here>
    CloudWatchLogsRoleArn:
      Fn::GetAtt:
        - CloudTrailLogsRole
        - Arn
  DependsOn:
    - CloudTrailLogsPolicy
    - CloudTrailLogsRole
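
In both variants, the DependsOn on the trail matters: CloudTrail checks that it can deliver to the log group when the trail is created, so the role and policy must exist first. Note also that AWS::CloudTrail::Trail additionally requires an S3BucketName for log file delivery, which is omitted in the snippets above.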

AWS CloudFormation CodePipeline: Could not fetch the contents of the repository from GitHub

I'm attempting to setup an AWS CloudFormation configuration using CodePipeline and GitHub.
I've failed both at my own example project and the tutorial: Create a GitHub Pipeline with AWS CloudFormation.
All resources are created, but in CodePipeline I continuously get the following error during the initial "Source" stage.
Could not fetch the contents of the repository from GitHub.
Note that this is not a permissions error; it's something else that I haven't been able to find on Google until now.
GitHub can be configured to work if I stop using CloudFormation and create a CodePipeline through the console, but for my purposes I need to use CloudFormation -- I need to stick to a template.
Here is the CloudFormation template, copied from the tutorial:
Parameters:
  BranchName:
    Description: GitHub branch name
    Type: String
    Default: master
  RepositoryName:
    Description: GitHub repository name
    Type: String
    Default: test
  GitHubOwner:
    Type: String
  GitHubSecret:
    Type: String
    NoEcho: true
  GitHubOAuthToken:
    Type: String
    NoEcho: true
  ApplicationName:
    Description: CodeDeploy application name
    Type: String
    Default: DemoApplication
  BetaFleet:
    Description: Fleet configured in CodeDeploy
    Type: String
    Default: DemoFleet
Resources:
  CodePipelineArtifactStoreBucket:
    Type: "AWS::S3::Bucket"
  CodePipelineArtifactStoreBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref CodePipelineArtifactStoreBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: DenyUnEncryptedObjectUploads
            Effect: Deny
            Principal: "*"
            Action: "s3:PutObject"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              StringNotEquals:
                "s3:x-amz-server-side-encryption": "aws:kms"
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              Bool:
                "aws:SecureTransport": false
  AppPipelineWebhook:
    Type: "AWS::CodePipeline::Webhook"
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: !Ref GitHubSecret
      Filters:
        - JsonPath: $.ref
          MatchEquals: "refs/heads/{Branch}"
      TargetPipeline: !Ref AppPipeline
      TargetAction: SourceAction
      Name: AppPipelineWebhook
      TargetPipelineVersion: !GetAtt
        - AppPipeline
        - Version
      RegisterWithThirdParty: true
  AppPipeline:
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      Name: github-events-pipeline
      RoleArn: !GetAtt
        - CodePipelineServiceRole
        - Arn
      Stages:
        - Name: Source
          Actions:
            - Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: !Ref GitHubOwner
                Repo: !Ref RepositoryName
                Branch: !Ref BranchName
                OAuthToken: !Ref GitHubOAuthToken
                PollForSourceChanges: false
              RunOrder: 1
        - Name: Beta
          Actions:
            - Name: BetaAction
              InputArtifacts:
                - Name: SourceOutput
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              Configuration:
                ApplicationName: !Ref ApplicationName
                DeploymentGroupName: !Ref BetaFleet
              RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref CodePipelineArtifactStoreBucket
  CodePipelineServiceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
            Action: "sts:AssumeRole"
      Path: /
      Policies:
        - PolicyName: AWS-CodePipeline-Service-3
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "codecommit:CancelUploadArchive"
                  - "codecommit:GetBranch"
                  - "codecommit:GetCommit"
                  - "codecommit:GetUploadArchiveStatus"
                  - "codecommit:UploadArchive"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codedeploy:CreateDeployment"
                  - "codedeploy:GetApplicationRevision"
                  - "codedeploy:GetDeployment"
                  - "codedeploy:GetDeploymentConfig"
                  - "codedeploy:RegisterApplicationRevision"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codebuild:BatchGetBuilds"
                  - "codebuild:StartBuild"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "devicefarm:ListProjects"
                  - "devicefarm:ListDevicePools"
                  - "devicefarm:GetRun"
                  - "devicefarm:GetUpload"
                  - "devicefarm:CreateUpload"
                  - "devicefarm:ScheduleRun"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "lambda:InvokeFunction"
                  - "lambda:ListFunctions"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "iam:PassRole"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "elasticbeanstalk:*"
                  - "ec2:*"
                  - "elasticloadbalancing:*"
                  - "autoscaling:*"
                  - "cloudwatch:*"
                  - "s3:*"
                  - "sns:*"
                  - "cloudformation:*"
                  - "rds:*"
                  - "sqs:*"
                  - "ecs:*"
                Resource: "*"
I have taken the following steps:
provided the GitHub organization, repo & branch
set up a personal access token on GitHub with access to repo:all & admin:repo_hook and supplied it to the template's GitHubOAuthToken parameter
set up a random string and provided it to the GitHubSecret parameter
tried not including a GitHubSecret, as in many other examples
verified that AWS CodePipeline for my region is listed in GitHub Applications under "Authorized OAuth Applications"
In attempts to start from a clean slate, I've also done the following:
cleared all GitHub webhooks before starting (aws codepipeline list-webhooks & aws codepipeline delete-webhook --name)
added a new personal access token
tried multiple repos & branches
Any ideas how I can get GitHub to work with CloudFormation & CodePipeline?
Found the solution. The GitHub organization name is case sensitive.
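
In other words, the GitHubOwner value fed into the source action's Configuration must match the organization or user name exactly as GitHub spells it, including capitalization. A hypothetical illustration (MyOrg is a placeholder):

Configuration:
  Owner: MyOrg  # must match the GitHub org/user exactly, including case; "myorg" would fail
  Repo: !Ref RepositoryName
  Branch: !Ref BranchName
  OAuthToken: !Ref GitHubOAuthToken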

Set up CloudTrail S3 bucket with proper policy

I'm trying to get CloudTrail up and running and want to set up a CloudTrail S3 bucket, but the policy does not deploy. Here is my code:
CloudtrailBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Delete
  Description: Stores all Trails for this account
  Properties:
    AccessControl: BucketOwnerFullControl
    BucketName: !Sub "${AWS::AccountId}-invoice-cloudtrail"
    LifecycleConfiguration:
      Rules:
        - Id: GlacierRule
          Prefix: glacier
          Status: Enabled
          ExpirationInDays: '365'
          Transitions:
            - TransitionInDays: '1'
              StorageClass: Glacier
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    Tags:
      - Key: Name
        Value: !Sub '${EnvironmentName} ${Project}-CloudtrailBucket'
    VersioningConfiguration:
      Status: Suspended
and this is the Policy I want to use:
CloudtrailBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref CloudtrailBucket
    PolicyDocument:
      Statement:
        - Sid: AWSCloudTrailAclCheck
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
          Action:
            - s3:GetBucket*
          Resource:
            - !Sub "arn:aws:s3:::${AWS::AccountId}-invoice-cloudtrail/*"
        - Sid: AWSCloudTrailWrite
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
          Action: s3:PutObject
          Resource:
            - !Sub "arn:aws:s3:::${AWS::AccountId}-invoice-cloudtrail/*"
          Condition:
            StringEquals:
              s3:x-amz-acl: bucket-owner-full-control
I really don't know what might be wrong. The error message is the following:
Action does not apply to any resource(s) in statement (Service: Amazon S3; Status Code: 400; Error Code: MalformedPolicy; Request ID: 7A458D04A5765AC6; S3 Extended Request ID: EYn2is5Oph1+pnZ0u+zEH067fWwD0fyq1+MRGRxJ1qT3WK+e1LFjhhE9fTLOFiBnhSzbItfdrz0=)
I believe you have to change your policy to match the following:
CloudtrailBucketPolicy:
  Type: 'AWS::S3::BucketPolicy'
  Properties:
    Bucket: !Ref CloudtrailBucket
    PolicyDocument:
      Statement:
        - Sid: AWSCloudTrailAclCheck
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
          Action:
            - s3:GetBucketAcl
          Resource:
            - !Sub "arn:aws:s3:::${AWS::AccountId}-invoice-cloudtrail"
        - Sid: AWSCloudTrailWrite
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
          Action: s3:PutObject
          Resource:
            - !Sub "arn:aws:s3:::${AWS::AccountId}-invoice-cloudtrail/*"
          Condition:
            StringEquals:
              s3:x-amz-acl: bucket-owner-full-control
The reasoning is that s3:GetBucket* expands to s3:GetBucketAcl, s3:GetBucketCORS, etc. (full list here), all of which expect a bucket as the resource, whereas your original policy supplied an object pattern (the ARN ending in /*).
So I've changed the resource (removed the /*) and also trimmed the policy a bit, since CloudTrail should require only s3:GetBucketAcl.
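
To complete the picture, a minimal trail using this bucket might look like the following sketch (the logical name and TrailName are illustrative; the DependsOn ensures the bucket policy is in place before CloudTrail validates its access to the bucket):

InvoiceTrail:
  Type: AWS::CloudTrail::Trail
  DependsOn: CloudtrailBucketPolicy  # the policy must exist before the trail starts delivering logs
  Properties:
    TrailName: invoice-trail  # illustrative name
    IsLogging: true
    S3BucketName: !Ref CloudtrailBucket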