I've just moved to CloudFormation and I am starting by creating ECR repositories for Docker.
I need all repositories to have the same properties except the repository name.
Since this is microservices I will need at least 40 repos, so I want to create a stack that creates the repos for me in a loop and just changes the name.
I started looking at nested stacks and this is what I got so far:
ecr-root.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: ECR Docker repository
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Type: String
Resources:
  ECRStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      TimeoutInMinutes: '20'
      Parameters:
        ECRRepositoryName: !Ref ECRRepositoryName
And ecr-stack.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Default: panpwr-mysql-base
    Type: String
Resources:
  MyRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName:
        Ref: ECRRepositoryName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - "arn:aws:iam::123456789012:user/Bob"
                - "arn:aws:iam::123456789012:user/Alice"
            Action:
              - "ecr:GetDownloadUrlForLayer"
              - "ecr:BatchGetImage"
              - "ecr:BatchCheckLayerAvailability"
              - "ecr:PutImage"
              - "ecr:InitiateLayerUpload"
              - "ecr:UploadLayerPart"
              - "ecr:CompleteLayerUpload"
Outputs:
  RepositoryNameExport:
    Description: RepositoryName for export
    Value:
      Ref: ECRRepositoryName
    Export:
      Name:
        Fn::Sub: "ECRRepositoryName"
Everything is working fine, but when I run the stack it asks me for the repository name I want to give it, and it creates one repository.
I can then create as many stacks as I want, each with a different name, but that is not my purpose.
How do I get it all in one stack that creates as many repositories as I want?
Sounds like you want to loop through a given list of parameters. Looping is not possible in a CloudFormation template. A few things you can try:
You could programmatically generate the template; the troposphere Python library provides a nice abstraction for generating templates (see the sketch after this list for what the generated template ends up looking like).
Write a custom resource backed by AWS Lambda. You can handle your custom logic in the Lambda function.
The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. Use the AWS CDK to write a custom script for your use case.
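Whichever generator you use, the end result is a single template that simply repeats the repository resource with a different name each time. A minimal sketch of what such a generated template could look like (the logical IDs and repository names below are made up for illustration):
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Generated template with one AWS::ECR::Repository per microservice
Resources:
  # Each block is identical apart from the repository name; a generator
  # (troposphere script, CDK synth, etc.) emits one block per service.
  AuthServiceRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: panpwr-auth-service
  BillingServiceRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: panpwr-billing-service
  # ... one more block per remaining repository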
Related
I am creating a REST API using CloudFormation. In another CloudFormation stack I would like to have access to values that are in the Outputs section (the invoke URL) of that CloudFormation template.
Is this possible, and if so how?
You can export your outputs. Exporting makes them accessible to other stacks.
From the AWS Docs:
To export a stack's output value, use the Export field in the Output section of the stack's template. To import those values, use the Fn::ImportValue function in the template for the other stacks
The following exports an API Gateway Id.
Description: API for interacting with API resources
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
Outputs:
  MyApiId:
    Description: 'Id of the API'
    Value: !Ref MyApi
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
  MyApiRootResourceId:
    Description: 'Id of the root resource on the API'
    Value: !GetAtt MyApi.RootResourceId
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource'
The Export piece of the Output is the important part here. If you provide the Export, then other stacks can consume from it.
Now, in another file I can import that MyApiId value by using the Fn::ImportValue intrinsic function, importing the exported name. I can also import its root resource and consume both of these values when creating a child API resource.
From the AWS Docs:
The intrinsic function Fn::ImportValue returns the value of an output exported by another stack. You typically use this function to create cross-stack references.
Description: Resource endpoints for interacting with the API
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource' }
      PathPart: foobar
      RestApiId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi' }
These are two completely different .yaml files and can be deployed as two independent stacks, but now they depend on each other. If you try to delete the MyApi API Gateway stack before deleting the MyResource stack, the CloudFormation delete operation will fail. You must delete the dependencies first.
One thing to keep in mind is that in some cases you might want the flexibility to delete the root resource without worrying about dependencies. The delete operation could in some cases be done without any side effects. For instance, deleting an SNS topic won't break a Lambda - it just prevents it from being triggered. There's no reason to delete the Lambda just to re-deploy a new SNS topic. In that scenario I utilize naming conventions and tie things together that way instead of using exports. For example - the above AWS::ApiGateway::Resource can be tied to an environment-specific API Gateway based on the naming convention.
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource' }
      PathPart: foobar
      RestApiId: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
With this there's no need to worry about the export/import, as long as the last half of the resource name is the same across all environments. The environment can change via the TargetEnvironment parameter, so this can be reused across dev, test and prod.
One caveat to this approach is that naming conventions only work when you want to access something that can be referenced by name. If you need a property, such as the RootResourceId in this example, or an EC2 instance size, EBS volume size, etc., then you can't just use a naming convention; you'll need to export the value and import it. In the example above I could replace the RestApiId import with a naming convention, but I could not replace the ParentId with a convention - I had to perform an import.
I use a mix of both in my templates - you'll get a feel for when it makes sense to use one approach over the other as you build experience.
Question
What is the correct way to get the output of a CloudFormation stack in a serverless.yml file without hardcoding the stack name?
Steps
I have a serverless.yml file where I import a CloudFormation template to create an ElastiCache cluster.
When I try to do so, I get this error:
Serverless Error ---------------------------------------
Invalid variable reference syntax for variable AWS::StackName. You can only reference env vars, options, & files. You can check our docs for more info.
In my file I'd like to expose the ElastiCacheAddress output from the CloudFormation stack as an environment variable. I am using the serverless-pseudo-parameters plugin to get the output:
# Here is where I try to reference the CF output value
service: hello-world
provider:
  name: aws
  # ...
  environment:
    cacheUrl: ${cf:#{AWS::StackName}.ElastiCacheAddress}
# Reference to the CF template
resources:
  - '${file(./cf/cf-elasticache.yml)}'
The CF template is the one from the AWS Samples GitHub repository.
The snippet with the output is here:
ElastiCacheAddress:
  Description: ElastiCache endpoint address
  Value: !If [ IsRedis, !GetAtt ElastiCacheCluster.RedisEndpoint.Address, !GetAtt ElastiCacheCluster.ConfigurationEndpoint.Address ]
  Export:
    Name: !Sub ${AWS::StackName}-ElastiCacheAddress
You can use a workaround to get through these syntax caveats.
In this case, I would suggest creating a custom node to hold the variables you want to reuse. You can then reference those variables using Serverless Framework syntax only, to avoid that error, like so:
# Here is where I try to reference the CF output value
service: hello-world
custom:
  stackName: '#{AWS::StackName}'
provider:
  name: aws
  # ...
  environment:
    cacheUrl: ${cf:${self:custom.stackName}.ElastiCacheAddress}
# Reference to the CF template
resources:
  - '${file(./cf/cf-elasticache.yml)}'
I built out my dev environment manually; I wanted to focus on logic and skip the learning curve on Serverless, but before deploying to prod I want to standardize and parameterize my stack.
Setting up my DynamoDB tables has been straightforward, but I'm running into snags with deploying a new API Gateway.
I've been using AWS CodeBuild to package layers for Lambda functions and an S3 bucket to store my Lambda code.
Let's take my dev-rest-auth API (custom authentication) as an example.
I have several resources for login/out, passwords and registration; most are protected by a custom authorizer (login/logout aren't) and all have CORS policies. I'm using a custom domain, account-api.dev.example.com. I use several DynamoDB tables for housing user data (let's avoid security discussions please; I'm not storing raw passwords and am encrypting using leading industry standards) and temporary codes (password reset & account verification).
To test a Serverless implementation I'd like to build a YAML file that recreates my existing infrastructure - so the first question is: is that possible? Can I parameterize the deployment of an API Gateway, with a custom authorizer, custom domain, and several Lambdas?
The next question is: how?
Organizationally I'm breaking out my .yml files by resource:
I have several DynamoDB .yml files that look like this:
Resources:
  UserTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: ${self:custom.resource-prefix}-UserTable-${self:custom.stage}
      AttributeDefinitions:
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: email
          KeyType: HASH
      # Set the capacity to auto-scale
      BillingMode: PAY_PER_REQUEST
This was a much earlier attempt (several months ago, from googling, but I don't remember where I found it or what it does) at standing up an API Gateway:
Resources:
  SharedGW:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: SharedGW
Outputs:
  apiGatewayRestApiId:
    Value:
      Ref: SharedGW
    Export:
      Name: SharedGW-restApiId
  apiGatewayRestApiRootResourceId:
    Value:
      Fn::GetAtt:
        - SharedGW
        - RootResourceId
    Export:
      Name: SharedGW-rootResourceId
I pull everything together in a serverless.yml file that references the resource files like this:
...
resources:
# S3 Bucket
- ${file(resources/s3/s3-static-host.yml)}
- ${file(resources/s3/s3-CodeBuildResults.yml)}
# DynamoDB
- ${file(resources/dynamodb/dynamodb-mealtable.yml)}
- ${file(resources/dynamodb/dynamodb-ziptable.yml)}
- ${file(resources/dynamodb/dynamodb-usertable.yml)}
- ${file(resources/dynamodb/dynamodb-passwordresettable.yml)}
- ${file(resources/dynamodb/dynamodb-accountregistrationtable.yml)}
- ${file(resources/dynamodb/dynamodb-restaurant_table.yml)}
# DNS Records (Route 53)
# TODO: Determine why DNS hangs
# - ${file(resources/route_53/dev_dns.yml)}
# Gateways
- ${file(resources/api_gateway/local_rest_auth.yml)}
# - ${file(resources/api_gateway/rest_auth.yml)}
...
I've seen several examples of connecting a Lambda to a gateway, but it's not clear where the gateway is being created; it's also not clear how the Lambda is being created, or whether I'd be able to reference layers/function code in S3.
I've seen some tutorials for doing this with AWS Amplify via the CLI, but my dream state would be that I could effectively create a new AWS account, deploy this Serverless project, and have my site up and running automatically, with just a little Route 53 work to point to a new domain.
I'm creating a managed policy in a CloudFormation template which locks down access to an S3 bucket and key path. I've followed the AWS docs for using !Join but I am getting a malformed template error.
Resource:
  - !GetAtt ACHSFTPProxyBucket.Arn
  -
    - !Join
      - - ''
      - !GetAtt SomeBucketICreated.Arn
      - /supersecret/upload/* # note: I've also wrapped this in quotes and no dice
I've restricted access in the past using conditionals on the actions but was wondering if this could be done on a resource line and a !Join.
The output should look like this once deployed, viewed in the JSON editor of the console:
"Resource": [
    "arn:aws:s3:::bucketbname",
    "arn:aws:s3:::bucketbname/supersecret/upload/*"
]
I've manually modified the JSON in the console to test whether the policy works, and it does; I'm just trying to figure out how to translate this into the template.
What is the correct way to construct the ARN combined with a string?
You need Fn::Sub. Something along these lines:
Resource:
  - !GetAtt ACHSFTPProxyBucket.Arn
  - !Sub '${SomeBucketICreated.Arn}/supersecret/upload/*'
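If you would rather keep !Join, an equivalent form (a sketch using the same bucket names as above, with the delimiter and the list of parts properly nested) is:
Resource:
  - !GetAtt ACHSFTPProxyBucket.Arn
  - !Join
    - ''
    - - !GetAtt SomeBucketICreated.Arn
      - '/supersecret/upload/*'
Both the !Sub and !Join versions resolve to the bucket ARN with /supersecret/upload/* appended, matching the JSON shown above.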
I am trying to create a GCS bucket using Deployment Manager using the following resource config:
resources:
  - type: storage.v1.bucket
    name: upload-bucket
    properties:
      project: <project-id>
      name: <unique-bucket-name>
However, I get the following error:
- code: RESOURCE_ERROR
  location: /deployments/the-bucket/resources/upload-bucket
  message: '{"ResourceType":"storage.v1.bucket","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"205531008256@cloudservices.gserviceaccount.com does not have storage.buckets.get access to upload-bucket.","reason":"forbidden"}],"message":"205531008256@cloudservices.gserviceaccount.com does not have storage.buckets.get access to upload-bucket.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/upload-bucket","httpMethod":"GET","suggestion":"Consider granting permissions to 205531008256@cloudservices.gserviceaccount.com"}}'
The role of 205531008256@cloudservices.gserviceaccount.com is Project Editor by default (which surely has enough permissions?); however, I've also tried adding Storage Admin and Project Owner, and neither seems to help.
My 2 questions are:
Why is it trying to use this service account?
How can I get Deployment Manager to be able to create a bucket?
Thanks
I ran into the exact same problem. Allow me to restate Andres S's answer more clearly.
When you wrote
resources:
  - type: storage.v1.bucket
    name: upload-bucket
    properties:
      project: <project-id>
      name: <unique-bucket-name>
you probably intended to create a bucket called <unique-bucket-name> and figured that upload-bucket would just be a name to refer to this bucket in Deployment Manager. What GCP actually did was attempt to use upload-bucket as the actual bucket name. As far as I can tell, <unique-bucket-name> is never used. This caused a problem, since someone else already owns the bucket upload-bucket.
Try this code; I think you are specifying the name twice.
resources:
  - type: storage.v1.bucket
    name: <unique-bucket-name>
    properties:
      project: <project-id>
I recently ran into a similar issue, where Deployment Manager failed to create the bucket.
I have verified that:
the permissions are not an issue, as the same deployment contained another bucket that was created successfully.
the bucket name is not an issue, as I was able to create the bucket manually.
After some googling I found there is another way to create the bucket: instead of using type: storage.v1.bucket you can use type: gcp-types/storage-v1:buckets.
So my final solution was to create the bucket like this:
- name: images-bucket
  type: gcp-types/storage-v1:buckets
  properties:
    name: images-my-project-name
    location: "eu"