How to provision a bunch of resources using the pyplate macro - aws-cloudformation

In this learning exercise I want to use a PyPlate script to provision the BucketA, BucketB and BucketC buckets in addition to the TestBucket.
Imagine that the BucketNames parameter could be set by a user of this template who would specify a hundred bucket names using UUIDs for example.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  TestBucket:
    Type: "AWS::S3::Bucket"
  #!PyPlate
  output = []
  bucket_names = params['BucketNames']
  for name in bucket_names:
    output.append('"' + name + '": {"Type": "AWS::S3::Bucket"}')
When deployed, the above fails with: Template format error: YAML not well-formed. (line 15, column 3)

Although the accepted answer is functionally correct, there is a better way to approach this.
Essentially, PyPlate recursively reads through all the key-value pairs of the template and replaces the values that match the #!PyPlate regex with their Python-computed results. So we need a corresponding key for the PyPlate code to sit under.
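That recursive replacement can be pictured with a minimal local sketch (the helper below is hypothetical and simplified; the real macro's source differs in detail):

```python
import re

PYPLATE_RE = re.compile(r"^#!PyPlate")  # marker the macro looks for

def run_pyplate(fragment, params, template):
    """Recursively walk a template fragment and replace every string value
    that starts with #!PyPlate by whatever its code assigns to `output`."""
    if isinstance(fragment, dict):
        return {k: run_pyplate(v, params, template) for k, v in fragment.items()}
    if isinstance(fragment, list):
        return [run_pyplate(v, params, template) for v in fragment]
    if isinstance(fragment, str) and PYPLATE_RE.match(fragment):
        scope = {"params": params, "template": template, "output": None}
        exec(fragment, scope)  # the snippet is expected to set `output`
        return scope["output"]
    return fragment
```

Because only string values matching the marker are replaced, everything else in the template passes through untouched.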
Here's how a combination of PyPlate and Explode would solve the above problem.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate, Explode]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Mappings: {}
Resources:
  MacroWrapper:
    ExplodeMap: |
      #!PyPlate
      param = "BucketNames"
      mapNamespace = param + "Map"
      template["Mappings"][mapNamespace] = {}
      for bucket in params[param]:
          template["Mappings"][mapNamespace][bucket] = {
              "ResourceName": bucket
          }
      output = mapNamespace
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"
This approach is powerful because:
You can append resources to an existing template, because you don't tamper with the whole Resources block.
You don't need to rely on hardcoded Mappings, as plain Explode requires; you can drive dynamic logic in CloudFormation.
Most of the CloudFormation properties and key-value pairs stay in the YAML part, with only a minimal Python part augmenting the template's functionality.
Please be aware of the macro order, though: PyPlate needs to be executed before Explode, which is why the transform list is [PyPlate, Explode]. Execution is sequential.
If we walk through the source code of PyPlate, it gives us access to more template-related variables to work with, namely:
params (stack parameters)
template (the entire template)
account_id
region
I utilised the template variable in this case.
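As a quick local sanity check, the macro body can be run as plain Python with stand-in params and template values (a sketch of the data flow only, not how the macro service actually invokes it):

```python
# Stand-ins for what PyPlate injects at transform time.
params = {"BucketNames": ["BucketA", "BucketB", "BucketC"]}
template = {"Mappings": {}}

# Body of the ExplodeMap snippet from the template above.
param = "BucketNames"
mapNamespace = param + "Map"
template["Mappings"][mapNamespace] = {}
for bucket in params[param]:
    template["Mappings"][mapNamespace][bucket] = {"ResourceName": bucket}
output = mapNamespace  # PyPlate substitutes this for the ExplodeMap value

print(template["Mappings"])
```

After PyPlate runs, ExplodeMap is the string "BucketNamesMap" and Mappings is populated, which is exactly the shape Explode then consumes.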
Hope this helps

This works for me:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources: |
  #!PyPlate
  output = {}
  bucket_names = params['BucketNames']
  for name in bucket_names:
    output[name] = {"Type": "AWS::S3::Bucket"}
Explanation:
The python code outputs a dict object where the key is the bucket name and the value is its configuration:
{'BucketA': {'Type': 'AWS::S3::Bucket'}, 'BucketB': {'Type': 'AWS::S3::Bucket'}}
Prior to macro execution, the YAML template is transformed to JSON, and because the above is valid JSON data, I can plug it in as the value of Resources.
(Note that the hardcoded TestBucket won't work with this approach, so I had to remove it.)
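To confirm that the snippet's result really is valid JSON data, the loop can be reproduced locally (the params dict below is a stand-in for the macro's input):

```python
import json

# Stand-in for the CommaDelimitedList default "BucketA,BucketB,BucketC".
params = {"BucketNames": ["BucketA", "BucketB", "BucketC"]}

output = {}
for name in params["BucketNames"]:
    output[name] = {"Type": "AWS::S3::Bucket"}

# json.dumps succeeding shows the dict can be plugged in as Resources.
print(json.dumps(output))
```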

Related

How to create some random or unique value in a CloudFormation template?

Is there a way to create some kind of random or unique value in a CloudFormation template?
Why I need this. In our templates we have a number of custom-named resources, for instance AWS::AutoScaling::LaunchConfiguration with specified LaunchConfigurationName or AWS::AutoScaling::AutoScalingGroup with specified AutoScalingGroupName.
When updating stacks, we often get the following error:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename some-stack-launch-configuration and update the stack again.
We don't want to rename resources just because we need to update them.
We also don't want to drop custom names in our resources. We won't mind however having some random suffix in our custom names.
With a "random generator" the solution might look something like:
MyAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    AutoScalingGroupName: !Sub 'my-auto-scaling-group-${AWS::Random}'
If you just need a random ID (no passwords, no fancy requirements), the way I'd recommend is using a portion of AWS::StackId, which is in the following format:
arn:aws:cloudformation:us-west-2:123456789012:stack/teststack/51af3dc0-da77-11e4-872e-1234567db123
So in order to get the last portion, you would need two splits, e.g.:
AutoScalingGroupName:
  Fn::Join:
    - '-'
    - - my-auto-scaling-group
      - Fn::Select:
          - 4
          - Fn::Split:
              - '-'
              - Fn::Select:
                  - 2
                  - Fn::Split:
                      - /
                      - Ref: AWS::StackId
Equivalent shorter syntax:
AutoScalingGroupName: !Join ['-', ['my-auto-scaling-group', !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]]]
Meaning:
Start with AWS::StackId, e.g.: arn:aws:cloudformation:us-west-2:123456789012:stack/teststack/51af3dc0-da77-11e4-872e-1234567db123
Split on / and select the portion at index 2 (0-indexed): 51af3dc0-da77-11e4-872e-1234567db123
Split on - and select the portion at index 4 (0-indexed): 1234567db123
Join with your fixed name portion: my-auto-scaling-group-1234567db123.
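The same split/select chain can be verified in a few lines of Python against the sample StackId:

```python
stack_id = ("arn:aws:cloudformation:us-west-2:123456789012:"
            "stack/teststack/51af3dc0-da77-11e4-872e-1234567db123")

uuid_part = stack_id.split("/")[2]  # Fn::Select [2, Fn::Split ['/', ...]]
suffix = uuid_part.split("-")[4]    # Fn::Select [4, Fn::Split ['-', ...]]
name = "-".join(["my-auto-scaling-group", suffix])  # Fn::Join ['-', ...]

print(name)  # my-auto-scaling-group-1234567db123
```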
Advantages: I prefer this over creating a CustomResource, because in large AWS environments with many stacks you can end up with several Lambdas, making governance a bit harder.
Disadvantages: It's more verbose (Fn::Join, Fn::Select, and Fn::Split).
EDIT 2022-02-17:
As observed in @teuber789's comment, if you need multiple resources of the same type, e.g. my-auto-scaling-group-<random_1> and my-auto-scaling-group-<random_2>, this approach won't work, as AWS::StackId is the same for the whole stack.
this is similar to https://stackoverflow.com/a/67162053/2660313 but shorter:
Value: !Select [2, !Split ['/', !Ref AWS::StackId]]
In my opinion, the most elegant way to implement such logic (if you don't want to rename resources) is to use Cloudformation Macros. They're like a custom resource, but you call them implicitly during template transformation.
So, I will try to provide some example, but you can investigate more in AWS Documentation.
First of all, you create the function (with all required permissions and so on) that will do the magic (something like LiuChang mentioned).
Then, you should create a macro from this Function:
Resources:
  Macro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: <MacroName>
      Description: <Your description>
      FunctionName: <Function ARN>
And then use this Macro in your resources definition:
MyAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    AutoScalingGroupName:
      'Fn::Transform':
        - Name: <MacroName>
          Parameters:
            InputString: <Input String>
            ...<Some other parameters like operation type, or you can skip this>
Also, to use macros you must specify the CAPABILITY_AUTO_EXPAND capability during stack creation/update.
And that's it. It should just work; of course, one drawback of this approach is that you have to maintain an additional Lambda function.
I think you need to create a Lambda function to do this.
Here's a GitHub project cloudformation-random-string, which has a Lambda function and a simple tutorial.
Here's another tutorial Generate Passwords in AWS CloudFormation Template.
You can refer to the Lambda function above and make it work for you.
I'm using the AWS Java SDK to run some stack update commands.
I generate the random value in Java, then pass it in as a parameter.

Cloudformation nested stack naming

I'm building a nested Cloudformation stack with a subtemplate used multiple times.
I would like the different resources (S3 buckets, target groups, etc.) to use the AWS::StackName as part of their name.
!Sub ${AWS::StackName}-s3bucket
The nested stack names usually include an AWS-generated string that is in uppercase like:
foobar-vpcstack-YN4842UYLUFL
However, some resources only allow a name that is all lowercase.
Is there a way to ensure that the nested stack names are all lowercase?
Or is there a better way to handle the naming of the nested stacks and its resources?
I use the following rules to set the BucketName for the S3 bucket in nested CloudFormation stacks. They make sure the bucket name is all lowercase and unique.
"BucketName": {"Fn::Join": ["-", [
"foo", {"Fn::Select": ["2", {"Fn::Split": ["/", {"Ref": "AWS::StackId"}]}]}
]]}
foo is the prefix for the bucket name.
Fn::Select ... gets the last part of the StackId.
Given the sample StackId: arn:aws:cloudformation:ap-northeast-1:888888:stack/ParentStack-ChildStack-11MKHI1KPKH7O/1f29c920-82d1-11eb-85b7-0ebec92bca7d
The code above returns the following string for BucketName: foo-1f29c920-82d1-11eb-85b7-0ebec92bca7d
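A quick Python check of the same two intrinsic functions against the sample StackId confirms the result, and that the suffix is all lowercase:

```python
stack_id = ("arn:aws:cloudformation:ap-northeast-1:888888:"
            "stack/ParentStack-ChildStack-11MKHI1KPKH7O/"
            "1f29c920-82d1-11eb-85b7-0ebec92bca7d")

# Fn::Select [2, Fn::Split ['/', ...]] picks the UUID part of the StackId.
bucket_name = "-".join(["foo", stack_id.split("/")[2]])
print(bucket_name)  # foo-1f29c920-82d1-11eb-85b7-0ebec92bca7d
assert bucket_name == bucket_name.lower()
```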

How to note a calculated default value in OAS3

I'm updating my API spec (OAS 3.0.0), and am having trouble understanding how to properly model a "complex" default value.
In general, default values for parameters are scalar values (i.e. the field offset has a default value of 0). But in the API I'm spec'ing, the default value is actually calculated based on other provided parameters.
For example, what if we take the Pet model from the example documentation, and decide that all animals need to be tagged. If the user of the API wants to supply a tag, great. If not, it will be equal to the name.
One possibility:
Pet:
  required:
    - id
    - name
  properties:
    id:
      type: integer
      format: int64
    name:
      type: string
    tag:
      type: string
      default: '#/components/schemas/Pet/name'
This stores the path value as the default, but I'd like to have it explain that the default value will be calculated.
Bonus points if I can encode information from a parent schema.
Is the alternative to just describe the behavior in a description field?
OpenAPI Specification does not support dynamic/conditional defaults. You can only document the behavior verbally in the description.
That said, you can use specification extensions (x-...) to add custom information to your definitions, like so:
tag:
  type: string
  x-default: name
or
tag:
  type: string
  x-default:
    propertyName: name
    # or similar
and extend the tooling to support your custom extensions.

Can a Cloudformation stack know whether it's being created or updated?

I'm trying to create a resource with one of its properties not being a constant value. Sounds like a job for a stack parameter, except that it's a string that can take form of a Ref function in some cases. Specifically, if it's the initial creation, I want the parameter value to be a Ref to another resource, and if it's a subsequent update, I want it to be a Ref to a stack parameter. Is this possible? Is there a function or a pseudo parameter, like AWS::CurrentAction that can take values like create and update, or anything of that kind?
I think it's something to be avoided, but if you can't find any other alternative, here's a workaround.
Here's an example with a bucket name:
Parameters:
  ExternalBucketName:
    Type: String
    Default: ''
Conditions:
  ExternalBucketNameSpecified:
    !Not [!Equals [!Ref ExternalBucketName, '']]
Resources:
  CFManagedBucket:
    Type: AWS::S3::Bucket
  SomeResource:
    Type: AWS::Resource::XYZ
    Properties:
      BucketName: !If [ExternalBucketNameSpecified, !Ref ExternalBucketName, !Ref CFManagedBucket]
When you want to use the bucket created by this stack, just leave ExternalBucketName empty and the stack will adapt automatically.

yaml safe_load of many different objects

I have a huge YAML file with tag definitions like in this snippet
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
And I needed to load this, first to make sure that the file is correct YAML, and second to extract information at a certain tree depth given a certain context. If I had all of this as nested dicts, lists and primitives, that would be straightforward to do. But I cannot load the file as-is because I don't have the original Python sources and class definitions, so yaml.load() is out.
I have tried yaml.safe_load(), but that throws an exception.
The BaseLoader loads the file, so it is correct YAML. But BaseLoader jumbles all primitive information (numbers, datetimes) together as strings.
Then I found How to deserialize an object with PyYAML using safe_load?; since the file has over 100 different tags defined, the solutions presented there are impractical.
Do I have to use some other tool to strip the !!tag definitions (there is at least one occasion where !! occurs inside a normal string) so that I can use safe_load? Is there a simpler way to solve this that I am not aware of?
If not, I will have to do some string parsing to get the types back, but I thought I'd ask here first.
There is no need to go the cumbersome route of adding any of the classes if you want to use safe_load() on such a file.
You should have gotten a ConstructorError thrown in SafeConstructor.construct_undefined() in constructor.py. That method is registered for the fall-through case None in constructor.py.
If you combine that info with the fact that all such tagged "classes" are mappings (and not lists or scalars), you can just copy the code for constructing mappings into a new function and register that as the fall-through case:
import yaml
from yaml.constructor import SafeConstructor

def my_construct_undefined(self, node):
    data = {}
    yield data
    value = self.construct_mapping(node)
    data.update(value)

SafeConstructor.add_constructor(
    None, my_construct_undefined)

yaml_str = """\
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
"""

data = yaml.safe_load(yaml_str)
print(data)
should get you:
[{'name': 'aaaa', 'address': {'city': 'cccc', 'street': 'bbbb', 'number': 123}}]
without an exception thrown, and with number as an integer, not a string.
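A variant of the same idea that avoids mutating the global SafeConstructor (which would change the behavior of every yaml.safe_load() call in the process) is to register the fall-through constructor on a SafeLoader subclass. This is a sketch, assuming PyYAML is installed and allowing for tagged nodes that are sequences or scalars as well as mappings:

```python
import yaml

class TagIgnoringLoader(yaml.SafeLoader):
    """SafeLoader that turns any unknown-tagged node into plain data."""

def construct_undefined(loader, node):
    # Fall back to plain construction matching the node's shape.
    if isinstance(node, yaml.MappingNode):
        return loader.construct_mapping(node, deep=True)
    if isinstance(node, yaml.SequenceNode):
        return loader.construct_sequence(node, deep=True)
    return loader.construct_scalar(node)

# Registering under None makes this the fall-through case,
# scoped to this loader class only.
TagIgnoringLoader.add_constructor(None, construct_undefined)

data = yaml.load(
    "- !!python/object:manufacturer.Manufacturer\n"
    "  name: aaaa\n"
    "  address: !!python/object:address.BusinessAddress {number: 123}\n",
    Loader=TagIgnoringLoader,
)
print(data)
```

Passing the subclass via Loader= keeps the rest of the application's safe_load() behavior untouched.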