CloudFormation nested stack naming - aws-cloudformation

I'm building a nested CloudFormation stack with a subtemplate used multiple times.
I would like the different resources (S3 buckets, target groups, etc.) to use AWS::StackName as part of their names.
!Sub ${AWS::StackName}-s3bucket
The nested stack names usually include an AWS-generated string that is in uppercase like:
foobar-vpcstack-YN4842UYLUFL
However, some resources only allow a name that is all lowercase.
Is there a way to ensure that the nested stack names are all lowercase?
Or is there a better way to handle the naming of the nested stacks and their resources?

I use the following rule to set the BucketName for S3 buckets in nested CloudFormation stacks. It ensures the bucket name is all lowercase and unique.
"BucketName": {"Fn::Join": ["-", [
"foo", {"Fn::Select": ["2", {"Fn::Split": ["/", {"Ref": "AWS::StackId"}]}]}
]]}
foo: the prefix for the bucket name.
Fn::Select ...: selects element 2 (0-indexed) of the StackId split on /, i.e. the stack's UUID, which is the last part of the ARN.
Given the sample StackId: arn:aws:cloudformation:ap-northeast-1:888888:stack/ParentStack-ChildStack-11MKHI1KPKH7O/1f29c920-82d1-11eb-85b7-0ebec92bca7d
The code above returns the following string for BucketName: foo-1f29c920-82d1-11eb-85b7-0ebec92bca7d
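If it helps to see the mechanics outside of CloudFormation, here is a rough Python equivalent of what the Fn::Split / Fn::Select / Fn::Join combination does to the sample StackId (illustrative only; CloudFormation evaluates this server-side):

# Rough Python equivalent of the Fn::Split / Fn::Select / Fn::Join logic above.
stack_id = ("arn:aws:cloudformation:ap-northeast-1:888888:stack/"
            "ParentStack-ChildStack-11MKHI1KPKH7O/1f29c920-82d1-11eb-85b7-0ebec92bca7d")

# Fn::Split on "/" -> ["arn:...:stack", "ParentStack-ChildStack-11MKHI1KPKH7O", "<uuid>"]
parts = stack_id.split("/")

# Fn::Select with index 2 picks the UUID; Fn::Join prepends the prefix.
bucket_name = "-".join(["foo", parts[2]])
print(bucket_name)  # foo-1f29c920-82d1-11eb-85b7-0ebec92bca7d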

Related

How to provision a bunch of resources using the PyPlate macro

In this learning exercise I want to use a PyPlate script to provision the BucketA, BucketB and BucketC buckets in addition to the TestBucket.
Imagine that the BucketNames parameter could be set by a user of this template who would specify a hundred bucket names using UUIDs for example.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  TestBucket:
    Type: "AWS::S3::Bucket"
  #!PyPlate
  output = []
  bucket_names = params['BucketNames']
  for name in bucket_names:
      output.append('"' + name + '": {"Type": "AWS::S3::Bucket"}')
When deployed, the above responds with a Template format error: YAML not well-formed. (line 15, column 3)
Although the accepted answer is functionally correct, there is a better way to approach this.
Essentially, PyPlate recursively reads through all the key-value pairs of the template and replaces each value that matches the #!PyPlate regex with its Python-computed result. So the PyPlate code needs a corresponding key.
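As a rough illustration of that recursion (a sketch of the idea, not PyPlate's actual source):

# Sketch of how a PyPlate-style macro walks a template: every string value
# that starts with "#!PyPlate" is replaced by its evaluated result.
def walk(node, evaluate):
    if isinstance(node, dict):
        return {key: walk(value, evaluate) for key, value in node.items()}
    if isinstance(node, list):
        return [walk(item, evaluate) for item in node]
    if isinstance(node, str) and node.startswith("#!PyPlate"):
        return evaluate(node)  # run the embedded Python; its output replaces the value
    return node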
Here's how a combination of PyPlate and Explode would solve the above problem.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate, Explode]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Mappings: {}
Resources:
  MacroWrapper:
    ExplodeMap: |
      #!PyPlate
      param = "BucketNames"
      mapNamespace = param + "Map"
      template["Mappings"][mapNamespace] = {}
      for bucket in params[param]:
          template["Mappings"][mapNamespace][bucket] = {
              "ResourceName": bucket
          }
      output = mapNamespace
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"
This approach is powerful because:
You can append resources to an existing template, because you don't tamper with the whole Resources block
You don't need to rely on hardcoded Mappings, as plain Explode requires; you can drive dynamic logic in CloudFormation
Most of the CloudFormation properties stay in the YAML part, with a minimal Python part augmenting the template's functionality
Please be aware of the macro order though - PyPlate needs to execute before Explode, which is why the order is [PyPlate, Explode]. Execution is sequential.
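To visualize the hand-off between the two macros: after the PyPlate step runs, the Mappings section contains roughly the following (shown here as a Python dict for the default BucketNames; illustrative, not an exact dump), which Explode then uses to clone MacroWrapper into one bucket resource per entry:

# Illustrative only: the Mappings content the PyPlate step generates
# for Explode to consume, given the default BucketA,BucketB,BucketC.
mappings = {
    "BucketNamesMap": {
        "BucketA": {"ResourceName": "BucketA"},
        "BucketB": {"ResourceName": "BucketB"},
        "BucketC": {"ResourceName": "BucketC"},
    }
}
print(mappings["BucketNamesMap"]["BucketA"])  # {'ResourceName': 'BucketA'}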
If we walk through the source code of PyPlate, it gives us control of more template-related variables to work with, namely:
params (stack parameters)
template (the entire template)
account_id
region
I utilised the template variable in this case.
Hope this helps
This works for me:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources: |
  #!PyPlate
  output = {}
  bucket_names = params['BucketNames']
  for name in bucket_names:
      output[name] = {"Type": "AWS::S3::Bucket"}
Explanation:
The Python code outputs a dict object where the key is the bucket name and the value is its configuration:
{'BucketA': {'Type': 'AWS::S3::Bucket'}, 'BucketB': {'Type': 'AWS::S3::Bucket'}}
Prior to macro execution, the YAML template is transformed to JSON, and because the above is valid JSON data, I can plug it in as the value of Resources.
(Note that the hardcoded TestBucket won't work with this approach; I had to remove it.)

How to create some random or unique value in a CloudFormation template?

Is there a way to create some kind of random or unique value in a CloudFormation template?
Why I need this. In our templates we have a number of custom-named resources, for instance AWS::AutoScaling::LaunchConfiguration with specified LaunchConfigurationName or AWS::AutoScaling::AutoScalingGroup with specified AutoScalingGroupName.
When updating stacks, we often get the following error:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename some-stack-launch-configuration and update the stack again.
We don't want to rename resources just because we need to update them.
We also don't want to drop custom names in our resources. We won't mind however having some random suffix in our custom names.
With a "random generator" the solution might look something like:
MyAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    AutoScalingGroupName: !Sub 'my-auto-scaling-group-${AWS::Random}'
If you just need a random ID (no passwords, no fancy requirements), the way I'd recommend is using a portion of AWS::StackId, which is in the following format:
arn:aws:cloudformation:us-west-2:123456789012:stack/teststack/51af3dc0-da77-11e4-872e-1234567db123
So in order to get the last portion, you would need two splits, e.g.:
AutoScalingGroupName:
  Fn::Join:
    - '-'
    - - my-auto-scaling-group
      - Fn::Select:
          - 4
          - Fn::Split:
              - '-'
              - Fn::Select:
                  - 2
                  - Fn::Split:
                      - /
                      - Ref: AWS::StackId
Equivalent shorter syntax:
AutoScalingGroupName: !Join ['-', ['my-auto-scaling-group', !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]]]
Meaning:
Start with AWS::StackId, e.g.: arn:aws:cloudformation:us-west-2:123456789012:stack/teststack/51af3dc0-da77-11e4-872e-1234567db123
Split on / and select the portion at index 2 (0-indexed): 51af3dc0-da77-11e4-872e-1234567db123
Split on - and select the portion at index 4 (0-indexed): 1234567db123
Join with your fixed name portion: my-auto-scaling-group-1234567db123.
Advantages: I prefer this over creating a CustomResource because, in large AWS environments with many stacks, you can end up with several Lambdas, making governance a bit harder.
Disadvantages: It's more verbose (Fn::Join, Fn::Select, and Fn::Split).
EDIT 2022-02-17:
As observed in @teuber789's comment, if you need multiple resources of the same type, e.g. my-auto-scaling-group-<random_1> and my-auto-scaling-group-<random_2>, this approach won't work, as AWS::StackId is the same for the whole stack.
This is similar to https://stackoverflow.com/a/67162053/2660313 but shorter:
Value: !Select [2, !Split ['/', !Ref AWS::StackId]]
In my opinion, the most elegant way to implement such logic (if you don't want to rename resources) is to use CloudFormation Macros. They're like a custom resource, but you call them implicitly during template transformation.
So, I will try to provide some example, but you can investigate more in AWS Documentation.
First of all, you create the function (with all required permissions and so on) that will do the magic (something like LiuChang mentioned).
Then, you should create a macro from this Function:
Resources:
  Macro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: <MacroName>
      Description: <Your description>
      FunctionName: <Function ARN>
And then use this Macro in your resources definition:
MyAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    AutoScalingGroupName:
      'Fn::Transform':
        - Name: <MacroName>
          Parameters:
            InputString: <Input String>
            ...<Some other parameters like operation type or you can skip this>
Also, to use macros, you must specify the CAPABILITY_AUTO_EXPAND capability during stack creation/update.
And that's it. It should just work, though of course one drawback of this approach is that you have to maintain an additional Lambda function.
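For reference, here is a minimal sketch of what the Lambda behind such a macro could look like. The event/response shape is CloudFormation's macro contract; the random-suffix logic and the InputString parameter name are illustrative assumptions:

# Minimal sketch of a snippet-level macro Lambda: CloudFormation invokes it
# with the Fn::Transform parameters in event["params"], and the returned
# "fragment" replaces the Fn::Transform node in the template.
import secrets

def handler(event, context):
    input_string = event["params"]["InputString"]  # illustrative parameter name
    suffix = secrets.token_hex(6)  # 12 hex chars, e.g. 'a1b2c3d4e5f6'
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": f"{input_string}-{suffix}",
    }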
I think you need to create a Lambda function to do this.
Here's a GitHub project cloudformation-random-string, which has a Lambda function and a simple tutorial.
Here's another tutorial Generate Passwords in AWS CloudFormation Template.
You can refer to the Lambda function above and make it work for you.
I'm using the AWS Java SDK to run stack update commands.
I generate a random value in Java, then pass it in as a parameter.
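The same idea sketched with boto3 instead of the Java SDK (the stack name and the RandomSuffix parameter are made up for illustration; the template would have to declare that parameter):

# Generate the random value client-side and pass it in as a stack parameter.
import secrets
import boto3

cfn = boto3.client("cloudformation")
cfn.update_stack(
    StackName="my-stack",  # illustrative stack name
    UsePreviousTemplate=True,
    Parameters=[
        {"ParameterKey": "RandomSuffix",  # illustrative parameter name
         "ParameterValue": secrets.token_hex(6)},
    ],
)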

Can positively check S3 bucket existence but won't be able to list its objects

I'm organizing my S3 buckets like nested directories in order to store files by some criteria. For the sample below, take into account that only the test=xxx keys are the ones that ultimately store files.
s3Client.doesBucketExistV2("one/two/sample") // true
s3Client.doesBucketExistV2("one/two/sample/test=123") // false
Both exist, and test=xxx contains files, but they weren't created manually by me in AWS; they were created by a program.
Why does the test=123 check return false instead of true?
And my second doubt is when trying to list objects for a given bucket...
s3Client.listObjects(new ListObjectsRequest()
.withBucketName("one/")
.withPrefix("two/")
.withPrefix("sample/")) // The specified key does not exist. [404]
Why can't I list the objects of a given bucket that exists?
The second .withPrefix("sample/") overrides .withPrefix("two/"); it does not concatenate the strings.
The bucket name, prefix, and key are separate things, so in the second case doesBucketExistV2() is effectively testing a key, not a bucket.
Your bucket name is: one
Your prefix is: two/
or another prefix is: two/sample/
with the keys: test=xxx
s3Client.listObjects(new ListObjectsRequest()
    .withBucketName("one")
    .withPrefix("two/sample/"));
Maybe try this:
import boto3

s3 = boto3.client("s3")
list_of_files = s3.list_objects_v2(Bucket="your-bucket")["Contents"]

Parameterizing Resource Names in CloudFormation Template?

This answer here: Is there a way to parameterize cloud formation resource names? didn't really help as I am looking to set the physical name, not the logical one. I was hoping for something along the lines of setting a parameter in the parameters list like:
"ELBName": {
"Type": "String",
"Default": "xxx",
"Description": "The Production Number for this stack (e.g. xxx)"
}
and then
"LoadBalancerName": "prod" + {Ref: "ELBName"}
although that direct concatenation is not possible. Is there any way to do what I want? My end goal is to take a template I've created and use it to create many copies of itself, each with the same resources but different names, possibly through a nested stack.
Use the Fn::Join function to do this:
"LoadBalancerName":{
"Fn::Join":[
"",
[
"prod",
{
"Ref":"ELBName"
}
]
]
}
This will give the name prod01, assuming the ELBName parameter was passed the value 01. (The same concatenation can also be written more compactly with Fn::Sub: {"Fn::Sub": "prod${ELBName}"}.)

yaml safe_load of many different objects

I have a huge YAML file with tag definitions like in this snippet
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
And I need to load this, first to make sure that the file is correct YAML, and second to extract information at a certain tree depth given a certain context. If I had this all as nested dicts, lists and primitives, that would be straightforward to do. But I cannot load the file as I don't have the original Python sources and class definitions, so yaml.load() is out.
I have tried yaml.safe_load(), but that throws an exception.
The BaseLoader loads the file, so it is correct YAML. But that jumbles all primitive information (numbers, datetimes) together as strings.
Then I found How to deserialize an object with PyYAML using safe_load?, but since the file has over 100 different tags defined, the solution presented there is impractical.
Do I have to use some other tool to strip the !!tag definitions (there is at least one occasion where !! occurs inside a normal string) so I can use safe_load? Or is there a simpler way to solve this that I am not aware of?
If not, I will have to do some string parsing to get the types back, but I thought I'd ask here first.
There is no need to go the cumbersome route of adding any of the classes if you want to use safe_load() on such a file.
You should have gotten a ConstructorError thrown in SafeConstructor.construct_undefined() in constructor.py. That method gets registered for the fall-through case None in the constructor.py file.
If you combine that info with the fact that all such tagged "classes" are mappings (and not lists or scalars), you can just copy the code for mappings into a new function and register it for the fall-through case:
import yaml
from yaml.constructor import SafeConstructor

# Mirrors SafeConstructor's mapping construction: build a plain dict for
# any otherwise-unknown tag instead of raising ConstructorError.
def my_construct_undefined(self, node):
    data = {}
    yield data
    value = self.construct_mapping(node)
    data.update(value)

# None is the fall-through key: this now handles every unregistered tag.
SafeConstructor.add_constructor(
    None, my_construct_undefined)

yaml_str = """\
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
"""

data = yaml.safe_load(yaml_str)
print(data)
should get you:
[{'name': 'aaaa', 'address': {'city': 'cccc', 'street': 'bbbb', 'number': 123}}]
without an exception thrown, and with number as an integer, not a string.