cdk : how to stop generating random suffix in nested stack name - aws-cloudformation

I am using CDK with Python. I have a nested stack with a fixed id/name that calls a custom construct, also with a fixed id/name. Why does the stack still get a random string appended to the end of its name? Is there no way to stop it?
The custom construct creates a DynamoDB table, and because of the random suffix appended to the stack name, the stack fails on the second run, saying the table already exists. I need the table to be created with a retain policy, so I don't want it deleted every time the stack is executed. The table name also has to stay consistent and can't include any random autogenerated string, because the name comes from a configuration fed into the stack and is referenced by an application elsewhere that I can't modify.
Here is the nested stack code
from aws_cdk import (
    NestedStack,
)
from constructs import Construct

from myconstructs import StepFunctionConstruct


class MyInfraStack(NestedStack):
    def __init__(
        self,
        scope: Construct,
        construct_id: str,
        **kwargs,
    ) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sf_const = StepFunctionConstruct(
            self,
            id="dev-StepFunctionConstruct",
            state_machine_name="dev-sf",
        )
Here is the custom construct code:
from constructs import Construct


class StepFunctionConstruct(Construct):
    def __init__(
        self,
        scope: Construct,
        id: str,
        state_machine_name: str,
    ):
        super().__init__(scope, id)
        # code here to create dynamo db table
How do I get rid of the suffix (circled in red in the image above) when deploying this stack?

The stack name doesn't change on any update. Moreover, that's not the issue here.
Also, the construct ID isn't the resource name; the ID being static doesn't mean the generated name will be. Read more in the docs.
Here's what's happening here:
1. You deploy the stack; it's created with an auto-generated name.
2. The stack deploys a DDB table with an explicitly specified name and a retention policy of Retain, meaning it is not deleted on stack destruction.
3. If you deploy the stack again with any changes at this point, the stack's name will not change.
4. Then you destroy the stack, but the table still exists due to the retention policy.
5. You deploy the stack again, and the stack's auto-generated name changes. That's not the issue, though: the issue is that it's trying to deploy a DDB table with an explicitly specified name, but it cannot, because a table with that name already exists.
Even without destroying the stack between deploys, you can run into this issue if you introduce a change to the DDB resource that requires replacement. The way CloudFormation handles replacements is by first creating the new version of the resource and then deleting the old one. But it cannot create the new version, because the name would be the same.
This is why it's recommended to let CloudFormation generate the names for you (like you do for the stack).
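For illustration, here is a minimal sketch of that recommendation in CDK v2 Python (the construct id and partition key are hypothetical): set a retain policy, but let CloudFormation generate the physical table name.

from aws_cdk import RemovalPolicy, aws_dynamodb as dynamodb

table = dynamodb.Table(
    self,
    "MyTable",  # stable construct id; not the physical table name
    partition_key=dynamodb.Attribute(
        name="pk",  # hypothetical key name
        type=dynamodb.AttributeType.STRING,
    ),
    removal_policy=RemovalPolicy.RETAIN,
    # table_name is deliberately omitted: CloudFormation generates a
    # unique physical name, so re-deploys and replacements don't collide
    # with a previously retained table.
)

If downstream consumers need the name, you can export table.table_name instead of hard-coding it.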
Reference: https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html

Related

What's the best way to consume Parameter Store value in AWS CDK

I am having problems using the SSM valueForStringParameter method in CDK. It works the first time I deploy the stack, but it does not pick up updates to the parameter value when I redeploy, because the CloudFormation template hasn't changed, and so CloudFormation thinks there were no updates even though the SSM parameter has changed.
For context, I am deploying the stack via CodePipeline, where I run cdk synth first and then use the CloudFormationCreateUpdateStackAction action to deploy the template.
Does anyone know how to work around that? The only other option that I know will work is to switch to a custom resource lambda that calls SSM and returns the value using the aws-sdk, but that feels like an overly complicated option.
Update 1
I cannot use valueFromLookup because the value is only updated at runtime, as part of a CloudFormation deployment by another stack (I deploy both stacks in CodePipeline, in two different regions), so a synthesis-time lookup would result in a stale value.
All the valueOf* and from* methods work by adding a CloudFormation parameter. As you figured out already, changing the parameter value does not change the template and no change will be triggered.
What you probably want to use instead is the method valueFromLookup. Lookups are executed during synth and the result is put into the generated CFN template.
ssm.StringParameter.valueFromLookup(this, 'param-name');
But be aware: lookups are stored in cdk.context.json. If you have committed that file to your repo, you need to erase that key via cdk context -e ... before synth/diff/deploy.
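To make the difference concrete, here is a minimal Python sketch of the two resolution modes (the parameter name is a placeholder):

from aws_cdk import aws_ssm as ssm

# Deploy time: adds a CloudFormation parameter of type
# AWS::SSM::Parameter::Value<String>. The template does not change when
# the parameter value changes, so no stack update is triggered.
deploy_time_value = ssm.StringParameter.value_for_string_parameter(self, "param-name")

# Synth time: the resolved value is baked into the template (and cached
# in cdk.context.json), so a new value produces a template diff.
synth_time_value = ssm.StringParameter.value_from_lookup(self, "param-name")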
Since you cannot use lookup functions, and the most common way to pass config to CDK is through context variables, I can only suggest dirty workarounds.
For example, you could create a dummy parameter in your stack and bump it on every deployment.
var deploymentId = new CfnParameter(this, "deploymentId", new CfnParameterProps() { Type = "String", Description = "Deployment Id" });
SetParameterValue(deploymentId, this.Node.GetContext("deploymentId").ToString());
and when you synthesize the CloudFormation template, you could generate an ID:
cdk synth -c deploymentId=$(uuidgen)
If you can avoid the "environment agnostic" synth and you really need an immutable artifact to deploy across multiple environments, you could use the built package from your CDK app (for example, the npm package containing your CDK code) and deploy it in each environment by overriding the context parameters instead of using SSM Parameter Store.
See https://docs.aws.amazon.com/cdk/latest/guide/get_ssm_value.html: you can use the valueFromLookup method, which gets you the Parameter Store value at synthesis time. When the value differs from the previous one, this will trigger a CloudFormation stack update.
However, I was under the impression that valueForStringParameter should work on updated SSM parameter values as well, based on Example 2 in https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/

Fully qualified namespace within entities in enterprise architect

I am currently working with EA and have an issue where it does not show the full namespace, only scoping it up to the parent. There is a checkbox to include it, but the namespace is then displayed outside the entity rather than within it. How do I make it appear within the entity, with the full namespace?
To give you sort of an answer: to my knowledge this is not possible natively in EA. The diagram property Show FQN only works on embedded elements, not on the code engineering namespace.
What you can do is write an add-in that retrieves the namespace by traversing the package structure upwards, building the FQN up to the package that has a code engineering namespace root set. This add-in can be called from within a shape script to print that name, like print("#addin:myAddIn,pFunc1#"), where myAddIn is the name of your add-in function and pFunc1 is an optional parameter (a comma-separated list) being passed.
I don't think it's worth the effort, since that name can get quite long and unreadable in a class compartment. Alternatively, consider showing the packages nested and/or just adding a simple text/note to show the context namespace.

How to control resource creation order in Pulumi

I'm trying to create some resources and need to enforce some sort of creation order, e.g. an aws.s3.Bucket for storing logs must be created before it can be used as an input to an aws.cloudfront.Distribution.
How do I control resource creation order when using Pulumi?
Generally, Pulumi handles the ordering of resource creation automatically. In TypeScript this is even enforced by the language's type system via pulumi.Input<T> and pulumi.Output<T> types. But understanding the details of those types isn't actually necessary.
The Pulumi engine will resolve all "parameters" or "inputs" to a resource. So if you use one resource as a parameter when configuring another, the resource it depends on will be created first. In other words, it works the way you would want it to.
However, there are situations where you need to explicitly mark one resource as being dependent upon another. This will happen when there is some sort of coupling that exists outside of the Pulumi program.
To specify an explicit dependency, you can provide an instance of pulumi.ResourceOptions to the resource, and set its dependsOn property. The Pulumi engine will resolve all of the resources in the dependsOn array before processing the resource.
Here's a simple example showing these two ways Pulumi determines ordering. An AWS S3 bucket is a resource that contains files, called objects. The bucket must be created before any objects can be created inside of it.
// Create a bucket named "example-bucket", available at s3://example-bucket.
let bucket = new aws.s3.Bucket("bucket", {
    bucket: "example-bucket",
});

let file1 = new aws.s3.BucketObject("file1", {
    // The bucket field of BucketObjectArgs is an instance of
    // aws.s3.Bucket. Pulumi will know to create the "bucket"
    // resource before this BucketObject resource.
    bucket: bucket,
});

let file2 = new aws.s3.BucketObject("file2", {
    // The bucket field of BucketObjectArgs is a string, so
    // Pulumi does not know to block creating the file2 resource
    // until the S3 bucket exists.
    bucket: "example-bucket",
} as aws.s3.BucketObjectArgs, {
    // By putting "bucket" in the "dependsOn" array here,
    // the Pulumi engine will create the bucket resource before
    // this file2 resource.
    dependsOn: [ bucket ],
} as pulumi.ResourceOptions);
Simple answer
The official docs are quite informative about this option:
The dependsOn option provides a list of explicit resource dependencies.
Pulumi automatically tracks dependencies between resources when you supply an input argument that came from another resource's output properties. In some cases, however, you may need to explicitly specify additional dependencies that Pulumi doesn't know about but must respect. This might happen if a dependency is external to the infrastructure itself, such as an application dependency, or is implied due to an ordering or eventual consistency requirement. These dependencies ensure that resource creation, update, and deletion is done in the correct order.
The examples below demonstrate making res2 dependent on res1, even if there is no property-level dependency:
#Python
res1 = MyResource("res1")
res2 = MyResource("res2", opts=ResourceOptions(depends_on=[res1]))
#Golang
res1, _ := NewMyResource(ctx, "res1", &MyResourceArgs{/*...*/})
res2, _ := NewMyResource(ctx, "res2", &MyResourceArgs{/*...*/}, pulumi.DependsOn([]Resource{res1}))
#JS
let res1 = new MyResource("res1", {/*...*/});
let res2 = new MyResource("res2", {/*...*/}, { dependsOn: [res1] });
If you want to understand what is happening under the hood
Read about creation and deletion order:
Pulumi executes resource operations in parallel whenever possible, but understands that some resources may have dependencies on other resources. If an output of one resource is provided as an input to another, the engine records the dependency between these two resources as part of the state and uses these when scheduling operations. This list can also be augmented by using the dependsOn resource option.
By default, if a resource must be replaced, Pulumi will attempt to create a new copy of the resource before destroying the old one. This is helpful because it allows updates to infrastructure to happen without downtime. This behavior can be controlled by the deleteBeforeReplace option. If you have disabled auto-naming by providing a specific name for a resource, it will be treated as if it was marked as deleteBeforeReplace automatically (otherwise the create operation for the new version would fail, since the name is in use).
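To make that last point concrete, here is a minimal Python sketch (the bucket and its fixed name are hypothetical) that opts into delete-before-replace explicitly:

import pulumi
from pulumi_aws import s3

# An explicit physical name disables auto-naming, so a replacement must
# delete the old bucket before creating the new one to avoid a name
# collision. Pulumi applies this behavior automatically for named
# resources; setting the option just makes it explicit.
bucket = s3.Bucket(
    "logs",
    bucket="example-logs-bucket",  # hypothetical fixed name
    opts=pulumi.ResourceOptions(delete_before_replace=True),
)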

Issue with spring cloud config property file order

I am using Spring Cloud Config to load properties files for my application. I have multiple environments. I notice that the property files are loaded in the wrong order. This is what I see in my logs:
Located property source: CompositePropertySource [name='configService', propertySources=[MapPropertySource [name='https://github.com/xyz/configrepo.git/gatekeeper-dev.properties'], MapPropertySource [name='https://github.com/xyz/configrepo.git/gatekeeper.properties']]]
It seems that the environment-specific property file is loaded first and overridden by the default property file. Is there any way I can control the order in which they are loaded and processed?
That is the expected order (for good reasons, so I am surprised you found a use case where it wasn't convenient). You can't control it, except by changing the names of the files and listing them in comma-separated form. For the sake of clarity: profile-specific properties always override default ones. Possibly the logs have confused you.

Pyramid traversal for PUT requests

I am trying to create a Pyramid route for a PUT request in a RESTful API to create a new resource. My application uses traversal, which works great for GET and POST:
config.add_route('myroute', '/resources/*traverse')
Since PUT should have the new resource name in the URL, this obviously doesn't work for PUT: there is an unknown resource at the end, so the traversal fails. I tried to create a new route for PUT using a hybrid URL dispatch and traversal approach:
config.add_route('myroute_put', '/resources/{path}/{new}', traverse='/{path}', request_method='PUT')
This works great if and only if there is exactly one path segment to traverse; the name of the new resource is then available as request.matchdict['new']. If we are at the root level, with nothing to traverse, we can still get this to work by adding an auxiliary route:
config.add_route('myroute_put_root', '/resources/{new}', request_method='PUT')
However, that's not a real solution, because myroute_put still doesn't match if there is more than one path segment to traverse, such as for the URL /resources/path1/path2/new_resource.
This Stack Overflow question: Pyramid traversal HTTP PUT to a URI that doesn't exist proposes a solution to create a different NewResource context type to represent new resources. The __getitem__() method of the Resource class can then always return a NewResource if it can't find the requested child. Then, a view configuration can be setup for the NewResource context and PUT request_method.
This almost works, except that by always returning a NewResource when a child isn't found, instead of raising KeyError, it breaks the ability to use named views as URL subordinates. For example, the URL /resources/path1/path2/my_view would mistakenly return a NewResource context for my_view instead of using it as a view_name if such a view exists.
The best workaround I have found so far is a custom Pyramid traversal algorithm that first runs the default traversal; if that fails and request.method is PUT, it returns a NewResource context, otherwise it returns the result of the traversal as-is.
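A minimal sketch of that workaround, assuming a hypothetical NewResource class and subclassing Pyramid's default ResourceTreeTraverser (this simple version ignores any remaining subpath):

from pyramid.traversal import ResourceTreeTraverser

class NewResource(object):
    """Hypothetical context for a resource that does not exist yet."""
    def __init__(self, parent, name):
        self.__parent__ = parent
        self.__name__ = name

class PutAwareTraverser(ResourceTreeTraverser):
    def __call__(self, request):
        result = super(PutAwareTraverser, self).__call__(request)
        # Default traversal "failed" if it stopped early and left a
        # view_name behind. For PUT, treat that leftover segment as the
        # name of a resource to be created.
        if request.method == 'PUT' and result['view_name']:
            result['context'] = NewResource(result['context'], result['view_name'])
            result['view_name'] = ''
        return result

# Registered in configuration with: config.add_traverser(PutAwareTraverser)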