What's the best way to consume a Parameter Store value in AWS CDK?

I am having problems using the SSM valueForStringParameter method in CDK. It works the first time I deploy the stack, but it does not pick up updates to the parameter value when I redeploy, because the CloudFormation template hasn't changed and so CloudFormation thinks there are no updates, even though the SSM parameter has changed.
For context, I am deploying the stack via CodePipeline, where I run cdk synth first and then use the CloudFormationCreateUpdateStackAction action to deploy the template.
Does anyone know how to work around that? The only other option I know will work is to switch to a custom resource lambda that calls SSM and returns the value using aws-sdk, but that feels like an overly complicated option.
Update 1
I cannot use ValueFromLookup because the value is only updated at deployment time, as part of a CloudFormation deployment by another stack (I deploy both stacks in CodePipeline, in 2 different regions), so a synthesis-time lookup would return a stale value.
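(For reference, the custom resource route mentioned above would look roughly like this with CDK's AwsCustomResource construct in C#; the construct IDs and parameter name are placeholders.)
using System;
using System.Collections.Generic;
using Amazon.CDK.CustomResources;

// Inside the stack's constructor:
var getParam = new AwsCustomResource(this, "GetLatestParamValue", new AwsCustomResourceProps
{
    OnUpdate = new AwsSdkCall
    {
        Service = "SSM",
        Action = "getParameter",
        Parameters = new Dictionary<string, object> { ["Name"] = "/my/param/name" },
        // A new physical resource id on every deploy forces the SDK call to run again,
        // so the latest parameter value is fetched each time.
        PhysicalResourceId = PhysicalResourceId.Of(DateTime.UtcNow.Ticks.ToString())
    },
    Policy = AwsCustomResourcePolicy.FromSdkCalls(new SdkCallsPolicyOptions
    {
        Resources = AwsCustomResourcePolicy.ANY_RESOURCE
    })
});
var latestValue = getParam.GetResponseField("Parameter.Value");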

All the valueOf* and from* methods work by adding a CloudFormation parameter. As you have already figured out, changing the SSM parameter's value does not change the template, so no update will be triggered.
What you probably want to use instead is the method valueFromLookup. Lookups are executed during synth and the result is put into the generated CFN template.
ssm.StringParameter.valueFromLookup(this, 'param-name');
But be aware that lookups are stored in cdk.context.json. If you have committed that file to your repo, you need to erase that key via cdk context -e ... before synth/diff/deploy.
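For the C# bindings used in the question, the two approaches would look roughly like this (the parameter name is a placeholder):
using Amazon.CDK.AWS.SSM;

// Inside a stack's constructor:
// Deploy-time: synthesizes a CloudFormation parameter of type AWS::SSM::Parameter::Value<String>;
// the template itself does not change when the SSM value changes.
var deployTimeValue = StringParameter.ValueForStringParameter(this, "param-name");

// Synth-time: the value is looked up during cdk synth, baked into the generated template,
// and cached in cdk.context.json.
var synthTimeValue = StringParameter.ValueFromLookup(this, "param-name");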

Since you cannot use lookup functions, and the most common way to pass config to CDK is through context variables, I can only suggest dirty workarounds.
For example, you could create a dummy parameter in your stack to bump every time there's a deployment.
var deploymentId = new CfnParameter(this, "deploymentId",
    new CfnParameterProps { Type = "String", Description = "Deployment Id" });
// Feed the context value into the parameter (SetParameterValue is this answer's helper, not a CDK API).
SetParameterValue(deploymentId, this.Node.GetContext("deploymentId").ToString());
and when you synthesize the CloudFormation template, you could generate an ID:
cdk synth -c deploymentId=$(uuidgen)
If you can avoid the "environment agnostic" synth and you really need an immutable artifact to deploy across multiple environments, you could use the built package from your CDK app, for example the npm package containing your CDK code. You could then deploy it in each environment by overriding the context parameters instead of using SSM Parameter Store.

See https://docs.aws.amazon.com/cdk/latest/guide/get_ssm_value.html: you can use the valueFromLookup method, which gets the Parameter Store value at synthesis time; when the value is different from the previous one, this will trigger a CloudFormation stack update.
However, I was under the impression that valueForStringParameter should work with updated SSM parameter values as well, based on Example 2 in https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/

Azure DevOps: Set a conditional variable for Run only

I am looking for a solution where our developers want to deploy a different version of their build. My train of thought was to add an env variable to our CI/CD configs, and then in our CD playbook evaluate whether that var is None, and if so override the version we normally get from the package or pom file.
This in turn will grab that XXX version from our Helm and Docker container registry, allowing them to roll back or redeploy an older version quickly and efficiently.
I see that Azure DevOps provides environment variables, but I wanted something that would only set or store the env variable for that particular run of the pipeline and would not be persistent.
You can configure so-called queue-time variables. The idea is that you set a default value, like latest, and check the option that lets you change the value at queue time:
If a variable appears in the variables block of a YAML file, its value is fixed and can't be overridden at queue time. Best practice is to define your variables in a YAML file but there are times when this doesn't make sense. For example, you may want to define a secret variable and not have the variable exposed in your YAML. Or, you may need to manually set a variable value during the pipeline run.
You have two options for defining queue-time values. You can define a variable in the UI and select the option to Let users override this value when running this pipeline or you can use runtime parameters instead. If your variable is not a secret, the best practice is to use runtime parameters.
To set a variable at queue time, add a new variable within your pipeline and select the override option.
To allow a variable to be set at queue time, make sure the variable doesn't also appear in the variables block of a pipeline or job. If you define a variable in both the variables block of a YAML and in the UI, the value in the YAML will have priority.
See:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?WT.mc_id=DOP-MVP-5001511&view=azure-devops&tabs=yaml%2cbatch#allow-at-queue-time

Data Fusion: Pass runtime argument from one pipeline to another

I have a runtime argument set at the namespace level, which is business_date: ${logicalStartTime(yyyy-MM-dd)}. I am using this argument in my pipeline and want to use the same value in another pipeline.
There are many pipelines back to back, and I want the value to be the same throughout the pipelines once it is calculated in the first pipeline.
Suppose the value is calculated as '2020-08-20 20:14:11'. Once pipeline 1 succeeds I pass this argument to pipeline 2, but as this argument is defined at the namespace level it gets overridden when pipeline 2 starts.
How can I prevent this value from being calculated again?
As was commented before, you could set up one pipeline to trigger another pipeline; you can set a runtime variable in the first pipeline and this variable will be set in the triggered pipelines. You can create an inbound trigger by following these steps:
Once you have created your pipelines, select the last pipeline you want to be run. In my case I have the DataFusionQuickstart2 pipeline.
In the pipeline application, on the left side, click "Inbound triggers" -> "Set pipeline triggers" and you will see the pipelines that can trigger it. Check the event that will trigger the DataFusionQuickstart2 pipeline from DataFusionQuickstart and enable it.
If you take a look at the previous pipeline, DataFusionQuickstart, you will see, in the outbound trigger option (right side), the pipelines that will be triggered by DataFusionQuickstart.
Finally set your runtime argument.
Additional information
In this post, it was mentioned that there are three ways you can set the runtime argument of a pipeline:
Argument Setter plugin (you can write the value to a file in the first pipeline and, in all subsequent pipelines, create a parameter that reads that file.)
Passing runtime argument when starting a pipeline (The one described above)
Setting Preferences (It provides the ability to save configuration information at various levels of the system, including the CDAP instance, namespace, application, and program levels.)
You can write the value to a file in the first pipeline. In all subsequent pipelines, create a parameter to read that file. That way, the objective should be achieved.
@Sudhir, you may explore Preferences: https://cdap.atlassian.net/wiki/spaces/DOCS/pages/477561058/Preferences+HTTP+RESTful+API
You have set the variable at the namespace level, and per your finding it is getting evaluated each time it's used.
Can you try setting it at the application level and passing it to the next pipeline? I believe in that case it should be evaluated only once in that specific application (pipeline), and thereafter the value would be passed along.
Preferences are also available at the program level.
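A minimal C# sketch of setting such a preference at the application level through the CDAP Preferences HTTP RESTful API linked above; the host, port, namespace, application name, and key are placeholders, so check the exact endpoint against your CDAP version:
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SetAppPreference
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // PUT the preference as a JSON map to the application-level preferences endpoint.
        var url = "http://cdap-host:11015/v3/namespaces/default/apps/Pipeline2/preferences";
        var body = new StringContent("{\"business_date\": \"2020-08-20 20:14:11\"}",
                                     Encoding.UTF8, "application/json");
        var response = await client.PutAsync(url, body);
        response.EnsureSuccessStatusCode();
    }
}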

Secrets as environmental variables in vstest

In my tests I'm using an environment variable. For security reasons, when setting it in an Azure DevOps pipeline I need to mark it as secret. Is it possible to pass it as an argument within the VSTest task?
Apparently lacking enough reputation to comment on the answer by @daniel-mann, I need to follow up through an answer myself.
Regarding the use of a runsettings file;
Yes, I'm sure you can do it like this, but there's a much simpler way. In the task definition, you have the option to override test run parameters.
For a non-YAML pipeline, you do this in the "Override test run parameters" option (the tooltip says "Override parameters defined in the TestRunParameters section of runsettings file or Properties section of testsettings file. For example: -key1 value1 -key2 value2."), so like this then:
-SomeSecret $(SomeSecret)
For a YAML-pipeline, you can do this:
overrideTestrunParameters: '-SomeSecret $(SomeSecret)'
The nice thing is that the test run parameter that you'd like to override DOES NOT need to exist in your runsettings file. In fact, there's not even need for a runsettings file at all!
The bad thing about this approach in general is that you'll have to access your secrets through the "TestContext.Properties" collection (for NUnit), which really sucks if you want a transparent solution that works equally well both for local development (using "user secrets") and in an Azure pipeline.
Another potentially bad thing about this is that these "overridden" test run parameters are just that - parameters for THAT test run only. If you happen to have a mixture of .NET Framework and .NET Core tests, you would want to execute those in two different test runs (for it to work at all...), and then you'd need to duplicate those overrides (given that some of the same secrets are needed, that is).
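For illustration, a minimal NUnit sketch of reading such an overridden test run parameter (the parameter name is just an example; note that NUnit exposes TestRunParameters through TestContext.Parameters, while MSTest uses TestContext.Properties):
using NUnit.Framework;

[TestFixture]
public class SecretDependentTests
{
    [Test]
    public void CanReadOverriddenRunParameter()
    {
        // Supplied via: -SomeSecret $(SomeSecret) in "Override test run parameters".
        var secret = TestContext.Parameters["SomeSecret"];
        Assert.That(secret, Is.Not.Null.And.Not.Empty);
    }
}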
Regarding adding an additional task in your pipeline to set the appropriate environment variables;
Absolutely. Using the "Batch script" task is well suited for this, where you just pass your secret-based variables as parameters to the script and pick them up inside the script file.
For this to work as expected, though, you will need to allow the task to modify the environment.
For a non-YAML pipeline, this is done by ticking off the "Modify Environment" checkbox.
For a YAML pipeline, you could do it like this:
- task: BatchScript#1
displayName: 'Export key vault vars as env. vars (for tests)'
inputs:
filename: 'ExportKeyVaultEnvironmentVariables.cmd'
arguments: '$(SomeSecret)'
modifyEnvironment: true
Where in the "ExportKeyVaultEnvironmentVariables.cmd" script, you just do this:
set SomeSecret=%1
Note: If your secret by chance has some funny characters, especially having a trailing "=" character, you might experience that what you get when collecting the parameter inside the script is NOT what you sent in.
You can avoid this problem by enclosing the parameters in double quotes, like this:
arguments: '"$(SomeSecret)"'
And then collect the parameter by removing those surrounding quotes by using the "~" parameter modifier, like this:
set SomeSecret=%~1
A nice bonus effect of this approach is that your shiny new full-blown environment variables persist for the remainder of your pipeline. Referencing back to my "bad thing" about having to potentially duplicate the test run parameter overrides, that would not be needed here.
Regarding the additional option mentioned by the OP;
Absolutely, in which case you wouldn't need a "Batch script" task (that needs to call a script file), but just a "Command line" task.
BUT, be aware of a possible gotcha! Yes, this will create an environment variable for your test code to pick up, if you access it through "Environment.GetEnvironmentVariable".
In my case, I was building an "IConfigurationRoot" instance, like this:
/// <summary>
/// Gets a configuration instance, based on user secrets (for local test execution) and environment variables (for Azure pipeline execution).
/// </summary>
/// <typeparam name="T">The type of the class that represents the "runtime" (and thus is able to get hold of any configuration).</typeparam>
/// <returns>An <see cref="IConfigurationRoot"/> instance.</returns>
public static IConfigurationRoot GetConfigurationRoot<T>()
    where T : class =>
    new ConfigurationBuilder()
        //// Note: The "AddUserSecrets" method requires the "Microsoft.Extensions.Configuration.UserSecrets" package
        .AddUserSecrets<T>()
        //// Note: The "AddEnvironmentVariables" method requires the "Microsoft.Extensions.Configuration.EnvironmentVariables" package
        .AddEnvironmentVariables()
        .Build();
This works by adding various "configuration providers", which eventually allows you to access them all seamlessly through "configuration["SomeSecret"]".
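For example (the fixture type and key name are illustrative):
// In a test or test fixture setup:
var configuration = GetConfigurationRoot<SomeTestFixture>();
var secret = configuration["SomeSecret"]; // resolved from whichever configured provider supplies it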
What I found, though, if I'm not seriously mistaken, is that "SomeSecret" was still not available in the "EnvironmentVariablesConfigurationProvider" that's added, even though I could perfectly fine access it directly with the above mentioned method. Go figure (but I might be mistaken...).
Possible alternative approach (but for YAML pipelines only?);
It seems that for a YAML pipeline, you can explicitly set environment variables for a task, like this:
- task: VSTest#2
env:
SomeSecret: $(SomeSecret)
[...]
I haven't tested this myself, but seen a colleague do it (myself, I currently don't have a YAML pipeline). At least this variable can be picked up with "Environment.GetEnvironmentVariable", but I don't know if this can be picked up through an "IConfigurationRoot" instance.
I haven't seen any option to achieve the same in a non-YAML pipeline.
But again, this also suffers from the same possible "having to duplicate the environment variables across several test runs" problem.
See also my solution to my own question over at the Azure DevOps guys
I've been through this. There's no way to pass variables to VSTest on the command line, which means you have to jump through a few hoops.
You have a few options:
Use a runsettings file with a TestRunParameters section, then access it via TestContext.Properties["variableName"] within the tests themselves. You can use standard token replacement patterns to transform the XML file.
Use an app.config or appsettings.json (depending on your platform). This works pretty much the same as above, except, of course, you use the standard configuration classes to retrieve the values.
Add a step to your pipeline that sets the appropriate environment variables. Secrets don't get automatically mapped to environment variables for security purposes, but there's nothing that's stopping you from doing it yourself.
Move the secret values into a keyvault or some other sort of external secret storage and configure the test to pull the secrets at runtime.

How do I make Cloudformation reprocess a template using a macro when parameters change?

I have a Cloudformation template that uses a custom Macro to generate part of the template. The lambda for the Macro uses template parameters (via the templateParameterValues field in the incoming event) to generate the template fragment.
When I change the Cloudformation Stack's parameters, I get an error:
The submitted information didn't contain changes. Submit different information to create a change set.
If I use the CLI I get a similar error:
An error occurred (ValidationError) when calling the UpdateStack operation: No updates are to be performed.
The parameters I am changing are only used by the Macro, not the rest of the template.
How can I make Cloudformation reprocess the template with the macro when I update these parameters?
After working with AWS Support I learned that you must supply the template again in order for the macro to be re-processed.
Even if it is the same exact template it will cause the macros to be reprocessed.
You can do this via the Console UI (by uploading the template file again) or the CLI (by passing the template / template URL again).
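For example, re-submitting the same template body with the AWS SDK for .NET would look roughly like this (the stack name and template path are placeholders):
using System.Collections.Generic;
using System.IO;
using Amazon.CloudFormation;
using Amazon.CloudFormation.Model;

// Passing TemplateBody again (rather than UsePreviousTemplate) makes CloudFormation
// re-run the macro; CAPABILITY_AUTO_EXPAND is required for stacks that use macros.
var client = new AmazonCloudFormationClient();
await client.UpdateStackAsync(new UpdateStackRequest
{
    StackName = "my-stack",
    TemplateBody = File.ReadAllText("template.yaml"),
    Capabilities = new List<string> { "CAPABILITY_AUTO_EXPAND" }
});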
I recently came across this when using the count macro to create instances.
I found I was able to modify the parameters used just by the macro by moving that part of the template to a nested stack and passing the parameters through.
It does involve a bit more work to set up, having the separate stacks, but it did allow me to modify just the parameters of the parent stack the way I wanted.

How to pass nested Stack outputs to another step in Octopus Deploy

In my Octopus project, the first step launches a bunch of nested stacks implemented with CloudFormation.
I need to share the outputs of the master stack launched from Octopus with later steps; how can I do that?
Thanks.
The output variables from the CloudFormation template will be available to later steps the same as any other Octopus output variable; this is mentioned in the first paragraph of the documentation page.
Output variables can be accessed in a number of different ways, depending on where you are accessing them. For example, in PowerShell they can be accessed via the parameters dictionary: $OctopusParameters["Octopus.Action[Step Name].Output.VariableName"].
You can also access them using the Variable Binding syntax, #{Octopus.Action[Step Name].Output.VariableName}
More information about output variables is available in the docs.