Order the parameters as mentioned in the template - aws-cloudformation

When you create stacks in the console, the console lists input parameters in alphabetical order by their logical IDs. There is a way to customize the order using AWS::CloudFormation::Interface:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-interface.html
But is there any way to order the parameters as they are declared in the template?

Use AWS::CloudFormation::Interface, which allows you to set the order and additionally lets you group your parameters together. The order in which you specify the parameters in each Parameters list is the order in which they appear in the console.
Example below, taken from the AWS docs:
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      -
        Label:
          default: "Network Configuration"
        Parameters:
          - VPCID
          - SubnetId
          - SecurityGroupID
      -
        Label:
          default: "Amazon EC2 Configuration"
        Parameters:
          - InstanceType
          - KeyName
    ParameterLabels:
      VPCID:
        default: "Which VPC should this be deployed to?"

Related

Custom resource outputs not found

I've written a custom resource in Go using cloudformation-cli-go-plugin; it fails when I try to use it in a stack with:
Unable to retrieve Guid attribute for MyCo::CloudFormation::Workloads, with error message NotFound guid not found.
The stack:
AWSTemplateFormatVersion: 2010-09-09
Description: Sample MyCo Workloads Template
Resources:
  Resource1:
    Type: 'MyCo::CloudFormation::Workloads'
    Properties:
      APIKey: ""
      AccountID: ""
      Workload: >-
        workload: {entityGuids: "", name: "CloudFormationTest-Create"}
Outputs:
  CustomResourceAttribute1:
    Value: !GetAtt Resource1.Guid
If I remove the Outputs stanza the stack runs successfully and I can see the created resource.
Running locally with SAM, I've verified that Guid is in fact always returned. FWIW, the resource passes all of the contract tests; Guid is the primaryIdentifier and is listed in readOnlyProperties.
I've tried several tests playing with the !GetAtt definition, all of which fail with schema errors, so it appears that CloudFormation is aware of the format of the resource's properties.
Suggestions and/or pointers would be appreciated.
The issue here is Read failing because CloudFormation behaves differently than the contract tests. The contract tests do not follow the CloudFormation model rules; they are more permissive.
There are a number of differences in how the contract tests and CloudFormation behave, so passing the contract tests does not guarantee CloudFormation compatibility. Two examples:
The contract tests allow returning a temporary primaryIdentifier that can change between handler.InProgress and handler.Success.
The contract tests pass the entire model to all events, whereas CloudFormation only passes the primaryIdentifier to Read and Delete.

How to pass multiple secret nicknames in the ecs annotation

I am using the SCDF UI to launch pods on an ECS cluster and I am passing all the required parameters from the task itself. My application uses two secrets, one for Oracle and one for Mongo.
I need to pass both secret nicknames in the job annotation.
I tried the approaches below but none of them worked.
a. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: 'MONGO_NICKNAME,ORACLE_NICKNAME'
b. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: "MONGO_NICKNAME,ORACLE_NICKNAME"
c. deployer.app-test.kubernetes.jobAnnotations=ecs.o2c.secretsnicknames: 'MONGO_NICKNAME+ORACLE_NICKNAME'
Please suggest how to do that using the annotation.
I had the same issue, trying to get two different secrets, and I used the following annotation:
ecs.o2c.secretsnicknames: "${SECRET1}, ${SECRET2}"
It should work. You only need to define the variables in the parameters section, such as:
parameters:
- name: SECRET1
  required: true
- name: SECRET2
  required: true
For your example, provided you have defined the MONGO_NICKNAME and the ORACLE_NICKNAME in the parameters section, you should have the following syntax:
ecs.o2c.secretsnicknames: "${MONGO_NICKNAME},${ORACLE_NICKNAME}"
If you somehow inject the strings directly, you should have:
ecs.o2c.secretsnicknames: "MongoNicknameString, OracleNicknameString"

Access to output data from stack

I am creating a REST API using CloudFormation. In another CloudFormation stack I would like to have access to values that are in the output section (the invoke URL) of that CloudFormation template.
Is this possible, and if so how?
You can export your outputs. Exporting makes them accessible to other stacks.
From the AWS Docs:
To export a stack's output value, use the Export field in the Output section of the stack's template. To import those values, use the Fn::ImportValue function in the template for the other stacks
The following exports an API Gateway Id.
Description: API for interacting with API resources
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
Outputs:
  MyApiId:
    Description: 'Id of the API'
    Value: !Ref MyApi
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
  MyApiRootResourceId:
    Description: 'Id of the root resource on the API'
    Value: !GetAtt MyApi.RootResourceId
    Export:
      Name: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource'
The Export piece of the Output is the important part here. If you provide the Export, then other stacks can consume it.
Now, in another file, I can import that MyApiId value by using the Fn::ImportValue intrinsic function, importing the exported name. I can also import its root resource and consume both of these values when creating a child API resource.
From the AWS Docs:
The intrinsic function Fn::ImportValue returns the value of an output exported by another stack. You typically use this function to create cross-stack references.
Description: Resource endpoints for interacting with the API
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource' }
      PathPart: foobar
      RestApiId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi' }
These are two completely different .yaml files and can be deployed as two independent stacks, but now they depend on each other. If you try to delete the MyApi API Gateway stack before deleting the MyResource stack, the CloudFormation delete operation will fail. You must delete the dependent stacks first.
One thing to keep in mind is that in some cases you might want the flexibility to delete the root resource without worrying about dependencies. The delete operation could in some cases be done without any side effects. For instance, deleting an SNS topic won't break a Lambda - it just prevents it from running. There's no reason to delete the Lambda just to re-deploy a new SNS topic. In that scenario I use naming conventions and tie things together that way instead of using exports. For example, the above AWS::ApiGateway::Resource can be tied to an environment-specific API Gateway based on the naming convention.
Parameters:
  TargetEnvironment:
    Description: 'Examples can be dev, test or prod'
    Type: 'String'
  ProductName:
    Description: 'Represents the name of the product you want to call the deployment'
    Type: 'String'
Resources:
  MyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: {'Fn::ImportValue': !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapirootresource' }
      PathPart: foobar
      RestApiId: !Sub '${ProductName}-${TargetEnvironment}-apigw-primaryapi'
With this there's no need to worry about the export/import as long as the last half of the resource name is the same across all environments. The environment can change via the TargetEnvironment parameter, so this can be re-used across dev, test and prod.
One caveat to this approach is that naming conventions only work when you want to access something that can be referenced by name. If you need a property, such as the RootResourceId in this example, or an EC2 instance size, EBS volume size, etc., then you can't just use a naming convention. You'll need to export the value and import it. In the example above I could replace the RestApiId import usage with a naming convention, but I could not replace the ParentId with a convention - I had to perform an import.
I use a mix of both in my templates - you'll learn when it makes sense to use one approach over the other as you build experience.

Azure Pipelines parameter value from variable template

We would like to deploy components of our application to developers' local machines and want it to be easy enough for our co-workers to use and easy enough for us to maintain. These are virtual machines with a certain naming convention, for instance: VM001, VM002, and so on.
I can define these machines, and use the value later on in the pipeline, in a parameter in YAML like this:
parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - VM001
  - VM002
  - And so on...
I then only have to maintain one stage, because the only thing that really differs is the stage name:
stages:
- stage: ${{ parameters.stage }}
  displayName: Deploy on ${{ parameters.stage }}
  jobs:
  ...
The idea behind defining the machines in the parameters like this is that developers can choose their virtual machine from the 'Stage' dropdown when they want to deploy to their own virtual machine. By setting the value of the parameter to the virtual machine, the stage is named and the correct library groups will also be linked up to the deployment (each developer has their own library groups where we store variables such as accounts and secrets).
However, we have multiple components that we deploy through multiple pipelines. So each component gets its own YAML pipeline and for each pipeline we will have to enter and maintain the same list of virtual machines.
We already use variable and job templates for reusability. I want to find a way to create a template with the list of machines and pass it to the parameter value. This way, we only need to maintain one template so whenever someone new joins the team or someone leaves, we only need to update one file instead of updating all the pipelines.
I've tried to pass the template to the parameter value using an expression like this:
variables:
- name: VirtualMachinesList
  value: VirtualMachinesList.yml

parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - ${{ variables.VirtualMachinesList }}
The VirtualMachinesList.yml looks like this:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
- And so on...
This gives the following error when I try to run the pipeline:
A template expression is not allowed in this context
I've also tried changing the parameter type to object. This results in a text field with a list of all the virtual machines, and you can select the ones you don't want to deploy to and remove them. This isn't very user-friendly and is also very error-prone, so it's not a very desirable solution.
Is there a way to pass the list of virtual machines to the parameter value from a single location, so that developers can choose their own virtual machine to deploy to?
I know you want to maintain the list of virtual machines in one place and also keep the ability for developers to choose their VM from the dropdown to deploy to. But I am afraid that cannot be done currently. Runtime parameters do not support templates yet. You can submit a user voice request regarding this issue.
Currently you can keep only one of the two: either maintain the VMs in one place, or let developers choose their VM from the dropdown.
1. To maintain the virtual machines in one place, you can define a variable template to hold the virtual machines and have the developer type the VM to deploy to. See below:
Define an empty runtime parameter for the developer to type into:
parameters:
- name: vm
  type: string
  default:
Define the variable template to hold the VMs:
# variable.yml template
variables:
  vm1: vm1
  vm2: vm2
  ...
Then, in the pipeline, define a variable that refers to the vm variable in the variable template. See below:
variables:
- template: variables.yml
- name: vmname
  value: $[variables.${{parameters.vm}}]

steps:
- powershell: echo $(vmname)
2. To give developers the convenience of choosing their VM from the dropdown, you have to define these machine parameters in every pipeline.
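A per-pipeline sketch of that dropdown parameter, reusing the VM names from the question:

# Repeated in every pipeline that needs the dropdown.
parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - VM001
  - VM002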
You're really close. You'll want to update how you're consuming your variable template to:
variables:
- template: variable-template.yml
Here's a working example (assuming both the variable template and consuming pipeline are within the same directory of a repository):
variable-template.yml:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
example-pipeline.yml:
name: Stackoverflow-Example-Variables

trigger:
- none

variables:
- template: variable-template.yml

stages:
- stage: StageA
  displayName: "Stage A"
  jobs:
  - job: output_message_job
    displayName: "Output Message Job"
    pool:
      vmImage: "ubuntu-latest"
    steps:
    - powershell: |
        Write-Host "Root Variable: $(VM001), $(VM002)"
For reference, here's the MS documentation on variable template usage:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#variable-reuse

Create CloudFormation resource multiple times

I've just moved to CloudFormation and I am starting with creating ECR repositories for Docker.
I need all repositories to have the same properties except the repository name.
Since these are microservices, I will need at least 40 repos, so I want to create a stack that will create the repos for me in a loop and just change the name.
I started looking at nested stacks and this is what I have so far:
ecr-root.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: ECR docker repository
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Type: AWS::ECR::Repository::RepositoryName
Resources:
  ECRStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      TimeoutInMinutes: '20'
      Parameters:
        ECRRepositoryName: !GetAtt 'ECRStack.Outputs.ECRRepositoryName'
And ecr-stack.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Default: panpwr-mysql-base
    Type: String
Resources:
  MyRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName:
        Ref: ECRRepositoryName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          -
            Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - "arn:aws:iam::123456789012:user/Bob"
                - "arn:aws:iam::123456789012:user/Alice"
            Action:
              - "ecr:GetDownloadUrlForLayer"
              - "ecr:BatchGetImage"
              - "ecr:BatchCheckLayerAvailability"
              - "ecr:PutImage"
              - "ecr:InitiateLayerUpload"
              - "ecr:UploadLayerPart"
              - "ecr:CompleteLayerUpload"
Outputs:
  RepositoryNameExport:
    Description: RepositoryName for export
    Value:
      Ref: ECRRepositoryName
    Export:
      Name:
        Fn::Sub: "ECRRepositoryName"
Everything is working fine, but when I run the stack it asks me for the repository name I want to give it, and it creates one repository.
I can then create as many stacks as I want with different names, but that is not my purpose.
How do I get it all in one stack that creates as many repositories as I want?
Sounds like you want to loop through a given list of parameters. Looping is not possible in a CloudFormation template. A few things you can try:
You could programmatically generate a template. The troposphere Python library provides a nice abstraction to generate templates.
Write a custom resource backed by AWS Lambda. You can handle your custom logic in the Lambda function.
The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. Use the AWS CDK to write a custom script for your use case.
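Whichever of these you choose, the generator ultimately emits a plain CloudFormation template with one AWS::ECR::Repository resource per repository. A minimal sketch of what that generated output might look like (the repository names here are placeholders, not from the question):

AWSTemplateFormatVersion: '2010-09-09'
Description: Generated ECR repositories (sketch)
Resources:
  ServiceARepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: service-a   # placeholder name
  ServiceBRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: service-b   # placeholder name
  # ...one resource block per microservice, emitted by the generator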