CloudFormation for multiple parameter files and a single template

I am currently storing all my parameters in Systems Manager Parameter Store and referencing them in a CloudFormation stack.
I am now stuck in a scenario where the parameters vary for the same CloudFormation template.
For instance, server A has parameters m5.large instance type, subnet 1, host name 1, and likewise server B can have m5.xlarge, subnet 2, host name 2, and so on. These two parameter sets are for the same CFN template.
How can I handle this situation in a CI/CD manner?
My current setup involves SSM Parameter store -> CloudWatch Events -> CodePipeline -> Cloudformation.

I am assuming you use AWS CodePipeline. Each CodePipeline stage consists of multiple stage actions. The action can be configured to include the CloudFormation template, but a template configuration can also be provided. If you define the server name as a parameter in the CloudFormation stack, then you can provide a different configuration for each server.
Assuming you define only one server in the CloudFormation stack and use the template twice in your CodePipeline, you can provide a different configuration to each of the two stage actions. Based on this configuration you can decide which parameter in the Parameter Store you want to retrieve. Of course this implies that your Parameter Store should be parameterized as well, e.g. instead of the parameter instancetype you might have the parameters servera/instancetype and serverb/instancetype.
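A rough sketch of what that could look like in the template, assuming hypothetical parameter paths and resource names (the template configuration of each stage action would override the default path):
Parameters:
  InstanceTypeSsmPath:
    Type: AWS::SSM::Parameter::Value<String>   # CloudFormation resolves the value stored at the given SSM path
    Default: /servera/instancetype             # overridden per stage action, e.g. /serverb/instancetype
Resources:
  Server:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeSsmPath   # yields the value read from Parameter Store, not the path itself
      ImageId: ami-0123456789abcdef0           # placeholder AMI id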
However, I think it is best if you just define the parameter in the template configuration file provided to the action declaration. So, for example, define the parameter instancetype in your CloudFormation template and use two different configuration files (one for each stack), where the first template configuration file might say instancetype: m5.large and the second configuration file instancetype: m5.xlarge. This makes your CloudFormation stack history more explicit and easier to read, and makes the use of the Parameter Store for non-secrets unnecessary.
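For illustration, a pair of template configuration files (JSON, as CodePipeline expects; the parameter names and values are just examples matching the question) might look like this:
config-server-a.json:
{
  "Parameters": {
    "InstanceType": "m5.large",
    "SubnetId": "subnet-aaaa1111",
    "HostName": "server-a"
  }
}
config-server-b.json:
{
  "Parameters": {
    "InstanceType": "m5.xlarge",
    "SubnetId": "subnet-bbbb2222",
    "HostName": "server-b"
  }
}
Each deploy action then points at the same template but a different configuration file, so the chosen values are visible in source control and in the stack's parameter history.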

Related

Services section of Azure YAML pipeline

I'm looking at an example of the YAML pipeline with a services section. Here is a sample:
The YAML schema doesn't have services defined.
Where can I get information about the services section of the pipeline?
Update: Per Bowman's answer, the services section is part of the job. In this scenario, there is only one job, so the job keyword is omitted.
In the simplest case, a pipeline has a single job. In that case, you do not have to explicitly use the job keyword unless you are using a template. You can directly specify the steps in your YAML file.
Here is the reference in the official document:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/jobs-job?view=azure-pipelines
services: # Container resources to run as a service container.
I think you checked it directly at the top level, right? In fact, in this situation there is a hidden, default job, and the definition of that job is also hidden. The services section sits under the job definition of that hidden job, not at the top level.
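A minimal sketch of what that implicit job can look like (the image names and service alias are hypothetical); the job-level keys container, services and steps only appear to be at the top level because the single job is implied:
resources:
  containers:
  - container: app_build            # container the job itself runs in
    image: ubuntu:22.04
  - container: pg                   # container resource to be used as a service
    image: postgres:15
pool:
  vmImage: ubuntu-latest
container: app_build                # job-level key of the hidden default job
services:
  postgres: pg                      # job-level key: service container, reachable as host 'postgres'
steps:
- script: echo "steps in this job can talk to the 'postgres' service container"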

Can you create an AWS CloudFormation stack on a t2.micro instance?

I am trying to run MongoDB on AWS following the AWS deployment guide. It defaults to booting up m5.large EC2 instances. However, I am only experimenting, so I want to use a free-tier EC2 instance. When I add t2.micro to the allowed values and set it as the default, I get an error.
Is there any way I can get MongoDB running on AWS with 3 replicas using the CloudFormation method with free-tier t2.micro instances? If not, are there any better methods?
The MongoDB on AWS Quick Start has multiple templates that are deployed.
I notice that the NodeInstanceType parameter is used and defined in multiple templates, presumably with the value passed from the master template to the node templates. Therefore, your change will probably need to be made in every template that defines the NodeInstanceType parameter. I recommend you check all of the templates for such references.
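For illustration, the kind of edit required in each such template might look like this (only the NodeInstanceType parameter name comes from the Quick Start; the surrounding values are examples):
Parameters:
  NodeInstanceType:
    Description: EC2 instance type for the MongoDB nodes
    Type: String
    Default: t2.micro        # set the free-tier type as the new default
    AllowedValues:
      - t2.micro             # added so validation no longer rejects it
      - m5.large
      - m5.xlarge
Keep in mind that a t2.micro has far less memory than the m5.large the Quick Start assumes, so the replica set may run poorly even once validation passes.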

Can I use an AWS Cloudformation template to create and modify tables in AWS Aurora (Postgres flavour)?

I am looking for a way to manage schema changes to my AWS Aurora Postgres instance.
My whole AWS stack is set up using a CloudFormation template, which is used to automatically deploy the stack when a change is detected in source control. The CloudFormation template is built, a change set is prepared and finally executed on the stack.
I was hoping that the table definition of my Aurora instance could go inside the Cloudformation template somehow, so the schema migrations could be a part of the change set. Is this possible?
Note, I have seen this recommendation: https://aws.amazon.com/blogs/opensource/rds-code-change-deployment/
For anything custom like that, use a custom resource Lambda that you can include in your CloudFormation stack. The Lambda will need a layer for your Postgres driver, and it needs to include the migration script.
See the answer at this link; it gives you three different options for how you can trigger the Lambda:
Is it possible to trigger a lambda on creation from CloudFormation template
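A minimal sketch of that wiring, with hypothetical resource names; the execution role, driver layer, security group and subnet referenced below are assumed to be defined elsewhere in the same template:
Resources:
  SchemaMigrationFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler                 # handler must send a cfn-response success/failure back to CloudFormation
      Role: !GetAtt MigrationRole.Arn
      Layers:
        - !Ref PsycopgLayer                  # layer packaging the Postgres driver
      Code:
        S3Bucket: my-artifacts-bucket        # bundle containing the migration script
        S3Key: schema-migration.zip
      VpcConfig:                             # usually needed so the Lambda can reach a private Aurora endpoint
        SecurityGroupIds: [!Ref MigrationSecurityGroup]
        SubnetIds: [!Ref PrivateSubnetA]
  RunSchemaMigration:
    Type: Custom::SchemaMigration
    Properties:
      ServiceToken: !GetAtt SchemaMigrationFunction.Arn
      SchemaVersion: "42"                    # bump this value so stack updates re-run the migration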

Move variable groups to the code repository and reference it from YAML pipelines

We are looking for a solution for moving the non-secret variables from variable groups into the code repositories.
We would like to be able:
to track the changes of all the settings in the code repository
to version the values of the variables together with the source code and the pipeline code version
Problem:
We have over 100 variable groups defined which are referenced by over 100 YAML pipelines.
They are injected at different pipeline/stage/job levels depending on the environment/component/stage they operate on.
Example problems:
a variable can change its name or be removed, and the pipeline which targets the PROD environment still references it while the pipeline which deploys to DEV no longer has it
a particular pipeline run used the version of the variables as of some date in the past, and it is good to know with what set of settings it was deployed
Possible solutions:
It should be possible to use simple YAML variable template files to mimic the variable groups and just include those templates in the main YAML files, using this approach: Variable reuse.
# File: variable-group-component.yml
variables:
  myComponentVariable: 'SomeVal'

# File: variable-group-environment.yml
variables:
  myEnvVariable: 'DEV'

# File: azure-pipelines.yml
variables:
- template: variable-group-component.yml # Template reference
- template: variable-group-environment.yml # Template reference
#some stages/jobs/steps:
In theory, it should be easy to transform the variable groups to the YAML template files and reference them from YAML instead of using a reference to the variable group.
# Current reference we use
variables:
- group: "Current classical variable group"
However, even without implementing this approach, we hit the following limit in our pipelines: "No more than 100 separate YAML files may be included (directly or indirectly)"
YAML templates limits
Taking into consideration the requirement that we would like to keep the variable groups logically granular and separate, and not stored in one big YAML file (in order not to hit another limit on the number of variables in a job agent), we cannot go this way.
The second approach would be to add a simple script (PowerShell?) which consumes some key/value metadata file with variable (variableName/variableValue) records and simply executes a job step with a command such as
##vso[task.setvariable variable=one]secondValue
But it could only be done at the initial job level, as a first step, and it looks like re-engineering the variable group mechanism provided natively by Azure DevOps.
We are not sure that this approach will work everywhere in the YAML pipelines where the variables are currently used; in some places they are passed as arguments to tasks, etc.
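For what it's worth, a rough sketch of such a first step (the file name, its format and the keys are hypothetical):
# settings/dev.json would contain flat key/value pairs, e.g. { "one": "secondValue", "instanceType": "m5.large" }
steps:
- pwsh: |
    $settings = Get-Content -Raw -Path 'settings/dev.json' | ConvertFrom-Json
    foreach ($property in $settings.PSObject.Properties) {
      # register each key/value record as a pipeline variable for the rest of the job
      Write-Host "##vso[task.setvariable variable=$($property.Name)]$($property.Value)"
    }
  displayName: Load variables from the repository settings file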
Move all the variables into Key Vault secrets? We abandoned this option at the beginning, as the Key Vault is a place to store sensitive data, not settings which could be visible to anyone. Moreover, storing them as secrets causes the pipeline logs to print *** instead of the real configuration setting, obfuscating the pipeline run log information.
Questions:
Q1. Do you have any other propositions/alternatives for how variable versioning/change tracking could be achieved in Azure DevOps YAML pipelines?
Q2. Do you see any problems with the second possible solution, or do you have better ideas?
You can consider this as an alternative:
Store your non-secret variables in a JSON file in a repository
Create a pipeline to push the variables to App Configuration (instead of a Key Vault)
Then, if you need these settings in your app, make sure that you reference App Configuration from the app instead of running a replacement task in Azure DevOps. Or, if the pipelines need these settings directly, pull them from App Configuration, as shown in the sketch below.
Drawbacks:
the same as the one you mentioned for the PowerShell case: you need to do it at the job level
What you get:
track in repo
track in App Configuration and all benefits of App Configuration
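A hedged sketch of what pulling the settings back into a pipeline could look like (the service connection name, store name and label are hypothetical):
steps:
- task: AzureCLI@2
  displayName: Pull settings from App Configuration
  inputs:
    azureSubscription: my-service-connection
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # assumes the service connection identity has read access to the store
      $kvs = az appconfig kv list --name my-app-config-store --label DEV --auth-mode login | ConvertFrom-Json
      foreach ($kv in $kvs) {
        Write-Host "##vso[task.setvariable variable=$($kv.key)]$($kv.value)"
      }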

Azure DevOps passing Dynamically assigned variables between build hosts

I'm using Azure DevOps on a vs2017-win2016 build agent to provision some infrastructure using Terraform.
What I want to know is whether it is possible to pass the Terraform output of a host's dynamically assigned IP address to a second job running on a different build agent.
I'm able to pass these into build variables in the first job:
BASTION_PRIV_IP=x.x.x.x
BASTION_PUB_IP=1.1.1.1
But I am unable to get these variables consumed by the second build agent, which runs ubuntu-16.04.
I am able to pass any statically defined parameters, like an Azure resource group name that I define before the job starts; it's just the dynamically assigned ones that do not come through.
This is pretty easily done when you are using YAML-based builds.
It's important to know that variables are only available within the scope of the current job by default.
However, you can set a variable as an output variable for your job.
This output variable can then be mapped to a variable within the second job (do note that you need to set the first job as a dependency of the second job).
Please see the following link for an example of how to get this to work
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
It may also be doable in the visual designer type of build, but I couldn't get that to work in the quick test I did; maybe you can get something working inspired by the linked example.
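A minimal sketch of that pattern, using the agents and variable names from the question (the Terraform step itself is omitted; the echo lines just stand in for wherever the IP values are produced):
jobs:
- job: Provision
  pool:
    vmImage: vs2017-win2016
  steps:
  - script: |
      echo ##vso[task.setvariable variable=BASTION_PUB_IP;isOutput=true]1.1.1.1
      echo ##vso[task.setvariable variable=BASTION_PRIV_IP;isOutput=true]x.x.x.x
    name: setIps                      # the step name is needed to address the outputs
- job: Configure
  dependsOn: Provision                # required so the outputs are available here
  pool:
    vmImage: ubuntu-16.04
  variables:
    bastionPubIp: $[ dependencies.Provision.outputs['setIps.BASTION_PUB_IP'] ]
    bastionPrivIp: $[ dependencies.Provision.outputs['setIps.BASTION_PRIV_IP'] ]
  steps:
  - script: echo "Bastion public IP is $(bastionPubIp), private IP is $(bastionPrivIp)"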