Have a fallback for ImportValue on a CloudFormation template? - aws-cloudformation

I've got two projects (backoffice and frontoffice) deployed using CloudFormation.
In the frontoffice, I import some DynamoDB table names from the backoffice stack as Environment variables for my Lambdas.
To run some acceptance tests I sometimes need to deploy the frontoffice without deploying the backoffice. In that case, the frontoffice will try to do an ImportValue of an Export that doesn't exist.
Is there any pattern that would allow me to get the frontoffice deployed anyway, and then handle the lack of value in my code?

You could pass an additional Parameter to frontoffice indicating whether you are going to deploy it with backoffice or not.
Based on the value of that parameter, you could use DependsOn and/or Fn::If to either import the DynamoDB table names or fall back to a default, as sketched below.
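For example, a minimal sketch of the Fn::If approach in template YAML, where the parameter, condition, and export names (DeployedWithBackoffice, HasBackoffice, backoffice-TableName) are all hypothetical:

Parameters:
  DeployedWithBackoffice:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "true"

Conditions:
  HasBackoffice: !Equals [!Ref DeployedWithBackoffice, "true"]

Resources:
  FrontofficeLambda:
    Type: AWS::Lambda::Function
    Properties:
      # ... handler, runtime, role, code ...
      Environment:
        Variables:
          # backoffice-TableName is a hypothetical export name; "NONE" is a
          # sentinel value the application code can detect and handle
          TABLE_NAME: !If
            - HasBackoffice
            - !ImportValue backoffice-TableName
            - "NONE"

Since the unselected branch of Fn::If should not be resolved, the missing export would not fail the deployment; your Lambda code then treats the sentinel value as "backoffice not deployed".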
For a fully automated solution without any extra Parameter, you would have to use a custom resource. The resource would be a Lambda function, which would use the AWS SDK to query CloudFormation stacks and check whether backoffice exists.
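A rough sketch of such a handler, assuming a Python Lambda defined inline in the template (so the cfnresponse helper that CloudFormation provides for ZipFile Lambdas is available) and a hypothetical stack name of backoffice:

import boto3
import cfnresponse  # available to inline (ZipFile) custom-resource Lambdas

def handler(event, context):
    # Nothing to check on delete; just report success so the stack can clean up
    if event["RequestType"] == "Delete":
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        return
    cfn = boto3.client("cloudformation")
    try:
        cfn.describe_stacks(StackName="backoffice")  # hypothetical stack name
        exists = "true"
    except cfn.exceptions.ClientError:
        exists = "false"
    cfnresponse.send(event, context, cfnresponse.SUCCESS, {"BackofficeExists": exists})

The template would then read the result with Fn::GetAtt (here, a hypothetical BackofficeExists attribute) to decide what value to hand to the environment variable.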

Related

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that will write an API URL to a config file (currently with a hardcoded API URL).
What I'd like to do is have the config be re-written based on the agent it's being deployed on.
eg. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into a config using that transform step.
The .somedomain.com/api can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (it does not populate the machine name in that variable).
Since variables are the source of the value that is written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
I'm using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
I can't just use localhost, as the configuration is read into a JavaScript-rich app, which would have the JS trying to connect to localhost instead of to the server.
I'm interested in any ways I could solve this problem
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
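For instance, a minimal YAML sketch (the apiUrl variable name is hypothetical; macro syntax is expanded at runtime on the agent):

variables:
  # Macro syntax resolves when the step runs on the agent
  apiUrl: 'https://$(Environment.Name).somedomain.com/api'

steps:
- script: echo "API_URL will be $(apiUrl)"

In a classic pipeline, the equivalent is setting the pipeline variable's value to https://$(Environment.Name).somedomain.com/api directly.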

Updating a CloudFormation stack with a Cognito pool claims that we're adding attributes when we're not

On Nov 7, 2018, we started getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well as the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with deployment. For now we are deploying without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of CloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd set up the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the Lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note, we're setting this trigger before the Lambda even exists. The trigger just needs an ARN, and CloudFormation doesn't seem to care that the function isn't there yet. Then we run sls deploy, which creates the actual Lambda function, and everything works fine.
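For reference, a hypothetical serverless.yml fragment that would produce a matching Lambda (the Serverless Framework names deployed functions <service>-<stage>-<function>, which is where main-<our stage>-onCognitoCustomMessage comes from):

service: main

provider:
  name: aws
  runtime: nodejs8.10

functions:
  onCognitoCustomMessage:
    # Deploys as main-<stage>-onCognitoCustomMessage, matching the LambdaConfig ARN above
    handler: handler.onCognitoCustomMessage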
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix the error? I don't actually know. CloudFormation seems to be fine with this modification, but not okay with modifying the same file later in our process. But it works.

Taking over existing Domains (HostedZones) in CloudFormation

I've been setting up CloudFormation templates for some new infrastructure for a project and I've made it to Route 53 Hosted Zones.
Now ideally I'd like to create a "core-domains" stack with all our hosted zones and base configuration. Thing is, we already have these created manually using the AWS console (and they're used for test/live infrastructure). Is there any way to supply the existing "HostedZoneId" as a property to the resource definition and essentially have it introspect what we already have and then apply the diff? (If I've done my job there shouldn't be a diff, so it should just be a no-op!)
I can't see a "HostedZoneId" property in the docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-hostedzone.html
Any suggestions?
PS. I'm assuming this isn't possible and I'll have to recreate all the HostedZones under CloudFormation, but I thought I'd check :)
Found it. It's fine to create new HostedZones: they're issued with new nameservers. Just use the "HostedZoneConfig: Comment" field to note down which hosted zone is which, and then you can switch the nameservers over when you're ready!
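A minimal sketch of one such zone in a template (YAML, with a hypothetical domain name):

CoreDomain:
  Type: AWS::Route53::HostedZone
  Properties:
    Name: example.com
    HostedZoneConfig:
      # Track which manually created zone this replaces
      Comment: "Replaces the manually created zone; switch NS records at the registrar when ready"

Once the stack is up, point the domain's registrar at the new zone's nameservers and the old zone can be retired.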

Get-AzureRmResourceGroupDeployment lists machines I cannot see in the web interface

I'm tasked with automating the creation of Azure VMs, and naturally I go through a number of more or less broken iterations of trying to deploy a VM image. As part of this, I automatically allocate serial hostnames, but it's failing for a strange reason:
The code in the link above works very well, but the contents of my ResourceGroup are not as expected. Every time I deploy (successfully or not), a new entry is created in whatever list is returned by Get-AzureRmResourceGroupDeployment; however, in the Azure web interface I can only see a few of these entries. If, for instance, I omit a parameter from the JSON file, Azure cannot even begin to deploy anything -- but the hostname is somehow reserved anyway.
Where is this list? How can I clean up after broken deployments?
Currently, Get-AzureRmResourceGroupDeployment returns:
azure-w10-tfs13
azure-w10-tfs12
azure-w10-tfs11
azure-w10-tfs10
azure-w10-tfs09
azure-w10-tfs08
azure-w10-tfs07
azure-w10-tfs06
azure-w10-tfs05
azure-w10-tfs02
azure-w7-tfs01
azure-w10-tfs19
azure-w10-tfs1
although the web interface only lists:
azure-w10-tfs12
azure-w10-tfs13
azure-w10-tfs09
azure-w10-tfs05
azure-w10-tfs02
Solved using the following code: $siblings = (Get-AzureRmResource).Name | Where-Object { $_ -match "^$hostname\d+$" }
(PS. If you have tips for better tags, please feel free to edit this question!)
If you create a VM in Azure Resource Management mode, it will have a deployment attached to it. In fact if you create any resource at all, it will have a resource deployment attached.
If you delete the resource you will still have the deployment record there, because you still deployed it at some stage. Consider deployments as part of the audit trail of what has happened within the account.
You can delete deployment records with Remove-AzureRmResourceGroupDeployment, but there is very little point, since deployments have no bearing on the operation of Azure. There is no cost associated with them; they are just historical records.
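If you do want to tidy up, something along these lines would remove just the records of failed deployments (the resource group name is hypothetical):

# Remove deployment records whose provisioning failed; actual resources are unaffected
$rg = "my-resource-group"
Get-AzureRmResourceGroupDeployment -ResourceGroupName $rg |
    Where-Object { $_.ProvisioningState -eq "Failed" } |
    ForEach-Object {
        Remove-AzureRmResourceGroupDeployment -ResourceGroupName $rg -Name $_.DeploymentName
    }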
Querying deployments with Get-AzureRmResourceGroupDeployment will yield the following fields:
DeploymentName
Mode
Outputs
OutputsString
Parameters
ParametersString
ProvisioningState
ResourceGroupName
TemplateLink
TemplateLinkString
Timestamp
So you can tell whether the deployment was successful via ProvisioningState, know which templates you used via TemplateLink and TemplateLinkString, check the outputs of the deployment, and so on. This can be useful to figure out which template worked and which didn't.
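For example (hypothetical resource group name), to see at a glance which deployments failed and when:

Get-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" |
    Select-Object DeploymentName, ProvisioningState, Timestamp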
If you want to see actual resources, the ones you are potentially being charged for, you can use Get-AzureRmResource.
If you just want to retrieve a list of the names of VMs that exist within an Azure subscription, you can use
(Get-AzureRmVM).Name

How do I use Puppet's ralsh with resource types provided by modules?

I have installed the postgresql module from Puppet Forge.
How can I query postgresql resources using ralsh?
None of the following works:
# ralsh postgresql::db
# ralsh puppetlabs/postgresql::db
# ralsh puppetlabs-postgresql::db
I was hoping to use this to get a list of databases (including attributes such as character sets) and user names/passwords from the current system in a form that I can paste into a puppet manifest to recreate that setup on a different machine.
In principle, any Puppet client gets the current state of your system from a companion program called Facter. You could create a custom fact (an extension to Facter) and include it in your Puppet client; afterwards, I think you could call this custom fact from ralsh.
More information about creating a custom fact can be found here.
In creating your own fact, you would execute your SQL query and then save the result into a particular variable.
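A minimal sketch of such a custom fact in Ruby (the fact name, file path, and psql invocation are all hypothetical, and passwordless access as the postgres user is assumed):

# <module>/lib/facter/postgres_databases.rb
Facter.add(:postgres_databases) do
  setcode do
    # Ask PostgreSQL for its database names, one per line
    Facter::Core::Execution.execute(
      "psql -At -U postgres -c 'SELECT datname FROM pg_database'"
    ).split("\n")
  end
end

Note that this surfaces raw values only; you would still have to transcribe them into resource declarations for your manifest by hand.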