We are using AWS SAM to build and manage an AWS Lambda layer. The same SAM template can easily associate the latest layer version with Lambda functions that are also managed by this template. However, we have other Lambda functions managed by other CloudFormation/SAM templates, and there I don't know the latest layer version (ARN).
This is what we use in the SAM template to associate the layer:
Globals:
  Function:
    Layers:
      - !Ref ToolkitLayer
How do I determine the latest version programmatically from a completely different CloudFormation/SAM template? I thought about using an SSM Parameter, since it appears CloudFormation can pull its value dynamically. The issue is that an SSM Parameter also has a version, so it has the same problem.
Have you thought about using a CloudFormation macro/transform (both terms refer to the same thing)?
With a CloudFormation macro, CloudFormation calls a Lambda function with the snippet from your template, and your Lambda function returns the transformed snippet back to CloudFormation. In your Lambda function, you would query the latest version of your layer and return that result to CloudFormation.
More details at:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html
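For illustration, here is a minimal sketch of what such a macro handler could look like in Python with boto3. The layer name ToolkitLayer and the placeholder string ToolkitLayer:latest are assumptions made up for this example, not part of any existing template:

import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # CloudFormation passes the template (or snippet) to the macro as "fragment".
    fragment = event["fragment"]

    # Find the newest version of the layer.
    versions = []
    paginator = lambda_client.get_paginator("list_layer_versions")
    for page in paginator.paginate(LayerName="ToolkitLayer"):
        versions.extend(page["LayerVersions"])
    latest_arn = max(versions, key=lambda v: v["Version"])["LayerVersionArn"]

    # Swap the placeholder string for the real layer version ARN wherever it appears.
    def substitute(node):
        if isinstance(node, dict):
            return {key: substitute(value) for key, value in node.items()}
        if isinstance(node, list):
            return [substitute(value) for value in node]
        if node == "ToolkitLayer:latest":
            return latest_arn
        return node

    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": substitute(fragment),
    }

The consuming template would then list the placeholder under Layers and declare the macro in its Transform section, so the real ARN gets substituted at deploy time.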
I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently with a hardcoded API URL).
What I'd like to do is have the config be rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api part can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (it does not populate the machine name in that variable).
Since variables are the source of the values written to the configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
I'm using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
I can't just use localhost, as the configuration is read into a JavaScript-rich app, which would have the JS trying to connect to localhost instead of the server.
I'm interested in any way I could solve this problem.
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to take part of the branch name the push was made on and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you know of a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after some trial and error I found out that it is not possible to get any of that information in the Terraform module if you are using VCS mode. So, in order to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch, so you can create a variable and select the matching branch in each workspace. The problem is that you will be repeating yourself with this option.
Use the Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable, you must be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
Having defined a variable within Terraform called branch, I was able to read and work with this value.
I've got two projects (backoffice and frontoffice) deployed using CloudFormation.
In the frontoffice, I import some DynamoDB table names from the backoffice stack as environment variables for my Lambdas.
To run some acceptance tests I sometimes need to deploy the frontoffice without deploying the backoffice. In that case, the frontoffice will try to do an ImportValue of an Export that doesn't exist.
Is there any pattern that would allow me to get the frontoffice deployed anyway, and then handle the lack of value in my code?
You could pass an additional Parameter to frontoffice indicating whether you are going to deploy with backoffice or not.
Based on the value of the parameter, you could use DependsOn and/or Fn::If to either import the DynamoDB table names or not.
For a fully automated solution without any extra Parameter, you would have to use a custom resource. The resource would be backed by a Lambda function, which would use the AWS SDK to query CloudFormation stacks and check whether the backoffice exists.
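As a rough sketch of that custom resource's Lambda in Python, assuming the cfnresponse helper module that CloudFormation makes available to inline (ZipFile) Lambda code; the export name backoffice-TableName and the ExportName property are invented for the example:

import boto3
import cfnresponse  # provided by CloudFormation for inline Lambda code

cfn = boto3.client("cloudformation")

def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return

        # Look through the account's exports for the one the backoffice stack publishes.
        export_name = event["ResourceProperties"].get("ExportName", "backoffice-TableName")
        value = ""
        paginator = cfn.get_paginator("list_exports")
        for page in paginator.paginate():
            for export in page["Exports"]:
                if export["Name"] == export_name:
                    value = export["Value"]

        # Return the value (possibly empty) so the template can read it with Fn::GetAtt
        # and the frontoffice code can handle the missing value at runtime.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {"TableName": value})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})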
Below is part of a CloudFormation file loaded by Serverless.
# resource.yml
.
.
.
{"Fn::Sub": "arn:aws:sqs:*:${AWS::AccountId}:sqs-spoon-*-${env:SERVICE}"}
# serverless.yml
.
.
resources:
  - ${file:resource.yml}
${AWS::AccountId} is a CloudFormation pseudo parameter and ${env:SERVICE} is a Serverless variable.
When I run sls deploy, it returns this error:
Invalid variable reference syntax for variable AWS::AccountId. You can only reference env vars, options, & files. You can check our docs for more info.
It seems to say that Serverless recognizes ${AWS::AccountId} as a Serverless variable, not as a CloudFormation pseudo parameter.
Right?
If so, how do I keep Serverless from parsing the pseudo parameter so that it will be parsed by CloudFormation later?
I could solve it with a plugin.
With the plugin, it can be solved by replacing ${AWS::AccountId} with #{AWS::AccountId}.
{"Fn::Sub": "arn:aws:sqs:*:#{AWS::AccountId}:sqs-spoon-*-${env:SERVICE}"}
You can get support for the native AWS syntax with a single config line in serverless.yml that defines variableSyntax. Details can be found at https://github.com/serverless/serverless/pull/3694.
provider:
  name: aws
  runtime: nodejs8.10
  variableSyntax: "\${((env|self|opt|file|cf|s3)[:\(][ :a-zA-Z0-9._,\-\/\(\)]*?)}"
Starting on Nov 7, 2018, we began getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well as the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with deployment. For now we are deploying without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
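If you want to script that manual step instead of clicking through the console, a boto3 sketch along these lines is one option. The pool ID and function ARN are placeholders, and since UpdateUserPool can reset settings you omit, the sketch starts from the pool's current LambdaConfig:

import boto3

cognito = boto3.client("cognito-idp")
pool_id = "us-east-1_EXAMPLE"  # placeholder user pool ID

# Read the current trigger configuration so other triggers are preserved.
current = cognito.describe_user_pool(UserPoolId=pool_id)["UserPool"].get("LambdaConfig", {})
current["CustomMessage"] = "arn:aws:lambda:us-east-1:123456789012:function:onCognitoCustomMessage"  # placeholder ARN

cognito.update_user_pool(UserPoolId=pool_id, LambdaConfig=current)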
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of CloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd set up the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the Lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note that we're setting this trigger before the Lambda even exists. The trigger just needs an ARN, and it doesn't seem to care that the function isn't there yet. Then we run sls deploy, which creates the actual Lambda function, and everything works fine.
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix this error? I don't actually know. CloudFormation seems to be fine with this modification but not okay with modifying the same file later in our process. But it works.