CloudFormation - Access Output of Parent Stack in Child Nested stack - aws-cloudformation

I have a master CloudFormation template which invokes two child templates. My first template runs and its outputs are captured in the Outputs section of that stack. I have tried many ways of using the ChildStack01 output values in the second, nested template, and I am not sure why I get Template format error: Unresolved resource dependencies [XYZ] in the Resources block of the template. Here is my master template.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LambdaStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/bucket1/cloudformation/Test1.json",
        "TimeoutInMinutes": "60"
      }
    },
    "PermissionsStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/bucket1/cloudformation/Test2.json",
        "Parameters": {
          "LambdaTest": {
            "Fn::GetAtt": ["LambdaStack", "Outputs.LambdaTest"]
          }
        },
        "TimeoutInMinutes": "60"
      }
    }
  }
}
Here is my Test1.json template:
{
  "Resources": {
    "LambdaTestRes": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Description": "Testing AWS cloud formation",
        "FunctionName": "LambdaTest",
        "Handler": "lambda_handler.lambda_handler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::3423435234235:role/lambda_role",
        "Runtime": "python2.7",
        "Timeout": 300,
        "Code": {
          "S3Bucket": "bucket1",
          "S3Key": "cloudformation/XYZ.zip"
        }
      }
    }
  },
  "Outputs": {
    "LambdaTest": {
      "Value": {
        "Fn::GetAtt": ["LambdaTestRes", "Arn"]
      }
    }
  }
}
Here is my Test2.json, which has to use the output of Test1.json:
{
  "Resources": {
    "LambdaPermissionLambdaTest": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {
          "Ref": "LambdaTest"
        },
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {
          "Fn::Join": ["", ["arn:aws:execute-api:", {
            "Ref": "AWS::Region"
          }, ":", {
            "Ref": "AWS::AccountId"
          }, ":", {
            "Ref": "TestAPI"
          }, "/*"]]
        }
      }
    }
  },
  "Parameters": {
    "LambdaTest": {
      "Type": "String"
    }
  }
}

It is not enough to just have an output; you need to export that output.
Look here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
So you need something like:
"Outputs": {
"LambdaTest": {
"Value": {
"Fn::GetAtt": ["LambdaTestRes", "Arn"]
}
"Export": {
"Name": "LambdaTest"
}
}
}
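For reference, the pattern in the linked doc is for cross-stack references: a separate (non-nested) stack consumes the exported value with Fn::ImportValue rather than through a stack parameter. A minimal sketch of that usage:
"FunctionName": {
  "Fn::ImportValue": "LambdaTest"
}
Note that export names must be unique within an account and region.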

You have two unresolved Ref resource dependencies in Test2.json, one to LambdaTest and one to TestAPI.
For LambdaTest, it looks like you're trying to pass this as a parameter from the parent stack, but you haven't specified it as an input Parameter in the child Test2.json template. Add an entry in Test2.json's Parameters section, like this:
"Parameters": {
"LambdaTest": {
"Type": "String"
}
},
Regarding TestAPI, this reference doesn't seem to appear anywhere else in your templates, so you should either specify this as a fixed string directly, or add another input Parameter in your Test2.json stack (see above) and then provide it from the parent stack.
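For example, a sketch of both changes (the REST API ID value here is just a placeholder):
In Test2.json:
"Parameters": {
  "LambdaTest": {
    "Type": "String"
  },
  "TestAPI": {
    "Type": "String"
  }
}
And in the master template's PermissionsStack properties:
"Parameters": {
  "LambdaTest": {
    "Fn::GetAtt": ["LambdaStack", "Outputs.LambdaTest"]
  },
  "TestAPI": "your-rest-api-id"
}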

The error is coming from Test1.json (LambdaStack).
Logical ID
An identifier for the current output. The logical ID must be alphanumeric (a-z, A-Z, 0-9) and unique within the template.
It seems you have two logical IDs with the same name, "LambdaTest": one in the Resources section and the other in the Outputs section.
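If that is what is tripping it up, a sketch of one way to resolve it is to rename the output, e.g. to LambdaTestArn, and update the parent's Fn::GetAtt to match:
"Outputs": {
  "LambdaTestArn": {
    "Value": {
      "Fn::GetAtt": ["LambdaTestRes", "Arn"]
    }
  }
}
And in the master template:
"LambdaTest": {
  "Fn::GetAtt": ["LambdaStack", "Outputs.LambdaTestArn"]
}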

Related

Azure Data Factory Copy Data activity - Use variables/expressions in mapping to dynamically select correct incoming column

I have the below mappings for a Copy activity in ADF:
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"path": "$['id']"
},
"sink": {
"name": "TicketID"
}
},
{
"source": {
"path": "$['summary']"
},
"sink": {
"name": "TicketSummary"
}
},
{
"source": {
"path": "$['status']['name']"
},
"sink": {
"name": "TicketStatus"
}
},
{
"source": {
"path": "$['company']['identifier']"
},
"sink": {
"name": "CustomerAccountNumber"
}
},
{
"source": {
"path": "$['company']['name']"
},
"sink": {
"name": "CustomerName"
}
},
{
"source": {
"path": "$['customFields'][74]['value']"
},
"sink": {
"name": "Landlord"
}
},
{
"source": {
"path": "$['customFields'][75]['value']"
},
"sink": {
"name": "Building"
}
}
],
"collectionReference": "",
"mapComplexValuesToString": false
}
The challenge I need to overcome is that the array indexes of the custom fields for the last two sources might change, so I've created an Azure Function which calculates the correct array index. However, I can't work out how to use the Azure Function's output value in the source path string. I have tried to refer to it using an expression like @activity('Get Building Field Index').output, but as the mapping expects a JSON path, this doesn't work and produces an error:
JSON path $['customFields'][@activity('Get Building Field Index').outputS]['value'] is invalid.
Is there a different way to achieve what I am trying to do?
Thanks in advance
I have a slightly similar scenario that you might be able to work with.
First, I have a JSON file that is emitted, which I then access from Synapse/ADF with a Lookup activity.
Next, a ForEach activity runs a Copy data activity.
The ForEach activity receives my Lookup output and makes my JSON usable by setting the following in the ForEach's Settings (Items):
@activity('Lookup').output.firstRow.childItems
My JSON roughly looks as follows:
{"childItems": [
{"subpath": "path/to/folder",
"filename": "filename.parquet",
"subfolder": "subfolder",
"outfolder": "subfolder",
"origin": "A"}]}
This means that in my Copy data activity inside the ForEach activity, I can access the properties of my JSON like so:
@item()['subpath']
@item()['filename']
@item()['folder']
.. etc
Edit:
Adding some screen caps of the parameterization:
https://i.stack.imgur.com/aHpWk.png
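Roughly, the parameterization amounts to passing the item values into the dataset parameters used by the copy activity. A sketch, assuming the source dataset exposes parameters such as folderPath and fileName (the dataset name and parameter names here are illustrative, not from the original post):
"inputs": [
  {
    "referenceName": "SourceDataset",
    "type": "DatasetReference",
    "parameters": {
      "folderPath": {
        "value": "@item()['subpath']",
        "type": "Expression"
      },
      "fileName": {
        "value": "@item()['filename']",
        "type": "Expression"
      }
    }
  }
]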

Unable to parse template language expression 'encodeURIComponent([parameters('table_storage_name')])'

Hey, I am doing a CI/CD deployment for a logic app. I have a table storage where I store some data, with two table storages for the test and prod environments. I created a parameter called table_storage_name in the ARM template:
"parameters": {
// ....
"connections_azuretables_1_externalid": {
"defaultValue": "/subscriptions/e5..../resourceGroups/myrg.../providers/Microsoft.Web/connections/azuretables-1",
"type": "String"
},
"table_storage_name": {
"defaultValue": "testdevops",
"type": "String"
}
}
The error occurs when I reference the parameter here in the template.json file:
// ...
"Insert_Entity": {
"runAfter": {
"Initialize_variable": [
"Succeeded"
]
},
"type": "ApiConnection",
"inputs": {
"body": {
"PartitionKey": "#body('Parse_JSON')?['name']",
"RowKey": "#body('Parse_JSON')?['last']"
},
"host": {
"connection": {
"name": "#parameters('$connections')['azuretables_1']['connectionId']"
}
},
"method": "post",
// problem occur after this line
"path": "/Tables/#{encodeURIComponent('[parameters('table_storage_name')]')}/entities"
}
}
but I get this error:
InvalidTemplate: The template validation failed: 'The template action 'Insert_Entity' at line '1' and column '582' is not valid: "Unable to parse template language expression 'encodeURIComponent([parameters('table_storage_name')])': expected token 'Identifier' and actual 'LeftSquareBracket'.".'.
I tried escaping the quotes with a backslash, like encodeURIComponent(\'[parameters('table_storage_name')]\'), or doubling them, like encodeURIComponent('[parameters(''table_storage_name'')]'), but all of them raise an error. How can I reference a parameter inside encodeURIComponent in an ARM template?
As discussed in the comments (credit: @marone):
"path": "/Tables/#{encodeURIComponent(parameters('table_storage_name'))}/entities"
I found the solution via this link: https://platform.deloitte.com.au/articles/preparing-azure-logic-apps-for-cicd
Here are the steps to reference a parameter in a logic app:
Create an ARM parameter table_storage_name_armparam in template.json, in order to use its value to reference the value of the ARM parameter (yes, it's confusing, but follow along and you'll understand):
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "table_storage_name_armparam": {
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      ......
    }
Now, in the logic app parameters section (at the bottom of the JSON file), create the logic app parameter table_storage_name; the value of this parameter will be the ARM parameter created in step 1:
.......
"parameters": {
  "$connections": {
    "value": {
      "azuretables": {
        "connectionId": "[parameters('connections_azuretables_externalid')]",
        "connectionName": "azuretables",
        "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxxxxxx/providers/Microsoft.Web/locations/francecentral/managedApis/azuretables"
      }
    }
  },
  "table_storage_name": {
    "value": "[parameters('table_storage_name_armparam')]"
  }
}
}
}
]
}
Finally, reference the logic app parameter value as follows:
"path": "/Tables/#{encodeURIComponent(parameters('table_storage_name'))}/entities"

JSON Schema - can array / list validation be combined with anyOf?

I have a JSON document I'm trying to validate, which has this form:
...
"products": [{
"prop1": "foo",
"prop2": "bar"
}, {
"prop3": "hello",
"prop4": "world"
},
...
There are multiple different forms an object may take. My schema looks like this:
...
"definitions": {
"products": {
"type": "array",
"items": { "$ref": "#/definitions/Product" },
"Product": {
"type": "object",
"oneOf": [
{ "$ref": "#/definitions/Product_Type1" },
{ "$ref": "#/definitions/Product_Type2" },
...
]
},
"Product_Type1": {
"type": "object",
"properties": {
"prop1": { "type": "string" },
"prop2": { "type": "string" }
},
"Product_Type2": {
"type": "object",
"properties": {
"prop3": { "type": "string" },
"prop4": { "type": "string" }
}
...
On top of this, certain properties of the individual product array objects may be indirected via further usage of anyOf or oneOf.
I'm running into issues in VS Code using the built-in schema validation, where it throws errors for every item in the products array that doesn't match Product_Type1.
So it seems the validator latches onto the first oneOf subschema it finds and won't validate against any of the other types.
I didn't find any limitations of the oneOf mechanism on jsonschema.org, and there is no mention of it being used on the page specifically dealing with arrays here: https://json-schema.org/understanding-json-schema/reference/array.html
Is what I'm attempting possible?
Your general approach is fine. Let's take a slightly simpler example to illustrate what's going wrong.
Given this schema
{
  "oneOf": [
    { "properties": { "foo": { "type": "integer" } } },
    { "properties": { "bar": { "type": "integer" } } }
  ]
}
And this instance
{ "foo": 42 }
At first glance, this looks like it matches /oneOf/0 and not /oneOf/1. It actually matches both schemas, which violates the one-and-only-one constraint imposed by oneOf, so the oneOf fails.
Remember that every keyword in JSON Schema is a constraint. Anything that is not explicitly excluded by the schema is allowed. There is nothing in the /oneOf/1 schema that says a "foo" property is not allowed, nor does it say that "foo" is required. It only says that if the instance has a property "foo", then it must be an integer.
To fix this, you will need required and maybe additionalProperties, depending on the situation. I show here how you would use additionalProperties, but I recommend you don't use it unless you need to, because it does have some problematic behavior.
{
  "oneOf": [
    {
      "properties": { "foo": { "type": "integer" } },
      "required": ["foo"],
      "additionalProperties": false
    },
    {
      "properties": { "bar": { "type": "integer" } },
      "required": ["bar"],
      "additionalProperties": false
    }
  ]
}
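Applied to your schema, that would look something like the following sketch (tighten or relax required and additionalProperties to match your real data):
"Product_Type1": {
  "type": "object",
  "properties": {
    "prop1": { "type": "string" },
    "prop2": { "type": "string" }
  },
  "required": ["prop1", "prop2"],
  "additionalProperties": false
},
"Product_Type2": {
  "type": "object",
  "properties": {
    "prop3": { "type": "string" },
    "prop4": { "type": "string" }
  },
  "required": ["prop3", "prop4"],
  "additionalProperties": false
}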

AWS CloudFormation Parameter values specified for a template which does not require them

I am porting code from Ruby to Python for CloudFormation stack creation projects. Below is a template for which I just keep getting 'Parameter values specified for a template which does not require them.'
This really doesn't tell me anything.
I have checked the JSON against the schemas and all was OK, and checked it against the stack created by the original code and it matches. Can someone see an issue here, or at least point me in the right direction?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "EcsStack-5ad0c44afbf508d0b5a158df0da307fca33f5f63",
  "Outputs": {
    "marc1EcsCluster": {
      "Value": {
        "Ref": "marc1EcsCluster"
      }
    },
    "marc1EcsClusterArn": {
      "Value": {
        "Fn::GetAtt": [
          "marc1EcsCluster",
          "Arn"
        ]
      }
    }
  },
  "Parameter": {
    "Vpc": {
      "Description": "VPC ID",
      "Type": "String"
    }
  },
  "Resources": {
    "CloudFormationDummyResource": {
      "Metadata": {
        "Comment": "Resource to update stack even if there are no changes",
        "GitCommitHash": "5ad0c44afbf508d0b5a158df0da307fca33f5f63"
      },
      "Type": "AWS::CloudFormation::WaitConditionHandle"
    },
    "marc1EcsCluster": {
      "Type": "AWS::ECS::Cluster"
    }
  },
  "Transform": "AWS::Serverless-2016-10-31"
}
As more general advice, the CloudFormation Linter will catch these errors with messages like:
E1001: Top level item Parameter isn't valid
template.json:19
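That message points at the root cause: the top-level section must be named Parameters (plural), not Parameter, so CloudFormation sees a template that declares no parameters while parameter values are being supplied for it. A corrected sketch of that section:
"Parameters": {
  "Vpc": {
    "Description": "VPC ID",
    "Type": "String"
  }
},
The linter can be run locally with cfn-lint template.json (installable via pip install cfn-lint).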

VPCDHCPOptionsAssociation encountered unsupported property DHCPOptionsId

When I try to create the stack, it throws the above error. I've checked, and I'm using the correct property value.
I've tried adding a "DependsOn" for DHCPOptions, and I've tried using Fn::GetAtt for the DHCPOptions. Neither has proved successful.
"DHCPOptions": {
"Type": "AWS::EC2::DHCPOptions",
"Properties": {
"DomainName": { "Ref": "DNSName" },
"DomainNameServers": [ "AmazonProvidedDNS" ],
"Tags": [{
"Key": "Name",
"Value": {
"Fn::Sub": "${VPCStackName}-DHCPOPTS"
}
}]
}
},
"VPCDHCPOptionsAssociation": {
"Type": "AWS::EC2::VPCDHCPOptionsAssociation",
"DependsOn": "DHCPOptions",
"Properties": {
"VpcId": { "Ref": "TestVPC" },
"DHCPOptionsId": { "Ref": "DHCPOptions" }
}
},
Expecting to pass the DHCPOptionsId from the DHCPOptions.
I've found the issue. It was just a simple error regarding the casing.
It should be DhcpOptionsId, not DHCPOptionsId.
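For reference, a sketch of the corrected association:
"VPCDHCPOptionsAssociation": {
  "Type": "AWS::EC2::VPCDHCPOptionsAssociation",
  "Properties": {
    "VpcId": { "Ref": "TestVPC" },
    "DhcpOptionsId": { "Ref": "DHCPOptions" }
  }
},
The explicit DependsOn can also be dropped, since the Ref to DHCPOptions already implies that dependency.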