Terraform interpolation adding unwanted zero to list - aws-cloudformation

I'm using the aws_cloudformation_stack data source in Terraform to read security group IDs from a CloudFormation stack, like so:
data "aws_cloudformation_stack" "vpc-prod-sg" {
name = "vpc-prod-sg"
}
I define a list in my main.tf file with names that represent these security groups like this:
sg_ingress = ["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"]
In my module I assign the values from the CloudFormation stack outputs to the names in the list like this:
security_groups = [contains(var.sg_ingress, "DevMyAppLinkingSecurityGroup") ? "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevMyAppLinkingSecurityGroup"]}" : 0, contains(var.sg_ingress, "DevDBLinkingSecurityGroup") ? "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevDBLinkingSecurityGroup"]}" : 0]
However, when I run terraform plan, the list is populated with the values I want, but it also gets an additional entry with a value of zero. It looks like this:
+ security_groups = [
+ "0",
+ "sg-05443559898348",
+ "sg-05435345443545593"
I'm baffled as to where this zero is coming from or how I can deal with it. Has anyone come across anything similar?

Let's first add some vertical whitespace to your expression so it's easier to read:
security_groups = [
contains(var.sg_ingress, "DevMyAppLinkingSecurityGroup") ?
"${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevMyAppLinkingSecurityGroup"]}" :
0,
contains(var.sg_ingress, "DevDBLinkingSecurityGroup") ?
"${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevDBLinkingSecurityGroup"]}" :
0
]
Both of these element expressions are conditionals that produce a zero when their condition is false, so the zero you are seeing is most likely produced by one of these conditions being false. The zero is then converted to the string "0" because security_groups is defined as a collection of strings.
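If you wanted to keep the conditional style, one way to avoid the stray element (a sketch, not tested against your configuration) is to return an empty string instead of 0 and strip it out with compact():

security_groups = compact([
  contains(var.sg_ingress, "DevMyAppLinkingSecurityGroup") ? data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevMyAppLinkingSecurityGroup"] : "",
  contains(var.sg_ingress, "DevDBLinkingSecurityGroup") ? data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevDBLinkingSecurityGroup"] : "",
])

compact removes empty-string elements from a list of strings, so unmatched names simply disappear from the result.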
Taking a step back and looking at the original problem, it seems like your goal here is to map from some symbolic names (exported by your CloudFormation stack) to the physical security group ids they represent. For this sort of mapping problem, I'd suggest using for expressions, like this:
security_groups = [
for n in var.sg_ingress : data.aws_cloudformation_stack.vpc-prod-sg.outputs[n]
]
If there are other outputs from this CloudFormation stack and you want to ensure that var.sg_ingress can only refer to these two, you can add some additional indirection to enforce that:
locals {
allowed_security_group_outputs = ["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"]
security_group_ids = {
for n in local.allowed_security_group_outputs :
n => data.aws_cloudformation_stack.vpc-prod-sg.outputs[n]
}
}
...and then:
security_groups = [
for n in var.sg_ingress : local.security_group_ids[n]
]
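For reference, a sketch of how var.sg_ingress might be declared in the module (an assumption, since the question only shows its assignment; it appears to be a plain list of output names):

variable "sg_ingress" {
  type    = list(string)
  default = ["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"]
}

With the locals indirection above, any name in var.sg_ingress that is not one of the allowed outputs should fail with an invalid key error at plan time, which gives you the restriction for free.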

Related

Does TOML support top level arrays of dictionaries?

I am trying to write a configuration file that would be an array (list) of dictionaries (hashes, tables). In JSON terms that would be, for instance:
[
{
"a":1,
"b":2
},
{
"a":10,
"b":20
}
]
I was hoping that
[[]]
a = 1
b = 2
[[]]
a = 10
b = 20
would be correct, but it is rejected by my Go parser with
unexpected token "]]", was expecting a table array key
This suggests that only top-level dictionaries (hashes, tables) are allowed. Is that really the case?

DMN Nested Object Filtering

Literal Expression = PurchaseOrders.pledgedDocuments[valuation.value=62500]
Purchase Order Structure
PurchaseOrders:
[
{
"productId": "PURCHASE_ORDER_FINANCING",
"pledgedDocuments" : [{"valuation" : {"value" : "62500"} }]
}
]
The literal expression produces a null result.
However,
PurchaseOrders.pledgedDocuments[valuation = null]
returns all results!
What am I doing wrong?
I was able to solve it using the flatten function, but I don't know how it worked :(
In your original question it is not exactly clear to me what your end goal is, so I will try to provide some references.
value filtering
First, your PurchaseOrders -> pledgedDocuments -> valuation -> value appears to be a string, so filtering with
... [valuation.value=62500]
will not help you.
You'll need to filter with something more like: valuation.value="62500"
list projection
In your original question, you are projecting on PurchaseOrders, which is a list, and accessing pledgedDocuments, which again is a list!
So when you write
PurchaseOrders.pledgedDocuments (...)
you don't have a simple list; you have a list of lists: a list of all the lists of pledged documents.
final solution
I believe what you wanted is:
flatten(PurchaseOrders.pledgedDocuments)[valuation.value="62500"]
Let's also do the exercise on paper of what is actually happening.
First, let's focus on PurchaseOrders.pledgedDocuments.
You supply PurchaseOrders, which is a LIST of POs, and you project on pledgedDocuments.
What is that intermediate result?
Using the input value for POs from your original question, it is:
[
[{"valuation" : {"value" : "62500"} }]
]
Notice how it is a list of lists?
With the first part of the expression, PurchaseOrders.pledgedDocuments, you have asked: for each PO, give me the list of pledged documents.
As a hypothesis, if you supplied 3 POs, each having 2 documents, PurchaseOrders.pledgedDocuments would give you a list of 3 elements, each element itself being a list of 2 documents.
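For illustration only, with made-up valuation values, that hypothetical intermediate result would be shaped like:

[
  [ {"valuation": {"value": "100"}}, {"valuation": {"value": "200"}} ],
  [ {"valuation": {"value": "300"}}, {"valuation": {"value": "400"}} ],
  [ {"valuation": {"value": "500"}}, {"valuation": {"value": "600"}} ]
]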
Now,
With flatten(PurchaseOrders.pledgedDocuments) you achieve:
[{"valuation" : {"value" : "62500"} }]
So at this point you have a list containing all documents, regardless of which PO.
Now,
With the complete expression, flatten(PurchaseOrders.pledgedDocuments)[valuation.value="62500"], you still get:
[{"valuation" : {"value" : "62500"} }]
Because you have asked, on the flattened list, to keep only those elements whose valuation.value is equal to the string "62500".
In other words, by using this expression, what you achieve is:
From all POs, return the documents whose valuation value
equals the string 62500, regardless of which PO the document belongs to.

How to merge an AQL query with an iterative traversal

I want to query a collection in ArangoDB using AQL, and at each node in the query, expand the node using a traversal.
I have attempted to do this by calling the traversal as a subquery using a LET statement within the collection query.
The result set for the traversal is empty, even though the query completes.
FOR ne IN energy
FILTER ne.identifier == "12345"
LET ne_edges = (
FOR v, e IN 1..1 ANY ne relation
RETURN e
)
RETURN MERGE(ne, {"edges": ne_edges})
[
{
"value": 123.99,
"edges": []
}
]
I have verified there are edges, and the traversal returns correctly when it is not executed as a subquery.
It seems as if the initial query is completing before a result is returned from the subquery, giving the result shown above.
What am I missing? Or is there a better way?
I can think of two ways to do this. The first is easier to understand, but the second is more compact. For the examples below, I have a vertex collection test2 and an edge collection testEdge that links parent and child items within test2.
Using Collect:
let seed = (FOR testItem IN test2
FILTER testItem._id in ['test2/Q1', 'test2/Q3']
RETURN testItem._id)
let traversal = (FOR seedItem in seed
FOR v, e IN 1..1 ANY seedItem
testEdge
RETURN {seed: seedItem, e_to: e._to})
for t in traversal
COLLECT seeds = t.seed INTO groups = t.e_to
return {myseed: seeds, mygroups: groups}
Above, we first get the items we want to traverse from (seed), then we perform the traversal and build an object that holds the seed _id and the related edge _to values.
Finally, we use COLLECT ... INTO to group the results.
Using array expansion
FOR testItem IN test2
FILTER testItem._id in ['test2/Q1', 'test2/Q3']
LET testEdges = (
FOR v, e IN 1..1 ANY testItem testEdge
RETURN e
)
RETURN {myseed: testItem._id, mygroups: testEdges[*]._to}
This time we combine the seed search and the traversal using the LET statement, then we use array expansion to group the items.
In either case, I end up with something that looks like this:
[
{
"myseed": "test2/Q1",
"mygroups": [
"test2/Q1-P5-2",
"test2/Q1-P6-3",
"test2/Q1-P4-1"
]
},
{
"myseed": "test2/Q3",
"mygroups": [
"test2/Q3",
"test2/Q3"
]
}
]

Scala: how to group a map and then subgroup and transform values

I have an object like this:
case class MyObject(x: Int, y: String, ...) {
val buckets = 3
def bucket = x % buckets // returns a number between 0 and buckets - 1
}
(x is an arbitrary number)
For example, assume buckets = 3 and we have many objects:
MyObject(x = 0, y = "Something", ...)
MyObject(x = 1, y = "Something else", ...)
....
...
Using "groupBy" I collect "MYObjects" using the x % buckets, so it will be like:
val objects : Seq[MyObject] = ...
val groupedObjects : Map[Int, Seq[MyObject]] = objects.groupBy(obj => obj.bucket)
Now I want to transform each value and also regroup into sublists of a different type.
So let's say, for each item in group 1, I want to nest under an additional layer and store a different calculated value.
For instance, if bucket 0 after the initial grouping looked like:
bucket[0] = [obj1,obj2,...,objn]
I want to be able to transform bucket "0" to contain another nested grouping:
bucket[0] = Map(sub_bucket_0 -> [transformed(objects)...], sub_bucket_1 -> [transformed(objects)...], ...)
meaning that eventually I have a data structure with the type:
Map[Int,Map[Sub_bucket_type,Seq[TransformedObject_type]]]
I think what you're looking for is mapValues(), which transforms the Map's values into new values and/or types.
groupedObjects.mapValues(_.groupBy(/*returns new key type/value*/))
.mapValues(_.mapValues(_.map(/*transform MyObject elements*/)))
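As a concrete sketch (assuming Scala 2.12-style mapValues; the sub-bucket key on y's first character and the uppercase transformation are made up purely for illustration):

case class MyObject(x: Int, y: String) {
  val buckets = 3
  def bucket: Int = x % buckets
}

val objects: Seq[MyObject] = Seq(
  MyObject(0, "alpha"), MyObject(1, "beta"),
  MyObject(3, "apple"), MyObject(4, "banana")
)

// first grouping: Map[Int, Seq[MyObject]]
val groupedObjects = objects.groupBy(_.bucket)

// nested grouping plus transformation; on Scala 2.13+ use .view.mapValues(...).toMap
val nested =
  groupedObjects
    .mapValues(_.groupBy(_.y.head))                  // hypothetical sub-bucket key
    .mapValues(_.mapValues(_.map(_.y.toUpperCase)))  // hypothetical transformation

println(nested)
// e.g. Map(0 -> Map(a -> List(ALPHA, APPLE)), 1 -> Map(b -> List(BETA, BANANA)))

The result has the shape Map[Int, Map[Char, Seq[String]]], matching the Map[Int, Map[Sub_bucket_type, Seq[TransformedObject_type]]] you described.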

Alternative compound key ranges in CouchDB

Assuming a map function representing object relationships like:
function (doc) {
emit([doc.source, doc.target, doc.name], null);
}
The normal example of filtering a compound key is something like:
startKey = [ a_source ]
endKey = [ a_source, {} ]
That should provide a list of all objects referenced from a_source.
Now I want the opposite, and I am not sure if that is possible. I have not been able to find an example where the variant part comes first, like:
startKey = [ *simbol_first_match* , a_destination ]
endKey = [ {} , a_destination ]
Is that possible? Are compound-key (1) filter and (2) sort operations within a query limited to the order in which the elements appear in the key?
I know I could define another view/mapreduce, but I would like to avoid the extra disk space cost if possible.
No, you can't do that. See here where I explained how keys work in view requests with CouchDB.
Compound keys are nothing special, there is no per-element filtering or anything. Internally you have to imagine that there is no array anymore; it's just syntactic sugar for us developers. So [a,b] - [a,c] is treated just like 'a_b' - 'a_c' (with _ being a special delimiter).
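Since the compound key effectively collates as one concatenated value, the only way to range over the second element first is the one the question already anticipates: a second view whose key puts that element first. A minimal sketch (the same emit as above, just reordered):

function (doc) {
  // hypothetical second view, keyed target-first so you can range on doc.target
  emit([doc.target, doc.source, doc.name], null);
}

which you would then query with startkey=["a_destination"] and endkey=["a_destination", {}], at the cost of the extra index on disk.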