Does TOML support top level arrays of dictionaries? - toml

I am trying to write a configuration file that would be an array (list) of dictionaries (hashes, tables). In JSON terms that would be, for instance:
[
  {
    "a": 1,
    "b": 2
  },
  {
    "a": 10,
    "b": 20
  }
]
I was hoping that
[[]]
a = 1
b = 2
[[]]
a = 10
b = 20
would be correct, but it is rejected by my Go parser with
unexpected token "]]", was expecting a table array key
This suggests that only top-level dictionaries (hashes, tables) are allowed. Is that really the case?
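For comparison, the array-of-tables syntax that TOML parsers do accept puts the array under a named key (the name item below is arbitrary):

```toml
[[item]]
a = 1
b = 2

[[item]]
a = 10
b = 20
```

In JSON terms this is { "item": [ { "a": 1, "b": 2 }, { "a": 10, "b": 20 } ] }; the top level of a TOML document is always a table, so an unnamed top-level array cannot be expressed.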


Sort and combine elements to be unique

I have a bunch of tokens stored in combinedCoinsFromAllWalles and I'm sorting them by monetary value, largest first, like this:
let sortedCoins = combinedCoinsFromAllWalles.sorted() { Double($0.token!.quote!) > Double($1.token!.quote!) }
The problem is that some tokens are repeated by name; for example, after that sort I could have 2 tokens with the same name in $0.token!.name
What would be the most efficient way to also combine those similar tokens and add their value? Something like this:
token A (named BTC)
token B (named BTC)
token C (named ETH)
I want to sum the quotes of tokens A and B ($0.token!.quote! + $1.token!.quote!) while filtering.
How do I do that in the most efficient way?
The first sort in your example is wasted work, since you have not yet combined the entries for similar coins from the different sources, and the order may change once you do.
You should:
1. Aggregate the coin values
2. Sort in the desired order
One simple way to do this would be to create a dictionary, adding new coins or summing totals as you iterate through your data. Then convert the dictionary back to an array and sort how you would like.
Ex:
var dict: [String: Float] = [:]
for each in combinedCoinsFromAllWalles {
    // Accumulate the value if the coin is already known, otherwise add it.
    if dict[each.token] != nil {
        dict[each.token]! += each.quote
    } else {
        dict[each.token] = each.quote
    }
}
let sortedCoinValueArray = dict.sorted { $0.value < $1.value }
The resulting array is an array of key-value pairs, so you may iterate over it like this:
for (key, value) in sortedCoinValueArray {
    print("\(key): \(value)")
}

Terraform interpolation adding unwanted zero to list

I'm using the aws_cloudformation_stack data source in Terraform to gather IDs from CloudFormation security group stacks like so:
data "aws_cloudformation_stack" "vpc-prod-sg" {
  name = "vpc-prod-sg"
}
I define a list in my main.tf file with names that represent these security groups like this:
sg_ingress = ["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"]
In my module I assign the values from the CloudFormation stack outputs to the names in the list like this:
security_groups = [contains(var.sg_ingress, "DevMyAppLinkingSecurityGroup") ? "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevMyAppLinkingSecurityGroup"]}" : 0, contains(var.sg_ingress, "DevDBLinkingSecurityGroup") ? "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevDBLinkingSecurityGroup"]}" : 0]
However, when I run terraform plan, the list is populated with the values I want, but it also adds an additional entry with a value of zero. It looks like this:
+ security_groups = [
+ "0",
+ "sg-05443559898348",
+ "sg-05435345443545593"
I'm baffled as to where this zero is coming from or how I can deal with it. Has anyone come across anything similar?
Let's first add some vertical whitespace to your expression so it's easier to read:
security_groups = [
  contains(var.sg_ingress, "DevMyAppLinkingSecurityGroup") ?
  "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevMyAppLinkingSecurityGroup"]}" :
  0,
  contains(var.sg_ingress, "DevDBLinkingSecurityGroup") ?
  "${data.aws_cloudformation_stack.vpc-prod-sg.outputs["DevDBLinkingSecurityGroup"]}" :
  0
]
Both of these element expressions are conditionals that produce a zero when their condition is false, so the zero you are seeing is most likely produced by one of those conditions being false. The zero is then converted to a string because security_groups is defined as a collection of strings.
Taking a step back and looking at the original problem, it seems like your goal here is to map from some symbolic names (exported by your CloudFormation stack) to the physical security group ids they represent. For this sort of mapping problem, I'd suggest using for expressions, like this:
security_groups = [
  for n in var.sg_ingress : data.aws_cloudformation_stack.vpc-prod-sg.outputs[n]
]
If there are other outputs from this CloudFormation Stack and you want to ensure that var.sg_ingress can only refer to these two, you can add some additional indirection to ensure that:
locals {
  allowed_security_group_outputs = ["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"]

  security_group_ids = {
    for n in local.allowed_security_group_outputs :
    n => data.aws_cloudformation_stack.vpc-prod-sg.outputs[n]
  }
}
...and then:
security_groups = [
  for n in var.sg_ingress : local.security_group_ids[n]
]
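As a further guard, a variable validation block can make Terraform reject unknown names at plan time. This is a sketch assuming Terraform >= 0.14 (for alltrue) and the two output names used above:

```hcl
variable "sg_ingress" {
  type = list(string)

  validation {
    condition = alltrue([
      for n in var.sg_ingress :
      contains(["DevMyAppLinkingSecurityGroup", "DevDBLinkingSecurityGroup"], n)
    ])
    error_message = "sg_ingress may only contain known security group output names."
  }
}
```

With this in place, an unknown name fails with the error message above instead of an indexing error deeper in the module.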

Count occurrences of part key value in Dict (Swift)

Let's say I have a dict containing the following keys and values:
let dict = ["Foo": 1,
            "FooBar": 2,
            "Bar": 3,
            "BarBar": 4,
            "FooFoo": 5]
My question is:
How would one count the occurrences of keys containing or partly containing the string "Foo"?
The result should be 3 ("Foo", "FooBar", "FooFoo").
One direction I am looking at is using
print(dict.keys.contains("Foo"))
This of course returns true.
print(dict.keys.contains("Fo"))
This returns false, when in actual fact "Fo" occurs 3 times, but only as part of a key name.
Hoping that makes sense :F
So again, how do I count the partial key name occurrences in a given dictionary?
You need to filter the keys and then count them:
let arr = dict.keys.filter{ $0.contains("Fo") }
print(arr.count)
A straightforward way is this:
dict.filter{ $0.key.contains("Foo") }.count
We keep all the entries whose keys contain "Foo" and count the number of key-value pairs left!

MongoDB find if all array elements are in the other bigger array

I have an array of id's of LEGO parts in a LEGO building.
// building collection
{
  "name": "Gingerbird House",
  "buildingTime": 45,
  "rating": 4.5,
  "elements": [
    {
      "_id": 23,
      "requiredElementAmt": 14
    },
    {
      "_id": 13,
      "requiredElementAmt": 42
    }
  ]
}
and then
//elements collection
{
  "_id": 23,
  "name": "blue 6 dots brick",
  "availableAmt": 20
}
{
  "_id": 13,
  "name": "red 8 dots brick",
  "availableAmt": 50
}
{
  "_id": 254,
  "name": "green 4 dots brick",
  "availableAmt": 12
}
How can I find whether it's possible to build a building? I.e. the database should return a building only if every element in its "elements" array is present in the warehouse (the elements collection) in at least the required amount.
In SQL (which I came from recently) I would write something like:
SELECT * FROM building WHERE id NOT IN (SELECT fk_building FROM building_element_amt WHERE fk_element NOT IN (1, 3))
Thank you in advance!
I won't pretend I get how it works in SQL without any comparison, but in MongoDB you can do something like this:
db.buildings.find({/* building filter, if any */}).map(function(b){
    var ok = true;
    b.elements.forEach(function(e){
        ok = ok && 1 == db.elements.find({_id: e._id, availableAmt: {$gte: e.requiredElementAmt}}).count();
    })
    return ok ? b : false;
}).filter(function(b){return b});
or
db.buildings.find({/* building filter, if any */}).map(function(b){
    var condition = [];
    b.elements.forEach(function(e){
        condition.push({_id: e._id, availableAmt: {$gte: e.requiredElementAmt}});
    })
    return db.elements.find({$or: condition}).count() == b.elements.length ? b : false;
}).filter(function(b){return b});
The last one should be a bit quicker, but I did not test it. If performance is key, it may be better to mapReduce it to run the subqueries in parallel.
Note: The examples above work with assumption that buildings.elements have no elements with the same id. Otherwise the array of elements needs to be pre-processed before b.elements.forEach to calculate total requiredElementAmt for non-unique ids.
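That pre-processing step is plain JavaScript. A sketch (a hypothetical helper, not part of the queries above) could look like:

```javascript
// Collapse duplicate element ids, summing their requiredElementAmt,
// so that each id appears at most once before querying.
function mergeElements(elements) {
  var totals = {};
  elements.forEach(function (e) {
    totals[e._id] = (totals[e._id] || 0) + e.requiredElementAmt;
  });
  return Object.keys(totals).map(function (id) {
    return { _id: Number(id), requiredElementAmt: totals[id] };
  });
}
```

You would then run b.elements = mergeElements(b.elements) before the b.elements.forEach in either example.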
EDIT: How it works:
Select all/some documents from buildings collection with find:
db.buildings.find({/* building filter, if any */})
returns a cursor, which we iterate with map applying the function to each document:
map(function(b){...})
The function itself iterates over the elements array of each buildings document b:
b.elements.forEach(function(e){...})
and finds the number of documents in the elements collection for each element e:
db.elements.find({_id:e._id, availableAmt:{$gte:e.requiredElementAmt}}).count();
which match a condition:
elements._id == e._id
and
elements.availableAmt >= e.requiredElementAmt
until the first request that returns 0.
Since elements._id is unique, this subquery returns either 0 or 1.
The first 0 in the expression ok = ok && 1 == 0 turns ok to false, so the rest of the elements array is iterated without touching the db (the && short-circuits).
The function returns either current buildings document, or false:
return ok ? b : false
So the result of the map function is an array containing the full buildings documents that can be built, or false for the ones that lack at least one resource.
Then we filter this array to get rid of false elements, since they hold no useful information:
filter(function(b){return b})
It returns a new array with all elements for which function(b){return b} doesn't return false, i.e. only full buildings documents.
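The availability check itself is independent of MongoDB. A minimal plain-JavaScript sketch of the same logic, with hypothetical in-memory data standing in for the two collections:

```javascript
// Stock keyed by element _id (stands in for the elements collection).
var available = { 23: 20, 13: 50, 254: 12 };

// A building can be built when every required element is in stock
// in at least the required amount (availableAmt >= requiredElementAmt).
function canBuild(building, stock) {
  return building.elements.every(function (e) {
    return (stock[e._id] || 0) >= e.requiredElementAmt;
  });
}

var gingerbread = {
  name: "Gingerbird House",
  elements: [
    { _id: 23, requiredElementAmt: 14 },
    { _id: 13, requiredElementAmt: 42 },
  ],
};

canBuild(gingerbread, available); // true: 20 >= 14 and 50 >= 42
```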

Alternative compound key ranges in CouchDB

Assuming a map function representing object relationships like:
function (doc) {
  emit([doc.source, doc.target, doc.name], null);
}
The normal example of filtering a compound key is something like:
startkey = [ a_source ]
endkey = [ a_source, {} ]
That should provide a list of all objects referenced from a_source
Now I want the opposite, and I am not sure if that is possible. I have not been able to find an example where the variant part comes first, like:
startkey = [ *symbol_first_match*, a_destination ]
endkey = [ {}, a_destination ]
Is that possible? Are the filter and sort operations on compound keys within a query limited to the order in which the elements appear in the key?
I know I could define another view/mapreduce, but I would like to avoid the extra disk space cost if possible.
No, you can't do that. See here where I explained how keys work in view requests with CouchDB.
Compound keys are nothing special; there is no filtering or anything. Internally you have to imagine that there is no array anymore; it's just syntactic sugar for us developers. So [a,b] - [a,c] is treated just like 'a_b' - 'a_c' (with _ being a special delimiter).
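That said, if the extra disk space turns out to be acceptable, the standard workaround is a second view whose map emits the key mirrored, so that range requests can fix the target and vary the source (named byTarget here for illustration; CouchDB design documents usually store it as an anonymous function):

```javascript
// Mirrored key: target first, then source, then name.
function byTarget(doc) {
  emit([doc.target, doc.source, doc.name], null);
}
```

Querying that view with startkey = [ a_destination ] and endkey = [ a_destination, {} ] then lists everything referencing a_destination.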