Count distinct values in an array in MongoDB

Sample Doc:
{
  "id": "K",
  "powers": [
    { "label": "a", "Rating": 7 },
    { "label": "b", "Rating": 3 },
    { "label": "c", "Rating": 4 },
    { "label": "d", "Rating": 5 }
  ],
  "phy": { "height": 67, "weight": 150 }
}
For this collection, I want to count how many distinct power labels each id has.
I want the result as: ID = K, distinct power labels = 4

So the easiest way to get it done is
/** db.collection.distinct('field on which distinct is needed', condition) */
db.collection.distinct('powers.label', {"id" : "K"})
Since the result is an array, you can take its .length in code to get the number of unique labels.
Ref: .distinct()
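To see what that .length step amounts to, here is a plain JavaScript sketch of the same de-duplication done locally on the sample document (distinct() does this server-side and returns the unique values as an array):

```javascript
// Sample document from the question
const doc = {
  id: "K",
  powers: [
    { label: "a", Rating: 7 },
    { label: "b", Rating: 3 },
    { label: "c", Rating: 4 },
    { label: "d", Rating: 5 },
  ],
};

// distinct('powers.label', ...) returns the unique labels as an array;
// locally, a Set gives the same de-duplication.
const distinctLabels = [...new Set(doc.powers.map((p) => p.label))];

console.log(distinctLabels.length); // 4
```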

Related

Flutter - How to check if an object in a nested JSON list is contained in another nested JSON list

Nested List 1
{
  "count": 0,
  "Services": [
    { "Name": "Air", "id": 1 },
    { "Name": "Road", "id": 2 },
    { "Name": "Ship", "id": 3 }
  ]
}
Nested List 2
{
  "count": 0,
  "TransportMeans": [
    { "Name": "Helicopter", "serviceId": 1 },
    { "Name": "Car", "serviceId": 2 }
  ]
}
Check whether the serviceId has already been used, so that duplicate values are not accepted: if an id from list 1 already appears as a serviceId in list 2, an error should be thrown; otherwise an AlertBox should be built, since serviceId refers to the id in list 1.
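No Dart answer is given here, but the duplicate check the question describes is language-agnostic. A minimal JavaScript sketch of the logic (the isDuplicate helper is hypothetical; the data is taken from the two lists above):

```javascript
const services = {
  count: 0,
  Services: [
    { Name: "Air", id: 1 },
    { Name: "Road", id: 2 },
    { Name: "Ship", id: 3 },
  ],
};

const transport = {
  count: 0,
  TransportMeans: [
    { Name: "Helicopter", serviceId: 1 },
    { Name: "Car", serviceId: 2 },
  ],
};

// Hypothetical helper: true if the given service id is already
// used as a serviceId in the second list (i.e. a duplicate).
function isDuplicate(serviceId) {
  return transport.TransportMeans.some((t) => t.serviceId === serviceId);
}

console.log(isDuplicate(1)); // true  -> should throw the error
console.log(isDuplicate(3)); // false -> safe to build the AlertBox
```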

Calculate aggregates in a bucket in Upsert MongoDB update statement

My application gets measurements from a device that should be stored in a MongoDB database. Each measurement contains values for several probes of the device.
The measurements should be displayed as an aggregation over a certain amount of time. I'm using the Bucket pattern to prepare the aggregates and to simplify indexing and querying.
The following sample shows a document:
{
  "DeviceId": "Device1",
  "StartTime": 100, "EndTime": 199,
  "Measurements": [
    { "timestamp": 100, "probeValues": [ { "id": "1", "t": 30 }, { "id": "2", "t": 67 } ] },
    { "timestamp": 101, "probeValues": [ { "id": "1", "t": 32 }, { "id": "2", "t": 67 } ] },
    { "timestamp": 102, "probeValues": [ { "id": "1", "t": 34 }, { "id": "2", "t": 55 } ] },
    { "timestamp": 103, "probeValues": [ { "id": "1", "t": 27 }, { "id": "2", "t": 30 } ] }
  ],
  "probeAggregates": [
    { "id": "1", "cnt": 4, "total": 123 },
    { "id": "2", "cnt": 4, "total": 219 }
  ]
}
Updating the values and calculating the aggregates in a single request works well if the document already exists (1st block: query, 2nd: update, 3rd: options):
{
  "DeviceId": "Device1",
  "StartTime": 100,
  "EndTime": 199
},
{
  $push: {
    "Measurements": {
      "timestamp": 103,
      "probeValues": [ { "id": "1", "t": 27 }, { "id": "2", "t": 30 } ]
    }
  },
  $inc: {
    "probeAggregates.$[probeAggr1].cnt": 1,
    "probeAggregates.$[probeAggr1].total": 27,
    "probeAggregates.$[probeAggr2].cnt": 1,
    "probeAggregates.$[probeAggr2].total": 30
  }
},
{
  arrayFilters: [
    { "probeAggr1.id": "1" },
    { "probeAggr2.id": "2" }
  ]
}
Now I want to extend the statement to perform an upsert if the document does not exist yet. However, if I leave the update statement unchanged, I get the following error:
The path 'probeAggregates' must exist in the document in order to apply array updates.
If I try to prepare the probeAggregates array in case of an insert (e.g. by using $setOnInsert or $addToSet), this leads to another error:
Updating the path 'probeAggregates.$[probeAggr1].cnt' would create a conflict at 'probeAggregates'
Both errors can be explained and seem legitimate. One way to solve this would be to change the document structure and create one document per device, timeframe and probe, thereby simplifying the required update statement. In order to keep the number of documents low, I'd rather solve this by changing the update statement. Is there a way to create a valid document in an upsert?
(As I'm just learning to use a document DB, feel free to share in the comments whether keeping the number of documents low is a good goal in real-world scenarios.)
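The question is left open here, but one commonly assumed workaround (a sketch, not from the question) is to keep the arrayFilters update for the existing-document case with upsert disabled, and fall back to inserting a fresh, pre-initialized bucket when nothing matched. The merge logic that fallback needs can be expressed in plain JavaScript:

```javascript
// Sketch of the bucket-merge logic: when no bucket document matched,
// build a fresh one with probeAggregates initialized from the first
// measurement; otherwise push and increment exactly as the update does.
function applyMeasurement(bucket, deviceId, start, end, measurement) {
  if (bucket === null) {
    return {
      DeviceId: deviceId,
      StartTime: start,
      EndTime: end,
      Measurements: [measurement],
      probeAggregates: measurement.probeValues.map((p) => ({
        id: p.id, cnt: 1, total: p.t,
      })),
    };
  }
  bucket.Measurements.push(measurement);
  for (const p of measurement.probeValues) {
    const aggr = bucket.probeAggregates.find((a) => a.id === p.id);
    aggr.cnt += 1;
    aggr.total += p.t;
  }
  return bucket;
}

const m = { timestamp: 103, probeValues: [{ id: "1", t: 27 }, { id: "2", t: 30 }] };
const fresh = applyMeasurement(null, "Device1", 100, 199, m);
console.log(fresh.probeAggregates);
// [ { id: '1', cnt: 1, total: 27 }, { id: '2', cnt: 1, total: 30 } ]
```

Note this makes the upsert two requests instead of one, so a retry loop is needed if two writers race to create the same bucket.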

Fetch results from a JSON array column in Postgres

I have a PostgreSQL 9.5 table (instrument) with two columns, instrument_id and user_maps, as shown below.
I want to fetch all instruments based on the following conditions:
Loop through each JSON object in user_maps
Count the number of JSON objects whose status is 'Y', 'N' or 'D'
That count should be more than 2.
Note: user_maps is an array of JSON objects with two fields, status and userId
Sample user_maps linked to instrument_id "I01":
[
  { "status": "Y", "userId": "ZU201707120539150007" },
  { "status": "D", "userId": "ZU201707120540510008" },
  { "status": "I", "userId": "ZU201707120542540009" },
  { "status": "I", "userId": "ZU201707011725050001" },
  { "status": "Y", "userId": "ZU201707120552050013" }
]
Instrument id "I01" should appear in the final result.
Another sample user_maps, linked to instrument_id "I02":
[
  { "status": "I", "userId": "ZU201707120539150007" },
  { "status": "I", "userId": "ZU201707120540510008" },
  { "status": "I", "userId": "ZU201707120542540009" },
  { "status": "I", "userId": "ZU201707011725050001" },
  { "status": "Y", "userId": "ZU201707120552050013" }
]
Instrument id "I02" should not appear in the final result because it has only one object with status in ('Y', 'N', 'D').
If I understood your request correctly, this is how you can do it:
-- This is just a test dataset, as provided in the question
WITH test( instrument_id, user_maps ) AS (
  VALUES
  ( 'I01'::text,
    $$[
      { "status": "Y", "userId": "ZU201707120539150007" },
      { "status": "D", "userId": "ZU201707120540510008" },
      { "status": "I", "userId": "ZU201707120542540009" },
      { "status": "I", "userId": "ZU201707011725050001" },
      { "status": "Y", "userId": "ZU201707120552050013" }
    ]$$::jsonb ),
  ( 'I02'::text,
    $$[
      { "status": "I", "userId": "ZU201707120539150007" },
      { "status": "I", "userId": "ZU201707120540510008" },
      { "status": "I", "userId": "ZU201707120542540009" },
      { "status": "I", "userId": "ZU201707011725050001" },
      { "status": "Y", "userId": "ZU201707120552050013" }
    ]$$::jsonb )
)
SELECT t.instrument_id,
       count( u )
FROM test t,
     jsonb_array_elements( user_maps ) u -- lateral join to unnest the JSON array elements
WHERE u -> 'status' ?| ARRAY['Y', 'N', 'D'] -- your search condition
GROUP BY 1
HAVING count( u ) > 2; -- the count condition you wanted
-- This is the result of the query
instrument_id | count
---------------+-------
I01 | 3
(1 row)
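For readers more at home outside SQL, the same filter-and-count rule expressed over the raw arrays in plain JavaScript (a sketch of the logic, not part of the accepted query):

```javascript
// Statuses from the two sample user_maps arrays (userIds omitted for brevity)
const instruments = [
  { instrument_id: "I01", user_maps: [
    { status: "Y" }, { status: "D" }, { status: "I" }, { status: "I" }, { status: "Y" },
  ]},
  { instrument_id: "I02", user_maps: [
    { status: "I" }, { status: "I" }, { status: "I" }, { status: "I" }, { status: "Y" },
  ]},
];

// Same rule as the SQL: keep instruments with more than two
// entries whose status is 'Y', 'N' or 'D'.
const matching = instruments
  .map((i) => ({
    instrument_id: i.instrument_id,
    count: i.user_maps.filter((u) => ["Y", "N", "D"].includes(u.status)).length,
  }))
  .filter((i) => i.count > 2);

console.log(matching); // [ { instrument_id: 'I01', count: 3 } ]
```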

MongoDB find where result + value > 100

I have the following db structure:
[
  {
    "_id": 1,
    "family": "First Family",
    "kids": [
      { "name": "David", "age": 10 },
      { "name": "Moses", "age": 15 }
    ]
  },
  {
    "_id": 2,
    "family": "Second Family",
    "kids": [
      { "name": "Sara", "age": 17 },
      { "name": "Miriam", "age": 45 }
    ]
  }
]
I want to select all families that have a kid whose age + 10 is greater than 30.
What would be the best way to achieve this?
Since age + 10 > 30 is equivalent to age > 20, the query is simply:
db.collection.find({ "kids.age": { $gt: 20 } })
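The rewrite can be checked against the sample data; a plain JavaScript sketch of the same match:

```javascript
const families = [
  { _id: 1, family: "First Family", kids: [{ name: "David", age: 10 }, { name: "Moses", age: 15 }] },
  { _id: 2, family: "Second Family", kids: [{ name: "Sara", age: 17 }, { name: "Miriam", age: 45 }] },
];

// "kids.age": { $gt: 20 } matches a document if ANY kid's age is > 20,
// which is exactly "age + 10 > 30" with the constant moved across.
const result = families.filter((f) => f.kids.some((k) => k.age + 10 > 30));

console.log(result.map((f) => f.family)); // [ 'Second Family' ]
```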

Replace or Skip Part of Array in MongoDB

Assume I have a collection with documents that look like this:
{
  "_id": ";lkajsdflhkadfhaewifjasdkfjalksdfjs",
  "tree": [
    { "id": "a", "name": "One" },
    { "id": "b", "name": "Two" },
    { "id": "c", "name": "Three" },
    { "id": "d", "name": "Four" }
  ]
}
Now let's say I want to replace the a, b, and c entries in my tree array with e, f, and c entries. Is it possible to do that replace with an update query? If not, is there a way to select the document such that the tree array only contains the c and d entries (or just the d entry)? I want my document to look like this:
{
  "_id": ";lkajsdflhkadfhaewifjasdkfjalksdfjs",
  "tree": [
    { "id": "e", "name": "Five" },
    { "id": "f", "name": "Six" },
    { "id": "c", "name": "Three" },
    { "id": "d", "name": "Four" }
  ]
}
Order of the tree array matters. I'm aware of $splice, but I do not know ahead of time the index of the c entry. Also, the index may vary between documents. Can I do a query inside of $splice that lets me find the index?
How about doing a find().forEach?
db.test.find().forEach(function (doc) {
  for (var i = 0; i < doc.tree.length; i++) {
    switch (doc.tree[i].id) {
      case "a":
        doc.tree[i] = { "id": "e", "name": "Five" };
        break;
      case "b":
        doc.tree[i] = { "id": "f", "name": "Six" };
        break;
    }
  }
  db.test.save(doc);
});
Of course, you can put in more specific logic to fit your rules, but this will simply replace the a entries with e and the b entries with f.
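The replacement step inside that forEach is a pure mapping over the array, which can be isolated and tested apart from the database round-trip; a sketch in plain JavaScript:

```javascript
// Mapping of old entries to their replacements, as in the answer's switch.
const replacements = {
  a: { id: "e", name: "Five" },
  b: { id: "f", name: "Six" },
};

function replaceTree(tree) {
  // Order is preserved; entries without a replacement pass through unchanged.
  return tree.map((node) => replacements[node.id] || node);
}

const tree = [
  { id: "a", name: "One" },
  { id: "b", name: "Two" },
  { id: "c", name: "Three" },
  { id: "d", name: "Four" },
];

console.log(replaceTree(tree).map((n) => n.id)); // [ 'e', 'f', 'c', 'd' ]
```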