I am writing a Spring Boot Mongo query to fetch results based on name, csi, and city. Sometimes name or csi is empty; in that case my query should not consider the fields which are empty. The query should exclude those fields, and when data is present it should fetch the matching results.
For Example
$match:{
name:'test',
csi:'' //exclude if blank or null,
city:'hyd'
}
My Spring Boot query is below, but it is not ignoring the fields when they are empty:
@Aggregation({
"{\"$match\": { \"$and\" : [ {\"name\" : { \"$ne\":\"\" }}, {\"name\" : ?0 }, {\"csi\" : { \"$ne\":\"\" }}, {\"csi\" : ?1 }, {\"city\" : { \"$ne\":\"\" }}, {\"city\" : ?2 }] }}"
})
List<Data> getData(String name, String csi, String city);
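One possible direction (a sketch only, not a confirmed fix): move the comparisons into $expr, so that the bound parameter itself decides whether a field is checked. In shell form, with the placeholder values written out as literals, the $match would look roughly like this (the data collection name is assumed; $expr requires MongoDB 3.6+):
db.data.aggregate([
  {
    $match: {
      $expr: {
        $and: [
          // each clause is true when the parameter is empty (field ignored)
          // or when the field equals the parameter
          { $or: [ { $eq: ["test", ""] }, { $eq: ["$name", "test"] } ] },
          { $or: [ { $eq: ["", ""] },     { $eq: ["$csi", ""] } ] },
          { $or: [ { $eq: ["hyd", ""] },  { $eq: ["$city", "hyd"] } ] }
        ]
      }
    }
  }
])
In the @Aggregation string the literals would be replaced by the ?0, ?1, and ?2 placeholders.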
Related
I have a JSON in MongoDB and I am trying to check if at least one of the items in the JSON doesn't contain a specific field.
{
"_id" : 12345,
"orderItems" : [
{
"itemId" : 45678,
"isAvailable" : true,
"isEligible" " false
},
{
"itemId" : 87653,
"isAvailable" : true
}
]
}
So in the above JSON, since the 2nd item under orderItems doesn't contain the isEligible field, I need to get this _id.
I tried the below query so far, which didn't work:
db.getCollection('orders').find({"orderItems.isEligible":{$exists:false}})
You can use $elemMatch to evaluate the presence of the nested key. Once that's accomplished, project out the _id value.
db.orders.find({
orderItems: {
$elemMatch: {
"isEligible": {
$exists: false
}
}
}
},
{
_id: 1
})
Here is a Mongo playground with the finished code, and a similar SO answer.
I have a large amount of data (~160M items) where a date value wasn't populated on the sub-document array fields, but was populated on the parent document. I'm very new to MongoDB and having trouble figuring out how to $set the field to match. Here's a sample of the data:
{
"_id": "5f11d4c48663f32e940696ed",
"Widgets":[{
"WidgetId":663,
"Name":"Super Widget 2.0",
"Created":null,
"LastUpdated":null
}],
"Status":3,
"LastUpdated":null,
"Created": "2018-11-09T18:22:16.000Z"
}
My knowledge of MongoDB is pretty limited but here's the basic aggregation I have created for part of the pipeline and where I'm struggling:
db.sample.aggregate(
[
{
"$match" : {
"Donors.$.Created" : {
"$exists" : true
}
}
},
{
"$match" : {
"Widgets.$.Created" : null
}
},
{
"$set" : {
"Widgets.$.Created" : "Created" // <- This is where I can't figure out how to define the reference to the parent "Created" field
}
}
]
);
The desired output would be:
{
"_id": "5f11d4c48663f32e940696ed",
"Widgets":[{
"WidgetId":663,
"Name":"Super Widget 2.0",
"Created":"2018-11-09T18:22:16.000Z",
"LastUpdated":null
}],
"Status":3,
"LastUpdated":null,
"Created": "2018-11-09T18:22:16.000Z"
}
Thanks for any assistance.
Are you attempting to add the Created field to sub documents on query/aggregation? Or are you attempting to update/save the Created field on the subdocuments?
The $ positional operator is an update operator, to be used with updateMany or updateOne, not with aggregate.
https://docs.mongodb.com/manual/reference/operator/query-array/
https://docs.mongodb.com/manual/reference/operator/update-array/
If you just want to add the parent's Created field to all subdocuments on query/aggregation, this is all you have to do: https://mongoplayground.net/p/yHDHULCSTIz
db.collection.aggregate([
{
"$addFields": {
"Widgets.Created": "$Created"
}
}
])
If you're attempting to save the parent's Created field to all subdocuments:
db.sample.updateMany({"Widgets.Created" : null}, [{$set: {"Widgets.Created" : "$Created"}}])
Note: This matches any doc that has a subdocument with a null Created field and updates all the subdocuments.
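If only the null entries should be filled in, here is a sketch (going beyond the answer above) using a pipeline update with $map and $cond, available from MongoDB 4.2, which rewrites just the matching elements and leaves the rest untouched:
db.sample.updateMany(
  { "Widgets.Created": null },
  [{
    $set: {
      Widgets: {
        $map: {
          input: "$Widgets",
          as: "w",
          in: {
            $cond: [
              { $eq: ["$$w.Created", null] },
              // copy the parent's Created onto this element only
              { $mergeObjects: ["$$w", { Created: "$Created" }] },
              "$$w"
            ]
          }
        }
      }
    }
  }]
)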
I have a record set like below:
I need to write a query where, for each data type of every parent, I show the record with the highest date.
So far I am able to create two groups, one on parent id and the other on data type, but I am unable to understand how to get the record with the max date.
Below is my query:
db.getCollection('Maintenance').aggregate([
  { $group: { _id: { parentName: "$ParentID", maintainancename: "$DataType" } } },
  {
    $group: {
      _id: "$_id.parentName",
      maintainancename: {
        $push: {
          term: "$_id.DataType"
        }
      }
    }
  }
])
You don't have to $group twice; try the below aggregation query:
db.collection.aggregate([
/** group on two fields `ParentID` & `Datatype`,
* which will leave docs with unique `ParentID + Datatype`
* & use `$max` to get max value on `Date` field in unique set of docs */
{
$group: {
_id: {
parentName: "$ParentID",
maintainancename: "$Datatype"
},
"Date": { $max: "$Date" }
}
}
])
Test : mongoplayground
Note: After the group stage you can use the $project or $addFields stages to transform the fields the way you want.
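For example, a sketch of such a follow-up stage (field names assumed from the query above) that flattens the compound _id back into plain top-level fields:
db.collection.aggregate([
  {
    $group: {
      _id: { parentName: "$ParentID", maintainancename: "$Datatype" },
      "Date": { $max: "$Date" }
    }
  },
  {
    // reshape the grouped _id into top-level fields
    $project: {
      _id: 0,
      parentName: "$_id.parentName",
      maintainancename: "$_id.maintainancename",
      Date: 1
    }
  }
])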
I have designed a JasperReports report with a MongoDB data source. See my MongoDB pipeline query below:
{
runCommand:{
aggregate:"my_collection",
pipeline:[
{$match :
{$and : [
{ tenant_id: 1},
{ $or: [ { $P{Location}: -1 }, { location : $P{Location}} ] },
]}},
{
$project : {
product_attribute_value : 1,
inventory_on_hand : 1 ,
unit_cost : 1
}
},
{
$group : {
_id : "$product_attribute_value",
itemsCount: { $sum : 1 },
inventoryValue:{$multiply : ["$inventory_on_hand", "$unit_cost"] },
}
}
]
}
}
I have a location parameter with a corresponding input control as a dropdown. I want to group data based on changes in the location dropdown. The location dropdown has ALL as the default value. When the user selects ALL, the parameter value will be -1, and I want to get records for all locations. But $P{Location} is not a valid Mongo field name, so it is not working.
I know another way: add a $project stage before $match, define a literal with value -1, and use that literal in the $match stage. But if I put $project before the $match stage, it will fetch all the data and only then apply the match, which will degrade performance. I don't want to do that. I want to apply filters first and then pass the filtered documents to the rest of the pipeline.
Please suggest an alternative.
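One possible alternative (a sketch, not a verified solution for this report): compare the parameter as a value rather than using it as a field name, by putting the location check inside $expr in the first $match stage. When the dropdown is ALL, $P{Location} substitutes to -1 and the first branch is always true, so no extra $project stage is needed ($expr requires MongoDB 3.6+):
{ $match: {
  $expr: {
    $and: [
      { $eq: ["$tenant_id", 1] },
      { $or: [
        // ALL selected: -1 == -1, so every location passes
        { $eq: [$P{Location}, -1] },
        { $eq: ["$location", $P{Location}] }
      ]}
    ]
  }
}}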
I am new to MongoDB and am now using aggregate.
My problem is that I have 2 columns, say column1 and column2, and I want to match by either column1 or column2 inside $match. Is it possible? I am getting stuck; please help.
db Structure:
{
"_id" : ObjectId("55794aa1be1f8fe822da139d"),
"transactionType" : "1",
"_store" : {
"storeLocation" : "Pitampura",
"storeName" : "Godown",
"_id" : "5576b5c5e414d90c03d1e330"
}
}
I am trying to filter according to transactionType and storeName. I am sending these 2 params to the API, but when storeName is sent as an empty string I want to filter only by transactionType, otherwise by both parameters. I don't want to use if-else.
Well, of course it can suit your query. You just handle it as follows:
// Initial data
var request = { "storeName": "", "transactionType": "1" };
// Transform to array
var conditions = Object.keys(request).map(function(key) {
var obj = {},
newKey = "";
if ( key == "storeName" ) {
newKey = "_store." + key;
} else {
newKey = key;
}
obj[newKey] = request[key];
return obj;
});
db.collection.find({ "$or": conditions });
Where the whole structure after transformation breaks down to:
db.collection.find({
"$or": [
{ "_store.storeName": "" },
{ "transactionType": "1" }
]
})
This of course matches the document, since the condition on "transactionType" is met.
That is what $or does: it requires that at least one of the conditions in the query arguments matches data in the document.
The other thing here is that since the data presented in the request is not a "direct match" for the data in the document, manipulation is done on the "key name" to use the correct "dot notation" form for accessing that element.
These are just basic queries, so the same rules apply to aggregation $match, which is just a query element itself:
db.collection.aggregate([
// Possibly other pipeline before
// Your match phase, which probably should be first
{ "$match": {
"$or": [
{ "_store.storeName": "" },
{ "transactionType": "1" }
]
}},
// Other aggregation pipeline stages
])