In MongoDB, a field of a document can be an array (possibly empty), a subdocument (possibly empty), null, or missing entirely. I need a condition for find() that will match a non-empty subdocument, and only that.
So:
fieldName: {} - no match.
fieldName: [ { id:0 } ] - no match.
fieldName: [ {} ] - no match.
No field called fieldName - no match.
fieldName: null - no match.
fieldName: { id: 0 } - match.
I have no rights to modify anything; I have to work with the database as is. How do I formulate that find()?
Use the following query. In the BSON comparison order, any non-empty subdocument sorts after the empty document {}, which is exactly what $gt: {} tests:
db.test.find({
    "fieldName": { "$gt": {} },
    "fieldName.0": { "$exists": false }
})
For example, given the test cases above, insert the following documents:
db.test.insert([
    { _id: 1, fieldName: {} },
    { _id: 2, fieldName: [ { id: 0 } ] },
    { _id: 3, fieldName: [ {} ] },
    { _id: 4 },
    { _id: 5, fieldName: null },
    { _id: 6, fieldName: { id: 0 } }
])
The above query returns only the document with _id: 6:
/* 0 */
{
    "_id" : 6,
    "fieldName" : {
        "id" : 0
    }
}
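The fieldName.0 check is what excludes arrays: comparison operators are applied to each element of an array field, so without it the query would also match _id: 2. A quick check against the sample data above:

db.test.find({ "fieldName": { "$gt": {} } })
// matches _id: 2 (its element { id: 0 } sorts after {}) as well as _id: 6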
You can use the $type and $exists operators. The first checks the type of id, and the latter checks that fieldName is not an array, both using so-called dot notation.
db.collection.find({
    "fieldName.id": { $type: 1 },
    "fieldName.0": { $exists: false }
})
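Note that $type: 1 is the numeric alias for double, which happens to be the type of the sample id: 0; this approach also assumes the subdocument always contains an id field. On MongoDB 3.2+ the string alias is clearer (a sketch under the same assumption):

db.collection.find({
    "fieldName.id": { $type: "number" },
    "fieldName.0": { $exists: false }
})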
Related
I have an array of documents like this:
[
    {
        _id: ObjectId("63845afd1f4ec22ab0d11db9"),
        ticker: 'ABCD',
        aggregates: [
            { date: '2022-05-20' },
            { date: '2022-05-20' },
            { date: '2022-05-20' }
        ]
    }
]
How can I create a unique index on aggregates.date so that a user cannot push a duplicate date into the aggregates array?
My existing indexes are as follows:
db.aggregates_1_day.getIndexes()
[
    { v: 2, key: { _id: 1 }, name: '_id_' },
    { v: 2, key: { ticker: 1 }, name: 'ticker_1', unique: true },
    {
        v: 2,
        key: { 'aggregates.date': 1 },
        name: 'aggregates.date_1',
        unique: true
    }
]
A unique index ensures there are no duplicates across documents, but it does not enforce uniqueness among the objects in an array within the same document. You have a few other options here, though:
1. Do not use $push; use $addToSet instead, which only adds unique objects to the aggregates array of objects:
db.collection.update({},
    {
        "$addToSet": {
            "aggregates": {
                date: "2022-05-20"
            }
        }
    })
Note: $addToSet only ensures that no duplicate items are added to the set; it does not affect existing duplicate elements.
2. You can configure schema validation:
> db.runCommand({ collMod: "aggregates_1_day", validator: { $expr: { $eq: [ { $size: "$aggregates.date" }, { $size: { $setUnion: "$aggregates.date" } } ] } } })
> db.aggregates_1_day.insert({aggregates:[{date:1}]}) /* success */
> db.aggregates_1_day.update({},{ '$push' : { 'aggregates':{date:1}}})
WriteResult({
    "nMatched" : 0,
    "nUpserted" : 0,
    "nModified" : 0,
    "writeError" : {
        "code" : 121,
        "errmsg" : "Document failed validation"
    }
})
More details are in the MongoDB ticket.
Note: with this approach you will need to clean up existing duplicates in advance; otherwise the validation will not allow you to $push any new objects at all.
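To clean them up, a minimal sketch using the same $setUnion deduplication trick as the validator, written as a pipeline update (assumes MongoDB 4.2+; note $setUnion does not preserve element order):

db.aggregates_1_day.updateMany({}, [
    // $setUnion against an empty array removes exact duplicate objects
    { $set: { aggregates: { $setUnion: ["$aggregates", []] } } }
])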
If you don't like it, you can remove the validation with:
db.runCommand({
    collMod: "aggregates_1_day",
    validator: {},
    validationLevel: "off"
})
3. You can use an update with an aggregation pipeline (MongoDB 4.2+) as follows:
db.collection.update({},
    [
        {
            $set: {
                aggregates: {
                    $cond: [
                        { $in: ["2022-02-02", "$aggregates.date"] },
                        "$aggregates",
                        { $concatArrays: ["$aggregates", [ { date: "2022-02-02" } ]] }
                    ]
                }
            }
        }
    ])
Explained: the object is added to the array only if it does not already exist in the array of objects.
I'm using the Bucket Pattern to limit each document's array to maxBucketSize elements. Once a document's array is full (bucketSize = maxBucketSize), the next update will create a new document with a new array to hold more elements, using upsert.
How would you copy the static fields (see below recordType and recordDesc) from the last full bucket with a single call?
Sample document:
// records collection
{
    recordId: 12345, // non-unique index
    recordType: "someType",
    recordDesc: "Some record description.",
    elements: [ { a: 1, b: 2 }, { a: 3, b: 4 } ],
    bucketSize: 2
}
Bucket implementation (copies only queried fields):
const maxBucketSize = 2;
db.collection('records').updateOne(
    {
        recordId: 12345,
        bucketSize: { $lt: maxBucketSize } // false once the bucket is full
    },
    {
        $push: { elements: { a: 5, b: 6 } },
        $inc: { bucketSize: 1 },
        $setOnInsert: {
            // Should be executed because the bucketSize condition is false,
            // but the fields `recordType` and `recordDesc` are inaccessible:
            // no document matched, and they were not included in the query
        }
    },
    { upsert: true }
)
Possible solution
To make this work, I can always make two calls: findOne() to get the static values, then updateOne() where I set those fields with $setOnInsert. But that's inefficient.
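For reference, a minimal sketch of that two-call workaround (Node.js driver assumed; error handling omitted):

const last = await db.collection('records').findOne(
    { recordId: 12345 },
    { sort: { $natural: -1 }, projection: { recordType: 1, recordDesc: 1 } }
);
await db.collection('records').updateOne(
    { recordId: 12345, bucketSize: { $lt: maxBucketSize } },
    {
        $push: { elements: { a: 5, b: 6 } },
        $inc: { bucketSize: 1 },
        // Copy the static fields read by the first call
        $setOnInsert: { recordType: last.recordType, recordDesc: last.recordDesc }
    },
    { upsert: true }
);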
How can I modify this into one call with an aggregation? That is: examine the last added document matching recordId (indexed), evaluate whether its array is full, and add a new document if so.
Attempt:
// Evaluate last document added
db.collection('records').findOneAndUpdate(
    { recordId: 12345 },
    {
        $push: { elements: {
            $cond: {
                if: { $lt: [ '$bucketSize', maxBucketSize ] },
                then: { a: 5, b: 6 }, else: null
            }
        }},
        $inc: { bucketSize: {
            $cond: {
                if: { $lt: [ '$bucketSize', maxBucketSize ] },
                then: 1, else: 0
            }
        }},
        $setOnInsert: {
            recordType: '$recordType',
            recordDesc: '$recordDesc'
        }
    },
    {
        sort: { $natural: -1 }, // last document
        upsert: true // bucket overflow
    }
)
This comes back with:
MongoError: Cannot increment with non-numeric argument: { bucketSize: { $cond: { if: { $lt: [ "$bucketSize", 2 ] }, then: 1, else: 0 } }}
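The error occurs because classic update operators take literal values; aggregation expressions like $cond are only evaluated when the update itself is a pipeline (MongoDB 4.2+). A sketch of the conditional half as a pipeline update follows; note that pipeline updates have no $push or $setOnInsert, so this alone does not solve copying the static fields on overflow:

db.collection('records').findOneAndUpdate(
    { recordId: 12345 },
    [{
        $set: {
            // Append the element only while the bucket has room
            elements: {
                $cond: [
                    { $lt: ['$bucketSize', maxBucketSize] },
                    { $concatArrays: ['$elements', [ { a: 5, b: 6 } ]] },
                    '$elements'
                ]
            },
            bucketSize: {
                $cond: [
                    { $lt: ['$bucketSize', maxBucketSize] },
                    { $add: ['$bucketSize', 1] },
                    '$bucketSize'
                ]
            }
        }
    }],
    { sort: { $natural: -1 } }
)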
I am unable to add the following object:
[
    { 'option1': 'opt1', 'option2': 'opt2', 'option3': 'opt3' }
]
as a subfield to:
const question_list=await Questions.find({ $and: [{categoryid:categoryId},{ isDeleted: false }, { status: 0 }] }, { name: 1 });
question_list = [
    { "_id": "5eb167fb222a6e11fc6fe579", "name": "q1" },
    { "_id": "5eb1680abb913f2810774c2a", "name": "q2" },
    { "_id": "5eb16b5686068831f07c65c3", "name": "q5" }
]
I want the final object to be:
[{"_id":"5eb167fb222a6e11fc6fe579","name":"q1","options":[
{ 'option1':'opt1','option2':'opt2','option3':'opt3'},
]},{"_id":"5eb1680abb913f2810774c2a","name":"q2","options":[
{ 'option1':'opt1','option2':'opt2','option3':'opt3'},
]},{"_id":"5eb16b5686068831f07c65c3","name":"q5","options":[
{ 'option1':'opt1','option2':'opt2','option3':'opt3'},
]}]
What is the best possible solution?
You need to use the aggregation pipeline to do this instead of .find(), since projection in .find() can only accept $elemMatch, $slice, and $ on existing fields (see project-fields-from-query-results). To add a new field with new data to documents, use $project in the aggregation framework.
const question_list = await Questions.aggregate([
    {
        $match: {
            $and: [{ categoryid: categoryId }, { isDeleted: false }, { status: 0 }]
        }
    },
    {
        $project: {
            // _id is kept by default, matching the desired output
            name: 1,
            options: [{ option1: "opt1", option2: "opt2", option3: "opt3" }]
        }
    }
]);
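If you wanted the whole document rather than just name, a variant sketch with $addFields, which keeps all existing fields while adding options:

const question_list = await Questions.aggregate([
    { $match: { categoryid: categoryId, isDeleted: false, status: 0 } },
    // $addFields appends the literal options array to every matched document
    { $addFields: { options: [{ option1: "opt1", option2: "opt2", option3: "opt3" }] } }
]);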
In Mongo I have documents that follow the pattern below:
{
    name: "test",
    codes: [
        [
            { code: "abc", value: 123 },
            { code: "def", value: 456 }
        ],
        [
            { code: "ghi", value: 789 },
            { code: "jkl", value: 12 }
        ]
    ]
}
I'm using an aggregate query (because of joins) and in a $project block I need to return the "name" and the value of the object that has a code of "def" if it exists and an empty string if it doesn't.
I can't simply $unwind codes and $match because the "def" code is not guaranteed to be there.
$filter seems like the right approach, as $elemMatch doesn't work here, but it's not obvious to me how to do this on a nested array of arrays.
You can try the query below; instead of $unwind and $filter, it can give you the required result with fewer documents to operate on:
db.collection.aggregate([
    /** Merge all arrays inside the codes array into one array */
    {
        $addFields: {
            codes: {
                $reduce: {
                    input: '$codes',
                    initialValue: [],
                    in: { $concatArrays: ["$$value", "$$this"] }
                }
            }
        }
    },
    /** Project only the needed fields; value will be either the 'def' value or ''.
     * If 'def' exists, we find its index and read the value of that
     * particular object with $arrayElemAt. */
    {
        $project: {
            _id: 0,
            name: 1,
            value: {
                $cond: [
                    { $in: ["def", '$codes.code'] },
                    { $arrayElemAt: ['$codes.value', { $indexOfArray: ["$codes.code", 'def'] }] },
                    ''
                ]
            }
        }
    }
])
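For the sample document above, the nested arrays flatten to four codes, "def" is found at index 1, and the pipeline returns:

{ "name" : "test", "value" : 456 }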
Let's say I have a simple schema:
var testSchema = new mongoose.Schema({
    map: { type: [ mongoose.Schema.Types.Mixed ], default: [] },
    // ...possibly something else
});
Now let's ensure that pairs (_id, map._id) are unique.
testSchema.index({ _id: 1, 'map._id': 1 }, { unique: true });
Quick check using db.test.getIndexes() shows that it was created.
{
    "v" : 1,
    "unique" : true,
    "key" : {
        "_id" : 1,
        "map._id" : 1
    },
    "name" : "_id_1_map._id_1",
    "ns" : "test.test",
    "background" : true,
    "safe" : null
}
The problem is, this index is ignored and I can create multiple subdocuments with the same map._id. I can execute the following query multiple times:
db.maps.update({ _id: ObjectId("some valid id") }, { $push: { map: { '_id': 'asd' } } });
and end up with the following:
{
    "_id": ObjectId("some valid id"),
    "map": [
        { "_id": "asd" },
        { "_id": "asd" },
        { "_id": "asd" }
    ]
}
What's going on here? Why can I push conflicting subdocuments?
Long story short: Mongo doesn't enforce unique indexes within a single document's array, although it allows creating them...
This comes up in Google, so I thought I'd add an alternative to using an index to achieve unique-key-constraint-like functionality in subdocuments; hope that's OK.
I'm not terribly familiar with Mongoose so it's just a mongo console update:
var foo = { _id: 'some value' }; // your new subdocument here
db.yourCollection.update(
    { '_id': 'your query here', 'myArray._id': { '$ne': foo._id } },
    { '$push': { 'myArray': foo } }
)
With documents looking like:
{
    _id: '...',
    myArray: [ { _id: 'your schema here' }, { ... }, ... ]
}
The key is that the query (i.e. the find part) will not match a document to update if your subdocument key already exists, so the $push never happens for duplicates.
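The same guard ports directly to Mongoose (a sketch; the model and variable names here are hypothetical):

// Push foo only if no element of myArray already has its _id
await MyModel.updateOne(
    { _id: docId, 'myArray._id': { $ne: foo._id } },
    { $push: { myArray: foo } }
);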
First, an ObjectId in MongoDB must be 24 characters long. You can turn off _id, rename _id to id or something else, and try $addToSet. Good luck.
CoffeeScript example:
FromSchema = new Schema(
    source: { type: String, trim: true }
    version: String
    { _id: false } # to turn off _id
)

VisitorSchema = new Schema(
    id: { type: String, unique: true, trim: true }
    uids: [ { type: Number, unique: true } ]
    from: [ FromSchema ]
)

# to update
Visitor.findOneAndUpdate(
    { id: idfa }
    { $addToSet: { uids: uid, from: { source: source, version: version } } }
    { upsert: true }
    (err, visitor) ->
        # do stuff
)