Before I explain my use case, I'd like to state that yes, I could change this application so that it stores things differently, or even split the data into two collections for that matter. But that's not my intention; I'd rather know whether this is at all possible within MongoDB (since I am quite new to MongoDB). I can certainly work around this problem if I really need to, but I'm looking for a method to achieve what I want (no, I am not being lazy here, I really want to know a way to do this).
Let's get to the problem then.
I have a document like below:
{
    "_id" : ObjectId("XXXXXXXXXXXXXXXXXXXXX"),
    "userId" : "XXXXXXX",
    "licenses" : [
        {
            "domain" : "domain1.com",
            "addons" : [
                { "slug" : "1" },
                { "slug" : "2" }
            ]
        },
        {
            "domain" : "domain2.com",
            "addons" : [
                { "slug" : "1" }
            ]
        }
    ]
}
My goal is to check whether a specific domain has a specific addon. When I use the query below to count documents with domain: domain2.com and addon slug: 2, the result should be 0; however, it returns 1. I know this is because the query is evaluated document-wide rather than against only the licenses entry that matched domain2.com. So my question is: how do I do a sub-$and (or whatever you'd call it)?
db.test.countDocuments(
    {$and: [
        {"licenses.domain": "domain2.com"},
        {"licenses.addons.slug": "2"}
    ]}
)
Basically I am looking for something like this (the query below obviously doesn't work), which should return 0, not 1:
db.test.countDocuments(
    {$and: [
        {
            "licenses.domain": "domain2.com",
            $and: [
                {"licenses.addons.slug": "2"}
            ]
        }
    ]}
)
I know there are $group and $filter operators, and I have been trying many combinations, to no avail. I am lost at this point; I feel like I am completely missing the logic of Mongo here. However, I believe this must be relatively easy to accomplish with a single query (just not for me, I guess).
I have been trying to find my answer in the official documentation and via Stack Overflow/Google, but I really couldn't find any such use case.
Any help is greatly appreciated! Thanks :)
What you are describing is searching for a document whose array contains a single element that matches multiple criteria.
This is exactly what the $elemMatch operator does.
Try using this for the filter part:
{
    licenses: {
        $elemMatch: {
            domain: "domain2.com",
            "addons.slug": "2"
        }
    }
}
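Against the sample document above, this filter should then count the way you expect (a quick sketch using countDocuments):
db.test.countDocuments({
    licenses: {
        $elemMatch: {
            domain: "domain2.com",
            "addons.slug": "2"
        }
    }
})
// returns 0: domain2.com has no addon with slug "2"

db.test.countDocuments({
    licenses: {
        $elemMatch: {
            domain: "domain1.com",
            "addons.slug": "2"
        }
    }
})
// returns 1: domain1.com does have an addon with slug "2"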
Related
This is the case: a webshop in which I want to configure which items should be listed in the shop based on a set of parameters.
I want this to be configurable, because that allows me to experiment with different parameters and also change their values easily.
I have a Product collection that I want to query based on multiple parameters.
A couple of these are found here:
within product:
"delivery" : {
"maximum_delivery_days" : 30,
"average_delivery_days" : 10,
"source" : 1,
"filling_rate" : 85,
"stock" : 0
}
but also other parameters exist.
An example of such a query to decide whether or not to include a product could be:
"$or" : [
{
"delivery.stock" : 1
},
{
"$or" : [
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 60
}
},
{
"delivery.filling_rate" : {
"$gt" : 90
}
}
]
},
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 40
}
},
{
"delivery.filling_rate" : {
"$gt" : 80
}
}
]
},
{
"$and" : [
{
"delivery.delivery_days" : {
"$lt" : 25
}
},
{
"delivery.filling_rate" : {
"$gt" : 70
}
}
]
}
]
}
]
Now to make this configurable, I need to be able to handle boolean logic, parameters and values.
So I got the idea, since such a query is itself JSON, to store it in Mongo and have my Java app retrieve it.
The next step is to use it as the filter (e.g. in find, or whatever) and work on the corresponding selection of products.
The advantage of this approach is that I can actually analyse the data and the effectiveness of the query outside of my program.
I would store it by name in the database. E.g.
{
    "name": "query1",
    "query": { the thing printed above starting with "$or"... }
}
using:
db.queries.insert({
    "name" : "query1",
    "query": { the thing printed above starting with "$or"... }
})
Which results in:
2016-03-27T14:43:37.265+0200 E QUERY Error: field names cannot start with $ [$or]
at Error (<anonymous>)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:161:19)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:165:18)
at insert (src/mongo/shell/bulk_api.js:646:20)
at DBCollection.insert (src/mongo/shell/collection.js:243:18)
at (shell):1:12 at src/mongo/shell/collection.js:161
But I CAN store it using Robomongo, though not always. Obviously I am doing something wrong, but I have NO IDEA what it is.
If it fails, and I create a brand new collection and try again, it succeeds. Weird stuff that goes beyond what I can comprehend.
But when I try updating values in the "query", changes are not going through. Never. Not even sometimes.
I can however create a new object and discard the previous one. So, the workaround is there.
db.queries.update(
    {"name": "query1"},
    {"$set": {
        ... update goes here ...
    }}
)
doing this results in:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 52,
"errmsg" : "The dollar ($) prefixed field '$or' in 'action.$or' is not valid for storage."
}
})
This seems pretty close to the other message above.
Needless to say, I am pretty clueless about what is going on here, so I hope some of the wizards here are able to shed some light on the matter.
I think the error message contains the important info you need to consider:
QUERY Error: field names cannot start with $
Since you are trying to store a query (or part of one) in a document, you'll end up with attribute names that contain mongo operator keywords (such as $or, $ne, $gt). The mongo documentation actually references this exact scenario - emphasis added
Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $)...
I wouldn't trust 3rd party applications such as Robomongo in these instances. I suggest debugging/testing this issue directly in the mongo shell.
My suggestion would be to store an escaped version of the query in your document as to not interfere with reserved operator keywords. You can use the available JSON.stringify(my_obj); to encode your partial query into a string and then parse/decode it when you choose to retrieve it later on: JSON.parse(escaped_query_string_from_db)
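A minimal sketch of that idea in the shell (the query here is just a placeholder fragment from the question; the products collection name is assumed):
// Encode the query object as a string so that no stored field name starts with "$".
var myQuery = { "$or": [ { "delivery.stock": 1 } ] };   // placeholder query
db.queries.insert({
    "name": "query1",
    "query": JSON.stringify(myQuery)
});

// Later: read it back, decode it, and use it as a filter.
var stored = db.queries.findOne({ "name": "query1" });
db.products.find(JSON.parse(stored.query));   // "products" is a hypothetical collection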
Your approach of storing the query as a JSON object in MongoDB is not viable.
You could potentially store your query logic and fields in MongoDB, but you have to have an external app build the query with the proper MongoDB syntax.
MongoDB queries contain operators, and some of those have special characters in them.
There are rules for MongoDB field names, and these rules do not allow such special characters.
Look here: https://docs.mongodb.org/manual/reference/limits/#Restrictions-on-Field-Names
The probable reason you can sometimes successfully create the doc using Robomongo is because Robomongo is transforming your query into a string and properly escaping the special characters as it sends it to MongoDB.
This also explains why your attempt to update them never works. You tried to create a document, but instead created something that is a string object, so your update conditions are probably not retrieving any docs.
I see two problems with your approach.
In the following query
db.queries.insert({
    "name" : "query1",
    "query": { the thing printed above starting with "$or"... }
})
valid JSON expects key-value pairs; here, in "query", you are storing an object without a key. You have two options: either store the query as text or create another key inside the curly braces.
The second problem is that you are storing query values without wrapping them in quotes. All string values must be wrapped in quotes.
So your final document should appear as:
db.queries.insert({
    "name" : "query1",
    "query": 'the thing printed above starting with "$or"... '
})
Now try it; it should work.
Obviously my attempt to store a query in Mongo the way I did was foolish, as became clear from the answers from both @bigdatakid and @lix. So what I finally did was this: I altered the naming of the fields to comply with the Mongo requirements.
E.g. instead of $or I used _$or etc., and instead of using a . inside a name I used a #, both of which I replace in my Java code.
This way I can still easily try and test the queries outside of my program. In my Java program I just change the names back and use the query, using just two lines of code. It simply works now. Thanks guys for the suggestions you made.
// Restore the reserved characters before using the stored query:
// "_$" becomes "$" and "#" becomes "." (the inverse of the renaming used when storing it).
String documentAsString = query.toJson().replaceAll("_\\$", "\\$").replaceAll("#", ".");
Object q = JSON.parse(documentAsString);
So, my schema design requires that I use an embedded document format. While I recognize that what I'm about to ask could be made easier by redesigning the schema, the current design meets all of the other requirements in place so I'm doing my best to make it work.
Consider the following rudimentary schema:
{
    "_id" : "01234ABCD",
    "type" : "thing",
    "resources" : {
        "foo" : [
            { "herp" : "derp" }
        ],
        "bar" : [
            { "herp" : "derp" },
            { "derp" : "herp" }
        ]
    }
}
Obviously the value that corresponds to the "resources" key is an embedded document. I would like to be able to efficiently calculate the count of keys in that document, and derive results based upon tests on that value. It's important to note that the length and content of the embedded doc is an unknown quantity - hence my reason for wanting to be able to query this meta. Being a complete js idiot, I've managed to cobble together the following query. For example, if I were to look for documents with more than 3 keys in the "resources" document:
db.coll.find({$where: function() {
    var total = 0;
    for (var key in this['resources']) {
        ++total;
        if (total > 3) {
            return true;
        }
    }
    return false;
}})
As I'm pretty new to Mongo and terrible at js, I feel like there may be a smarter way to do this. I'm also very curious to hear opinions on whether or not this goes against the Mongo ethos a bit by not pushing this processing to the client. Any feedback or criticism of this approach and implementation are most welcome.
Thanks for reading.
You can use an aggregate pipeline to assemble metadata about the docs and then filter on them.
db.coll.aggregate([
    {$project: {
        // Compute a total count of the keys in the resources docs
        keys: {$add: [{$size: '$resources.foo'}, {$size: '$resources.bar'}]},
        // Project the original doc
        doc: '$$ROOT'
    }},
    // Only include the docs that have more than 3 keys
    {$match: {keys: {$gt: 3}}}
])
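If the keys under resources are genuinely unknown in advance (as the question states), a variant sketch that counts the keys of the resources subdocument itself, assuming a server new enough to support $objectToArray (MongoDB 3.4.4+):
db.coll.aggregate([
    {$project: {
        // Turn the resources subdocument into an array of {k, v} pairs
        // and take its size, i.e. the number of keys in "resources".
        keys: {$size: {$objectToArray: '$resources'}},
        doc: '$$ROOT'
    }},
    {$match: {keys: {$gt: 3}}}
])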
Can someone please tell me the difference between the following two queries? Both work for me and seem to give correct results, but I am not sure whether there really is any difference or not.
Retrieve all students having scores between 80 and 95
var query1 = { 'grade' : {"$gt":80}, 'grade' : {"$lt":95} };
var query2 = { 'grade' : {"$gt":80,"$lt":95} };
I think that you will find that your first form does not actually work, even if it did for you on a minimal sample. And it will fail for a very good reason. Consider the following documents:
{ "grade" : 90 }
{ "grade" : 96 }
{ "grade" : 80 }
If you issue your first query form you will get this for a result:
{ "grade" : 90 }
{ "grade" : 80 }
The reason is that you cannot have the "same key" twice in a document like this; one will negate the other. In this case the right-hand key takes precedence over the left one and overwrites it. This is common behavior for hash or dictionary structures.
This is why the second form is required and will of course return only the document that matches the conditions that are intended to be specified.
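You can see this collapse directly in the shell; the duplicate key is resolved before MongoDB ever sees the query:
> var query1 = { 'grade' : {"$gt":80}, 'grade' : {"$lt":95} };
> query1
{ "grade" : { "$lt" : 95 } }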
When you have an actual case for using the same field, probably with different conditions, you can use the $and operator. Not the best example, but just to clarify:
db.collection.find({ "$and": [
{ "grade": { "$gt": 80 } },
{ "grade": { "$lt": 95 } },
]})
Its real purpose is to combine different conditions on the same field.
For your case though, use the second form you specified.
You should use the second query for the following reason: your query is actually JSON, and in your first query you are providing a duplicate key (grade).
Duplicate keys are nonetheless permitted by the JSON RFC; however, even Doug Crockford mentioned that he regrets leaving this ambiguity in the spec, because it inevitably leads to all kinds of confusion.
Maybe Mongo parses it correctly right now (or rather, it happens to in your case), but you do not know for how long it will stay this way (and some other JSON parsers will tell you that you have an error).
So the best approach is to treat the first query as bad and use only the second one.
OK, there are a couple of things going on here. I have two collections: test and test1. The documents in both collections have an array field (tags and tags1, respectively) that contains some tags. I need to find the intersection of these tags and also fetch the whole document from collection test1 if even a single tag matches.
> db.test.find();
{
"_id" : ObjectId("5166c19b32d001b79b32c72a"),
"tags" : [
"a",
"b",
"c"
]
}
> db.test1.find();
{
"_id" : ObjectId("5166c1c532d001b79b32c72b"),
"tags1" : [
"a",
"b",
"x",
"y"
]
}
> db.test.find().forEach(function(doc){db.test1.find({tags1:{$in:doc.tags}})});
Surprisingly this doesn't return anything. However when I try it with a single document, it works:
> var doc = db.test.findOne();
> db.test1.find({tags1:{$in:doc.tags}});
{ "_id" : ObjectId("5166c1c532d001b79b32c72b"), "tags1" : [ "a", "b", "x", "y" ] }
But this is part of what I need. I need intersection as well. So I tried this:
> db.test1.find({tags1:{$in:doc.tags}},{"tags1.$":1});
{ "_id" : ObjectId("5166c1c532d001b79b32c72b"), "tags1" : [ "a" ] }
But it returned just "a", whereas both "a" and "b" were in tags1. Does the positional operator return just the first match? Also, using $in won't exactly give me an intersection. How can I get an intersection (it should return "a" and "b") irrespective of which array is compared against the other?
Now, say there were an operator that could do this:
> db.test1.find({tags1:{$intersection:doc.tags}},{"tags1.$":1});
{ "_id" : ObjectId("5166c1c532d001b79b32c72b"), "tags1" : [ "a", "b" ] }
My requirement is, I need the entire tags1 array PLUS this intersection, in the same query like this:
> db.test1.find({tags1:{$intersection:doc.tags}},{"tags1":1, "tags1.$":1});
{ "_id" : ObjectId("5166c1c532d001b79b32c72b"), "tags1": [ "a", "b", "x", "y" ],
"tags1" : [ "a", "b" ] }
But this is invalid JSON. Is renaming a key possible, or is this possible only through the aggregation framework (and across different collections)? I tried the above query with $in, but it behaved as if it totally ignored the "tags1": 1 projection.
PS: I am going to have at least 10k docs in test1 and very few (<10) in test. And this query is in real-time, so I want to avoid mapreduce :)
Thanks for any help!
In newer versions you can use aggregation to accomplish this.
db.test1.aggregate([
    {
        $match: {
            tags1: {
                $in: doc.tags
            }
        }
    },
    {
        $project: {
            tags1: 1,
            intersection: {
                $setIntersection: [doc.tags, "$tags1"]
            }
        }
    }
]);
As you can see, the match portion is exactly the same as your initial find() query. The project portion generates the result fields. In this case, it selects tags1 from the matching documents and also creates intersection from the input and the matching docs.
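For the sample documents above, this should return something along these lines (the order of the elements in the intersection is not guaranteed):
{
    "_id" : ObjectId("5166c1c532d001b79b32c72b"),
    "tags1" : [ "a", "b", "x", "y" ],
    "intersection" : [ "a", "b" ]
}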
Mongo doesn't have any inherent ability to retrieve array intersections. If you really need ad-hoc querying, get the intersection on the client side.
On the other hand, consider using Map-Reduce and storing its output as a collection. You can augment the returned objects in the finalize section to add the intersecting tags. Cron the MR job to run every few seconds. You get the benefit of a permanent collection you can query from the client side.
If you want to have this in real time, you should consider moving away from server-side JavaScript, which runs with only one thread and is therefore quite slow (this is no longer true as of v2.4, http://docs.mongodb.org/manual/core/server-side-javascript/).
The positional operator only returns the first matching value. Without knowing the internal implementation: from a performance point of view it doesn't even make sense to keep looking for further matches once the document has already been evaluated as a match. So I doubt you can go this route.
I don't know if you need the Cartesian product for your search, but I would consider merging the tags of your few test documents into one array and then running a single $in search with it on test1, returning all matching documents (see the sketch below). On your local machine you could then have multiple threads that generate the intersection for each document.
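A minimal sketch of that suggestion in the shell, using the collection names from the question:
// Merge the tags of all (few) documents in "test" into a single array...
var allTags = [];
db.test.find().forEach(function (doc) {
    allTags = allTags.concat(doc.tags);
});
// ...then fetch every document in "test1" that shares at least one tag.
db.test1.find({ tags1: { $in: allTags } }).forEach(printjson);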
Depending on how frequently your test1 and test collections change and how often you perform this query, you might precalculate this information, which would allow you to easily query a field that contains the intersection information.
The document is invalid because you have two fields named tags1.
I need help incrementing the value of all keys in participants without having to know the names of the keys inside it.
> db.conversations.findOne()
{
"_id" : ObjectId("4faf74b238ba278704000000"),
"participants" : {
"4f81eab338ba27c011000001" : NumberLong(2),
"4f78497938ba27bf11000002" : NumberLong(2)
}
}
I've tried with something like
$mongodb->conversations->update(array('_id' => new \MongoId($objectId)), array('$inc' => array('participants' => 1)));
to no avail...
You need to redesign your schema. It is never a good idea to have "random key names". Even though MongoDB is schemaless, it still means you need to have defined key names. You should change your schema to:
{
    "_id" : ObjectId("4faf74b238ba278704000000"),
    "participants" : [
        { _id: "4f81eab338ba27c011000001", count: NumberLong(2) },
        { _id: "4f78497938ba27bf11000002", count: NumberLong(2) }
    ]
}
Sadly, even with that, you can't update all embedded counts in one command. There is currently an open feature request for that: https://jira.mongodb.org/browse/SERVER-1243
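As of MongoDB 3.6, that feature request has been resolved by the all-positional operator $[]; a minimal sketch against the redesigned schema above:
// Increment the count of every element of the participants array in one command
// (requires MongoDB 3.6+ and the array-based schema shown above).
db.conversations.update(
    { "_id": ObjectId("4faf74b238ba278704000000") },
    { "$inc": { "participants.$[].count": 1 } }
)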
On versions without that operator, in order to still update everything, you should (as sketched below):
query the document
update all the counts on the client side
store the document again
In order to prevent race conditions with that, have a look at "Compare and Swap" and following paragraphs.
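A minimal sketch of that query / modify / write-back approach in the shell, assuming the redesigned schema above (no concurrency protection shown):
// 1. Query the document.
var doc = db.conversations.findOne({ "_id": ObjectId("4faf74b238ba278704000000") });
// 2. Update all the counts on the client side.
doc.participants.forEach(function (p) {
    p.count = NumberLong(p.count + 1);
});
// 3. Store the document again.
db.conversations.update({ "_id": doc._id }, doc);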
It is not possible to update all nested elements in one single operation in the current version of MongoDB, so I can advise using a "foreach {}" loop.
Read the related topic: How to Update Multiple Array Elements in mongodb
I hope this feature will be implemented in a future version.