Unusual non-1 or -1 values for MongoDB indexes

I tried to create index using
db.collection_name.createIndex({"field_name":1})
Then when I'm calling getIndexes() it gives me following results
{
"v" : 2,
"key" : {
"field_name" : 1.0
},
"name" : "field_name_1",
"ns" : "dbname.collection_name"
}
So I wonder why it is the floating point "field_name" : 1.0 now. Is it bad? Should I even worry about it? Is there any way to make it exactly 1?
And out of curiosity: I've noticed I can even successfully call it that way:
db.collection_name.createIndex({"another_field_name":12345})
without it producing any errors. I wonder what's happening in this case.

Your question is actually a couple of questions, but the first does have a brief answer
Q: "Why am I getting a floating point?"
A: Because you are using Robomongo, and its interface simply displays the supplied Number type that way. The mongo shell actually displays this differently.
And the second:
Q: "Why can I use 12345 as a value instead of just 1 or -1?"
A: Because it's still actually numeric and valid. All MongoDB cares about here is "positive" or "negative". So where the value is "positive", a query that used the index would sort "ascending" by default. But you would still need to supply 1 or -1 to an explicit .sort(), since those are the only values that are valid there.
To demonstrate the latter case, insert some data into your collection:
db.collection_name.insertMany(
[5,1,3].map( v => ({ another_field_name: v }) )
)
And create your index:
db.collection_name.createIndex({ "another_field_name": 12345 })
If you issue a range query, the "ascending" order is applied because of the "positive" value:
db.collection_name.find({ "another_field_name": { "$gt": 0 } },{ "_id": 0 })
{ "another_field_name" : 1.0 }
{ "another_field_name" : 3.0 }
{ "another_field_name" : 5.0 }
This shows the order of the index being applied even though the values were inserted in a different order, so the index is clearly being used here.
If you tried to explicitly .sort() with any value other than 1 or -1 on this type of index, that would produce an error. But of course either 1 or -1 is accepted, resulting in an "ascending" or "descending" sort respectively, as MongoDB will happily reverse the order of traversal of the index.
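To illustrate (a minimal shell sketch, not part of the original output above), an explicit .sort() on this index is still free to go either way:
// Explicit descending sort; MongoDB simply traverses the "positive" index in reverse
db.collection_name.find({ "another_field_name": { "$gt": 0 } },{ "_id": 0 }).sort({ "another_field_name": -1 })
// Whereas a sort value other than 1 or -1 should be rejected by the server, e.g.:
// db.collection_name.find().sort({ "another_field_name": 12345 })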
If you removed the index and created one using a "negative" value:
db.collection_name.dropIndexes();
db.collection_name.createIndex({ "another_field_name": -54321 });
And then issued the same query:
db.collection_name.find({ "another_field_name": { "$gt": 0 } },{ "_id": 0 })
{ "another_field_name" : 5.0 }
{ "another_field_name" : 3.0 }
{ "another_field_name" : 1.0 }
Then the "descending" order is applied because that is essentially what you told it to do in default handling.
Is this good or bad overall? From a storage point of view it really does not matter, as no matter the actual value presented a BSON Double is still a BSON Double.
You could alternatively use NumberInt for a specific 32-bit value as opposed to the default 64-bit value, as specified in BSON Types, but again whether the value is 1 or 65,000, or in reverse -1 or -65,000, does not change the allocated storage or the basic handling of whether it is "positive" or "negative".
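As a sketch of that alternative (the field name here is just a placeholder), the resulting index behaves the same as one created with a plain 1:
db.collection_name.createIndex({ "some_other_field": NumberInt(1) })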
For general readability and consistency with the arguments to .sort(), then as a "matter of opinion", using 1 and -1 is more consistently understood for its intended purpose.
It is actually the "preferred" implementation per the specification, and is somewhat echoed in the documentation (though not too prominently):
Some drivers may specify indexes, using NumberLong(1) rather than 1 as the specification. This does not have any effect on the resulting index.

Related

MongoDB sorting returns null instead of data

My MongoDB dataset is like this:
{
"_id" : ObjectId("5a27cc4783800a0b284c7f62"),
"action" : "1",
"silent" : "0",
"createdate" : ISODate("2017-12-06T10:53:59.664Z"),
"__v" : 0
}
Now I have to find the data whose action value is 1 and silent value is 0. One more thing: all the data should be returned in descending order.
My MongoDB query is:
db.collection.find({'action': 1, 'silent': 0}).sort({createdate: -1}).exec(function(err, post) {
console.log(post.length);
});
Earlier it worked fine for me, but now I have 121000 entries in this collection and it returns null.
I know there is some confusion on .sort()
If I remove the sort query then everything is fine. Example:
db.collection.find({'action': 1, 'silent': 0}).exec(function(err, post) {
console.log(post.length); // Now it returns data, but not in descending order
});
MongoDB limits the amount of data it will attempt to sort in memory without an index.
This is because Mongo has to sort the data in memory or on disk, both of which can be expensive operations, particularly for queries run frequently.
In most cases, this can be alleviated by creating indexes on the fields you sort on.
You can create the index with:
db.myColl.createIndex( { createdate: 1 })
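As a rough check (a sketch, not part of the original answer), once the index exists the query plan should show an index scan (IXSCAN) feeding the sort rather than an in-memory SORT stage:
db.collection.find({ 'action': 1, 'silent': 0 }).sort({ createdate: -1 }).explain("executionStats")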
Thanks!

Long number is not updating properly in MongoDB

I tried to update a field in a document which was a long integer, but it was updated to the value '14818435007969200' instead of '14818435007969199'.
db.getCollection('title').updateMany({},
{$set:{'skillId':[NumberLong(14818435007969199)]}})
db.getCollection('title').find({})
{
"_id" : ObjectId("5853351c0274072315da2426"),
"skillId" : [
NumberLong(14818435007969200)
]
}
Is there any solution? I am using Robomongo 0.9.0.
The mongo shell treats all numbers as floating point values, so when using the NumberLong() wrapper, pass the long value as a string or risk loss of precision or conversion mismatches.
This should work as expected.
db.getCollection('title').updateMany({},
{$set:{'skillId':[NumberLong("14818435007969199")]}})
Just to demonstrate with this example: 14818435007969199 needs 54 bits of precision, but a double only has a 53-bit significand, so the shell rounds the bare literal to the nearest representable value, which in binary is 110100101001010100100111000010110001101111011110110000 and converts back to base 10 as 14818435007969200.
You can read up on floating point arithmetic for more details.
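A quick shell sketch of the difference (the exact display may vary by shell version):
// The bare literal is parsed as a double before NumberLong() ever sees it, so the low bit is already lost
NumberLong(14818435007969199)    // likely yields NumberLong("14818435007969200")
// The string form preserves the exact value
NumberLong("14818435007969199")  // NumberLong("14818435007969199")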
Here is an example with a where condition in the query:
db.CustomerRatibs.update(
{ custRatibId:'8b19bfdbac7b468b9c3edafc37ad5409' },
{ $set:
{
uAt : NumberLong(1536581726000)
}
},
{
multi:false
}
)

Storing a query in Mongo

This is the case: a webshop in which I want to configure which items should be listed in the shop based on a set of parameters.
I want this to be configurable, because that allows me to experiment with different parameters and also change their values easily.
I have a Product collection that I want to query based on multiple parameters.
A couple of these are found here:
within product:
"delivery" : {
"maximum_delivery_days" : 30,
"average_delivery_days" : 10,
"source" : 1,
"filling_rate" : 85,
"stock" : 0
}
but also other parameters exist.
An example of such query to decide whether or not to include a product could be:
"$or" : [
{
"delivery.stock" : 1
},
{
"$or" : [
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 60
}
},
{
"delivery.filling_rate" : {
"$gt" : 90
}
}
]
},
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 40
}
},
{
"delivery.filling_rate" : {
"$gt" : 80
}
}
]
},
{
"$and" : [
{
"delivery.delivery_days" : {
"$lt" : 25
}
},
{
"delivery.filling_rate" : {
"$gt" : 70
}
}
]
}
]
}
]
Now to make this configurable, I need to be able to handle boolean logic, parameters and values.
So, I got the idea, since such a query itself is JSON, to store it in Mongo and have my Java app retrieve it.
The next thing is using it in a filter (e.g. find, or whatever) and working on the corresponding selection of products.
The advantage of this approach is that I can actually analyse the data and the effectiveness of the query outside of my program.
I would store it by name in the database. E.g.
{
"name": "query1",
"query": { the thing printed above starting with "$or"... }
}
using:
db.queries.insert({
"name" : "query1",
"query": { the thing printed above starting with "$or"... }
})
Which results in:
2016-03-27T14:43:37.265+0200 E QUERY Error: field names cannot start with $ [$or]
at Error (<anonymous>)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:161:19)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:165:18)
at insert (src/mongo/shell/bulk_api.js:646:20)
at DBCollection.insert (src/mongo/shell/collection.js:243:18)
at (shell):1:12 at src/mongo/shell/collection.js:161
But I CAN store it using Robomongo, just not always. Obviously I am doing something wrong, but I have NO IDEA what it is.
If it fails, and I create a brand new collection and try again, it succeeds. Weird stuff that goes beyond what I can comprehend.
But when I try updating values in the "query", changes are not going through. Never. Not even sometimes.
I can however create a new object and discard the previous one. So, the workaround is there.
db.queries.update(
{"name": "query1"},
{"$set": {
... update goes here ...
}
}
)
doing this results in:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 52,
"errmsg" : "The dollar ($) prefixed field '$or' in 'action.$or' is not valid for storage."
}
})
seems pretty close to the other message above.
Needless to say, I am pretty clueless about what is going on here, so I hope some of the wizards here are able to shed some light on the matter.
I think the error message contains the important info you need to consider:
QUERY Error: field names cannot start with $
Since you are trying to store a query (or part of one) in a document, you'll end up with attribute names that contain mongo operator keywords (such as $or, $ne, $gt). The mongo documentation actually references this exact scenario (emphasis added):
Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $)...
I wouldn't trust 3rd party applications such as Robomongo in these instances. I suggest debugging/testing this issue directly in the mongo shell.
My suggestion would be to store an escaped version of the query in your document so as not to interfere with reserved operator keywords. You can use the available JSON.stringify(my_obj); to encode your partial query into a string and then parse/decode it when you retrieve it later on: JSON.parse(escaped_query_string_from_db)
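A minimal shell sketch of that idea (the products collection at the end is just a placeholder):
// Store the query as a plain string so no stored field name starts with $
var q = { "$or": [ { "delivery.stock": 1 } ] }   // trimmed-down example query
db.queries.insert({ "name": "query1", "query": JSON.stringify(q) })
// Later, read it back and turn it into a query object again before using it
var doc = db.queries.findOne({ "name": "query1" })
var restored = JSON.parse(doc.query)
db.products.find(restored)   // hypothetical products collection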
Your approach of storing the query as a JSON object in MongoDB is not viable.
You could potentially store your query logic and fields in MongoDB, but you have to have an external app build the query with the proper MongoDB syntax.
MongoDB queries contain operators, and some of those have special characters in them.
There are rules for MongoDB field names, and these rules do not allow such special characters.
Look here: https://docs.mongodb.org/manual/reference/limits/#Restrictions-on-Field-Names
The probable reason you can sometimes successfully create the doc using Robomongo is because Robomongo is transforming your query into a string and properly escaping the special characters as it sends it to MongoDB.
This also explains why your attempt to update them never works. You tried to create a document, but instead created something that is a string object, so your update conditions are probably not retrieving any docs.
I see two problems with your approach.
In the following query:
db.queries.insert({
"name" : "query1",
"query": { the thing printed above starting with "$or"... }
})
valid JSON expects key/value pairs. Here in "query" you are storing an object without a key. You have two options: either store the query as text or create another key inside the curly braces.
The second problem is that you are storing query values without wrapping them in quotes. All string values must be wrapped in quotes.
So your final document should appear as:
db.queries.insert({
"name" : "query1",
"query": 'the thing printed above starting with "$or"... '
})
Now try it; it should work.
Obviously my attempt to store a query in mongo the way I did was foolish, as became clear from the answers from both #bigdatakid and #lix. So what I finally did was this: I altered the naming of the fields to comply with the mongo requirements.
E.g. instead of $or I used _$or, etc., and instead of using a . inside a name I used a #, both of which I am replacing in my Java code.
This way I can still easily try and test the queries outside of my program. In my Java program I just change the names and use the query. Using just 2 lines of code. It simply works now. Thanks guys for the suggestions you made.
// Restore the reserved characters before handing the query to the driver
String documentAsString = query.toJson().replaceAll("_\\$", "\\$").replaceAll("#", ".");
Object q = JSON.parse(documentAsString);

Block a field change in MongoDB

I have a collection in MongoDB on which I perform an increment; the field was initially defined as an Integer, but I find that after the increment it was converted to a double.
But then I make an update of the document and see that it changes to a Long.
Is there any way to block these changes in Mongo?
Thanks in advance
Since MongoDB doesn't have a fixed schema per collection, there's no way to prevent such changes on the database side. Make sure that you use the same data type for the field everywhere, including its update operations. The C# driver is pretty smart about this.
Be careful when working with the shell, it can be irritating. By default, the mongo shell will treat every number as a double, e.g.:
> db.Inc.find().pretty();
{ "_id" : 1, "Number" : 1000023272226647000 }
// this number is waaayyy larger than the largest 32 bit int, but there's no
// NumberLong here. So it must be double.
> db.Inc.update({}, {$inc: {"Number" : 1 }});
> db.Inc.find().pretty();
{ "_id" : 1, "Number" : 1000023272226647000 }
// Yikes, the $inc doesn't work anymore because of precision loss
Let's use NumberLong:
> db.Inc.insert({"Number" : NumberLong("1000023272226647000")});
> db.Inc.update({}, {$inc: {"Number" : 1}});
> db.Inc.find();
{ "Number" : 1000023272226647000, "_id" : 1 }
// Yikes! type conversion changed to double again! Also note
// that the _id field moved to the end
Let's use NumberLong also in $inc:
> db.Inc.insert({"Number" : NumberLong("1000023272226647000")});
> db.Inc.update({}, {$inc: {"Number" : NumberLong("1")}});
> db.Inc.find();
{ "_id" : 1, "Number" : NumberLong("1000023272226647001") }
// This actually worked
In C#, both of the following updates work, Number remains a long:
class Counter { public long Number {get;set;} public ObjectId Id {get;set;} }
var collection = db.GetCollection("Counter");
collection.Insert(new Counter { Number = 1234 });
collection.Update(Query.Null, Update<Counter>.Inc(p => p.Number, 1)); // works
collection.Update(Query.Null, Update.Inc("Number", 1)); // works too
MongoDB is schema-less. Schemalessness provides for easier changes in your data structure, but at the cost of the database not enforcing things like type constraints. You need to be disciplined in your application code to ensure that things are persisted in the way you want them to be.
If you need to ensure that the data is always of type Integer then it's recommended to have your application access MongoDB through a data access layer within the application. The data access layer can enforce type constraints (as well as any other constraints you want to put on your objects).
Short answer: There is no way to enforce this in MongoDB.

Can we apply an index to match a certain value in MongoDB

I have a collection named users as shown below.
db.users.find().pretty()
{
"_id" : ObjectId("512efc206074b0e4bbdce792"),
"login_id" : "dutchuser",
"isBroker" : false
}
I want to apply an index to this users collection on the login_id and isBroker fields.
db.users.ensureIndex( { "login_id": 1, "isBroker": 1 }, { unique: false } )
My concern is that most documents have an isBroker value of false.
So is there any possibility that I can apply an index in that way?
You cannot conditionally apply a filter to an index in MongoDB. While you could potentially restructure your data or introduce additional, potentially duplicate fields in your schema, I'm not convinced it's a reasonable "optimization."
Use db.stats() to actually measure the size of the database and db.{collectionname}.totalIndexSize() to see what the impact of having the index you proposed really is.
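For example (a quick sketch in the shell):
db.stats()
db.users.totalIndexSize()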
By using this index:
db.users.ensureIndex( { "login_id": 1, "isBroker": 1 }, { unique: false } )
Only queries that involve both login_id and isBroker, or just login_id, can use this index. Depending on the types of queries that you run, you may also run into this currently open issue, which might make a simple grouping/sorting on isBroker inefficient (or if at some point it becomes broker_type, for example).
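A short sketch of which queries can take advantage of that compound index:
// These can use the { login_id: 1, isBroker: 1 } index (full key, or its login_id prefix)
db.users.find({ "login_id": "dutchuser", "isBroker": false })
db.users.find({ "login_id": "dutchuser" })
// This cannot use it efficiently, since isBroker alone is not a prefix of the compound key
db.users.find({ "isBroker": false })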