Locking document in MongoDB while allowing queries to get non-locked records - mongodb

In our product we have a core collection which can be accessed from a distributed set of workers.
I want to be able to get a document from the collection without any of the workers accidentally picking up the same document.
The best way I've come up with so far to prevent duplicate records being loaded is the following:
Have two separate collections with the following basic structure:
core: { _id: '{mongoGeneratedId}', locked: false, lockTimeout: 0}
lock: { _id: null, lockTimeout: 0}
(lockTimeout would have a TTL index)
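For concreteness, such a TTL index might be declared like this (a sketch; note that TTL expiry requires the indexed field to hold a BSON date rather than the numeric 0 shown above):
db.lock.ensureIndex({ lockTimeout: 1 }, { expireAfterSeconds: 0 })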
A worker would run a query that looks something like this:
db.core.findOne({
    $or: [
        { locked: false },
        { lockTimeout: { $lt: new Date() } }
    ]
})
and would have a record returned to it.
To test whether the record has been grabbed and locked by another worker, it would then try to insert a record into lock with a lockTimeout of 5 minutes in the future and an _id equal to the _id of the document returned from core.
If this insert fails, then we know that another worker pipped us to the post and we want to run the query again. If it succeeds, then we update core to set locked to true and copy over the lockTimeout from the lock collection.
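In shell terms, one pass of that flow might look like this sketch (assuming a 2.6+ shell where insert returns a WriteResult; error handling elided):
var now = new Date();
var doc = db.core.findOne({
    $or: [ { locked: false }, { lockTimeout: { $lt: now } } ]
});
if (doc !== null) {
    var expiry = new Date(now.getTime() + 5 * 60 * 1000);  // 5 minutes out
    var res = db.lock.insert({ _id: doc._id, lockTimeout: expiry });
    if (res.hasWriteError()) {
        // Duplicate key: another worker claimed the document first; retry.
    } else {
        db.core.update({ _id: doc._id }, { $set: { locked: true, lockTimeout: expiry } });
    }
}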
Apart from the addition of some form of slightly more complicated ordering to reduce the chances of two workers picking up the same record, I believe this should work.
However, it doesn't feel elegant and I feel like there should be a better way that doesn't require me to create a secondary collection just to keep track of locking.
Does such a thing exist? Kind regards!

Try using the findAndModify command. This command atomically updates a document and returns it (the pre-update version by default, or optionally the post-update version). You can use the atomic update to lock the document as you grab it:
> db.queue.insert({ "x" : 1, "locked" : false })
> db.queue.findAndModify({
      "query" : { "locked" : false },
      "update" : { "$set" : { "locked" : true } }
  })
{ "_id" : ObjectId("53ea6f0ef9b63e0dd3ca1a1f"), "x" : 1, "locked" : false }
You can also remove the document atomically. Check out the link for all of the features that could help for your queue-like use case and to read more about the command's behavior.
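Applied to the schema from the question, a single findAndModify can claim a document and stamp the lock expiry in one atomic step. A minimal sketch (the 5-minute timeout is the one proposed in the question):
var now = new Date();
var doc = db.core.findAndModify({
    "query" : { $or: [ { locked: false }, { lockTimeout: { $lt: now } } ] },
    "update" : { $set: {
        locked: true,
        lockTimeout: new Date(now.getTime() + 5 * 60 * 1000)
    } },
    "new" : true  // return the post-update document
});
// doc is null when no lockable document exists; otherwise this worker owns it.
This removes the need for the separate lock collection entirely.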

Related

MongoDB - how do I update a value in nested array/object?

I have a document in my Mongo collection which has a field with the following structure:
"_id" : "F7WNvjwnFZZ7HoKSF",
"process" : [
{
"process_id" : "wTGqVk5By32mpXadZ",
"stages" : [
{
"stage_id" : "D6Huk89DGFsd29ds7",
"completed" : "N"
},
{
"stage_id" : "Msd390vekn09nvL23",
"completed" : "N"
}
]
}
]
I need to update the value of completed where the stage_id is equal to 'D6Huk89DGFsd29ds7'; the update query will not know which object in the stages array contains that stage_id.
How do I do this?
Since you have nested arrays in your object, this is a bit tricky and I'm not sure this problem can be solved with just one update query.
However, if you happen to know the index of the matching object in the first array, in your case process[0], you can write your update query like:
db.collection.update(
    { "process.stages.stage_id" : "D6Huk89DGFsd29ds7" },
    { $set : { "process.0.stages.$.completed" : "Y" } }
);
The query above will work perfectly for your test case. Still, there can be multiple objects at the root level of process, and there is no guarantee that the matching object will always be at index 0.
The solution I proposed above will fail if process has multiple children and the matching object's index is not zero.
However, you can achieve your goal with the help of client-side programming: find the matching document, modify it on the client side, and replace the whole document with the new content.
Since this approach is very inefficient, I'd suggest you consider altering your document structure to avoid nesting: create another collection and move the content of the process array there.
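For completeness, the client-side round trip described above might look like this sketch: find the document, fix the matching stage in application code, and write the modified array back.
var doc = db.collection.findOne({ "process.stages.stage_id" : "D6Huk89DGFsd29ds7" });
doc.process.forEach(function(proc) {
    proc.stages.forEach(function(stage) {
        if (stage.stage_id === "D6Huk89DGFsd29ds7") {
            stage.completed = "Y";  // update every matching stage, at any index
        }
    });
});
db.collection.update({ _id: doc._id }, { $set: { process: doc.process } });
Note that another writer could modify the document between the findOne and the update, which is part of why this approach is discouraged above.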
In the end, I removed the outer process block, so that process_id and stages sat at the root of the document, which made updating easier using:
MyColl.update(
    {
        _id: 'F7WNvjwnFZZ7HoKSF',
        "stages.stage_id": 'D6Huk89DGFsd29ds7'
    },
    {
        $set: { "stages.$.completed": 'Y' }
    }
);

MongoDB database migration with embedded query

Currently in my database I have message objects set up like the following:
{
    "name" : "System",
    "message" : "Sean Callahan has entered the room.",
    "time" : 1406479167270,
    "type" : "system_message",
    "room" : "helloroom",
    "_id" : "4yeHzhHAQmGJNtHww"
}
I basically want to migrate my data so that every message has a roomId pointing at the appropriate room. Currently this association is made with the room attribute, and I now see the fault in my ways for various reasons.
My room objects are set up something like this:
{
    "_id" : "xxxxxxxxx",
    "room_name" : "testingroom"
}
So I was hoping there is a way to run a one-liner that would add the correct roomId to every current message based on its current room attribute.
I was thinking something along the lines of:
db.messages.update({}, {$set: {roomId: db.rooms.findOne({room_name: room})._id}})
As of now, I am getting room is not defined, which makes perfect sense. But I can't seem to get it right, and this may just not be possible in a one-line query.
As you discovered, this isn't possible in a one-line query since you need to join data from two collections.
Here's an example of how to add the missing field in the mongo shell:
db.messages.find(
    { roomId: { $exists: false } }
).forEach(function(message) {
    // Look up the room document matching this message's room name
    var room = db.rooms.findOne({ room_name: message.room });
    if (room !== null) {
        db.messages.update(
            { _id: message._id },
            { $set: { roomId: room._id } }
        )
    }
})
For updates on a large collection, consider using the Bulk update API (only available in MongoDB 2.6+).
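A sketch of the same migration using the Bulk API, which batches the per-message updates instead of issuing them one at a time:
var bulk = db.messages.initializeUnorderedBulkOp();
db.messages.find({ roomId: { $exists: false } }).forEach(function(message) {
    var room = db.rooms.findOne({ room_name: message.room });
    if (room !== null) {
        bulk.find({ _id: message._id }).updateOne({ $set: { roomId: room._id } });
    }
});
bulk.execute();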

Can we apply an index to match a certain value in MongoDB

I have a collection named users, as shown below.
db.users.find().pretty()
{
    "_id" : ObjectId("512efc206074b0e4bbdce792"),
    "login_id" : "dutchuser",
    "isBroker" : false
}
I want to apply an index to this users collection on the login_id and isBroker fields:
db.users.ensureIndex( { "login_id": 1, "isBroker": 1 }, { unique: false } )
My concern is that the isBroker field is false in most documents.
So is it possible to apply an index in that way?
You cannot conditionally apply a filter to an index in MongoDB. While you could potentially restructure your data or introduce additional, potentially duplicate fields in your schema, I'm not convinced it's a reasonable "optimization."
Use db.stats() to actually measure the size of the database and db.{collectionname}.totalIndexSize() to see what the impact of having the index you proposed really is.
By using this index:
db.users.ensureIndex( { "login_id": 1, "isBroker": 1 }, { unique: false } )
You can only use this index for queries that involve login_id and isBroker together, or just login_id (a prefix of the index). Depending on the types of queries you run, you may also run into this currently open issue, which can make a simple grouping/sorting on isBroker inefficient (or on broker_type, if the field evolves into that at some point).
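A version note, not part of the original answer: MongoDB 3.2 later added partial indexes, which do let you index only the documents matching a filter, for example:
// Index only the rare isBroker: true documents; queries must include
// isBroker: true in their filter for the planner to use this index.
db.users.createIndex(
    { login_id: 1 },
    { partialFilterExpression: { isBroker: true } }
)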

Event streaming via MongoDB: get last inserted events

I'm consuming data from an existing database that stores system events. My service should check this database on a timer, see whether new events have been created, then load and handle them. Something like a simple queue implementation.
The question is: how can I get only the new docs each time I check the database? I can't use timestamps, because events reach the database from different sources and there is no ordering among them. So I need to rely on insertion order only.
There are a couple of options.
The first, and easiest if it matches your use case, is to use a capped collection. A capped collection is a collection with a pre-defined size that acts as a sort of ring buffer: once the collection is full it starts overwriting documents. To iterate over the collection you simply create a "tailable" cursor. You will need some way of identifying the last document processed (even a simple "done" flag in the document could work, but it would have to exist when the document is inserted). If you truly can't modify the documents in any way, you could save off the last processed document somewhere and use a coarse timestamp to approximate the start position, looking for the last processed document before handling new ones.
The only real issue with this solution is that you are limited in the number of documents the collection can hold, and it won't grow over time. There are also limits on the write operations you can perform on the documents (they can't grow), but it does not sound like you are modifying them.
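A minimal sketch of tailing a capped collection from the legacy mongo shell (the collection name and size here are illustrative):
// Create a capped collection (size in bytes); it acts as a ring buffer.
db.createCollection("events", { capped: true, size: 1024 * 1024 })
// Open a tailable, await-data cursor and process documents as they arrive.
var cursor = db.events.find()
    .addOption(DBQuery.Option.tailable)
    .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    printjson(cursor.next());  // handle each new event here
}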
The second option, which is more complex, is to use the oplog. Even for a standalone configuration you will need to pass the --replSet option so the server creates and uses an oplog; you just won't configure the rest of the replica set. In a sharded configuration you will need to track each replica set separately. The oplog contains a document for every insert, update, and delete done to all collections/documents on the server. Each entry contains a timestamp, operation, and id (at a minimum). Here are examples of each:
Insert
{ "ts" : { "t" : 1362958492000, "i" : 1 },
"h" : NumberLong("5915409566571821368"), "v" : 2,
"op" : "i",
"ns" : "test.test",
"o" : { "_id" : "513d189c8544eb2b5e000001" } }
Delete
{ ... "op" : "d", ..., "b" : true,
"o" : { "_id" : "513d189c8544eb2b5e000001" } }
Update
{ ... "op" : "u", ...,
"o2" : { "_id" : "513d189c8544eb2b5e000001" },
"o" : { "$set" : { "i" : 1 } } }
The timestamps are generated on the server and are guaranteed to be monotonically increasing, which allows you to quickly find the documents of interest.
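For example, a poller could save the last-seen ts value and resume from it on the next pass (a sketch; on a replica set member the oplog lives in the local database):
// Resume reading the oplog after the last timestamp we processed.
var lastTs = Timestamp(1362958492, 1);  // saved from the previous pass
db.getSiblingDB("local").oplog.rs.find({ ts: { $gt: lastTs } })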
This option is the most robust but requires some work on your part.
I wrote some demo code to create a "watcher" on a collection that is almost what you want. You can find that code on GitHub. Specifically look at the code in the com.allanbank.mongodb.demo.coordination package.
HTH, Rob
You can actually use timestamps if your _id is of type ObjectId:
// Hex-encoded seconds-since-epoch for the cutoff date (JS months are 0-based, so 03 = April)
prefix = Math.floor((new Date( 2013 , 03 , 11 )).getTime()/1000).toString(16)
// ObjectIds embed their creation time in the first 4 bytes, so this compares by insertion time
db.foo.find( { _id : { $gt : new ObjectId( prefix + "0000000000000000" ) } } )
This way, it doesn't matter where the event came from or when it happened; all that matters is when the document's insertion was recorded (later than the previous timer run).
Of course, the schema is flexible, so you could instead set a field such as isNew to true on insert, and set it to false in conjunction with your query / cursor.

MongoDB: Doing $inc on multiple keys

I need help incrementing the value of all keys in participants without having to know the names of the keys inside it.
> db.conversations.findOne()
{
    "_id" : ObjectId("4faf74b238ba278704000000"),
    "participants" : {
        "4f81eab338ba27c011000001" : NumberLong(2),
        "4f78497938ba27bf11000002" : NumberLong(2)
    }
}
I've tried something like
$mongodb->conversations->update(array('_id' => new \MongoId($objectId)), array('$inc' => array('participants' => 1)));
to no avail...
You need to redesign your schema. It is never a good idea to have random key names: even though MongoDB is schemaless, you still need to have defined key names. You should change your schema to:
{
    "_id" : ObjectId("4faf74b238ba278704000000"),
    "participants" : [
        { _id: "4f81eab338ba27c011000001", count: NumberLong(2) },
        { _id: "4f78497938ba27bf11000002", count: NumberLong(2) }
    ]
}
Sadly, even with that, you can't update all embedded counts in one command. There is currently an open feature request for that: https://jira.mongodb.org/browse/SERVER-1243
In order to still update everything, you should:
query the document
update all the counts on the client side
store the document again
In order to prevent race conditions with that, have a look at "Compare and Swap" and the following paragraphs.
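A sketch of that read-modify-write loop with an optimistic compare-and-swap guard (the version field here is an addition for illustration, on top of the array schema proposed above):
var doc = db.conversations.findOne({ _id: ObjectId("4faf74b238ba278704000000") });
doc.participants.forEach(function(p) { p.count += 1; });
var res = db.conversations.update(
    { _id: doc._id, version: doc.version },  // matches only if unchanged
    { $set: { participants: doc.participants }, $inc: { version: 1 } }
);
// res.nModified === 0 means another writer got in first: re-read and retry.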
It is not possible to update all nested elements in one single operation in the current version of MongoDB, so I'd advise using a forEach loop.
Read the related topic: How to Update Multiple Array Elements in mongodb
I hope this feature will be implemented in a future version.