I'm using Meteor. I'm wondering if there's a shorthand way to do batch updates before the DOM is updated.
For instance, I want to update several records all at once:
Collection.update(id1,{..})
Collection.update(id2,{..})
Collection.update(id3,{..})
The problem is that the 3 items are updated separately, so in my case the DOM was redrawn 3 times instead of once (with all 3 updated records).
Is there a way to hold off the UI update until all of them are done?
Mongo's update can modify more than one document at a time. Just give it a selector that matches more than one document, and set the multi option. In your case, that's just a list of IDs, but you can use any selector.
Collection.update({_id: {$in: [id1, id2, id3]}}, {...}, {multi:true});
This will run a single DB update and a single redraw.
Execute them on the server instead; that way they run synchronously and are less likely to cause multiple DOM updates on the client.
See the first two and the last interesting code bits, which explain how to protect your clients from messing with the database, as well as how to define methods on the server and call them from the client.
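The server-method approach can be sketched as follows. Note that `Meteor` and `Collection` here are minimal stand-ins so the sketch runs standalone, and the method name `batchUpdate` is made up for illustration; in a real app these come from the Meteor framework.

```javascript
// Minimal stand-ins for Meteor's API so this sketch runs standalone.
const methods = {};
const Meteor = {
  methods: (m) => Object.assign(methods, m),
  call: (name, ...args) => methods[name](...args),
};
const docs = { id1: { n: 0 }, id2: { n: 0 }, id3: { n: 0 } };
const Collection = {
  update: (id, fields) => Object.assign(docs[id], fields),
};

// Defined on the server: the client makes one call, and all writes
// happen in one batch before results are pushed back out.
Meteor.methods({
  batchUpdate(ids, fields) {
    ids.forEach((id) => Collection.update(id, fields));
  },
});

// Called from the client:
Meteor.call('batchUpdate', ['id1', 'id2', 'id3'], { n: 1 });
```

The point of the sketch is the shape: a single round trip carrying all three updates, rather than three separate client-side `update` calls.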
Related
I'm developing a single-page web app that will use a NoSQL document database (like MongoDB), and I want to generate events when I make a change to my entities.
Since most of these databases support transactions only at the document level (MongoDB only recently added multi-document ACID support), there is no good way to store changes in one document and atomically store the events resulting from those changes in other documents.
Let's say for example that I have a collection 'Events' and a collection 'Cards' like Trello does. When I make a change to the description of a card from the 'Cards' collection, an event 'CardDescriptionChanged' should be generated.
The problem is that if there is a crash or some error between saving the changes to the 'Cards' collection and adding the event to the 'Events' collection, the event will not be persisted, and I don't want that.
I've done some research on this issue and most people would suggest that one of several approaches can be used:
Do not use MongoDB, use SQL database instead (I don't want that)
Use event sourcing. (This introduces complexity, and I want to clear older events at some point, so I don't want to keep all events stored. I know I can use snapshots and delete events older than the snapshot point, but there is complexity in this solution.)
Since errors of this nature probably won't happen too often, just ignore them and risk having events that won't be saved (I don't want that either).
Use an event/command/action processor. Store commands/actions like 'ChangeCardDescription' and use a processor that processes them and updates the entities.
I have considered option 4, but a couple of questions arise:
How do I manage concurrency?
I can queue all commands for the same entity (like a card or a board) and make sure they are processed sequentially, while commands for different entities (different cards) can be processed in parallel. Then I can use processed commands as events. One problem here is that changes to an entity may generate several events that do not correspond to a single command. I will have to break all user actions down into very fine-grained commands so I can then translate them to events.
Error reporting and error handling.
If this process is asynchronous, I have to manage error reporting to the client, and I also have to remove or mark the commands that failed.
I still have the problem with marking the commands as processed, as there are no transactions. I know I have to make processing of commands idempotent to resolve this problem.
Since Trello uses MongoDB and generates actions ('DeleteCardAction', 'CreateCardAction') along with changes to entities (Cards, Boards, ...), I was wondering how they solve this problem.
Create a new collection called FutureUpdates. Write planned updates to the FutureUpdates collection with a single document defining the changes you plan to make to cards and the events you plan to generate. This insert will be atomic.
Now open a change stream on the FutureUpdates collection; this will give you the stream of updates you need to make. Take each doc from the change stream and apply the updates. Finally, update the doc in FutureUpdates to mark it as complete. Again, this update will be atomic.
When you apply the updates to Events and Cards, make sure to include the ObjectId of the doc used to create the update in FutureUpdates.
Now, if the program crashes after inserting the update into FutureUpdates, you can check the Events and Cards collections for records containing that ObjectId. If they are not present, you can reapply the missing updates.
If the updates have been applied but the FutureUpdate doc is not marked as complete we can update that during recovery to complete the process.
Effectively you are continuously atomically updating a doc for each change in FutureUpdates to track progress. Once an update is complete you can archive the old docs or just delete them.
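The pattern above can be sketched with in-memory stand-ins for the collections. Real code would use MongoDB collections and a change stream; the collection names, field names, and the `sourceId` tag here are illustrative.

```javascript
// In-memory stand-ins for the three collections.
const futureUpdates = [];
const cards = [{ _id: 'card1', description: 'old' }];
const events = [];

// Step 1: record the planned change AND its event in ONE document
// (in MongoDB this single insert is atomic).
function planUpdate(cardId, description) {
  const doc = {
    _id: 'fu' + (futureUpdates.length + 1),
    cardId,
    description,
    event: 'CardDescriptionChanged',
    complete: false,
  };
  futureUpdates.push(doc);
  return doc;
}

// Step 2: apply the update, tagging each write with the FutureUpdates _id
// so recovery can detect what was already applied (idempotent).
function applyUpdate(fu) {
  const card = cards.find((c) => c._id === fu.cardId);
  if (!events.some((e) => e.sourceId === fu._id)) {
    card.description = fu.description;
    events.push({ type: fu.event, cardId: fu.cardId, sourceId: fu._id });
  }
  fu.complete = true; // step 3: mark the planned update done
}

// Recovery: reapply any planned update not yet marked complete.
function recover() {
  futureUpdates.filter((fu) => !fu.complete).forEach(applyUpdate);
}

planUpdate('card1', 'new text');
// ...simulate a crash here, before the update was applied...
recover();
```

Because `applyUpdate` checks for the `sourceId` tag before writing, running recovery twice does not duplicate events, which is the idempotency property the pattern relies on.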
My question is related to MongoDB: embedded array vs. separate documents.
I have the following requirement.
An event can have multiple date instances (start date and end date); for each instance the event details are the same, except for the start and end dates.
A user can add many instances per event.
So, in MongoDB, which of the following structures would be suitable?
1. Having a separate record/document for each instance.
2. Maintaining an array of instances in the event document only.
Thanks in Advance.
Amol
Well, I would go for the second option, i.e.
Maintain an array of instances in the event document only.
Because of the following:
This allows me to manage the data more efficiently. For example, if I want to update the name of the event, I won't have to update all the rows as I would with the first option; with the second, I just need to update a single document.
If I have to delete an event, again I deal with only one document in the second case, whereas in the first case I would have to delete all the rows for that event.
I don't know how I would mark different documents as belonging to the same event in the first case; in the second case, I always have one unique ID for a particular event.
If I need to, I can unwind the document into a number of documents, for example to use the aggregation framework.
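The embedded-array shape can be sketched as follows. The field names are illustrative, and the "unwind" step below mimics what MongoDB's `$unwind` aggregation stage would do.

```javascript
// One event document holding an array of date instances (option 2).
const event = {
  _id: 'evt1',
  name: 'Conference',
  instances: [
    { start: '2024-01-10', end: '2024-01-11' },
    { start: '2024-02-10', end: '2024-02-11' },
  ],
};

// Renaming the event touches a single document...
event.name = 'Annual Conference';

// ...and unwinding (like the $unwind aggregation stage) still yields
// one flat record per instance when that view is needed.
const unwound = event.instances.map((inst) => ({
  _id: event._id,
  name: event.name,
  ...inst,
}));
```

The rename is one write regardless of how many instances exist, which is the efficiency argument made above.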
I have a collection with a bunch of documents representing various items. Once in a while, I need to update item properties, but the update takes some time. When properties are updated, the item gets a new timestamp for when it was modified. If I run updates one at a time, then there is no problem. However, if I want to run multiple update processes simultaneously, it's possible that one process starts updating the item, but the next process still sees the item as needing an update and starts updating it as well.
One solution is to mark the item as soon as it is retrieved for update (findAndModify), but it seems wasteful to add a whole extra field to every document just to keep track of items currently being updated.
This should be a very common issue. Maybe there are built-in functions to address it? If not, is there an established method to deal with it?
I apologize if this has been addressed before, but I am having a hard time finding this information. I may just be using the wrong terms.
You could use db.currentOp() to check if an update is already in flight.
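The "mark on retrieval" idea from the question can be sketched with an in-memory stand-in: claim an item with a single compare-and-set, so two workers cannot both pick it up. In MongoDB the find-and-mark would be one atomic `findAndModify` call; the `updating` field name is illustrative.

```javascript
const items = [
  { _id: 1, updating: false },
  { _id: 2, updating: false },
];

// Find an unclaimed item and mark it in one step; returns null if none.
// In Mongo, the match on {updating: false} and the {$set: {updating: true}}
// would happen atomically inside findAndModify.
function claimNext() {
  const item = items.find((i) => !i.updating);
  if (!item) return null;
  item.updating = true;
  return item;
}

const first = claimNext();
const second = claimNext();
```

Two successive claims get two different items, and a claim against an exhausted set returns null instead of double-processing.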
My question is clear from the title. When a request comes to my service to update a related record in MongoDB, we use the "save" method.
However, I would like to understand whether the save method really updates the record or not.
In other words, I would like to know whether the content to be saved is the same as the existing content in MongoDB. Accordingly, even if the save method executes without any errors, is it possible to tell whether the record was really updated?
Thanks in advance
There are several ways to check this.
The first, after calling save, is to call the getLastError method. From the shell this is just db.getLastError().
This will tell you if an error occurred during the last operation. More details can be found at the following address: http://docs.mongodb.org/manual/core/write-operations/#write-concern.
Another way would be to call findAndModify; this allows you to update the document and get either the original or the updated document back.
http://docs.mongodb.org/manual/reference/command/findAndModify/
Both of these are available in all of the official drivers.
The save method always writes the record.
There is no situation in Mongo where the write would not happen because the record being saved is identical to the record that's already there. The write simply happens and "overwrites" the existing data with new data (which happens to be identical).
The only way you can tell is by comparing the old and new documents - and that's a lot of extra work.
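That comparison can be sketched as below. The JSON-string comparison is a simplification that assumes consistent key order; a real implementation would use a proper deep-equality check.

```javascript
// Returns true if saving `incoming` would actually change what is stored.
function wouldChange(existing, incoming) {
  return JSON.stringify(existing) !== JSON.stringify(incoming);
}

const stored = { _id: 1, name: 'foo' };

const noop = wouldChange(stored, { _id: 1, name: 'foo' }); // identical content
const real = wouldChange(stored, { _id: 1, name: 'bar' }); // actual change
```

The extra work is the read of the existing document plus the comparison itself, which is the cost the answer alludes to.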
I am using mgo driver for MongoDB under Go.
My application asks for a task (with just a record select in Mongo from a collection called "jobs") and then registers itself as an assignee to complete that task (an update to that same "job" record, setting itself as assignee).
The program will be running on several machines, all talking to the same Mongo. When my program lists the available tasks and then picks one, other instances might have already taken that task, in which case the assignment attempt should fail.
How can I be sure that the record I read and then update does (or does not) have a certain value (in this case, an assignee) at the moment it is updated?
I am trying to get one assignment, no matter which one, so I think I should first select a pending task and try to assign it, keeping it only if the update was successful.
So, my query should be something like:
"From all records on collection 'jobs', update just one that has assignee=null, setting my ID as the assignee. Then, give me that record so I could run the job."
How could I express that with mgo driver for Go?
This is an old question, but just in case someone is still watching at home, this is nicely supported via the Query.Apply method. It does run the findAndModify command as indicated in another answer, but it's conveniently hidden behind Go goodness.
The example in the documentation matches pretty much exactly the question here:
change := mgo.Change{
    Update:    bson.M{"$inc": bson.M{"n": 1}},
    ReturnNew: true,
}
info, err = col.Find(bson.M{"_id": id}).Apply(change, &doc)
fmt.Println(doc.N)
I hope you saw the comments on the answer you selected, but that approach is incorrect. Doing a select and then an update results in a round trip, and two machines could fetch the same job before one of them updates the assignee. You need to use the findAndModify command instead: http://www.mongodb.org/display/DOCS/findAndModify+Command
The MongoDB guys describe a similar scenario in the official documentation: http://www.mongodb.org/display/DOCS/Atomic+Operations
Basically, all you have to do, is to fetch any job with assignee=null. Let's suppose you get the job with the _id=42 back. You can then go ahead and modify the document locally, by setting assignee="worker1.example.com" and call Collection.Update() with the selector {_id=42, assignee=null} and your updated document. If the database is still able to find a document that matches this selector, it will replace the document atomically. Otherwise you will get a ErrNotFound, indicating that another thread has already claimed the task. If that's the case, try again.
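The selector-based claim described above can be sketched language-agnostically (here in JavaScript with an in-memory stand-in). The worker names are made up; the key point is that the match on `assignee: null` and the set of the new assignee mirror the single atomic `Collection.Update()` call in mgo.

```javascript
const jobs = [{ _id: 42, assignee: null }];

// Returns true if this worker won the job, false if another worker beat it.
// In MongoDB, the find-with-selector and the set happen as ONE atomic
// update; mgo would return ErrNotFound in the losing case.
function claim(id, worker) {
  const job = jobs.find((j) => j._id === id && j.assignee === null);
  if (!job) return false;
  job.assignee = worker;
  return true;
}

const first = claim(42, 'worker1.example.com');
const second = claim(42, 'worker2.example.com');
```

Only the first claim succeeds; the second worker sees a failure and would move on to try another pending job.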