I have a flow like this:
I have a Worker that's processing a "large" batch (say, 1M records) and storing the results in Mongo.
Once the batch is complete, a notification message is sent to Publish, which then pulls all the records from Mongo for final publication.
Let's say the Worker write process is done, i.e., it has sent all 1M records to Mongo through a driver. Mongo is "eventually consistent", so I'm not 100% guaranteed that all records have been written to physical storage at the time the Notify Publish happens.
When Publish does a 'find' and gets a cursor on the collection holding the batch records, is the cursor smart enough to handle the eventual consistency?
So in practical terms, let's imagine 750,000 records have actually been physically written by Mongo when Notify Publish happens and Publish does its find. Will the cursor traverse 750,000 records and stop, or will it block or otherwise handle the remaining 250,000 as they're eventually written to disk (which presumably is very likely to happen while the first 750K are being published)?
As @BlakesSeven already noted in the comments, "eventual consistency" refers to the fact that in a replicated environment, when a write is finished on the primary, it will only be written to the secondaries eventually. You can modify this behavior, at the cost of reduced write performance, by setting the write concern to > 1. Setting it to "majority" basically guarantees that a write operation is durable even in case of a failover, though at (in some cases) drastically reduced performance.
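For illustration, here is a minimal pymongo sketch of raising the write concern; the connection string, replica set name, and database/collection names are placeholders, not anything from your setup.

# Sketch only: an insert that is acknowledged by a majority of replica set
# members. Host, replica set, database and collection names are placeholders.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
records = client.mydb.get_collection(
    "batch_records",
    write_concern=WriteConcern(w="majority"),
)
# Returns only after a majority of members have acknowledged the write.
records.insert_one({"batch_id": 42, "payload": "..."})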
In general, here is what happens when you do a write (simplified) with journaling enabled:
The operation is checked for being syntactically correct.
The query optimizer kicks in and does its work. (Irrelevant for this question, so I'll spare the details.)
The write operation is applied to the in-memory representation of the data set called the "private view".
Every commitIntervalMs, the private view is synced to the journal, with a median of 15 or 50ms, depending on the write concern.
On sync, the operation is applied to the shared view. IIRC, this is the point where a new connection would be provided with the new data.
So in order to ensure that the data will be readable by the new connection, simply delay the publish notification by commitIntervalMs + 1, which, given your batch size, is hardly noticeable.
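To make that concrete, here is a minimal sketch of the suggested delay, assuming the default commitIntervalMs of 100 ms; notify_publish() is a hypothetical stand-in for however your notification message is actually sent.

import time

COMMIT_INTERVAL_MS = 100  # assumption: default journalCommitInterval

def notify_publish():
    # hypothetical stand-in for sending the "batch complete" message
    pass

def notify_when_visible():
    # Wait commitIntervalMs + 1 so the last private-view sync has happened
    # before Publish opens its cursor.
    time.sleep((COMMIT_INTERVAL_MS + 1) / 1000.0)
    notify_publish()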
Even if journaling is on, is there still a chance to lose writes in MongoDB?
"By default, the greatest extent of lost writes, i.e., those not made to the journal, are those made in the last 100 milliseconds."
This is from Manage Journaling, which indicates you could lose writes made since the last time the journal was flushed to disk.
If I want more durability, "To force mongod to commit to the journal more frequently, you can specify j:true. When a write operation with j:true is pending, mongod will reduce journalCommitInterval to a third of the set value."
Even in this case, it looks like flushing the journal to disk is asynchronous so there is still a chance to lose writes. Am I missing something about how to guarantee that writes are not lost?
Posting a new answer to clean this up. I performed tests and read the source code again, and I'm sure the confusion comes from an unfortunate sentence in the write concern documentation. With journaling enabled and a j:true write concern, the write is durable, and there is no mysterious window for data loss.
Even if journaling is on, is there still a chance to lose writes in MongoDB?
Yes, because the durability also depends on the individual operations write concern.
"By default, the greatest extent of lost writes, i.e., those not made to the journal, are those made in the last 100 milliseconds."
This is from Manage Journaling, which indicates you could lose writes made since the last time the journal was flushed to disk.
That is correct. The journal is flushed by a separate thread asynchronously, so you can lose everything since the last flush.
If I want more durability, "To force mongod to commit to the journal more frequently, you can specify j:true. When a write operation with j:true is pending, mongod will reduce journalCommitInterval to a third of the set value."
This irritated me, too. Here's what it means:
When you send a write operation with j:true, it doesn't trigger the disk flush immediately, nor does the flush happen on the network thread. That makes sense, because there could be dozens of applications talking to the same mongod instance. If every application used journaling a lot, the db would be very slow because it would be fsyncing all the time.
Instead, what happens is that the 'durability thread' will take all pending journal commits and flush them to disk. The thread is implemented like this (comments mine):
sleepmillis(oneThird); // dur.cpp, line 801
for( unsigned i = 1; i <= 2; i++ ) {
    // break, if any j:true write is pending
    if( commitJob._notify.nWaiting() )
        break;
    // or the number of bytes is greater than some threshold
    if( commitJob.bytes() > UncommittedBytesLimit / 2 )
        break;
    // otherwise, sleep another third
    sleepmillis(oneThird);
}
// fsync all pending writes
durThreadGroupCommit();
So a pending j:true operation will cause the journal commit thread to commit earlier than it normally would, and it will commit all pending writes to the journal, including those that don't have j:true set.
Even in this case, it looks like flushing the journal to disk is asynchronous so there is still a chance to lose writes. Am I missing something about how to guarantee that writes are not lost?
The write (or the getLastError command) with a j:true journaled write concern will wait for the durability thread to finish syncing, so there's no risk of data loss (as far as the OS and hardware guarantee that).
The sentence "However, there is a window between journal commits when the write operation is not fully durable" probably refers to a mongod running with journaling enabled that accepts a write that does NOT use the j:true write concern. In that case, there's a chance of the write getting lost since the last journal commit.
I filed a docs bug report for this.
Maybe. Yes, it waits for the data to be written, but according to the docs "there is a window between journal commits when the write operation is not fully durable", whatever that is. I couldn't find out what they refer to.
I'm leaving the edited answer here, but I reversed myself back and forth, so it's a bit confusing:
This is a bit tricky, because there are a lot of levers you can pull:
Your MongoDB setup
Assuming that journaling is activated (default for 64 bit), the journal will be committed in regular intervals. The default value for the journalCommitInterval is 100ms if the journal and the data files are on the same block device, or 30ms if they aren't (so it's preferable to have the journal on a separate disk).
You can also change the journalCommitInterval to as little as 2ms, but it will increase the number of write operations and reduce overall write performance.
The Write Concern
You need to specify a write concern that tells the driver and the database to wait until the data is written to disk. However, this won't wait until the data has been actually written to the disk, because that would take 100ms in a bad-case scenario with the default setup.
So, at the very best, there's a 2ms window where data can get lost. That's insufficient for a number of applications, however.
The fsync command forces a disk flush of all data files, but that's unnecessary if you use journaling, and it's inefficient.
Real-Life Durability
Even if you were to journal every write, what is it good for if the datacenter administrator has a bad day and uses a chainsaw on your hardware, or the hardware simply disintegrates itself?
Redundant storage, not on a block-device level like RAID but on a much higher level, is a better option for many scenarios: have the data in different locations or at least on different machines using a replica set, and use the w:majority write concern with journaling enabled (journaling will only apply on the primary, though). Use RAID on the individual machines to increase your luck.
This offers the best tradeoff of performance, durability and consistency. Also, it allows you to adjust the write concern for every write and has good availability. If the data is queued for the next fsync on three different machines, it might still be 30ms to the next journal commit on any of the machines (worst case), but the chance of three machines going down within the 30ms interval is probably a millionfold lower than the chainsaw-massacre-admin scenario.
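As a sketch of what "adjust the write concern for every write" can look like in pymongo (the connection string, names, and the 5-second timeout are assumptions):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
events = client.mydb.events

# Cheap, latency-sensitive write: acknowledged by the primary only.
events.with_options(write_concern=WriteConcern(w=1)).insert_one({"kind": "metric"})

# Important write: journaled and replicated to a majority, or fail after 5 s.
durable = events.with_options(
    write_concern=WriteConcern(w="majority", j=True, wtimeout=5000)
)
durable.insert_one({"kind": "order", "amount": 100})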
Evidence
TL;DR: I think my answer above is correct.
The documentation can be a little irritating, especially with regards to wtimeout, so I checked the source. I'm not an expert on the mongo source, so take this with a grain of salt:
In write_concern.cpp, we find (edited for brevity):
if ( cmdObj["j"].trueValue() ) {
    if( !getDur().awaitCommit() ) {
        // --journal is off
        result->append("jnote", "journaling not enabled on this server");
    } // ...
}
else if ( cmdObj["fsync"].trueValue() ) {
    if( !getDur().awaitCommit() ) {
        // if get here, not running with --journal
        log() << "fsync from getlasterror" << endl;
        result->append( "fsyncFiles" , MemoryMappedFile::flushAll( true ) );
    }
}
Note the call MemoryMappedFile::flushAll( true ) if fsync is set. This call is clearly not in the first branch. Otherwise, durability is handled on a separate thread (relevant files are prefixed dur_).
That explains what wtimeout is for: it refers to the time waiting for slaves, and has nothing to do with I/O or fsync on the server.
Journaling is there to keep the data on a particular mongod in a consistent state, even in case of chainsaw madness; however, with client-side write concern settings it can also be used to enforce durability. About write concern: DOCS.
There is an option, j:1, which you can read about here, which ensures that the particular write operation waits for acknowledgement until it has been written to the journal file on disk (so not just to the memory map). However, these docs say the opposite. :) I would vote for the first case; it makes me feel more comfortable.
If you run lots of commands with this option, MongoDB will adapt the commit interval of the journal to speed things up; you can read about it here: DOCS (the one you also mentioned). As others have already said, you can specify an interval between 2 and 300 ms.
In my opinion, durability is better ensured by the w:2 option: if the update/write operation is acknowledged by two members of a replica set, it is really unlikely (though not impossible) to lose both within the same minute (the datafile flush interval).
Using both options means that when the operation is acknowledged by the database cluster, it will reside in memory on two different boxes, and on one of them it will also be in a consistent, recoverable place on disk.
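A minimal pymongo sketch of that combination (the connection string and names are placeholders): the insert is acknowledged only once two replica set members have it in memory and the primary has it in its journal.

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
orders = client.mydb.get_collection(
    "orders",
    write_concern=WriteConcern(w=2, j=True),  # two members in memory, journal on the primary
)
orders.insert_one({"order_id": 1, "status": "paid"})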
Generally, lost writes are an issue in every system where there is buffering/caching/delayed writing between a system's runtime and the permanent (non-volatile) storage, even at the OS level (for example, write-behind caching). So there is always a chance of losing writes: even if your concrete provider (MongoDB) offers functionality for transaction durability, it's the underlying OS that is responsible for ultimately writing the data, and even then there is caching at the device level... And those are just the lower levels; making the system highly concurrent, distributed and performant only makes matters worse.
In short, there is no absolute durability, only practical/eventual/hope-for-the-best durability, especially with a NoSQL store like Mongo, which wasn't primarily built for consistency and durability in the first place.
I would have to agree with Sammaye that journaling has little to do with durability. However, if you want to know whether you can really trust MongoDB to store your data with good consistency, then I would suggest that you read this blog post. There is a reply from 10gen regarding that post, and a reply from the author to the 10gen post. I would suggest that you read them to make an educated decision. It took me some time to understand all the details on my own, but this post has the basics covered.
The response to the blog post was given here by 10gen, the company that makes MongoDB.
And the response to the response was given by the professor on this post.
It explains a lot about how MongoDB shards data, how it actually functions, and the performance hits it takes if you add extra safety locks. I strongly want to say that these three writings are the best and by far the most comprehensive things out there that talk about the benefits and drawbacks of MongoDB. If you think they're one-sided, look at the comments and see what people had to say, because if something received a reply from the company that made the software, then it must have made some good points at least.
We're using MongoDB 2.2.0 at work. The DB contains about 51GB of data (at the moment) and I'd like to do some analytics on the user data that we've collected so far. Problem is, it's the live machine and we can't afford another slave at the moment. I know MongoDB has a read lock which may affect any writes that happen especially with complex queries. Is there a way to tell MongoDB to treat my (particular) query with the lowest priority?
In MongoDB, reads and writes do affect each other. Read locks are shared, but read locks block write locks from being acquired, and of course no other reads or writes happen while a write lock is held. MongoDB operations yield periodically to keep other threads that are waiting for locks from starving. You can read more about the details of that here.
What does that mean for your use case? Because there is no way to tell MongoDB to access the data without a read lock, nor is there a way to prioritize requests (at least not yet), whether the reads significantly affect the performance of your writes depends on how much "headroom" you have available while write activity is going on.
One suggestion I can make is, when figuring out how to run analytics, rather than scanning the entire data set (i.e. doing an aggregation query over all historical data), try running smaller aggregation queries on short time slices (see the sketch after this list). This will accomplish two things:
read jobs will be shorter-lived and therefore finish quicker; this will give you a chance to assess what impact the queries have on your "live" performance.
you won't be pulling all the old data into RAM at once; by spacing out these analytical queries over time, you will minimize the impact they have on current write performance.
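Here is a rough sketch of that approach in pymongo; the collection name, field names, date range, and one-day slice size are assumptions, not anything from your schema.

import time
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
events = client.mydb.user_events

def daily_signups(day):
    # Aggregate one day's worth of data so read locks are only held briefly.
    start, end = day, day + timedelta(days=1)
    pipeline = [
        {"$match": {"created_at": {"$gte": start, "$lt": end}}},
        {"$group": {"_id": "$country", "signups": {"$sum": 1}}},
    ]
    return list(events.aggregate(pipeline))

day = datetime(2012, 1, 1)
while day < datetime(2012, 2, 1):
    results = daily_signups(day)
    # ... store or print the per-day results ...
    time.sleep(5)  # space the slices out to leave headroom for live writes
    day += timedelta(days=1)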
Depending on exactly what you can't afford about getting another server, you might consider getting a short-lived AWS instance, which may not be very powerful but would be available to run a long analytical query against a copy of your data set. Just be careful when making that copy of your data: doing a full sync off the production system will place a heavy load on it (a more effective way would be to resume from a recent backup/file snapshot).
Such operations are best left for slaves of a replica set. For one thing, read locks can be shared to allow many reads at once, but write locks will block reads. And, while you can't prioritize queries, MongoDB yields long-running read/write queries. Their concurrency docs should help.
If you can't afford another server, you can set up a slave on the same machine, provided you have some spare RAM/disk headroom and you use the slave lightly/occasionally. You must be careful, though; your disk I/O will increase significantly.
MongoDB has a fire-and-forget write mechanism. But does this guarantee that my writes will be written to disk eventually? So if I have a statement like this (in pymongo):
collection.insert(doc)  # not passing safe=True
I understand that my write in the above statement will not immediately reach the disk, but is there a guarantee that it will ever reach the disk (maybe the next day or a week later), or can it get lost and never come back? My application doesn't need the writes to be synchronous, but it does need them to happen eventually, even if that's hours later.
The meaning of "fire and forget" (aka the default writes of MongoDB) is that the driver will not confirm the write with the server. Your write is placed on the network; the driver confirms it reached the network transport, but past that no other checking is performed.
As long as the server and network are up, it is expected that the write will succeed. However, this also means you won't find out about issues such as violating a unique index.
Writes initially occur in memory and are flushed to disk at least every 60 seconds. If you have journaling turned on, the write will get into the journal within 100 ms; the journal can be used to recover after a crash when mongod restarts.
If you want to verify that a write has made it to mongod and has successfully been applied in memory, you should set safe=True.
AFAIK fire-and-forget has no guarantees; if you need them, then use safe [that's what it's there for!].
See How safe is MongoDB's safe mode on inserts?
It can get lost and never come back. This can happen due to a power outage, hardware failure, or other environmental problem. It can also happen if the doc violates some unique index.
safe=True will ensure that there weren't any database engine errors doing the insert. This makes the driver issue a getLastError command immediately after the write, and it will return an error if the write violates a unique index or there was some underlying Mongo problem. This won't help you if you lose power between the in memory write and the write to the journal, though.
There are numerous other options for getLastError that you can pass through the safe option to ensure that a write either makes it to the journal (durable), or is replicated before the command returns. We generally tell people to use the journaling option at a minimum for writes they care about.
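To make the difference concrete, here is a sketch in current pymongo terms (safe=True and getLastError belong to the legacy API); the connection string and names are placeholders.

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/")
docs = client.mydb.docs

# Roughly the old fire-and-forget behaviour: w=0, no acknowledgement at all.
docs.with_options(write_concern=WriteConcern(w=0)).insert_one({"x": 1})

# Roughly safe=True plus the journal option: acknowledged and journaled, so
# unique-index violations surface as exceptions and the write survives a crash.
docs.with_options(write_concern=WriteConcern(w=1, j=True)).insert_one({"x": 2})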
I am working on a project which has some important data in it. This means we cannot afford to lose any of it if the power or the server goes down. We are using MongoDB for the database. I'd like to be sure that my data is in the database after the insert, and to roll back the whole batch if one element was not inserted. I know it is the philosophy behind Mongo that we do not need transactions, but how can I make sure that my data is really safely stored after insert rather than sent to some "black hole"?
Should I make a search?
Should I use some specific mongoDB commands?
Should I use sharding, even if one server is enough to satisfy the speed requirement (and it doesn't guarantee anything if the power goes down anyway)?
What is the best solution?
Your best bet is to use Write Concerns - these allow you to tell MongoDB how important a piece of data is. The quickest Write Concern is also the least safe - the data is not flushed to disk until the next scheduled flush. The safest will confirm that the data has been written to disk on a number of machines before returning.
The write concern you are looking for is FSYNC_SAFE (at least that is what it is called from the point of view of the Java driver) or REPLICAS_SAFE which confirms that your data has been replicated.
Bear in mind that MongoDB does not have transactions in the traditional sense: your rollback will have to be done by hand, as you can't tell the Mongo database to do this for you.
The other thing you need to do is either use the relatively new --journal option (which uses a Write Ahead Log), or use replica sets to share your data across many machines in order to maximise data integrity in the event of a crash/power loss.
Sharding is not so much a protection against hardware failure as a method for sharing the load when dealing with particularly large datasets; sharding shouldn't be confused with replica sets, which are a way of writing data to more than one disk on more than one machine.
Therefore, if your data is valuable enough, you should definitely be using replica sets, perhaps even siting slaves in other data centres/availability zones/racks/etc in order to provide the resilience you require.
There is/will be (can't remember offhand whether this has been implemented yet) a way to specify the priority of individual nodes in a replica set such that if the master goes down the new master that is elected is one in the same data centre if such a machine is available (ie to stop a slave on the other side of the country from becoming master unless it really is the only other option).
I received a really nice answer from a person called GVP on Google Groups. I will quote it (basically it adds to Rich's answer):
I'd like to be sure that my data is in the database after the
insert and rollback the whole batch if one element was not inserted.
This is a complex topic and there are several trade-offs you have to
consider here.
Should I use sharding?
Sharding is for scaling writes. For data safety, you want to look at
replica sets.
Should I use some specific mongoDB commands?
First thing to consider is "safe" mode or "getLastError()" as
indicated by Andreas. If you issue a "safe" write, you know that the
database has received the insert and applied the write. However,
MongoDB only flushes to disk every 60 seconds, so the server can fail
without the data on disk.
Second thing to consider is "journaling"
(v1.8+). With journaling turned on, data is flushed to the journal
every 100ms. So you have a smaller window of time before failure. The
drivers have an "fsync" option (check that name) that goes one step
further than "safe", it waits for acknowledgement that the data has
been flushed to the disk (i.e. the journal file). However, this only
covers one server. What happens if the hard drive on the server just
dies? Well you need a second copy.
Third thing to consider is
replication. The drivers support a "W" parameter that says "replicate
this data to N nodes" before returning. If the write does not reach
"N" nodes before a certain timeout, then the write fails (exception
is thrown). However, you have to configure "W" correctly based on the
number of nodes in your replica set. Again, because a hard drive
could fail, even with journaling, you'll want to look at replication.
Then there's replication across data centers which is too long to get
into here. The last thing to consider is your requirement to "roll
back". From my understanding, MongoDB does not have this "roll back"
capacity. If you're doing a batch insert the best you'll get is an
indication of which elements failed.
Here's a link to the PHP driver on this one: http://it.php.net/manual/en/mongocollection.batchinsert.php You'll have to check the details on replication and the W parameter. I believe the same limitations apply here.
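For comparison, here is a pymongo sketch of the same idea (names are placeholders): a batch insert is not rolled back, but with an unordered insert you at least get a report of which documents failed.

from pymongo import MongoClient
from pymongo.errors import BulkWriteError

client = MongoClient("mongodb://localhost:27017/")
items = client.mydb.items

batch = [{"_id": 1}, {"_id": 2}, {"_id": 2}, {"_id": 3}]  # deliberate duplicate _id

try:
    # ordered=False keeps inserting past failures instead of stopping at the first one.
    items.insert_many(batch, ordered=False)
except BulkWriteError as exc:
    print("inserted:", exc.details["nInserted"])
    for err in exc.details["writeErrors"]:
        print("failed at index", err["index"], "->", err["errmsg"])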