Memcached flush_all does not empty slabs?

I am using the flush_all command to delete all the key/value pairs on my Memcached server.
While the values get deleted, there are two things I can't understand when looking at the Memcached server through phpMemcachedAdmin:
The memory usage is not reset to 0 after flushing everything. I still see 77% used and 22% wasted (just an example, but you get the idea). How is that possible?
All the previous slabs with their items are still there. For example, looking at a specific slab still shows all the previous key/value pairs, despite the flush_all command. How is that possible?
Thanks

This happens because memcached flushes on read, not on write. flush_all is designed for performance: it just means that anything stored before that moment will be treated as expired the next time it is read, even though it is still physically in the cache. The server only records a single timestamp and checks it on each fetch.
This optimization significantly improves memcached performance. Imagine what would happen if several other clients were reading or inserting at the same moment as a flush.
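A minimal sketch of that idea in Python (not memcached's actual implementation, just a toy in-process cache to illustrate lazy expiry on read):

import time

class LazyFlushCache:
    """Toy cache illustrating memcached-style lazy flushing."""

    def __init__(self):
        self._store = {}          # key -> (value, stored_at)
        self._flushed_at = 0.0    # timestamp of the last flush_all

    def set(self, key, value):
        self._store[key] = (value, time.time())

    def flush_all(self):
        # O(1): nothing is deleted, so reported memory usage stays the same
        self._flushed_at = time.time()

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, stored_at = item
        # Items stored before the flush are treated as expired on read
        if stored_at <= self._flushed_at:
            return None
        return value

This is why phpMemcachedAdmin still shows the old slabs and memory usage: the items are only reclaimed later, when they are read (and found expired) or when their memory is reused.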

Related

PostgreSQL Query Caching Logic

So I have a database with 140 million entries. The first query takes around 500ms; repeating the query takes 8ms. So there is some caching.
I would like to learn more about how long this cache lives and under what conditions it is refreshed: by time, or by the number of new entries made to the table?
Adding ~1500 new entries to the table still leaves queries at around 10ms, but adding 150k resets the cache. So it could be both, or only the transaction count.
Does anyone have a good resource where I could get more information on this?
I made this gist to run the load tests.
PostgreSQL uses a simple clock sweep algorithm to manage the cache (“shared buffers” in PostgreSQL lingo).
Shared buffers are in shared memory, and all access to data is via this cache. An 8kB block that is cached in RAM is also called a buffer.
Whenever a buffer is used, its usage count is increased, up to a maximum of 5.
There is a free list of buffers with usage count 0. If the free list is empty, anybody who searches for a "victim" buffer to replace goes through shared buffers in a circular fashion and decreases the usage count of each buffer they encounter. If the usage count is already 0, the buffer gets evicted and reused.
If a buffer is "dirty" (it has been modified, but not written to disk yet), it has to be written out before it can be evicted. The majority of dirty buffers get written out during one of the regular checkpoints, and a few of them get written out by the background writer between checkpoints. Occasionally a normal worker process has to do that as well, but that should be an exception.
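A rough sketch of the clock-sweep idea in Python (greatly simplified; the class and method names are made up for illustration, this is not PostgreSQL's actual code):

MAX_USAGE = 5

class Buffer:
    def __init__(self):
        self.usage_count = 0
        self.dirty = False        # modified but not yet written to disk

class ClockSweep:
    """Toy model of PostgreSQL's clock-sweep buffer replacement."""

    def __init__(self, n_buffers):
        self.buffers = [Buffer() for _ in range(n_buffers)]
        self.hand = 0             # position of the "clock hand"

    def touch(self, buf):
        # Called whenever a buffer is used
        buf.usage_count = min(buf.usage_count + 1, MAX_USAGE)

    def find_victim(self):
        # Sweep circularly, decrementing usage counts until one reaches 0
        while True:
            buf = self.buffers[self.hand]
            self.hand = (self.hand + 1) % len(self.buffers)
            if buf.usage_count == 0:
                if buf.dirty:
                    self.write_out(buf)   # dirty buffers must be flushed first
                return buf                # evict and reuse this buffer
            buf.usage_count -= 1

    def write_out(self, buf):
        buf.dirty = False                 # stand-in for writing the 8kB block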

Mongodb update guarantee using w=0

I have a large collection with more than half a million docs, which I need to update continuously. To achieve this, my first approach was to use w=1 to ensure the write result, which causes a lot of delay.
collection.update(
    {'_id': _id},
    {'$set': data},
    w=1
)
So I decided to use w=0 in my update method, and now the performance is significantly faster.
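Roughly like this (a sketch of the same call with w=0, using the legacy update API from the snippet above):

collection.update(
    {'_id': _id},
    {'$set': data},
    w=0  # fire-and-forget: the driver does not wait for acknowledgement
)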
Given my past bitter experience with MongoDB, I'm not sure all the updates are guaranteed to be applied when w=0. My question is: are updates guaranteed with w=0?
Edit: Also, I would like to know how it works. Does it create an internal queue and perform the updates asynchronously one by one? Using mongostat, I saw that some updates were still being processed even after the Python script quit. Or is the update instant?
Edit 2: According to Sammaye's answer, link, any error can cause a silent failure. But what happens under a heavy load of updates? Do some updates fail then?
No, w=0 can fail, it is only:
http://docs.mongodb.org/manual/core/write-concern/#unacknowledged
Unacknowledged is similar to errors ignored; however, drivers will attempt to receive and handle network errors when possible.
Which means that the write can fail silently within MongoDB itself.
It is not reliable if you want a specific guarantee that the write happened. At the end of the day, if you wish to touch the database and get an acknowledgment from it, then you must wait; laws of physics.
Does w:0 guarantee an update?
As Sammaye has written: no, since there might be a time when the data has only been applied to the in-memory data set and has not yet been written to the journal. So if there is an outage during this window, which, depending on the configuration, is somewhere between 10ms (with j:1 and the journal and the data files living on separate block devices) and 100ms by default, your update may be lost.
Please keep in mind that illegal updates (such as changing the _id of a document) will silently fail.
How does the update work with w:0?
Assuming there are no network errors, the driver will return as soon as it has sent the operation to the mongod/mongos instance with w:0. But let's look a bit further to give you an idea of what happens under the hood.
Next, the update will be processed by the query optimizer and applied to the in-memory data set. After successful application of the operation, a write with write concern w:1 would return now. The applied operations are synced to the journal every commitIntervalMs, which is divided by 3 with write concern j:1. If you have a write concern of {j:1}, the driver will return after the operations are stored in the journal successfully. Note that there are still edge cases in which data that made it to the journal won't be applied to replica set members, in case a very "well" timed outage occurs at this point.
By default, every syncPeriodSecs, the data from the journal is applied to the actual data files.
Regarding what you saw in mongostat: its granularity isn't very high, so you may well see operations that took place in the past. As discussed, the update to the in-memory data isn't instant, as the update first has to pass through the query optimizer.
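For illustration, here is how the behaviours discussed above map onto pymongo write concerns (a sketch; the database, collection and document values are made up, and it assumes pymongo 3.x or later):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient()                 # assumes a local mongod
coll = client.mydb.mycoll              # hypothetical database/collection
_id, data = 42, {'status': 'processed'}

# w:0 - returns as soon as the message has been handed to the socket
coll.with_options(write_concern=WriteConcern(w=0)) \
    .update_one({'_id': _id}, {'$set': data})

# w:1 - returns once the update has been applied to the in-memory data set
coll.with_options(write_concern=WriteConcern(w=1)) \
    .update_one({'_id': _id}, {'$set': data})

# j:1 - returns only after the operation has been written to the journal
coll.with_options(write_concern=WriteConcern(w=1, j=True)) \
    .update_one({'_id': _id}, {'$set': data})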
Will heavy load make updates silently fail with w:0?
In general, it is safe to say "no", and here is why:
For each connection, there is a certain amount of RAM allocated. If the load were so high that mongod could not allocate any further RAM, there would be a connection error, which is dealt with regardless of the write concern, except for unacknowledged writes.
Furthermore, applying updates to the in-memory data is extremely fast, most likely still faster than they come in if we are talking about load peaks. If mongod were totally overloaded (e.g. 150k updates a second on a standalone mongod with spinning disks), problems might occur, of course, though even that is usually mitigated, from a durability point of view, by the underlying OS.
However, updates may still silently disappear in case of an outage when the write concern is w:0,j:0 and the outage happens in the window before the update is synced to the journal.
Notes:
The optimal balance between maximum performance and minimal guaranteed durability is a write concern of j:1. With a proper setup, you can reduce the latency to slightly over 10ms.
To further reduce the latency per update, it might be worth having a look at bulk write operations, if those apply to your use case; see the sketch after these notes. In my experience, they apply more often than not. Please read and try before dismissing the idea.
Doing write operations with w:0,j:0 is highly discouraged if you expect any guarantee of data durability. Use at your own risk. This write concern is only meant for "cheap" data that is easy to re-obtain, or where speed concerns exceed the need for durability. Collecting real-time weather data at a large scale would be an example: the system still works even if one or two data points are missing here and there. For most applications, durability is a concern. Conclusion: use w:1,j:1 at least for durable writes.
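A sketch combining the two notes above (a journaled write concern plus bulk writes) with pymongo; the names and ids are illustrative:

from pymongo import MongoClient, UpdateOne
from pymongo.write_concern import WriteConcern

client = MongoClient()                              # assumes a local mongod
coll = client.mydb.mycoll.with_options(             # hypothetical names
    write_concern=WriteConcern(w=1, j=True))        # durable, as recommended

# Batch many updates into a few round trips instead of one request per doc
requests = [
    UpdateOne({'_id': doc_id}, {'$set': {'status': 'processed'}})
    for doc_id in range(1000)                       # made-up ids
]
result = coll.bulk_write(requests, ordered=False)
print(result.modified_count)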

MongoDB Write and lock processes

I've been reading a lot about MongoDB recently, but one topic I can't find any clear material on is how data is written to the journal and oplog.
So this is what I understand of the process so far; please correct me where I'm wrong:
A client connects to mongod and performs a write. The write is stored in the socket buffer.
When Mongo is available (not sure what "available" means at this point), the data is written to the journal?
The MongoDB docs then say that every 60 seconds writes are flushed from the journal onto disk. By this I can only assume they mean written to the primary's data files and the oplog. If this is the case, how do writes appear earlier than the 60-second sync interval?
Some time later, secondaries pull data from the primary or their sync source and update their oplog and databases. It seems very vague when exactly this happens and what delays it.
I'm also wondering: if journaling were disabled (I understand that's a really bad idea), at what point would the oplog and database get updated?
Lastly, I'm a bit stumped about which points in this process the write locks get taken. Is it just when the database and oplog are updated, or at other times too?
Thanks to anyone who can shed some light on this or point me to some reading material.
Simon
Here is what happens as far as I understand it. I simplified a bit, but it should make clear how it works.
A client connects to mongo. No writes are done so far, and no connection is torn down, because what happens now really depends on the write concern. Let's assume that we go with the (at the time of this writing) default "acknowledged".
The client sends its write operation. Here is where I am really not sure: either after this step or the next one, the acknowledgement is sent to the driver.
The write operation is run through the query optimizer. It is here that the acknowledgment is sent, because with an acknowledged write concern you may be returned a duplicate key error. It is possible that this was checked in the previous step; if I had to bet, I'd say it happens after this one.
The output of the query optimizer is then applied to the data in memory; more precisely, to the data of the memory-mapped data files, to the memory-mapped oplog and to the journal's memory-mapped files. Queries are answered from these memory-mapped parts, or the corresponding data is mapped into memory to answer the query. The oplog is read from memory if present, too.
Every 100ms, in general, the journal is synced to disk. The precise value is determined by a number of factors, one of them being the journalCommitInterval configuration parameter. If you have a write concern of journaled, the driver will be notified now.
Every syncDelay seconds, the current state of the memory-mapped files is synced to disk. I think the journal is then truncated to the entries that haven't been applied to the data yet, but I am not too sure about that, since it should basically never happen that data in the journal isn't yet applied to the current data.
If you have read carefully, you noticed that the data is ready for the oplog as soon as it has been run through the query optimizer and applied to the files mapped into memory. When the oplog entry is pulled by one of the secondaries, it is immediately applied to its own memory-mapped files and synced to disk the same way as on the primary.
Some things to note: as soon as the relatively small data is written to the journal, it is quite safe. If a node goes down between two syncs to the data files, both the data files and the oplog can be restored from their last state in the data files and the journal. In general, the maximum data loss you can have is the operations recorded into the journal after the last commit, around 50ms in the median case.
As for the locks: if you have read carefully, there are no locks imposed on a database level when the data is synced to disk. Write locks may be taken in order to ensure that only one thread at any given point in time modifies a given document. There are other write locks possible, but in general they should be rather rare.
Write locks on the filesystem layer are created once, though only implicitly, iirc. During application startup, a lock file is created in the root directory of the dbpath. Any other mongod instance will refuse to do any operation on those data files while a valid lock exists. And you shouldn't either ;)
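As a small illustration of that last point, this is roughly what you could check yourself (the dbpath is a made-up example; mongod does this internally and far more robustly):

import os

dbpath = '/var/lib/mongodb'                     # hypothetical dbpath
lock_file = os.path.join(dbpath, 'mongod.lock')

# A non-empty lock file usually means another mongod holds these data files
if os.path.exists(lock_file) and os.path.getsize(lock_file) > 0:
    print('These data files appear to be locked by a running mongod.')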
Hope this helps.

Does MongoDB journaling guarantee durability?

Even if journaling is on, is there still a chance to lose writes in MongoDB?
"By default, the greatest extent of lost writes, i.e., those not made to the journal, are those made in the last 100 milliseconds."
This is from Manage Journaling, which indicates you could lose writes made since the last time the journal was flushed to disk.
If I want more durability, "To force mongod to commit to the journal more frequently, you can specify j:true. When a write operation with j:true is pending, mongod will reduce journalCommitInterval to a third of the set value."
Even in this case, it looks like flushing the journal to disk is asynchronous so there is still a chance to lose writes. Am I missing something about how to guarantee that writes are not lost?
Posting a new answer to clean this up. I performed tests and read the source code again, and I'm sure the confusion comes from an unfortunate sentence in the write concern documentation. With journaling enabled and a j:true write concern, the write is durable, and there is no mysterious window for data loss.
Even if journaling is on, is there still a chance to lose writes in MongoDB?
Yes, because durability also depends on the individual operation's write concern.
"By default, the greatest extent of lost writes, i.e., those not made to the journal, are those made in the last 100 milliseconds."
This is from Manage Journaling, which indicates you could lose writes made since the last time the journal was flushed to disk.
That is correct. The journal is flushed by a separate thread asynchronously, so you can lose everything since the last flush.
If I want more durability, "To force mongod to commit to the journal more frequently, you can specify j:true. When a write operation with j:true is pending, mongod will reduce journalCommitInterval to a third of the set value."
This confused me, too. Here's what it means:
When you send a write operation with j:true, it doesn't trigger the disk flush immediately, and it doesn't happen on the network thread. That makes sense, because there could be dozens of applications talking to the same mongod instance. If every application used journaling a lot, the db would be very slow because it would be fsyncing all the time.
Instead, what happens is that the 'durability thread' takes all pending journal commits and flushes them to disk. The thread is implemented like this (comments mine):
sleepmillis(oneThird); //dur.cpp, line 801
for( unsigned i = 1; i <= 2; i++ ) {
    // break, if any j:true write is pending
    if( commitJob._notify.nWaiting() )
        break;
    // or the number of bytes is greater than some threshold
    if( commitJob.bytes() > UncommittedBytesLimit / 2 )
        break;
    // otherwise, sleep another third
    sleepmillis(oneThird);
}
// fsync all pending writes
durThreadGroupCommit();
So a pending j:true operation will cause the journal commit thread to commit earlier than it normally would, and it will commit all pending writes to the journal, including those that don't have j:true set.
Even in this case, it looks like flushing the journal to disk is asynchronous so there is still a chance to lose writes. Am I missing something about how to guarantee that writes are not lost?
The write (or the getLastError command) with a j:true journaled write concern will wait for the durability thread to finish syncing, so there's no risk of data loss (as far as the OS and hardware guarantee that).
The sentence "However, there is a window between journal commits when the write operation is not fully durable" probably refers to a mongod running with journaling enabled that accepts a write that does NOT use the j:true write concern. In that case, there's a chance of the write getting lost since the last journal commit.
I filed a docs bug report for this.
Maybe. Yes, it waits for the data to be written, but according to the docs 'there is a window between journal commits when the write operation is not fully durable', whatever that means. I couldn't find out what they refer to.
I'm leaving the edited answer here, but I reversed myself back and forth, so it's a bit confusing:
This is a bit tricky, because there are a lot of levers you can pull:
Your MongoDB setup
Assuming that journaling is activated (default for 64 bit), the journal will be committed in regular intervals. The default value for the journalCommitInterval is 100ms if the journal and the data files are on the same block device, or 30ms if they aren't (so it's preferable to have the journal on a separate disk).
You can also change the journalCommitInterval to as little as 2ms, but it will increase the number of write operations and reduce overall write performance.
The Write Concern
You need to specify a write concern that tells the driver and the database to wait until the data is written to disk. However, this won't wait until the data has been actually written to the disk, because that would take 100ms in a bad-case scenario with the default setup.
So, at the very best, there's a 2ms window where data can get lost. That's insufficient for a number of applications, however.
The fsync command forces a disk flush of all data files, but that's unnecessary if you use journaling, and it's inefficient.
Real-Life Durability
Even if you were to journal every write, what is it good for if the datacenter administrator has a bad day and uses a chainsaw on your hardware, or the hardware simply disintegrates itself?
Redundant storage, not on a block-device level like RAID but on a much higher level, is a better option for many scenarios: have the data in different locations or at least on different machines using a replica set, and use the w:majority write concern with journaling enabled (journaling will only apply on the primary, though). Use RAID on the individual machines to increase your luck.
This offers the best tradeoff of performance, durability and consistency. Also, it allows you to adjust the write concern for every write and has good availability. If the data is queued for the next fsync on three different machines, it might still be 30ms to the next journal commit on any of the machines (worst case), but the chance of three machines going down within the 30ms interval is probably a millionfold lower than the chainsaw-massacre-admin scenario.
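As a sketch of that recommendation in pymongo (the connection string, names and timeout are made up for illustration):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Hypothetical three-member replica set
client = MongoClient('mongodb://node1,node2,node3/?replicaSet=rs0')

coll = client.mydb.events.with_options(
    write_concern=WriteConcern(w='majority', j=True, wtimeout=5000))

# Returns only once a majority of members have acknowledged the write and it
# has been journaled (on the primary, per the note above); wtimeout caps the wait.
coll.insert_one({'sensor': 'station-42', 'temp_c': 21.3})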
Evidence
TL;DR: I think my answer above is correct.
The documentation can be a little confusing, especially with regard to wtimeout, so I checked the source. I'm not an expert on the mongo source, so take this with a grain of salt:
In write_concern.cpp, we find (edited for brevity):
if ( cmdObj["j"].trueValue() ) {
    if( !getDur().awaitCommit() ) {
        // --journal is off
        result->append("jnote", "journaling not enabled on this server");
    } // ...
}
else if ( cmdObj["fsync"].trueValue() ) {
    if( !getDur().awaitCommit() ) {
        // if get here, not running with --journal
        log() << "fsync from getlasterror" << endl;
        result->append( "fsyncFiles" , MemoryMappedFile::flushAll( true ) );
    }
}
Note the call MemoryMappedFile::flushAll( true ) if fsync is set. This call is clearly not in the first branch. Otherwise, durability is handled on a separate thread (relevant files are prefixed dur_).
That explains what wtimeout is for: it refers to the time waiting for slaves, and has nothing to do with I/O or fsync on the server.
Journaling is for keeping the data on a particular mongod in a consistent state, even in case of chainsaw madness; however, with client settings through the write concern it can be used to force durability. About write concern: DOCS.
There is an option, j:1, which you can read about here, which ensures that the particular write operation waits for acknowledgment until it is written to the journal file on disk (so not just in the memory map). However, this doc says the opposite. :) I would vote for the first case; it makes me feel more comfortable.
If you run lots of commands with this option, MongoDB will adapt the commit interval of the journal to speed things up; you can read about it here: DOCS (the one you also mentioned). As others have already said, you can specify an interval between 2 and 300ms.
Durability is, in my opinion, much better ensured by the w:2 option: if the update/write operation is acknowledged by two members of a replica set, it is really unlikely (though not impossible) to lose both within the same minute (the data file flush interval).
Using both options means that when the operation is acknowledged by the database cluster, it resides in memory on two different boxes, and on one of them it is also in a consistent, recoverable place on disk.
Generally, lost writes are an issue in every system where there is buffering/caching/delayed writing involved between a system's runtime and the permanent (non-volatile) storage, even at the OS level (for example, write-behind caching). So there is always a chance to lose writes: even if your concrete provider (MongoDB) offers functionality for transaction durability, it's the underlying OS that is responsible for ultimately writing the data, and even then there is caching at the device level... And that's just the lower levels; making the system highly concurrent, distributed and performant only makes matters worse.
In short, there is no absolute durability, only practical/eventual/hope-for-the-best durability, especially with a NoSQL storage like Mongo, which wasn't primarily made for consistency and durability in the first place.
I would have to agree with Sammaye that journaling has little to do with durability. However, if you want to know whether you can really trust MongoDB to store your data with good consistency, then I would suggest that you read this blog post. There is a reply from 10gen regarding that post, and a reply from the author to the 10gen post. I would suggest that you read them to make an educated decision. It took me some time to understand all the details on my own, but these posts have the basics covered.
The response to the blog post was given here by 10gen, the company that makes MongoDB.
And the response to that response was given by the professor in this post.
It explains a lot about how MongoDB shards data, how it actually functions, and the performance hit it takes if you add extra safety locks. I would say these three writings are the best and by far the most comprehensive material out there on the benefits and drawbacks of MongoDB. If you think it's one-sided, look at the comments and see what people had to say, because if something received a reply from the company that made the software, then it must have made at least some good points.

Memcached LRU and expiry

If an item in memcached is set to never expire, is it exempt from LRU eviction?
The docs that I've seen don't paint a clear picture as to which takes precedence. In my mind, it would be ideal (perhaps very complicated internally) to have LRU apply only to items that have an expiry > 0.
No, it is not exempt. Memcached is a cache, not persistent storage. Any item within it, or the entire cache itself may disappear at any moment (but it's not likely unless it's full, or there's a major problem).
Under heavy memory pressure, the LRU algorithm will remove whatever it feels necessary.
What is memcached's cache?
The cache structure is an LRU (Least Recently Used), plus expiration timeouts. When you store items into memcached, you may state how long they should be valid in the cache, which is either forever or some time in the future. If the server is out of memory, expired slabs are replaced first, then the oldest unused slabs go next.
If the system has no areas of expired data, it will throw away the least recently used block (slab) of memory.
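A toy sketch of that eviction order in Python (expired entries are reclaimed first, then the least recently used; this is not memcached's slab allocator):

import time
from collections import OrderedDict

class TinyLRU:
    """Toy cache: evicts expired entries first, then the least recently used."""

    def __init__(self, max_items):
        self.max_items = max_items
        self._items = OrderedDict()   # key -> (value, expires_at or None)

    def set(self, key, value, ttl=0):
        expires_at = time.time() + ttl if ttl > 0 else None  # 0 == never expires
        if key in self._items:
            del self._items[key]
        elif len(self._items) >= self.max_items:
            self._evict_one()
        self._items[key] = (value, expires_at)

    def get(self, key):
        item = self._items.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and expires_at <= time.time():
            del self._items[key]      # lazily drop expired entries
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return value

    def _evict_one(self):
        now = time.time()
        for key, (_, expires_at) in self._items.items():
            if expires_at is not None and expires_at <= now:
                del self._items[key]  # prefer reclaiming expired entries
                return
        self._items.popitem(last=False)  # otherwise drop the LRU entry

Note that, as in memcached, an entry stored with ttl=0 never expires but can still be evicted once the cache is full.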
The docs say that when expirezero_does_not_evict is set to 'true', items with an exptime of 0 cannot be evicted.