If an item in memcached is set to never expire, is it exempt from LRU eviction?
The docs that I've seen don't paint a clear picture as to which takes precedence. In my mind, it would be ideal (perhaps very complicated internally) to have LRU only apply to items that had an expiry > 0.
No, it is not exempt. Memcached is a cache, not persistent storage. Any item within it, or the entire cache itself, may disappear at any moment (though that's unlikely unless the cache is full or there's a major problem).
Under heavy memory pressure, the LRU algorithm will remove whatever it deems necessary.
What is memcached's cache?
The cache structure is an LRU (Least Recently Used) list plus expiration timeouts. When you store an item in memcached, you may state how long it should remain valid in the cache: either forever or until some time in the future. If the server runs out of memory, expired slabs are replaced first, then the oldest unused slabs go next.
If the system has no areas of expired data, it will throw away the least recently used block (slab) of memory.
The docs say that when expirezero_does_not_evict is set to 'true', items with an exptime of 0 cannot be evicted.
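To make the default behaviour concrete, here is a minimal sketch assuming the pymemcache Python client and a local server; the server-side option is only noted in a comment because its availability depends on the memcached version:

# A minimal sketch, assuming the pymemcache client and memcached on localhost:11211.
# By default, an item stored with expire=0 ("never expire") is still fair game
# for the LRU. Protecting such items would require starting the server with
# something like "memcached -o lru_maintainer,expirezero_does_not_evict"
# (assumed flags; availability depends on your memcached version).
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

# expire=0 means "never expire", not "never evict".
client.set("config:site", "keep-me-around", expire=0)
print(client.get("config:site"))  # b'keep-me-around' (until memory pressure evicts it)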
Really, I only want to know what slabs are in memcached. It would be better if someone who works with it could answer me.
Thanks for your answers...
Applications that run for long periods of time, like memcached, run into memory fragmentation issues the longer the service runs. On top of that, caching applications have the added complication that some pieces of memory have been cached for a long time while other pieces were allocated only recently.
Memcached has a "slab" allocator that attempts to reduce memory fragmentation in the memcached process. At a high level, a slab is a 1MB piece of memory that contains the values of the key-value pairs you store in memcached. There are different slabs for different value sizes: there might be a slab for 16B values, a slab for 32B values, a slab for 1024B values, and so on. When a new key-value pair is added, memcached puts the value in the smallest slab that will hold it. By allocating memory like this, memcached is able to reduce fragmentation and, as a result, the overall amount of memory it uses.
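To illustrate the idea, here is a toy approximation in Python (not memcached's actual source); the starting chunk size is an assumption, while the 1.25 growth factor mirrors memcached's default -f setting:

# Toy approximation of memcached's slab size classes (not the real implementation).
def slab_classes(min_chunk=96, growth=1.25, slab_page=1024 * 1024):
    """Yield chunk sizes growing by `growth` up to the 1MB slab page size."""
    size = min_chunk
    while size < slab_page:
        yield size
        size = int(size * growth)

def pick_class(value_size, classes):
    """Return the smallest chunk size that fits the value."""
    for chunk in classes:
        if value_size <= chunk:
            return chunk
    return None  # larger than a slab page: memcached cannot store it in a slab

classes = list(slab_classes())
print(pick_class(100, classes))   # e.g. 120: smallest class that fits 100B
print(pick_class(5000, classes))  # a larger class; the unused remainder is "wasted"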
Slabs and the slab allocator are internal memcached implementation details. You can get information about them through the stats command, but unless you're trying to debug an issue with memcached itself, inspecting the slab information is unlikely to be useful.
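If you do want to look at them, here is a short sketch, assuming the pymemcache client (the raw protocol equivalent is sending "stats slabs" to the server):

# Sketch, assuming pymemcache: dump the per-slab statistics.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
for stat, value in client.stats("slabs").items():
    print(stat, value)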
For more details about slabs and the slab allocator, see the blog post linked below.
https://holmeshe.me/understanding-memcached-source-code-I/
If you're particularly interested in this kind of architecture then look into how memory allocators work in general since the concepts are similar.
How advantageous is value eviction over full eviction? In the case of value eviction, I assume the metadata is present in RAM. How does the presence of metadata help in quicker retrieval of content? Are NRU documents evicted when the high watermark is reached? Are there any aspects we need to consider before changing the eviction policy from value eviction to full eviction?
Value eviction keeps all document metadata in memory; full eviction does not.
Let's say you do a get on a key that does not exist. In value eviction mode you instantly know that the key is not there, since it is a memory-only operation. In full eviction mode, if the metadata for that key is not in memory, then you have to do a disk fetch to be sure that the key does not exist.
Basically, any operation that requires knowledge of a key's metadata may require a background fetch. Some other operations that may be slow if the metadata is not in memory are CAS sets (check-and-set: only set if the value has not changed), appends, incrs, decrs, and prepends. Also keep in mind that the extra disk activity may cause disk contention that affects other parts of Couchbase.
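As a toy model (plain Python, not Couchbase SDK or server code), the difference for a miss looks roughly like this:

# Toy model only; this is not Couchbase SDK or server code.
class ValueEvictionBucket:
    """Every key's metadata stays resident; only values may be ejected."""
    def __init__(self, meta, values):
        self.meta = meta      # key -> metadata, always in memory
        self.values = values  # key -> value, possibly ejected to disk

    def get(self, key):
        if key not in self.meta:
            return None       # memory-only answer: the key definitely does not exist
        return self.values.get(key, "<background fetch of the value from disk>")

class FullEvictionBucket:
    """Metadata itself may be ejected, so absence cannot be proven in memory."""
    def __init__(self, resident):
        self.resident = resident  # only the in-memory portion of metadata

    def get(self, key):
        if key in self.resident:
            return self.resident[key]
        return "<disk fetch needed just to learn whether the key exists>"

print(ValueEvictionBucket(meta={"a": {}}, values={}).get("missing"))  # None, no disk I/O
print(FullEvictionBucket(resident={}).get("missing"))                 # must touch disk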
The NRU is the same across both full eviction and value eviction and Couchbase will do its best to keep your working set in memory.
I would recommend trying to get an idea of what your workload looks like before switching modes, and test it out with full eviction, because you may see performance issues that will vary depending on your workload.
In addition to Mike's answer, it's worth mentioning bloom filters here, which are a very powerful feature of Couchbase and can decrease the trips to disk significantly. Bloom filters are also enabled in value-only ejection, but Couchbase really leverages their functionality in full ejection mode. I was in a situation where the system had reached its limit with value-only ejection buckets; I tested the two eviction modes, and full ejection ended up being far superior - at least for my case.
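For intuition, here is a minimal bloom filter sketch in Python (illustrative only; this is not Couchbase's implementation). The point is that it can answer "definitely not present" from memory, so most lookups for missing keys never touch the disk:

# Minimal bloom filter sketch (illustrative; not Couchbase's implementation).
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one integer

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means "definitely absent" (skip the disk); True means "maybe".
        return all(self.bits & (1 << pos) for pos in self._positions(key))

bf = BloomFilter()
bf.add("doc::123")
print(bf.might_contain("doc::123"))  # True
print(bf.might_contain("doc::999"))  # almost certainly False: no disk fetch needed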
We are looking at writing log information to a MongoDB logging database but have essentially zero practical experience running Mongo in a production environment.
Every day we'll be writing a million+ log entries. Logs older than (say) a month need to be purged (say) daily. My concern is how Mongo will handle these deletes.
What are the potential issues with this plan with Mongo?
Do we need to chunk the deletes?
Given we'll be deleting by chronological age (ie: insert order), can I assume fragmentation will not be an issue?
Will the database need to be compacted regularly?
Potential issues: None, if you can live with eventual consistency.
No. A far better approach is to have an (ISO)Date field in your documents and set up a TTL index on it. Assuming the mentioned field holds the time at which the log entry was made, you would set up said index like:
db.yourCollection.createIndex(
    { "nameOfDateField": 1 },
    // 60 seconds * 60 minutes * 24 hours * 30 days (commercial month) = 2592000
    { "expireAfterSeconds": 2592000 }
)
This way, a mongod background thread (the TTL monitor) takes care of deleting the expired data, turning the collection into a sort of round-robin database. Fewer moving parts, less to care about. Please note that the documents will not be deleted the very second they expire; under the worst circumstances it can take up to 2 minutes from their time of expiration (iirc) before they are actually deleted. On average, an expired document should be deleted some 30 seconds after its expiration.
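For completeness, the same setup from application code, sketched with the pymongo driver (the database, collection, and field names are placeholders):

# Sketch with the pymongo driver; database, collection and field names are placeholders.
from datetime import datetime, timezone
from pymongo import MongoClient

coll = MongoClient()["logging"]["yourCollection"]

# TTL index: documents expire 30 days after the value in nameOfDateField.
coll.create_index("nameOfDateField", expireAfterSeconds=30 * 24 * 60 * 60)

# Each entry must carry a real Date value for the TTL monitor to act on.
coll.insert_one({
    "level": "INFO",
    "msg": "example entry",
    "nameOfDateField": datetime.now(timezone.utc),
})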
Compacting does not reclaim disk space on mmapv1, only on WiredTiger. Keep in mind that documents are never fragmented. Given the fun fact that the database being compacted will be locked, I have yet to find a proper use case for the compact command. If disk space is your concern: freed space in the data files will be reused. So yes, in a worst-case scenario you can have a few additional data files allocated. Since I don't know the project's requirements and details, it is you who must decide whether reclaiming a few GB of disk space is worth locking the database for extended periods of time.
You can configure MongoDB for log file rotation:
http://docs.mongodb.org/manual/tutorial/rotate-log-files/
You'd certainly be interested in the "Manage Journaling" section too:
http://docs.mongodb.org/manual/tutorial/manage-journaling/
My last suggestion is about the "smallfiles" option:
Set journal to false to prevent the overhead of journaling in situations where durability is not required. To reduce the impact of journaling on disk usage, you can leave journaling enabled and set smallfiles to true to reduce the size of the data and journal files.
I am using the flush_all command to delete all the key/value pairs on my Memcache server.
While the values get deleted, there are two things I can't understand when looking at the Memcache server through phpMemcachedAdmin:
The memory usage is not reset to 0 after flushing it all. I still have 77% used and 22% wasted (just an example, but you get the spirit). How is that possible?
All the previous slabs with the previous items are still there. For example, looking at a specific slab shows all the previous key/value pairs, despite the flush_all command. How is that possible?
Thanks
This happens because memcache flushes on read, not on write. flush_all is designed for performance: it just means anything read after that point in time will be instantly expired, even though it is still in the cache. The flush updates a single number, and that number is then checked on each fetch.
This optimization significantly improves memcache performance. Imagine if several other people are searching or inserting at the same time as a flush.
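You can observe this lazily expired state yourself. A small sketch, assuming the pymemcache Python client and a local server:

# Sketch, assuming pymemcache: reads miss immediately after flush_all,
# but the server's memory counters drop only as items are lazily reclaimed.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
client.set("greeting", "hello")

client.flush_all()                   # marks everything stored before "now" as expired

print(client.get("greeting"))        # None: the item is expired on read
print(client.stats().get(b"bytes"))  # often still nonzero right after the flush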
Does it make any sense in flushing the CPU cache manually, if it is implemented as a write-through cache?
When a word is written to a write-through (WT) cache by a store instruction, it is also sent to the next level of the memory hierarchy (see the cache entry at Wikipedia). Hence, cache blocks in a WT cache are clean, that is, coherent with their copies at the next level, and write-backs are not necessary.
WT invalidations could be required in case of Direct Memory Accesses (DMA) that make cache contents stale, but, as far as I know, these are not manual operations; they are OS- or hardware-driven.
Regarding manual flushing: for example, according to the Intel Architecture Software Developer's Manual (Volume 2, Instruction Set Reference):
WBINVD — Write Back and Invalidate Cache This instruction writes back all modified cache lines in the processor’s internal cache to main memory and invalidates (flushes) the internal caches.
So I think that, in case of a WT cache, this instruction just invalidates all the cache lines.