How to find the keys being evicted from memcache?

Is there any built-in way, or a hack, by which I can find out which keys are being evicted from memcache?
One solution is to poll for all possible keys inserted into memcache (e.g. with a multi-get), but that is inefficient and certainly not practical for a large number of keys.
The functionality does not need to run in production, only during benchmarking and optimization runs.

Not possible AFAIK, but a really good (and simple) workaround is to modify your memcached client library and add a print (or whatever logging you want) in the delete and multi-delete methods. You can then see the keys that are being deleted (both by your app and by the library itself). I hope that helps.
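For illustration, a minimal sketch of that idea in Python, assuming the pymemcache client (any client can be wrapped the same way; the class name is made up). Note this only shows deletes issued through the client; evictions done inside the memcached server itself won't show up here:

    # Minimal sketch, assuming pymemcache: wrap the delete methods so
    # every key deleted through this client gets logged.
    from pymemcache.client.base import Client

    class LoggingClient(Client):
        def delete(self, key, *args, **kwargs):
            print("deleting key:", key)          # or append to a log file
            return super().delete(key, *args, **kwargs)

        def delete_many(self, keys, *args, **kwargs):
            for key in keys:
                print("deleting key:", key)
            return super().delete_many(keys, *args, **kwargs)

    client = LoggingClient(("localhost", 11211))
    client.set("foo", "bar")
    client.delete("foo")                         # prints: deleting key: foo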

Related

memcached-like software with disk persistence

I have an application that runs on Ubuntu Linux 12.04 which needs to store and retrieve a large number of large serialized objects. Currently the store is implemented by simply saving the serialized streams as files, where the filename equals the md5 hash of the serialized object. However, I would like to speed things up by replacing the file store with one that does in-memory caching of recently read/written objects, and preferably does the hashing for me.
The design of my application should not get any more complicated. Hence what I am after is a storage back-end that manages a key-value database and caching in an abstracted and efficient way. I am a bit lost among all of the key/value stores that are out there, and much of the information seems to be outdated. I was initially looking at something like memcached+membase, but maybe there are better solutions out there. I looked into redis, mongodb, and couchdb, but it is not quite clear to me whether they fit my needs.
My most important requirements:
Transparent saving to a persistent store, such that the most recently written/read objects are quickly available by automatically caching them in memory.
The store should survive a reboot, hence in-memory objects should be saved to disk ASAP.
Currently I am calculating the md5 manually. It would be nicer if the back-end did this for me: the ability to get the hash key when an object is stored, and to retrieve the object later using that hash key.
A big plus would be packages available for Ubuntu 12.04, either in universe or through Launchpad or wherever.
Other than this, the software should preferably be lightweight and not more complicated than necessary (I don't need distributed map-reduce jobs, etc.).
Thanks for any advice!
I would normally suggest Redis because it is fast and in-memory, with an asynchronous persistent store. Plus you'll find you can use its different data types for other purposes, so it is not as single-purpose as memcached. As for auto-hashing, I don't think it does that, since you define your own keys when you store objects (as in most of these systems).
One downside to Redis is that if you're storing a TON of binary objects, you'll be limited to available RAM (unless you shard), so you could hit performance limitations. In that case you could store the objects on the file system, hash them, store the keys in Redis, and map each key to the filename on the file server, and you'd be fine.
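A minimal sketch of the do-the-hashing-yourself approach in Python, assuming the redis-py library (the function names are just for illustration):

    # Minimal sketch, assuming redis-py: hash the serialized object
    # yourself (Redis won't do it for you) and use the digest as the key.
    import hashlib
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def store(blob: bytes) -> str:
        key = hashlib.md5(blob).hexdigest()   # content-addressed key
        r.set(key, blob)
        return key

    def fetch(key: str) -> bytes:
        return r.get(key)

    key = store(b"serialized object bytes")
    assert fetch(key) == b"serialized object bytes"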
--
An alternative option would be to check out ElasticSearch, which is like Mongo in that it stores objects natively as JSON, but it includes the Lucene search engine on top with a RESTful API. It "warms up" data in memory for fast response, but is also a persistent store, and the nicest part is that it auto-shards and auto-clusters, using multicast to find other nodes.
--
Hope that helps and if so, share the love! ;-)
I'd look at MongoDB. It caches things efficiently by using your OS to page data in and out, and it is pretty simple to set up. Redis and memcached won't be good solutions for you because they keep everything in RAM. Other, simpler solutions like LevelDB or BDB would probably also be suitable. I don't think any database is going to compute hashes automatically for you, but it sounds like you already have code for this.

Key-value store for Ruby & Java

I need a recommendation for a key-value store. Here are my criteria:
Doesn't have to be persistent, but needs to support lots of records (records are small, 100-1000 bytes)
Inserts (puts) will happen only occasionally, always in large datasets (bulk)
Gets will be random and need to be fast
Clients will be in Ruby and perhaps Java
It should be relatively easy to set up and need as little maintenance as possible
Redis sounds like the right thing to use here. It's all in memory, so it's very fast (the GET and SET operations are both O(1)), and it supports both Ruby and Java clients.
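A minimal sketch of that access pattern in Python, assuming the redis-py library (key names are made up); the occasional bulk inserts can be pipelined to cut round trips:

    # Minimal sketch, assuming redis-py; pipelining matches the
    # "bulk insert, fast random get" pattern from the question.
    import redis

    r = redis.Redis()

    # Bulk insert: batch many SETs into a single round trip.
    with r.pipeline(transaction=False) as pipe:
        for i in range(10_000):
            pipe.set(f"record:{i}", b"payload")
        pipe.execute()

    # Random gets are O(1) lookups.
    value = r.get("record:1234")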
Aerospike would be a perfect fit, for the reasons below:
Key-value based, with clients available in Java and Ruby.
Throughput: better than Redis/Mongo/Couchbase or any other NoSQL solution. See http://www.aerospike.com/blog/use-1-aerospike-server-not-12-redis-shards/. I have personally seen it work fine with more than 300k read TPS and 100k write TPS concurrently.
Automatic and efficient data sharding, rebalancing and distribution using RIPEMD-160.
Highly available in case of failover and/or network partitions.
Open source from version 3.0.
Can be used in caching mode with no persistence.
Supports LRU eviction and TTLs.
Little or no maintenance.
An AVL tree will give you O(log n) insert, remove, search and most other operations.
Criteria 1 and 3 both scream "database engine".
If your number of records isn't insane and you only have one client using this thing at a time, I would personally recommend SQLite, which works with both Java and Ruby (and would also pass #5). Otherwise go with a real database system like MySQL (since you're not on the Microsoft stack).
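A minimal sketch of the SQLite route in Python's standard library (table and key names are made up), matching the bulk-insert / random-get pattern from the question:

    # Minimal sketch: SQLite as a key-value store via Python's stdlib.
    import sqlite3

    db = sqlite3.connect("store.db")
    db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)")

    # Bulk insert (criterion 2): one transaction for the whole dataset.
    rows = ((f"key{i}", b"payload") for i in range(10_000))
    with db:
        db.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)", rows)

    # Random get (criterion 3): indexed lookup on the primary key.
    (v,) = db.execute("SELECT v FROM kv WHERE k = ?", ("key123",)).fetchone()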

why memcached instead of hashmap

I am trying to understand what the need would be for a solution like memcached. It may seem like a silly question, but what does it bring to the table if all I need is to cache objects? Won't a simple hashmap do?
Quoting from the memcached web site, memcached is…

Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages.

At heart it is a simple key/value store.
A key word here is distributed. Quoting from the memcached site again:

Memcached servers are generally unaware of each other. There is no crosstalk, no synchronization, no broadcasting. The lack of interconnections means adding more servers will usually add more capacity as you expect. There might be exceptions to this rule, but they are exceptions and carefully regarded.
I would highly recommend reading the detailed description of memcache.
Where are you going to put this hashmap? That's what memcached is doing for you. Any structure you build in PHP lives only until the request ends. If you put stuff in a persistent cache, you can fetch it back out in other requests instead of rebuilding the data.
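A minimal sketch of the difference in Python, assuming the pymemcache client (key and value are made up): a value set by one process is visible to a completely separate process handling a later request, which no in-process hashmap can do:

    # Minimal sketch, assuming pymemcache: the cached value outlives the
    # process that stored it and is shared across processes/requests.
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))
    client.set("user:42:profile", b"cached page fragment", expire=600)

    # ...later, in a completely separate process handling another request:
    client2 = Client(("localhost", 11211))
    print(client2.get("user:42:profile"))   # b'cached page fragment'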
I know this question is rather old, but in addition to being able to share a cache across multiple servers, there is another aspect not mentioned in the other answers: value expiration.
If you store values in a HashMap bound to the application context, it will keep growing in size unless you expire items somehow. Memcached expires objects lazily for maximum performance.
When an item is added to memcached, it can have an expiration time, for instance 600 seconds. After the item has expired it just remains in memory, but if a client asks for it, memcached purges it and returns null.
Similarly, when memcached's memory is full, it looks for the first expired item of adequate size and evicts it to make room for the new item. Lastly, it can happen that the cache is full and there is no expired item to evict, in which case it replaces the least recently used items.
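A minimal sketch of lazy expiration as seen from a client, in Python and assuming the pymemcache library (key name and TTL are made up):

    # Minimal sketch, assuming pymemcache: after the TTL passes, the
    # next get purges the expired item and returns nothing.
    import time
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))
    client.set("session:abc", b"data", expire=2)   # expires after 2 seconds

    print(client.get("session:abc"))   # b'data'
    time.sleep(3)
    print(client.get("session:abc"))   # None -- purged on access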
Using a fully fledged cache system also usually allows you to replicate the cache across many servers, or simply scale out to many servers to handle a lot of parallel requests, all while keeping reply times acceptably fast.
There is an (old) article that compares different caching systems used by PHP:
https://www.percona.com/blog/2006/08/09/cache-performance-comparison/
Basically, file caching is faster than memcached.
So to answer the question, I believe you would get better performance using a file-based cache system.
Here are the results from the tests of the article:
Cache Type                            Cache Gets/sec
Array Cache                                   365000
APC Cache                                      98000
File Cache                                     27000
Memcached Cache (TCP/IP)                       12200
MySQL Query Cache (TCP/IP)                      9900
MySQL Query Cache (Unix Socket)                13500
Selecting from table (TCP/IP)                   5100
Selecting from table (Unix Socket)              7400

Is it possible to get/search Memcached keys by a prefix?

I'm writing a lot of key/value pairs to memcached: PREFIX_KEY1, PREFIX_KEY2, PREFIX_KEY3
I need to get all the keys that start with PREFIX_
Is it possible?
Sorry, but no. Memcached uses a hashing algorithm that distributes keys to apparently random places, so those keys are scattered all over. You'd have to scan everything to find them.
Also you should be aware that, by design, memcached can drop any key at any time for any reason. If you're putting stuff in, you can't depend on it coming back out. This is absolutely fine for its original use case, a cache to reduce hits on a database, but it can be a severe problem if you want to do something more complicated with it.
If these limitations are a problem, I would suggest that you use Redis instead. It behaves a lot like memcached, except that it will persist data and it lets you store complex data structures. So for your use case you can store a hash in Redis, then pull the whole hash out later.
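A minimal sketch of the Redis-hash approach in Python, assuming the redis-py library (hash and field names are made up):

    # Minimal sketch, assuming redis-py: group related entries in one
    # Redis hash so they can all be fetched together, no prefix scan needed.
    import redis

    r = redis.Redis()
    r.hset("PREFIX", mapping={"KEY1": "v1", "KEY2": "v2", "KEY3": "v3"})

    print(r.hgetall("PREFIX"))   # all fields of the hash in one call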
A quick command to check whether a specific key exists (the key name can be a grep regex):
for i in {1..40}; do (echo "stats cachedump $i 0"; sleep 1; echo "quit";) | telnet localhost 11211 | grep 'APREFIX*\|ANOTHERPREFIX*'; done
Here i is the slab number; in the example above we search slabs 1 to 40. Don't miss the grep part 'APREFIX*\|ANOTHERPREFIX*' ;)
Based on the discussion at https://groups.google.com/forum/#!topic/memcached/YyzonP9HUi0
While #btilly is correct in saying that memcached does not do this natively, you can emulate it (quite efficiently) by maintaining an index of the keys that share your prefix, allowing you to then fetch all entries that match a given prefix. See the sketch below.
Obviously this only works for specific prefixes that you choose in advance, not for arbitrary data, but it is quite workable if you can live with that limitation. There is a good article on this subject by one of the memcached developers.
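A minimal sketch of such a key index in Python, assuming the pymemcache client (the key scheme and helper names are made up; a production version would use gets/cas instead of this racy read-modify-write):

    # Minimal sketch, assuming pymemcache: keep an explicit index of the
    # keys sharing a prefix, since memcached cannot enumerate them itself.
    import json
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def set_with_index(prefix, key, value):
        client.set(key, value)
        # NOTE: not concurrency-safe; use gets/cas for real workloads.
        index = json.loads(client.get(prefix + ":index") or b"[]")
        if key not in index:
            index.append(key)
            client.set(prefix + ":index", json.dumps(index))

    def get_by_prefix(prefix):
        index = json.loads(client.get(prefix + ":index") or b"[]")
        return client.get_many(index)   # one multi-get for all indexed keys

    set_with_index("PREFIX", "PREFIX_KEY1", b"v1")
    set_with_index("PREFIX", "PREFIX_KEY2", b"v2")
    print(get_by_prefix("PREFIX"))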
You can use namespaces to achieve this. Here is a PHP library that does just that; it also lets you use the same memcached instance for multiple applications.
https://github.com/vijayabose/n_memcached

why does memcached not support "multi set"

Can anyone explain why the memcached folks decided to support multi-get but not multi-set?
By multi I mean an operation involving more than one key (see the protocol at http://code.google.com/p/memcached/wiki/NewCommands).
So you can get multiple keys in one shot (the basic advantage being the standard saving you get from fewer round trips), but why can't you do bulk sets?
My theory is that memcached was designed for a smaller number of sets, issued individually (e.g. on a cache read and miss). But I still do not see how multi-set really conflicts with the general philosophy of memcached.
I looked at the client features at http://code.google.com/p/memcached/wiki/NewCommonFeatures and it seems that some clients do potentially support "multi-set" (why only in the binary protocol?). I am using Java spymemcached, btw.
It's not supported in the text protocol because it would be very, very complicated to express, no clients would support it, and it would provide very little that you can't already do with the text protocol.
It's supported in the binary protocol because it's a trivial use case of binary operations.
spymemcached supports it implicitly -- just do a bunch of sets and magic happens:
http://dustin.github.com/2009/09/23/spymemcached-optimizations.html
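For comparison, some other clients expose the batching explicitly; a minimal sketch in Python, assuming pymemcache's set_many, which is a client-side convenience that issues the individual sets for you in one call (key names are made up):

    # Minimal sketch, assuming pymemcache: set_many batches the sets
    # client-side; it returns the list of keys that failed to store.
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))
    failed = client.set_many({"k1": b"v1", "k2": b"v2", "k3": b"v3"})
    print(failed)   # empty list on success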
I don't know a lot about memcached internals, but I assume writes have to be blocking, atomic operations. I assume that if multiple set operations could be batched, they could block all reads for a long time (or you would risk a get occurring while only half of a write had been applied). Forcing writes to be done individually allows them to be interleaved fairly with gets.
I would imagine that the restriction against multi-sets is to avoid collisions when writing cached values to memcached.
As an object cache, I can't foresee an example of when you would need transactional-style writes. That use case seems less suited to a caching layer and better suited to the underlying database.
If sets for one key come in interleaved from different clients, it is most likely the case that the last one wins, or is at least close enough, until the cache is invalidated and a newer value is written.
As Gian mentions, there don't seem to be any good reasons to block reads from the cache while several or many writes happen.