Memcached entry expiration set to 0

Do memcached entries with an expiry time of 0 never expire, as the PHP Memcached docs state? The memcached protocol spec does not mention this.
I am using the spymemcached client, and its docs don't say anything about it either.

Setting the expiration to 0 means the item has no expiration, so it will never expire.

Related

How to modify the default expiry time of the continue token in Kubernetes?

On this page https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks, there is a continue token that will expire after a short amount of time (by default 5 minutes).
I find that when the kubernetes-controller-manager runs the cronjob controller's syncAll() function in my cluster, this token always expires, which stops the cronjob from creating jobs on schedule.
The following is the log in kubernetes-controller-manager:
E0826 11:26:45.441592 1 cronjob_controller.go:146] Failed to extract cronJobs list: The provided continue parameter is too old to display a consistent list result. You can start a new list without the continue parameter, or use the continue token in this response to retrieve the remainder of the results. Continuing with the provided token results in an inconsistent list - objects that were created, modified, or deleted between the time the first chunk was returned and now may show up in the list.
So I want to know: can I modify the default expiry time of the continue token in Kubernetes, and if so, how?
Thanks.
This comes from etcd compaction. The continue token is only valid until the next compaction, and by default the kube-apiserver compacts etcd every 5 minutes, which is where the 5-minute window comes from. The good news is that you can change this with the kube-apiserver's --etcd-compaction-interval option.
Also, it looks like issuing another GET with the token within the 5-minute window returns a fresh token, effectively extending the timeout.
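As a sketch of where that flag would go: the flag name is real, but the manifest path below assumes a kubeadm-style cluster where the kube-apiserver runs as a static pod, so adjust for your setup.

```shell
# Lengthen the compaction interval so continue tokens stay valid longer.
# On a kubeadm cluster, edit the kube-apiserver static pod manifest
# (typically /etc/kubernetes/manifests/kube-apiserver.yaml) and add the
# flag to the command, i.e. the server ends up started with something like:
kube-apiserver --etcd-compaction-interval=10m   # ...plus your existing flags
```

The kubelet restarts the apiserver automatically when the static pod manifest changes.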

How to choose the right TTL value for Web-push?

How to choose the right value for the TTL? We need the push message delivered reliably, not dropped, but at the same time we would like it delivered fast, because it is used to initiate live calls. I understand that 0 is not an option for us, since the message would have a good chance of being dropped. But then should it be 60*60 (an hour), or 60 (a minute)? What is the right way of thinking here?
Keep in mind that the TTL parameter must be a duration from 0 to 2,419,200 seconds (28 days), and that it is the maximum time a push message may live on the push service before it is delivered.
If you set a TTL of zero, the push service will attempt to deliver the
message immediately, but if the device can't be reached, your message
will be immediately dropped from the push service queue.
You can also weigh the trade-offs when picking a TTL:
The higher the TTL, the longer the push service will hold the message for a device that is temporarily unreachable, so delivery is more reliable.
The lower the TTL, the sooner an undelivered message is dropped, which avoids delivering a stale notification. For short-lived events such as incoming calls, a TTL on the order of the call's ring time (e.g. 60 seconds) is a reasonable starting point.
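To make the valid range concrete, here is a small sketch. choose_ttl is a hypothetical helper, not part of any push library; MAX_TTL is the 2,419,200-second cap mentioned above.

```python
MAX_TTL = 2_419_200  # 28 days: the upper bound push services accept for TTL

def choose_ttl(requested_seconds: int) -> int:
    """Clamp a requested TTL into the valid Web Push range [0, MAX_TTL]."""
    return max(0, min(requested_seconds, MAX_TTL))

# A live-call invite is useless after about a minute, so a short TTL fits:
print(choose_ttl(60))      # 60
# An out-of-range request is clamped rather than rejected:
print(choose_ttl(10**9))   # 2419200
```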

Kafka Streams - Low-Level Processor API - RocksDB TimeToLive(TTL)

I'm kind of experimenting with the low level processor API. I'm doing data aggregation on incoming records using the processor API and writing the aggregated records to RocksDB.
However, I want the records added to RocksDB to stay active only for a 24-hour period; after 24 hours a record should be deleted. This can be done by changing the TTL settings, but there is not much documentation to get help from on this.
How do I change the TTL value? Which Java API should I use to set the TTL to 24 hours, and what is the current default TTL setting?
I believe this is not currently exposed via the API or configuration.
RocksDBStore passes a hard-coded TTL when opening a RocksDB:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L158
and the hardcoded value is simply TTL_SECONDS = TTL_NOT_USED (-1) (see line 79 in that same file).
There are currently two open tickets regarding exposing TTL support in the state stores, KAFKA-4212 and KAFKA-4273:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20text%20~%20%22rocksdb%20ttl%22
I suggest you comment on one of them describing your use case to get them moving forward.
In the interim, if you need the TTL functionality right now, state stores are pluggable, and the RocksDBStore sources readily available, so you can fork it and set your TTL value (or, like the pull request associated with KAFKA-4273 proposes, source it from the configs).
I know this is not ideal and sincerely hope someone comes up with a more satisfactory answer.
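For intuition while you wait, the desired semantics can be modelled with a toy store. This is purely illustrative, not the Kafka Streams API: an entry older than the TTL is treated as absent and lazily deleted on read, similar in spirit to how a TTL-enabled RocksDB drops stale entries during compaction.

```python
import time

class TtlStore:
    """Toy in-memory model of TTL semantics: entries carry a write
    timestamp and are lazily deleted once older than ttl_seconds."""
    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock        # injectable clock makes testing easy
        self._data = {}

    def put(self, key, value):
        self._data[key] = (value, self.clock())

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, written_at = entry
        if self.clock() - written_at > self.ttl:
            del self._data[key]   # lazy delete of the expired entry
            return None
        return value

# Simulate the 24-hour window with a fake clock:
now = [0]
store = TtlStore(ttl_seconds=24 * 3600, clock=lambda: now[0])
store.put("agg", 42)
now[0] = 23 * 3600   # 23h later: still live
assert store.get("agg") == 42
now[0] = 25 * 3600   # 25h later: expired
assert store.get("agg") is None
```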

Auto expiration and deletion of znodes in Zookeeper?

Is there a way to have a znode auto expire and delete (under Zookeeper control) after some length of time? I don't need to keep a znode around after that point. I want to conserve resources.
If your answer is "no, your application has to handle that", would you kindly point me to some documentation that makes that clear? (Right now, I suspect this may be the case, but I don't want to assume it too quickly.)
If the answer is "not currently, but Zookeeper could be extended to do that", then I would be especially thankful to a suggestion of a good starting point for making such an enhancement.
According to Patrick on The Zookeeper Mailing List: ZNODE time to live:
There is no TTL like feature in the current implementation.
That was 26 Apr 2012, which would correspond to version 3.3.5 according to the list of Apache ZooKeeper Releases.
I skimmed the release notes for 3.3.6, 3.4.4, and 3.4.5 and found no mention of "TTL" or "time to live" or anything along those lines.
ZooKeeper 3.5.3 added support for TTLs on znodes.
New create modes (such as CreateMode.PERSISTENT_WITH_TTL) let you provide a TTL for a node in milliseconds.

Memcache maximum key expiration time

What's memcached's maximum key expiration time?
If I don't provide an expiration time and the cache gets full, what happens?
You can set key expiration to a date by supplying a Unix timestamp instead of a number of seconds. This date can be more than 30 days in the future:
Expiration times are specified in unsigned integer seconds. They can be set from 0, meaning "never expire", to 30 days (60*60*24*30). Any time higher than 30 days is interpreted as a unix timestamp date. If you want to expire an object on january 1st of next year, this is how you do that.
https://github.com/memcached/memcached/wiki/Programming#expiration
But, as you say, if you’re setting key expiration to an amount of time rather than a date, the maximum is 2,592,000 seconds, or 30 days.
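That interpretation rule can be sketched as follows. effective_expiry is an illustrative helper; REALTIME_MAXDELTA mirrors the 30-day cutoff described above.

```python
REALTIME_MAXDELTA = 60 * 60 * 24 * 30  # 30 days: relative/absolute cutoff

def effective_expiry(exptime: int, now: int):
    """Return the absolute unix time an item expires, or None for never.
    0 means no expiration; values up to 30 days are relative seconds;
    anything larger is treated as an absolute unix timestamp."""
    if exptime == 0:
        return None                  # never expires
    if exptime <= REALTIME_MAXDELTA:
        return now + exptime         # relative offset in seconds
    return exptime                   # already an absolute timestamp

now = 1_700_000_000
print(effective_expiry(0, now))            # None
print(effective_expiry(3600, now) - now)   # 3600
```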
If you don't provide an expiration and the cache gets full, then the least recently used key-values are evicted first:
Memory is also reclaimed when it's time to store a new item. If there are no free chunks, and no free pages in the appropriate slab class, memcached will look at the end of the LRU for an item to "reclaim". It will search the last few items in the tail for one which has already been expired, and is thus free for reuse. If it cannot find an expired item however, it will "evict" one which has not yet expired. This is then noted in several statistical counters
https://github.com/memcached/memcached/wiki/UserInternals#when-are-items-evicted
No, there is no limit. The 30-day limit applies if you give the number of seconds the item should stay, but if you give a timestamp, the only limit is the maximum long or int value on the machine.
->set('key', 'value', time() + 24*60*60*365) will make the key stay there for a year, for example, but if the cache gets full or is restarted in between, this value can be deleted.
An expiration time, in seconds. Can be up to 30 days. After 30 days,
is treated as a unix timestamp of an exact date.
https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol
OK, I found out that the number of seconds may not exceed 2592000 (30 days), so the maximum relative expiration time is 30 days.
Looks like some answers are not valid anymore.
I found that a key does not get set at all when the TTL is too high, for example 2992553564.
Tested with the following PHP code:
var_dump($memcached->set($id, "hello", 2992553564)); // true
var_dump($memcached->get($id)); // empty!
var_dump($memcached->set($id, "hello", 500)); // true
var_dump($memcached->get($id)); // "hello"
Version is memcached 1.4.14-0ubuntu9.
In Laravel, the config session.lifetime setting, if set to the equivalent of 30 days or more, will be interpreted as a timestamp (this produces a token-mismatch error every time, assuming memcached is used as the session driver).
To answer the question: the memcached expiration can be set to any time. (Laravel's default setting on v5.0 will give you an already-expired timestamp.) If you do not set it, the default will be used.
If I don't provide an expiration time and the cache gets full, what happens?
If the expiration is not provided (or the TTL is set to 0) and the cache gets full, then your item may or may not get evicted, depending on the LRU algorithm.
Memcached provides no guarantee that any item will persist forever. It may be deleted when the overall cache gets full and space has to be allocated for newer items. Also in case of a hard reboot all the items will be lost.
From the user internals doc:
Items are evicted if they have not expired (an expiration time of 0 or some time in the future), the slab class is completely out of free chunks, and there are no free pages to assign to a slab class.
Below is how you can reduce the chances of your item getting cleaned up by the LRU job.
Create an item that you want to expire in a week? Don't always fetch the item but want it to remain near the top of the LRU for some reason? add will actually bump a value to the front of memcached's LRU if it already exists. If the add call succeeds, it means it's time to recache the value anyway.
source on "touch"
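The "bump to the front of the LRU" behaviour can be modelled with a toy cache. LruCache here is purely illustrative; memcached's slab-based, segmented LRU is more involved, but the eviction-order idea is the same.

```python
from collections import OrderedDict

class LruCache:
    """Toy LRU cache: on overflow the least recently used item is
    evicted; reading or re-setting a key moves it to the
    most-recently-used end, lowering its eviction risk."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)      # "touch": bump to MRU end
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the LRU item

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # reads also count as a touch
        return self._data[key]

cache = LruCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")       # touch "a" so it is most recently used
cache.set("c", 3)    # evicts "b", the least recently used item
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```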
It is also good to monitor the overall memory usage of memcached for resource planning, and to track the eviction statistics counter to know how often caches are evicted due to lack of memory.