How does APNS determine a provider token's age?

The documentation says:
The claims payload of the token must include:
The issued at (iat) registered claim key, whose value indicates the time at which the token was generated, in terms of the number of seconds since Epoch, in UTC
To ensure security, APNs requires new tokens to be generated periodically. A new token has an updated issued at claim key, whose value indicates the time the token was generated. If the timestamp for token issue is not within the last hour, APNs rejects subsequent push messages, returning an ExpiredProviderToken (403) error.
Source: https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/CommunicatingwithAPNs.html
At another section:
iat | The “issued at” time, whose value indicates the time at which this JSON token was generated. Specify the value as the number of seconds since Epoch, in UTC. The value must be no more than one hour from the current time.
These rules are fragmented and repetitive at the same time, so please correct me if I'm wrong:
iat must be a numeric date between 1h ago and 1h from now. Let's say it's 8:30 now and I set iat to 8 o'clock: does that mean my token will be valid for another half hour, since that's what iat is telling APNs, or does the clock start counting when APNs receives my push request? And what if I set iat to 1h from now... does that mean my token will be valid for 2h?
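For what it's worth, the documented rule reads like a plain window check performed when APNs handles the request. A sketch only; Apple doesn't publish its exact validation, so the future-dated iat case is genuinely undocumented:

// Sketch of the documented rule: iat must fall within the last hour,
// measured at the time APNs processes the push request, not the time you
// built the token. Whether a future-dated iat is accepted is undocumented.
function iatWithinLastHour(iat: number, nowSeconds = Math.floor(Date.now() / 1000)): boolean {
  return iat <= nowSeconds && nowSeconds - iat <= 3600;
}

On that reading, an iat of 8:00 checked at 8:30 leaves the token usable until about 9:00, no matter when you actually signed it.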
Another question. Given that:
Refresh Your Token Regularly
For security, APNs requires you to refresh your token regularly. Refresh your token no more than once every 20 minutes and no less than once every 60 minutes. APNs rejects any request whose token contains a timestamp that is more than one hour old. Similarly, APNs reports an error if you recreate your tokens more than once every 20 minutes.
Source: https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/establishing_a_token-based_connection_to_apns
Every time I sign a token (using a Node module for JWT), it generates a different string, even though I use the same iat. Does that count as a "recreation", causing a TooManyProviderTokenUpdates error if I use the new token before the 20-minute threshold?
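A practical note: ES256 is ECDSA, and ECDSA signing includes a random nonce, so signing the same claims twice always produces a different string. The usual way to sidestep the recreation question entirely is to sign once and reuse the exact same token string until it nears the one-hour limit. A minimal sketch, assuming the jsonwebtoken npm package; the key path, team ID, and key ID are placeholders:

import * as jwt from "jsonwebtoken";
import { readFileSync } from "fs";

const signingKey = readFileSync("AuthKey_ABC123DEFG.p8"); // placeholder .p8 path
const TEAM_ID = "YOUR_TEAM_ID"; // placeholder
const KEY_ID = "ABC123DEFG"; // placeholder

let cached: { token: string; issuedAt: number } | null = null;

// Reuse one signed token for 50 minutes: it never exceeds the 60-minute
// age limit, and we never re-sign inside the 20-minute refresh window.
function providerToken(): string {
  const now = Math.floor(Date.now() / 1000);
  if (cached && now - cached.issuedAt < 50 * 60) return cached.token;
  const token = jwt.sign({ iss: TEAM_ID, iat: now }, signingKey, {
    algorithm: "ES256",
    keyid: KEY_ID,
  });
  cached = { token, issuedAt: now };
  return token;
}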

Related

Implementing API throttling with RDB

I would like to implement this API throttling:
A user can only execute the operation once per minute (once executed, following requests will be rejected for 1 minute)
The expected total number of requests from all users is around 2 per second.
I am using PostgreSQL 14.5.
I guess I will need a table for exclusive processing. What kind of SQL/algorithm should I use?
You could store the latest accepted timestamp in a column. Every time a request comes in, check whether the interval between the current timestamp and the last accepted one is less than a minute, and reject if so. Do the check and the update in a single atomic statement, though, so two concurrent requests for the same user can't both slip through, as in the sketch below.
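A sketch of that idea in TypeScript with node-postgres, folded into one atomic statement; the table and column names are illustrative:

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the environment

// Illustrative table:
//   CREATE TABLE api_throttle (user_id bigint PRIMARY KEY, last_accepted timestamptz NOT NULL);
// The INSERT ... ON CONFLICT DO UPDATE both checks and bumps the timestamp
// in one statement, so there is no race between the check and the update.
async function tryAcquire(userId: number): Promise<boolean> {
  const res = await pool.query(
    `INSERT INTO api_throttle (user_id, last_accepted)
     VALUES ($1, now())
     ON CONFLICT (user_id) DO UPDATE
       SET last_accepted = now()
       WHERE api_throttle.last_accepted < now() - interval '1 minute'
     RETURNING user_id`,
    [userId]
  );
  return res.rowCount === 1; // a returned row means the request is accepted
}

The RETURNING row doubles as the accept/reject signal, so no separate SELECT is needed, and at around 2 requests per second one indexed row per user is more than enough.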

How exactly does the "Honor Period" work?

PayPal describes an "Honor Period" that lasts for 3 days after you authorize or reauthorize a payment, up until 29 days after the first authorization. The docs don't go into much detail about this honor period, though, just that you should capture within it and that you can restart an expired honor period by reauthorizing.
I have 3 main questions:
When does the honor period start/end exactly? Is it an exact 72 hour window, to the second, from when you auth/reauth? Does it roll over at midnight or something instead? If so, what timezone?
What is the preferred/recommended way to determine whether the honor period for an authorization has expired, or to determine the expiration time in the first place? Authorizations have an expiration_time field which marks the end of the 29-day window that an authorization is valid for. Is there a similar explicit time field for the honor period? Is it simply based on the update_time field of the latest auth/reauth?
Is there a way to reauthorize before the previous authorization expires? Or more specifically, is there some way to ensure that the payment is always in an honor period, and that there is zero risk of some issue occurring because their funds weren't being held for a short amount of time before we reauthorized them?
The honor period begins the moment a transaction is created and generally lasts 3 days. During this time captures will generally succeed, and the amount is generally reserved on the customer's funding source, which may be a credit or debit card, meaning they cannot spend it on other things. The exact behavior may vary by funding source and country, due to different implementations and local regulations. The exact time at which an unused authorization "clears" from the customer's funding source and is no longer visible on their statement can also vary, and might take 10 days in some cases.
The rest of the PayPal authorization valid period -- a "post-honor" period, for lack of a better term -- begins on about day 4 and lasts until the end of day 29. During this time a capture attempt can still be made, and will succeed if money is available from the funding instrument. Such a later capture is roughly equivalent to the buyer themselves attempting a new transaction that is of type immediate capture, in the sense that they will succeed or fail for the same reasons.
Reauthorizations to get a new 3 day honor period (but which do NOT restart the 29-day authorization valid period) are almost always pointless. From day 4 to 29 just do a capture when you are ready, and forget you ever heard of the concept of reauthorization.
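If you do need a concrete check, and assuming (as the question itself suggests; the docs guarantee nothing to the second) that the window is 72 hours from the update_time of the latest auth or reauth, a sketch:

// Assumption, not a documented guarantee: the honor period ends 72 hours
// after the update_time of the most recent authorization/reauthorization.
// Treat results near the boundary with caution.
const HONOR_PERIOD_MS = 72 * 60 * 60 * 1000;

function honorPeriodEnd(latestAuthUpdateTime: string): Date {
  // update_time is an RFC 3339 timestamp, e.g. "2021-06-22T13:58:41Z"
  return new Date(Date.parse(latestAuthUpdateTime) + HONOR_PERIOD_MS);
}

function inHonorPeriod(latestAuthUpdateTime: string, now = new Date()): boolean {
  return now < honorPeriodEnd(latestAuthUpdateTime);
}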

Bearer error="invalid_token", error_description="The token is not valid before

My scenario: when we test with a user logging in and logging out multiple times, we randomly get this error:
Date: Tue, 22 Jun 2021 13:58:41 GMT
WWW-Authenticate: Bearer error="invalid_token", error_description="The token is not valid before '06/22/2021 13:58:42'"
The backend API is in .NET Core, where we generate and validate the JWT tokens.
Your tokens have the nbf (Not Before) claim; when verifying a token with nbf, the current time must be at or after that timestamp. These timestamps are UNIX timestamps in seconds.
What may be happening is:
when you produce these tokens, the nbf claim value is ceiled to the nearest second instead of being floored, or
your clock is skewed between the producer and consumer.
In both cases the recommended mitigation is described in the RFC (RFC 7519, Section 4.1.5):
Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew.
Some verification option like clock skew or clock tolerance should be available, which you need to set to an acceptable value, e.g. 5 seconds, to accommodate tiny clock skew or floor/ceil discrepancies.
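In ASP.NET Core that knob is TokenValidationParameters.ClockSkew, which defaults to 5 minutes, so it's worth checking whether your validation code sets it to TimeSpan.Zero. The same idea in Node's jsonwebtoken, as a sketch with a placeholder secret:

import * as jwt from "jsonwebtoken";

const SECRET = "placeholder-secret"; // placeholder: use your real key material

// Verify with a few seconds of leeway so an nbf that was rounded up to the
// next second, or a slightly skewed clock, does not reject a fresh token.
function verifyWithLeeway(token: string) {
  return jwt.verify(token, SECRET, {
    clockTolerance: 5, // seconds of allowed skew for nbf/exp checks
  });
}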

How to modify the default expiry time of the continue token in Kubernetes?

On this page https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks, there is a continue token that will expire after a short amount of time (by default 5 minutes).
I find that when the Kubernetes controller manager runs its cronjob syncAll() function in my cluster, this token always expires and stops the cronjob from creating jobs on schedule.
The following is the log from the kube-controller-manager:
E0826 11:26:45.441592 1 cronjob_controller.go:146] Failed to extract cronJobs list: The provided continue parameter is too old to display a consistent list result. You can start a new list without the continue parameter, or use the continue token in this response to retrieve the remainder of the results. Continuing with the provided token results in an inconsistent list - objects that were created, modified, or deleted between the time the first chunk was returned and now may show up in the list.
So I want to know: can I modify the default expiry time of the continue token in Kubernetes, and if so, how?
Thanks.
This comes from etcd: the continue token is only valid within the compaction window, which defaults to 5 minutes. The good news is that you can change it as an option on the kube-apiserver with the --etcd-compaction-interval flag.
Also, it looks like issuing the next chunked GET within those 5 minutes returns a fresh continue token, effectively extending the timeout.
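For completeness, here is what consuming the API in chunks looks like, restarting cleanly when the token has already expired. A sketch in TypeScript; k8sGet stands for whatever authenticated HTTP helper you already have and is hypothetical:

// Page through a large list with limit/continue. On 410 Gone (the continue
// token was compacted away), drop what we have and start the list over,
// since continuing would yield an inconsistent result.
async function listAllPods(k8sGet: (path: string) => Promise<Response>): Promise<unknown[]> {
  let items: unknown[] = [];
  let continueToken: string | undefined;
  while (true) {
    const query = `limit=500${continueToken ? `&continue=${encodeURIComponent(continueToken)}` : ""}`;
    const res = await k8sGet(`/api/v1/pods?${query}`);
    if (res.status === 410) {
      items = [];
      continueToken = undefined;
      continue;
    }
    const body = await res.json();
    items.push(...body.items);
    continueToken = body.metadata?.continue; // empty on the final chunk
    if (!continueToken) return items;
  }
}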

Memcache maximum key expiration time

What's memcached's maximum key expiration time?
If I don't provide an expiration time and the cache gets full, what happens?
You can set key expiration to a date by supplying a Unix timestamp instead of a number of seconds. This date can be more than 30 days in the future:
Expiration times are specified in unsigned integer seconds. They can be set from 0, meaning "never expire", to 30 days (60*60*24*30). Any time higher than 30 days is interpreted as a unix timestamp date. If you want to expire an object on january 1st of next year, this is how you do that.
https://github.com/memcached/memcached/wiki/Programming#expiration
But, as you say, if you’re setting key expiration to an amount of time rather than a date, the maximum is 2,592,000 seconds, or 30 days.
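A tiny helper makes the two interpretations explicit; this sketch is client-agnostic, and the returned number is what you would pass as the expiration to any client's set():

// Memcached reads expiration values up to 30 days as relative seconds and
// anything larger as an absolute Unix timestamp, so convert long TTLs.
const THIRTY_DAYS = 60 * 60 * 24 * 30; // 2,592,000 seconds

function memcachedExpiry(ttlSeconds: number): number {
  if (ttlSeconds <= THIRTY_DAYS) {
    return ttlSeconds; // interpreted by the server as relative seconds
  }
  // Longer TTLs must be sent as an absolute Unix timestamp.
  return Math.floor(Date.now() / 1000) + ttlSeconds;
}

// e.g. memcachedExpiry(60 * 60 * 24 * 365) -> "now + 1 year" as a timestamp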
If you don't provide an expiration and the cache gets full, the least recently used key-values are evicted first:
Memory is also reclaimed when it's time to store a new item. If there are no free chunks, and no free pages in the appropriate slab class, memcached will look at the end of the LRU for an item to "reclaim". It will search the last few items in the tail for one which has already been expired, and is thus free for reuse. If it cannot find an expired item however, it will "evict" one which has not yet expired. This is then noted in several statistical counters
https://github.com/memcached/memcached/wiki/UserInternals#when-are-items-evicted
No, there is no limit. The 30-day limit applies if you give the number of seconds the item should stay; if you give a timestamp instead, the only cap is the machine's maximum int or long value.
->set('key', 'value', time() + 24*60*60*365) will make the key stay there for a year, for example, but if the cache gets full or restarts in between, the value can be deleted.
An expiration time, in seconds. Can be up to 30 days. After 30 days, is treated as a unix timestamp of an exact date.
https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol
OK, I found out that the number of seconds may not exceed 2,592,000 (30 days) when given as a relative value, so the maximum relative expiration time is 30 days.
It looks like some answers are no longer valid.
I found that a key does not get set at all when the TTL is too high, for example 2992553564.
Tested with the following PHP code:
var_dump($memcached->set($id, "hello", 2992553564)); // true
var_dump($memcached->get($id)); // empty!
var_dump($memcached->set($id, "hello", 500)); // true
var_dump($memcached->get($id)); // "hello"
Version is memcached 1.4.14-0ubuntu9.
In Laravel, if the session lifetime (config session.lifetime) works out to the equivalent of 30 days or more, memcached treats the value as a timestamp, and when memcached is the session store this produces a token mismatch error on every request. To answer the question: memcached expiration can be set to any time, but note that Laravel's default setting (on v5.0) can land you on an already-expired timestamp. If you don't set it, the default will be used.
If I don't provide an expiration time and the cache gets full, what happens?
If the expiration is not provided (or TTL is set to 0) and the cache gets full then your item may or may not get evicted based on the LRU algorithm.
Memcached provides no guarantee that any item will persist forever. It may be deleted when the overall cache gets full and space has to be allocated for newer items. Also in case of a hard reboot all the items will be lost.
From the user internals doc:
Items are evicted if they have not expired (an expiration time of 0 or some time in the future), the slab class is completely out of free chunks, and there are no free pages to assign to a slab class.
Below is how you can reduce the chances of your item getting cleaned up by the LRU job.
Create an item that you want to expire in a week? Don't always fetch the item but want it to remain near the top of the LRU for some reason? add will actually bump a value to the front of memcached's LRU if it already exists. If the add call succeeds, it means it's time to recache the value anyway.
source on "touch"
It is also good to monitor the overall memory usage of memcached for resource planning, and to track the eviction statistics counter to know how often items are evicted due to lack of memory.
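That eviction counter is exposed by the text protocol's stats command; a minimal sketch that reads it over a raw socket, assuming memcached on the default localhost:11211:

import * as net from "node:net";

// Send "stats" and extract the "evictions" counter from the reply, which
// is a list of "STAT <name> <value>" lines terminated by "END".
function fetchEvictions(host = "127.0.0.1", port = 11211): Promise<number> {
  return new Promise((resolve, reject) => {
    const sock = net.createConnection({ host, port }, () => sock.write("stats\r\n"));
    let buf = "";
    sock.on("data", (chunk) => {
      buf += chunk.toString();
      if (buf.includes("END\r\n")) {
        sock.end();
        const m = buf.match(/STAT evictions (\d+)/);
        m ? resolve(Number(m[1])) : reject(new Error("no evictions stat in reply"));
      }
    });
    sock.on("error", reject);
  });
}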