nbf: Defines the time before which the JWT MUST NOT be accepted for processing
I found this definition of nbf in JSON Web Tokens, but I'm still wondering what the purpose of nbf is. Why do we use it? Is it security-related?
Any idea would be appreciated.
It really is up to how you interpret the time.
One possible scenario I could make up is the literal one: a token must be valid from one particular point in time until another.
Say you're selling some API or resource, and a client purchased access that lasts for one hour, starting tomorrow at midday.
So you issue a JWT with:
iat set to now
nbf set to tomorrow 12:00pm
exp set to tomorrow 1:00pm
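For concreteness, here is a minimal sketch of issuing such a token in Python with the PyJWT library; the signing key and algorithm are placeholders, not part of the scenario above.

```python
# Minimal sketch of issuing the token described above with PyJWT (pip install pyjwt).
# The signing key and algorithm are placeholders, not part of the original scenario.
import jwt
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
noon_tomorrow = (now + timedelta(days=1)).replace(hour=12, minute=0, second=0, microsecond=0)

token = jwt.encode(
    {
        "iat": now,                                 # issued now
        "nbf": noon_tomorrow,                       # not valid before tomorrow 12:00
        "exp": noon_tomorrow + timedelta(hours=1),  # expires one hour later
    },
    "my-secret",
    algorithm="HS256",
)
print(token)
```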
One more thing to add to what zerkms said: if you want the token to be usable from now on, then
nbf also needs to be set to the current time (now).
Otherwise you'll get an error like "the token cannot be used prior to this particular time".
It can be given a time of, say, 3 seconds after the time of creation to deter bots and allow only human users to access the API.
'nbf' means 'Not Before'.
If nbf is set to 3 seconds after issuance, the token cannot be used during those first 3 seconds, which makes rapid automated (brute-force) use much harder.
Can someone please explain the specifics of how NFT metadata is stored (both on-chain and off-chain examples?) and how it's called by apps like OpenSea or Decentraland?
Additionally, would it be especially challenging to use dynamic metadata for an NFT, which updates regularly based on changing smart contract variables, which themselves change due to user interactions with the smart contracts? E.g. imagine an updatable "Countdown" NFT where the image shows the integer "days until X", which updates each day as time passes but can also be updated by changing X in the NFT smart contracts... I made this up on the spot, but is it actually an interesting idea? :)
Is this doable? Are there storage challenges? Do apps call metadata repeatedly or would they call it once and never reflect updates?
From my understanding, you generally set a baseURI in an ERC-721 contract. Then, if you have an auto-incrementing token ID, the first token's metadata URI will be the baseURI plus "1".
In other words:
baseURI: "https://myserver.com/api/metadata",
tokenId: 1
the token metadata is hosted at https://myserver.com/api/metadata/1
That keeps some storage off the chain; if every token had its own unique URL stored on-chain, it would add a bit of unnecessary junk to the chain.
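As a rough illustration of how an app resolves that metadata, here is a Python sketch using the hypothetical URL from the example above; a real marketplace would read the tokenURI from the contract itself rather than hard-code it.

```python
# Rough sketch: fetching token metadata from baseURI + tokenId, as described above.
# The URL is the hypothetical one from this example; a real app would read the
# tokenURI from the contract itself.
import json
from urllib.request import urlopen

base_uri = "https://myserver.com/api/metadata"
token_id = 1

with urlopen(f"{base_uri}/{token_id}") as resp:   # https://myserver.com/api/metadata/1
    metadata = json.load(resp)

print(metadata.get("name"), metadata.get("image"))
```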
According to the docs of Identity Server, the following four claims are always provided in each token:
Issuer name - iss
Client ID - client_id
Lifetime - exp
Scope - nbf
Given that the lifetime is expressed as an expiration time, I understand all of the codes except the last one. What does the code "nbf" stand for?
I even checked the disambiguation on Wikipedia but there's nothing relating tokens at all.
Bonus question. What's the reason behind the codes being of different format? I can't help wondering why client_id isn't following the same pattern and set to cid. I sense some historical context...
Not before
That's what nbf means.
The usage of this claim is optional and it identifies the time before which the token must not be accepted.
See the definition from the RFC 7519:
4.1.5. "nbf" (Not Before) Claim
The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing. The processing of the "nbf" claim requires that the current date/time MUST be after or equal to the not-before date/time listed in the "nbf" claim. Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew. Its value MUST be a number containing a NumericDate value. Use of this claim is OPTIONAL.
The lifespan of the token starts after the time stated in the nbf claim and ends at the time stated in the exp claim.
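As a minimal validation sketch (using Python's PyJWT as an assumed example library), decoding before the nbf time fails, and the leeway parameter implements the small clock-skew allowance the RFC mentions:

```python
# Sketch of nbf/exp enforcement with PyJWT; the library, key, and one-hour offset
# are assumptions for illustration only.
import jwt
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
token = jwt.encode(
    {"iat": now, "nbf": now + timedelta(hours=1), "exp": now + timedelta(hours=2)},
    "my-secret",
    algorithm="HS256",
)

try:
    jwt.decode(token, "my-secret", algorithms=["HS256"], leeway=30)  # 30 s clock-skew leeway
except jwt.ImmatureSignatureError:
    print("rejected: current time is before nbf")
except jwt.ExpiredSignatureError:
    print("rejected: current time is after exp")
```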
Addressing your bonus question:
It's important to highlight that nbf is a standard claim registered with IANA, while client_id is not. But that doesn't prevent client_id from being used.
Claim names can be defined at will by those using JWTs. The claims defined in RFC 7519, however, are intentionally short:
All the names are short because a core goal of JWTs is for the representation to be compact.
This document also states the following:
Because a core goal of this specification is for the resulting representations to be compact, it is RECOMMENDED that the name be short -- that is, not to exceed 8 characters without a compelling reason to do so.
I'm in the midst of finalising the set of cmdlets for a server application. Part of the application includes security principal management and data object management, and "expiration" of both (timed and manual). After the expiration date, login and access for the security principal is refused and access to the data owned by that principal is optionally prevented (either immediately by deletion or as part of automatic maintenance by marking it as expired).
From the output of Get-Verb, I cannot see an obvious synonym for Expire, which is the most natural choice of verb for the action being undertaken here. Expiring a security principal expires the principal and may also expire all their stored data, while expiring a data object is restricted to that object.
Set- is already in use for both object types, and has a partial overlap in functionality (Expire- forces a date in the past, and removes data, while Set- will allow future or past dates but NOT remove the data).
In this fashion Expire is combining two operations (Set+Remove) and for data-security reasons, we wouldn't want to force separation into the two operations (that's already possible).
For this reason, I also consider that Disable- is not appropriate since it suggests the possibility of reversal with Enable-.
I also think Remove- by itself is inappropriate since there are data records specifically not deleted as part of the operation.
Unpublish seems very close at least for the data, but again it seems that the intent is for Unpublish and Publish to be paired, and in this case it would not be reversible. It also does not make sense when applied to the security principal.
So which (if any) standard verb would you expect to use, if you wanted to expire something?
Looking at the list of approved verbs, two jump out at me:
Deny (dn): Refuses, objects, blocks, or opposes the state of a resource or process.
Revoke (rk): Specifies an action that does not allow access to a resource. This verb is paired with Grant.
I wouldn't worry too much if there is not a paired operation, since that happens with some of the built-in cmdlets. Stop-Computer, for example, has no paired Start-Computer. There is Remove-Variable, but no Add-Variable (there is New-Variable). I think that it is only important if a paired command exists that it is named consistently.
Another option may be to use something like Set-ObjectExpiration/Get-ObjectExpiration, especially if it makes sense to query when objects are going to expire.
What about Invoke? It could be Invoke-ExpireAppObject or something like that.
There really isn't an approved verb that fits your scenario based on MS recommendations.
I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will issue a PUT request. Make your REST backend handle PUT to non-existent IDs, and you're set.
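As a sketch of that approach (Python with the requests library; the endpoint and payload are made up), the client mints the ID itself, so repeating the PUT after a lost response cannot create a second object:

```python
# Sketch of option A: client-generated ID + PUT. The endpoint and payload are
# made up; the point is that retrying the same PUT is safe (idempotent).
import uuid
import requests

object_id = str(uuid.uuid4())
payload = {"id": object_id, "title": "My new object"}

resp = requests.put(f"https://api.example.com/objects/{object_id}", json=payload)
resp.raise_for_status()   # on a dropped connection, simply PUT the same URI again
```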
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI (so PUT is not an option) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers aims at returning an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests; then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTs do not create duplicates. Repeated updates do not overwrite subsequent changes, etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little Google doc here if anyone cares.
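Here is a minimal in-memory sketch of that store-and-replay idea in Python; the identifier, storage, and handler names are made up for illustration:

```python
# Minimal in-memory sketch of the store-and-replay pattern described above.
# `action_id` is the application-level identifier (client GUID or server-issued
# sequence number); the storage and handler names are made up.
stored_responses = {}   # action_id -> response returned the first time

def handle_unsafe_request(action_id, do_work):
    if action_id in stored_responses:
        # Duplicate request: repeat the previous response, do nothing else.
        return stored_responses[action_id]
    response = do_work()
    stored_responses[action_id] = response
    return response

# The second call replays the stored response instead of re-running the work.
print(handle_unsafe_request("req-42", lambda: {"status": 201, "location": "/orders/7"}))
print(handle_unsafe_request("req-42", lambda: {"status": 201, "location": "/orders/8"}))
```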
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
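A small sketch of that two-step flow (Python; everything in-memory and the names made up): the first step creates a staged record and hands back a token, and the second step just flips the staged flag, so repeating it is harmless.

```python
# Sketch of the two-step approach: create staged, then confirm via the token.
# Everything here is in-memory and the names are made up for illustration.
import uuid

objects = {}   # token -> {"data": ..., "staged": bool}

def create_staged(data):
    token = str(uuid.uuid4())        # the token can simply be the object's ID
    objects[token] = {"data": data, "staged": True}
    return token

def confirm(token):
    obj = objects.get(token)
    if obj is not None:
        obj["staged"] = False        # second request only updates the "staged" field
    return obj

token = create_staged({"title": "draft"})
confirm(token)                       # repeating this call changes nothing further
```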
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need only exist globally after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking customers to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it is probably better for the server to have the intelligence to detect whether a request is a duplicate. One simple approach is to calculate and store a checksum of every request based on the user-supplied attributes, define a time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksum matches, it could be a duplicate request, and you can add a challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth making sure this is intentional, even if each request comes with a unique request id.
This approach may not suit every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying a failed request, or a customer double-clicks and the client layer submits two requests.
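A small sketch of that checksum idea in Python; the window length, storage, and field names are made up for illustration:

```python
# Sketch of checksum-based duplicate detection: hash the user-supplied attributes
# and compare against requests seen from the same client in the last x minutes.
import hashlib
import json
import time

WINDOW_SECONDS = 5 * 60          # "x minutes" threshold, made up for illustration
recent = {}                      # (client_id, checksum) -> time last seen

def looks_like_duplicate(client_id, attributes):
    checksum = hashlib.sha256(
        json.dumps(attributes, sort_keys=True).encode()
    ).hexdigest()
    now = time.time()
    key = (client_id, checksum)
    last_seen = recent.get(key)
    recent[key] = now
    return last_seen is not None and (now - last_seen) < WINDOW_SECONDS

print(looks_like_duplicate("client-1", {"amount": 100, "to": "acct-9"}))   # False
print(looks_like_duplicate("client-1", {"amount": 100, "to": "acct-9"}))   # True: challenge it
```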
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
A REST request arrives with a UUID
The web server checks whether the UUID is in the memory-cache blacklist table (if yes, answer 409)
The server writes the request to the DB (if it was not filtered by ETS)
The DB checks whether the UUID is a repeat before writing
If yes, answer 409 to the server, and add the UUID to the blacklist in the memory cache and on disk
If it is not a repeat, write to the DB and answer 200
Algorithm [solution 2]
A REST request arrives with a UUID
Save the UUID in the memory-cache table (expire after 30 days)
The web server checks whether the UUID is in the memory-cache blacklist table [if yes, return HTTP 409]
The server writes the request to the DB [return HTTP 200]
In solution 2, the memory-cache blacklist is built ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that arrives within a given period of time". We also replicate the memory-cache table on disk, so we can refill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing; if it is a duplicate, subsequent round trips are handled by the memory cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.
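A compact sketch of solution 2 in Python, in-memory only; the 30-day expiry mirrors the description, while persistence to disk and the real DB write are omitted:

```python
# Sketch of solution 2: check an expiring in-memory table of seen UUIDs before
# writing to the DB. Disk replication of the table is omitted here.
import time

TTL_SECONDS = 30 * 24 * 3600     # "expire after 30 days"
seen_uuids = {}                  # uuid -> time first seen

def handle_request(request_uuid):
    now = time.time()
    first_seen = seen_uuids.get(request_uuid)
    if first_seen is not None and (now - first_seen) < TTL_SECONDS:
        return 409               # duplicate within the window
    seen_uuids[request_uuid] = now
    # ... write the request to the DB here ...
    return 200
```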
I'm making a site for a writers management company. They get tons of script submissions every day from prospective and often unsolicited writers. The new site will allow a prospective writer to submit a short logline / sample of his or her idea. This idea gets sent to an email account at the management group. If the management group likes what they see, they want to be able to approve that submission from within the email and have a unique link dispatched to the submitter to upload their full script. This link would either only work once, or only for a certain amount of time so that only the intended recipient could use it.
So, can anyone point me in the direction of some sort of (I'm assuming PHP + MySQL) CMS or framework that could accomplish this? I've searched a lot, but I can't seem to figure out the right way to phrase this query for a search engine.
I have moderate programming experience, but not much with PHP outside of some simple WordPress hacks.
Thanks!
I will just give you general guidelines on a simple way to construct such a system.
I assume that the Writer is somehow registered in the system, and his/her profile contains a valid email address.
So, when he submits the sample, you would create an entry on the "Sample" table. Then you would mail a Manager with the sample and a link. This link would point to a script giving the database "id" of the sample as a parameter (this script should verify that the manager is logged on -- if not, show the login screen and after successful login redirect him back).
This script would then be aware of the Manager's intention to allow the Writer to submit his work. Now the fun begins.
There are many possibilities:
You can create an entry in an appropriate "SubmitAuthorizations" DB table containing the id of the Writer and the date this authorization was given (i.e., the date when the row was added to your DB). Then you simply send a mail to the Writer with a link like "upload.php?id=42", where the id is the authorization id. This script would check whether the logged-in user is the correct Writer, and whether he is within the allowed timeframe (by comparing the stored "authorization date" and the current date).
The next one is the one I prefer: no special table just for handling something trivial (let's say you will never want to "edit" an authorization, nor "cancel" it, but it may still "expire"). You simply give the Writer a link with 2 parameters: the date the authorization was given and an authorization key, like: "upload.php?authDate=20091030&key=87a62d726ef7..."
Let me explain how it works.
The script would first verify if the Writer is logged on (if not, show the login page with a redirection after successful login).
So, now it's time to validate the request: that is, to check that this is not a "forged" link. How do we do this? It's just a matter of constructing the authorization key in a "smart" way.
You can do something like:
key = hash(concat(userId, ";", authDate, ";", seed));
Well, here hash() is what we call a "one-way function", like MD5, SHA1, etc. Then concat() is simply a string concatenation function. Finally, seed is something like a "master password": completely random and never changing (if you change it, all the issued links stop working), just to increase security -- say a hacker correctly guesses you are using MD5 (which is easy) and then tries to attack your system by hashing some combinations of the username and the date.
Also, for a request to be valid, it must be in the correct time frame.
So, if both the key is valid, and the date is within the time frame, you are able to accept an upload.
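To make that concrete, here is a short sketch in Python (the question asks about PHP, where hash('sha1', ...) works the same way); the seed and the 7-day window are made-up values, and SHA1 is used per the note below about avoiding MD5.

```python
# Sketch of the key scheme: key = hash(userId ";" authDate ";" seed), plus a
# time-frame check. The seed and window below are made up for illustration.
import hashlib
import hmac
from datetime import datetime, timedelta

SEED = "long-random-master-secret"            # must never change once links are issued
WINDOW = timedelta(days=7)

def make_key(user_id, auth_date):             # auth_date like "20091030"
    return hashlib.sha1(f"{user_id};{auth_date};{SEED}".encode()).hexdigest()

def is_valid(user_id, auth_date, key):
    not_forged = hmac.compare_digest(make_key(user_id, auth_date), key)
    in_window = datetime.utcnow() - datetime.strptime(auth_date, "%Y%m%d") <= WINDOW
    return not_forged and in_window

# The link sent to the Writer would look like:
#   upload.php?authDate=20091030&key=<make_key(writer_id, "20091030")>
```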
Some points to note:
This is a very simple system, but might be exactly what you need.
You should avoid MD5 for the hashing function, take something like SHA1 instead.
For the link sent to the Writer, you could "obfuscate" the parameter names, i.e., call them "k" for the "key" and "d" for the "authDate".
For the date, you could choose another format, more "cryptic", like the Unix epoch.
Finally, you can encode the parameters with something like base64 (or simply apply some character-replacing function like rot13, but one that takes digits into account as well) just to make them harder to guess.
Just for completeness, in the validation script you can also check whether the Writer has already sent a file within the time frame, making it impossible for him to send many files within the time frame.
I have recently implemented something like this twice at the company I work for, for two completely different uses. Once you get the idea, it is extremely simple to implement -- maybe less than 10 lines of code for the whole key-generation and validation process.
In one of them, the agent equivalent to your Writer had no account in the system (it would actually be his first contact with the system) -- there was only his "profile" in the system, managed by someone else. In that case, you would have to include the Writer's id in the parameters to the "Upload" script as well.
I hope this helps, and that it was clear enough. If I find the time, I will blog about it with a working example in some language.