PKI client behavior when delta CRL has expired

I have an internal Windows Server 2012 Enterprise Root CA and a couple of CDPs. I am trying to ensure a .NET client application running on Windows Server 2012 does not fail when it builds a certificate chain because the CRL and Delta CRL files it uses as part of the process have expired.
So far it seems like one possible solution would be to issue overlapping CRLs, to extend the window during which a failure that prevents Base CRL publishing can be tolerated by the apps that depend on it (I am still looking for a detailed, non-theoretical explanation of how exactly that is done, so if you have examples, please let me know).
Another possible solution (possibly in combination with overlapping CRLs) would be to have a long Base CRL publication interval (e.g. 2 weeks) and a short Delta CRL interval (e.g. 1 hour). The question here is - what happens in a scenario like this:
A client has cached the Base and Delta CRLs
For one reason or another the Delta CRL cannot be published to the CDPs for, let's say 6 hours - way past the validity of the Delta CRL, but (in most cases) before the Base CRL has expired
After an hour or so (assuming the Delta CRL is published every hour), the last successfully published Delta CRL will have expired, so for some time the only valid CRL will be the Base CRL. Would the client then continue processing, since it has the Base CRL cached, or would it fail? The closest explanation I have found is from this old article: "If a valid base CRL exists and is available, but no delta or time valid delta is available, the certificate chaining engine returns a warning that no delta CRL is available". That suggests the client should continue processing rather than throw an exception, which makes sense, but it is a very old article and I would feel more comfortable with something more up-to-date... :)
So, bottom line, does the above article still hold for modern systems? And if you have any detailed info on how to set up overlapping CRL publishing on a Windows Server 2012 Enterprise CA, please share... :)
Thanks!
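For what it's worth, the overlap windows on a Windows CA are usually driven by the CA's CRL registry settings, configured with certutil. The following is a hedged sketch only, with the value names written from memory; verify against your own CA with certutil -getreg CA before applying anything, and restart the CA service afterwards.

rem Base CRL published every 2 weeks, Delta CRL every hour
certutil -setreg CA\CRLPeriodUnits 2
certutil -setreg CA\CRLPeriod "Weeks"
certutil -setreg CA\CRLDeltaPeriodUnits 1
certutil -setreg CA\CRLDeltaPeriod "Hours"
rem Overlap: each published CRL stays valid past the next scheduled publication,
rem giving a grace window if a publication run fails
certutil -setreg CA\CRLOverlapUnits 12
certutil -setreg CA\CRLOverlapPeriod "Hours"
certutil -setreg CA\CRLDeltaOverlapUnits 1
certutil -setreg CA\CRLDeltaOverlapPeriod "Hours"
net stop certsvc && net start certsvc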

Related

custom program error: 0x3f metaplex candy machine createSetCollectionDuringMintInstruction

I have a metaplex candy machine and collection that I set up several weeks back. Minting worked initially but is now failing.
The error reported is
custom program error: 0x3f
Which appears to be from the nested instruction to the metadata program. Which should be
set_and_verify_collection
readonly code: number = 0x3f;
readonly name: string = 'DataTypeMismatch';
It can be thrown from metadata deserialization.
https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/state/mod.rs
Which is called for the token metadata and collection metadata data.
I believe those are the only two places it would be thrown from in this method. AccountInfo is resolved for several accounts but it's only deserialized into a typed entity, with size and type considerations for those two entities.
Checking the metadata on the collection, it's present and the length looks normal for Metaplex metadata accounts at 679 bytes.
Now the metadata for the token being minted is not present because the tx failed. However, if I attempt a transaction without the 'SetCollectionDuringMint' instruction added, the tx succeeds.
Interesting. The metadata account for the token has zero bytes allocated.
I don't recall this changing. In fact, if I go through my source history to older revisions, I've not been explicitly requesting to create the metadata account. I've simply been pre-allocating the account and calling mint nft on the candy machine.
Did the candy machine change to no longer automatically create the metadata account for the minted NFT?
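For reference, the account checks described above (metadata present on the collection, zero bytes on the token) can be reproduced with a short script. This is a rough sketch only, assuming @solana/web3.js and the public Token Metadata program id, with the mint address supplied as a command-line argument:

import { Connection, PublicKey, clusterApiUrl } from "@solana/web3.js";

// Well-known Token Metadata program id; the mint address is supplied by the caller.
const TOKEN_METADATA_PROGRAM_ID = new PublicKey("metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s");

async function checkMetadataAccount(mintAddress: string) {
  const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
  const mint = new PublicKey(mintAddress);
  // The metadata PDA is derived from ["metadata", program id, mint].
  const [metadataPda] = await PublicKey.findProgramAddress(
    [Buffer.from("metadata"), TOKEN_METADATA_PROGRAM_ID.toBuffer(), mint.toBuffer()],
    TOKEN_METADATA_PROGRAM_ID
  );
  const info = await connection.getAccountInfo(metadataPda);
  // A healthy collection metadata account showed ~679 bytes here; the failed
  // mint's token metadata account came back with 0 bytes allocated.
  console.log(metadataPda.toBase58(), info ? info.data.length : "account not found");
}

const mintAddress = process.argv[2];
if (!mintAddress) throw new Error("usage: ts-node check-metadata.ts <mint address>");
checkMetadataAccount(mintAddress).catch(console.error);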
Almost as soon as I finished typing up the question, it occurred to me what the likely cause was.
It came to my attention a few weeks back that this older v2 version of the candy machine does not actually halt transaction execution on constraint violations, but rather charges the client a fee for executing the transaction incorrectly.
It's likely the 'bot tax' behavior is allowing the real error, which may be occurring earlier, to be suppressed.
v3 of the candy machine has made this something you can disable, but we are a bit coupled to v2 at the moment.
Anyhow, what I think has happened here is that the bot taxing version of the candy machine, allowed the nft to mint, but didn't actually finish setting it up. Then the next instruction, set collection during mint, was unable to complete.
The real failure is earlier in the transaction, somewhere during the mint, where we no longer meet the mint criteria, and the old version of the candy machine is just charging us and failing silently.
Unfortunately, the root cause is still not clear. One other change that would have occurred between now and then is that the collection is now 'live' having passed the go live date. I'll have to dig through the validation constraints and see if there are any bot tax related short circuits related to this golive date transition.
EDIT: UPDATE: Looks like there were some changes, specific to devnet's token metadata program and my machine was affected. I'll need some new devnet machines.

Why does Kerberos need a TGT?

I am learning the design of the KDC, and I find the protocol needs 3 rounds of information exchange. But I think the TGT step is redundant and unnecessary, since the KDC could just send the service ticket in the 1st round.
So why does the design include the second round? What is the use of exchanging a TGT?
It's not unnecessary. It's there as a long term optimization.
With Kerberos you have the two flows between the KDC and client:
AS-REQ: Exchanges a human-supplied credential (e.g. password, certificate, etc.) for a ticket.
TGS-REQ: Exchanges a KDC-supplied ticket for another ticket.
The AS-REQ can request any ticket it wants; in practice it only requests krbtgt. The AS-REQ is designed to evaluate the supplied credential, look up the identity in the backing directory, apply any policy, and do whatever else the KDC needs to do, which is genuinely expensive. Credential verification/derivation/etc. can be an expensive operation, and querying the directory for things like (say, in Active Directory's case) group membership is incredibly expensive. This is expensive for the client because it's most likely always doing key derivation, and it's expensive for the KDC because it's always going to query the directory.
If you ask for krbtgt you unlock access to the TGS-REQ flow.
The TGS-REQ flow verifies the krbtgt, looks up the requested service in the directory, and copies the internal contents of the krbtgt ticket into the requested service ticket. That is orders of magnitude faster because it skips most of the stuff that happened in AS-REQ flow. It does still query the directory, but that's cheap compared to everything else. The client doesn't do any key derivation now.
More importantly, you no longer need to keep the long-term credential in memory, because you have the TGT.
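A toy sketch of that split, purely illustrative pseudo-flow rather than a real Kerberos API, just to show where the expensive work sits:

// Illustrative sketch only; not a real Kerberos library.
interface Ticket { client: string; service: string; authzData: string[] }

// Stand-ins for the expensive operations described above.
const deriveKeyFromPassword = (pw: string) => `key(${pw})`;          // costly KDF in reality
const queryDirectory = (user: string) => [`group-data-for-${user}`]; // costly directory/policy lookup

// AS-REQ: password -> TGT. Pays the key-derivation and directory costs.
function asReq(user: string, password: string): Ticket {
  deriveKeyFromPassword(password);
  const authzData = queryDirectory(user);
  return { client: user, service: "krbtgt", authzData };
}

// TGS-REQ: TGT -> service ticket. Just copies the authorization data from
// the TGT; no password handling, no key derivation.
function tgsReq(tgt: Ticket, service: string): Ticket {
  return { client: tgt.client, service, authzData: tgt.authzData };
}

const tgt = asReq("alice", "hunter2");        // done once per logon
const cifs = tgsReq(tgt, "cifs/fileserver");  // done cheaply for every service
console.log(cifs.service, cifs.authzData);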

Usage of nbf in JSON Web Tokens

nbf: Defines the time before which the JWT MUST NOT be accepted for processing
I found this definition of nbf in JSON Web Tokens, but I am still wondering what nbf is used for. Why do we use it? Does it relate to security?
Any idea would be appreciated.
It is really up to how you interpret the time.
One possible scenario I could make up is literally when a token must be valid from some particular point in time until another point in time.
Say you're selling some API or resource, and a client purchased access that lasts for one hour, starting tomorrow at midday.
So you issue a JWT with:
iat set to now
nbf set to tomorrow 12:00pm
exp set to tomorrow 1:00pm
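A minimal sketch of issuing and checking such a token, assuming the jsonwebtoken npm package (claim values are illustrative):

import jwt from "jsonwebtoken";

const now = Math.floor(Date.now() / 1000);       // NumericDate: seconds since the epoch
const noonTomorrow = now + 24 * 60 * 60;         // stand-in for "tomorrow 12:00pm"

const token = jwt.sign(
  { sub: "customer-42", iat: now, nbf: noonTomorrow, exp: noonTomorrow + 3600 },
  "shared-secret"
);

try {
  jwt.verify(token, "shared-secret");            // rejected until nbf is reached
} catch (err) {
  console.log((err as Error).name);              // "NotBeforeError"
}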
One more thing to add to what #zerkms said: if you want the token to be usable from now, then
nbf also needs to be set to the current time (now).
Otherwise you'll get an error like "the token cannot be used prior to this particular time".
'nbf' means 'Not Before'.
A token can also be given an nbf a few seconds after its creation time, so it cannot be used immediately. Note that nbf is an absolute timestamp (seconds since the epoch), not an offset, so a 3-second delay means nbf = iat + 3.
Such a delay can slow down automated clients that try to use a token the instant it is issued, but it is not a strong defense against brute force on its own.

Increasing the expiry date of automatic certificate rollover in ADFS 2.0

In a new implementation, we had a requirement to increase the certificate duration in ADFS 2.0 from the default one year to a larger value. Is there an easy way to do this?
This blog gives a detailed explanation of self-signed certificates and the pros/cons of using them.
Use the below command (excerpt from the blog) to increase certificate duration to 3 years (1095 days):
Set-AdfsProperties -CertificateDuration 1095

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing id's, and you're set.
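A minimal sketch of the matching server side, assuming an Express/TypeScript backend with an in-memory store (purely illustrative):

import express from "express";

const app = express();
app.use(express.json());

const store = new Map<string, unknown>();

// PUT is idempotent: the client generates the id (e.g. crypto.randomUUID())
// and a retried request simply overwrites the same resource with the same
// representation, so no duplicate can be created.
app.put("/objects/:id", (req, res) => {
  const existed = store.has(req.params.id);
  store.set(req.params.id, req.body);
  res.status(existed ? 200 : 201).json({ id: req.params.id });
});

app.listen(3000);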
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI itself (so PUT cannot be used) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of returning an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: Server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTS do not create duplicates. Repeated updates do not overwrite subsequent changes etc. etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if any-one cares.
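A rough sketch of that store-and-replay pattern, assuming an Express backend and a client-supplied action id carried in a hypothetical X-Action-Id header (all names illustrative):

import express from "express";

const app = express();
app.use(express.json());

// Stored responses for unsafe requests, keyed by the action id.
const responses = new Map<string, { status: number; body: unknown }>();
const orders: unknown[] = [];

app.post("/orders", (req, res) => {
  const actionId = req.header("X-Action-Id");   // client GUID or server-issued sequence number
  if (!actionId) return res.status(400).json({ error: "X-Action-Id required" });

  // Duplicate request: replay the original response and do nothing else.
  const previous = responses.get(actionId);
  if (previous) return res.status(previous.status).json(previous.body);

  orders.push(req.body);                        // perform the action exactly once
  const reply = { status: 201, body: { order: req.body } };
  responses.set(actionId, reply);
  res.status(reply.status).json(reply.body);
});

app.listen(3000);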
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
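A compact sketch of that two-step flow, again assuming Express; the endpoint names and the "staged" field are illustrative:

import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

type Item = { data: unknown; status: "staged" | "active" };
const items = new Map<string, Item>();

// Step 1: create in a "staged" state and hand back a token (here, the id itself).
app.post("/items", (req, res) => {
  const id = randomUUID();
  items.set(id, { data: req.body, status: "staged" });
  res.status(201).json({ token: id });
});

// Step 2: the follow-up request flips it to "active". Repeating this is harmless.
app.post("/items/:id/confirm", (req, res) => {
  const item = items.get(req.params.id);
  if (!item) return res.sendStatus(404);
  item.status = "active";
  res.json({ id: req.params.id, status: item.status });
});

// Staged items that are never confirmed can be swept by a periodic job.
app.listen(3000);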
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking a customer to solve this problem for us. Consider an environment where we have publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to have the intelligence to work out whether a request is a duplicate. One simple approach here would be to calculate and store a checksum of every request based on attributes from the user input, define a time threshold (x mins), and compare every new request from the same client against the ones received in the past x mins. If the checksum matches, it could be a duplicate request, and we add some challenge mechanism for the client to resolve it.
If a client is making two different requests with the same parameters within x mins, it might be worth ensuring that this is intentional, even if it comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost the customer. Consider a payment processing engine where an intermediate layer ends up retrying a failed request, or a customer double-clicks, resulting in the client layer submitting two requests.
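A small sketch of that checksum-within-a-window idea, assuming Express and a hypothetical X-Client-Id header to identify the caller:

import express from "express";
import { createHash } from "crypto";

const app = express();
app.use(express.json());

const WINDOW_MS = 5 * 60 * 1000;              // the "x mins" threshold
const recent = new Map<string, number>();     // checksum -> last seen timestamp

app.post("/payments", (req, res) => {
  // Checksum over the client id plus the user-supplied attributes.
  const checksum = createHash("sha256")
    .update(String(req.header("X-Client-Id")))
    .update(JSON.stringify(req.body))
    .digest("hex");

  const seenAt = recent.get(checksum);
  if (seenAt !== undefined && Date.now() - seenAt < WINDOW_MS) {
    // Possible duplicate: challenge the client instead of processing again.
    return res.status(409).json({ error: "possible duplicate; confirm to proceed" });
  }

  recent.set(checksum, Date.now());
  res.status(201).json({ accepted: true });   // process the payment here
});

app.listen(3000);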
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
REST arrives with UUID
Web server checks if UUID is in Memory cache black list table (if yes, answer 409)
Server writes the request to DB (if it was not filtered by the memory cache / ETS)
DB checks if the UUID is repeated before writing
If yes, answer 409 to the server, and add the UUID to the Memory Cache and Disk blacklists
If not repeated, write to DB and answer 200
Algorithm [solution 2]
REST arrives with UUID
Save the UUID in the Memory Cache table (expire for 30 days)
Web server checks if UUID is in Memory Cache black list table [return HTTP 409]
Server writes the request to DB [return HTTP 200]
In solution 2, the Memory Cache blacklist is maintained ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that arrives within a given period of time". We also replicate the Memory Cache table to disk, so we fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing, and if it is a duplicate, the next round trips are handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.
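A condensed sketch of solution 2, with a plain Map standing in for the Memory Cache table and an array for the DB (disk replication omitted, names illustrative):

import express from "express";

const app = express();
app.use(express.json());

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
const seenUuids = new Map<string, number>();   // in-memory blacklist: uuid -> expiry
const db: unknown[] = [];                      // stand-in for the real database

app.post("/events", (req, res) => {
  const uuid = String(req.body.uuid ?? "");
  const expiry = seenUuids.get(uuid);

  // Duplicate within the window: reject without ever touching the DB.
  if (expiry !== undefined && expiry > Date.now()) {
    return res.sendStatus(409);
  }

  seenUuids.set(uuid, Date.now() + THIRTY_DAYS_MS);  // would also be replicated to disk
  db.push(req.body);                                  // write the request to the DB
  return res.sendStatus(200);
});

app.listen(3000);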
HTTP response code for POST when resource already exists