I have a PDF with a signature from a client, and I need to add an OCSP response to the unsigned attributes. How can this be done in iText?
I know how to change the signature itself using
org.bouncycastle.cms.CMSSignedData.replaceSigners(...).getEncoded()
but I don't know how to replace PdfName.CONTENTS in the PDF using new PdfString(newSignature).setHexWriting(true). If I use this code:
PdfDictionary other = new PdfDictionary();
other.put(PdfName.CONTENTS, new PdfString(newSignature).setHexWriting(true));
dicSignature.merge(other);
where dicSignature is the dictionary containing the signature, then Adobe Reader reports the signature as broken when the document is opened.
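For context, rebuilding the CMS container with an unsigned attribute might look roughly like the following BouncyCastle sketch (a reasonably recent BouncyCastle version is assumed, and the attribute OID is deliberately a placeholder for whatever OID the verifying software expects):

import java.util.ArrayList;
import java.util.List;

import org.bouncycastle.asn1.ASN1EncodableVector;
import org.bouncycastle.asn1.ASN1ObjectIdentifier;
import org.bouncycastle.asn1.ASN1Primitive;
import org.bouncycastle.asn1.cms.AttributeTable;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.SignerInformation;
import org.bouncycastle.cms.SignerInformationStore;

public class CmsOcspAttributeSketch {
    // Adds ocspResponseDer as an unsigned attribute to every SignerInfo of originalCms.
    // attributeOid is a parameter on purpose: use the OID your verification workflow expects.
    static byte[] addUnsignedOcspAttribute(byte[] originalCms, byte[] ocspResponseDer,
            String attributeOid) throws Exception {
        CMSSignedData signedData = new CMSSignedData(originalCms);
        ASN1ObjectIdentifier oid = new ASN1ObjectIdentifier(attributeOid);

        List<SignerInformation> updatedSigners = new ArrayList<SignerInformation>();
        for (SignerInformation signer : signedData.getSignerInfos().getSigners()) {
            AttributeTable unsigned = signer.getUnsignedAttributes();
            if (unsigned == null) {
                unsigned = new AttributeTable(new ASN1EncodableVector());
            }
            unsigned = unsigned.add(oid, ASN1Primitive.fromByteArray(ocspResponseDer));
            updatedSigners.add(SignerInformation.replaceUnsignedAttributes(signer, unsigned));
        }
        return CMSSignedData.replaceSigners(signedData,
                new SignerInformationStore(updatedSigners)).getEncoded();
    }
}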
When iText manipulates a document using a PdfStamper in normal mode, it can (and often does) re-arrange the existing PDF objects. This obviously breaks the hash value of any existing integrated signature. Furthermore, the byte ranges that would have to be signed change. This most likely is your problem.
When iText manipulates a document using a PdfStamper in append mode, it leaves the PDF as is and only appends its additions and changes. While this in general is the way to go to keep integrated signatures from breaking, you cannot change the content of a signature this way because the rules for embedded signatures are stricter than for PDFs in general. Switching to append mode, therefore, would not fix your problem.
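For reference, append mode is what the final boolean argument selects when the PdfStamper is created, e.g.:

// Sketch (iText 5.x): open an existing PDF for additive changes only; file names are illustrative.
PdfReader reader = new PdfReader("signed.pdf");
OutputStream os = new FileOutputStream("signed-updated.pdf");
PdfStamper stamper = new PdfStamper(reader, os, '\0', true); // true = append mode
// ... additive changes only ...
stamper.close();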
Thus, iText has an explicit method for inserting a signature without otherwise changing the PDF:
MakeSignature.signDeferred(PdfReader reader,
String fieldName,
OutputStream outs,
ExternalSignatureContainer externalSignatureContainer)
throws DocumentException, IOException, GeneralSecurityException
Its name stems from the fact that this method was originally intended for deferred signing: the PDF is first prepared for signing (all dictionaries and other structures required to hash the byte ranges are added, including a gap into which a signature container shall eventually be injected), the hash value is calculated and sent to some other service, and the prepared PDF is stored locally. As soon as that other service returns the signature, the prepared PDF is retrieved and the signature is inserted into it using this method.
The only difference from your use case is that there already is a signature in the gap. That signature, though, will be overwritten by your updated one when using signDeferred.
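A rough sketch of how signDeferred could be used to inject an updated CMS container (iText 5.x; the field and file names are illustrative):

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

import com.itextpdf.text.pdf.PdfDictionary;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.security.ExternalSignatureContainer;
import com.itextpdf.text.pdf.security.MakeSignature;

public class ReplaceSignatureContentsSketch {
    static void replaceContents(String src, String dest, String fieldName,
            final byte[] newCmsContainer) throws Exception {
        ExternalSignatureContainer container = new ExternalSignatureContainer() {
            public byte[] sign(InputStream data) {
                // The signed byte ranges are ignored; the already-built container is injected as is.
                return newCmsContainer;
            }
            public void modifySigningDictionary(PdfDictionary signDic) {
                // The existing signature dictionary is reused unchanged.
            }
        };
        PdfReader reader = new PdfReader(src);
        OutputStream os = new FileOutputStream(dest);
        MakeSignature.signDeferred(reader, fieldName, os, container);
        os.close();
        reader.close();
    }
}

Keep in mind that the updated container still has to fit into the space reserved by the original /Contents value; signDeferred only replaces the bytes inside that gap.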
Having said all this, you may be in for a surprise if you expect that after you add an OCSP response to the unsigned attributes, Adobe Reader will use this information for verification. In the context of integrated PDF signatures according to ISO 32000-1, section 12.8.3.3 PKCS#7 Signatures as used in ISO 32000,
the PKCS#7 object should contain [...] Revocation information as a signed attribute (PDF 1.6): This attribute may include all the revocation information that is necessary to carry out revocation checks for the signer's certificate and its issuer certificates. Since revocation information is a signed attribute, it must be obtained before the computation of the digital signature. This means that the software used by the signer must be able to construct the certification path and the associated revocation information. If one of the elements cannot be obtained (e.g. no connection is possible), a signature with this attribute will not be possible.
I am working on a library which follows the PKCS#11 standard.
https://www.cryptsoft.com/pkcs11doc/v220/
The library can generate an RSA key pair in the token via the function C_GenerateKeyPair and returns the appropriate object handles with return value CKR_OK.
The token (applet) does not support loading a private/public key; it can only generate key pairs. What would be the appropriate return value when creation of an RSA private/public key is attempted via C_CreateObject?
Currently I am returning CKR_GENERAL_ERROR; is that okay?
Allowed return values are
CKR_ARGUMENTS_BAD, CKR_ATTRIBUTE_READ_ONLY,
CKR_ATTRIBUTE_TYPE_INVALID, CKR_ATTRIBUTE_VALUE_INVALID,
CKR_CRYPTOKI_NOT_INITIALIZED, CKR_DEVICE_ERROR, CKR_DEVICE_MEMORY,
CKR_DEVICE_REMOVED, CKR_DOMAIN_PARAMS_INVALID, CKR_FUNCTION_FAILED,
CKR_GENERAL_ERROR, CKR_HOST_MEMORY, CKR_OK, CKR_PIN_EXPIRED,
CKR_SESSION_CLOSED, CKR_SESSION_HANDLE_INVALID, CKR_SESSION_READ_ONLY,
CKR_TEMPLATE_INCOMPLETE, CKR_TEMPLATE_INCONSISTENT,
CKR_TOKEN_WRITE_PROTECTED, CKR_USER_NOT_LOGGED_IN.
Thanks for your help
Update
I have two types of applet: one supports loading an RSA private/public key into the token and the other does not. The only way to identify whether the token supports key loading is the response to the transmitted APDU. So I can't take the decision just by checking the class attribute passed to C_CreateObject.
If your library does not support C_CreateObject at all then the best choice IMO is CKR_FUNCTION_NOT_SUPPORTED.
Chapter 11 in PKCS#11 v2.20 states:
A Cryptoki library need not support every function in the Cryptoki API. However, even an unsupported function must have a "stub" in the library which simply returns the value CKR_FUNCTION_NOT_SUPPORTED.
If your library does support C_CreateObject for creation of other object types (e.g. certificates, data objects etc.) then the best choice IMO is CKR_ATTRIBUTE_VALUE_INVALID.
Chapter 10.1.1 in PKCS#11 v2.20 states:
If the supplied template specifies an invalid value for a valid attribute, then the attempt should fail with the error code CKR_ATTRIBUTE_VALUE_INVALID.
UPDATE
Now that you have shared more details about your library in the comments, I can add a more detailed explanation:
It seems I can call your implementation of C_CreateObject with a template containing CKA_CLASS=CKO_CERTIFICATE and it will create a certificate object on this particular token and return CKR_OK. If I call it with a template containing CKA_CLASS=CKO_PRIVATE_KEY, then your code will decide to return an error right after evaluating the supplied value of this attribute. IMO there is no doubt that chapter 10.1.1 of PKCS#11 v2.20 recommends returning CKR_ATTRIBUTE_VALUE_INVALID in this case.
However, if you are not willing to follow the behavior recommended by the specification and there is no predefined error code you like, you can introduce your own vendor-defined code (see my older answer for more details):
#define CKR_TOKEN_OPERATION_NOT_SUPPORTED (CKR_VENDOR_DEFINED|0x0000001)
IMO the confusion level for an inexperienced developer will be the same regardless of the error code you return. In the end he/she will need to consult your documentation or the logs produced by your library to find out the real reason why he/she received the error.
I am implementing my own version of RSA-OAEP with SHA-256. I want to test it by comparing it to the output of the Cipher class in Java using RSA-OAEP and SHA-256. According to PKCS #1, RSA-OAEP requires a label, which by default is an empty string. However, I can't find a way to input a label in the built-in class. My implementation seems to work correctly for both encryption and decryption, but the Cipher class produces different output. Is there a default label which the Cipher class uses?
What is called label L in PKCS#1 v2.1 RSAES-OAEP was called encoding parameters P in v2.0; see the description of pSourceAlgorithm in A.2.1. The Java API keeps the old terminology, presumably for compatibility, and the default is indeed an empty octet string, implemented in Java as a byte array of length 0. See https://docs.oracle.com/javase/7/docs/api/javax/crypto/spec/PSource.PSpecified.html . Note that even when P-call-me-L is empty, its hash, which goes into DB before masking, is not empty.
When you say 'different output', you do realize that OAEP is randomized (in a way that provably does not leak information to the adversary) and every encryption of the same plaintext should produce a unique ciphertext, but all of them should decrypt back to the same plaintext, right?
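For a sanity check, here is a small sketch that makes the OAEP parameters (including the default empty label) explicit and verifies by decrypting rather than by comparing ciphertexts:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.MGF1ParameterSpec;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.OAEPParameterSpec;
import javax.crypto.spec.PSource;

public class OaepSha256Demo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Digest, MGF1 digest and label made explicit; PSource.PSpecified.DEFAULT is the
        // empty label (a zero-length byte array).
        OAEPParameterSpec oaepParams = new OAEPParameterSpec(
                "SHA-256", "MGF1", MGF1ParameterSpec.SHA256, PSource.PSpecified.DEFAULT);

        byte[] message = "test".getBytes(StandardCharsets.UTF_8);

        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, kp.getPublic(), oaepParams);
        byte[] ciphertext = enc.doFinal(message);

        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, kp.getPrivate(), oaepParams);
        byte[] recovered = dec.doFinal(ciphertext);

        // Two encryptions of the same message differ (OAEP is randomized); the meaningful
        // check is that decryption recovers the original message.
        System.out.println(Arrays.equals(message, recovered));
    }
}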
I am carefully reading the Digital Signature white paper and ITEXT IN ACTION: CHAPTER 12: PROTECTING YOUR PDF.
I have successfully added multiple signatures in append mode to a source PDF, and I have a client who will add 2, 3, or 4 signatures as a method of approving a source document for change management.
Question:
Is there a way to treat the 'last' chosen signature as somehow final? We will already be using the field name as the signing person's ID, the Location as the persistent ID of the signing machine, and the Reason as the reason for signing.
This is for internal purposes, so we are OK with using the computer's clock. At the moment the only method I have come up with is to create all detached signatures as CMS except the last, which would be CAdES - so that if the last signature in the current file is ETSI rather than ADBE, I will not allow more signatures. This feels rather inelegant, however, and if the starting PDF has a validated timestamp then this basic methodology will fail. It also relies on text parsing, which also feels a little flimsy.
I have read the section on attaching actions but this seems a huge hammer to crack what should, in theory at least, be a much simpler exercise.
Did you get a chance to read 2.5.5 Locking fields and documents after signing?
In this case, the dictionary defining the signature field has a /Lock entry whose value is a signature lock dictionary. One of the lock permissions could be LockPermissions.NO_CHANGES_ALLOWED.
The result would then be what you can see in figure 2.31 (locked fields after final approval). In this screen shot, you can see that sig4 locks the document.
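A rough sketch of how that /Lock entry could be added when the final empty signature field is created (iText 5.x; the field name, page and coordinates are illustrative, and you should verify the exact PdfSigLockDictionary API against your iText version):

import java.io.FileOutputStream;

import com.itextpdf.text.Rectangle;
import com.itextpdf.text.pdf.PdfAnnotation;
import com.itextpdf.text.pdf.PdfFormField;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfSigLockDictionary;
import com.itextpdf.text.pdf.PdfSigLockDictionary.LockPermissions;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.PdfWriter;

public class FinalSignatureFieldSketch {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("source.pdf");
        PdfStamper stamper = new PdfStamper(reader,
                new FileOutputStream("with-final-field.pdf"), '\0', true); // append mode
        PdfWriter writer = stamper.getWriter();

        PdfFormField finalField = PdfFormField.createSignature(writer);
        finalField.setFieldName("final_approver");
        finalField.setWidget(new Rectangle(36, 700, 180, 732), PdfAnnotation.HIGHLIGHT_NONE);
        // Once this field is signed, no further changes to the document are allowed.
        finalField.put(PdfName.LOCK, writer.addToBody(
                new PdfSigLockDictionary(LockPermissions.NO_CHANGES_ALLOWED)).getIndirectReference());
        stamper.addAnnotation(finalField, 1);

        stamper.close();
        reader.close();
    }
}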
I'm trying to build a system that can purge and regenerate URLs as required for a particular application. I was previously having issues with purging when Varnish located the object by hash but missed the variant, because I didn't have "purge;" in my vcl_miss (only in my vcl_hit; some guides/example VCL files do not mention this need, but the main documentation does).
What I'm trying to figure out is whether I need to do something similar for a REGEN call. From my understanding, "set req.hash_always_miss = true;" will mean that the old hash is missed and a new hash object is generated. Subsequent calls will find the new hash, but may still miss that object if there is not an appropriate variant in the cache.
Could someone confirm for me whether a subsequent request missing the variant in the new object will lead directly to a cache miss and fetch, rather than finding any of the variants from the previous object?
hash_always_miss will only influence the current/ongoing request and the cache contents that it replaces. A fetch will always happen, and the object will be put into the cache using the same rules as any other miss/fetch sequence.
The "old" other variants of the same hash are still valid objects and will be served to a client indicating request headers matching the varied headers.
hash_always_miss will replace the current variant, and nothing else.
To answer your question, the second part of your sentence is most correct.
I have requests like
5|0|7|http://localhost:8080/testproject/|29F4EA1240F157649C12466F01F46F60|com.test.client.GreetingService|greetServer|java.lang.String|myInput1|myInput2|1|2|3|4|2|5|5|6|7|
I would like to know how GWT generates the MD5 value 29F4EA1240F157649C12466F01F46F60. Is it based on the client IP and date? Can anyone point me to the relevant code? I only find stuff regarding the history token, but that looks different to me.
OK, after some research I think I found the answer.
The keywords you should have been looking for are "strong name" (or "strongName") and/or permutation, since it seems that with the RPC request they send out the permutation strong name (that MD5 hash), so that you can possibly distinguish on the server side which permutation the request was sent from.
The core function is Util.computeStrongName; it computes an MD5 hash (d'oh) of the provided byte arrays, with the added catch:
/*
* Include the lengths of the contents components in the hash, so that the
* hashed sequence of bytes is in a one-to-one correspondence with the
* possible arguments to this method.
*/
From there, I tracked back to the linkers and then to PermutationResult, which feeds Util.computeStrongName via this function:
/**
* The compiled JavaScript code as UTF8 bytes.
*/
byte[][] getJs();
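To illustrate the idea (this is a simplified re-implementation for explanation only, not the actual GWT source), the computation is essentially an MD5 over all the compiled-code byte arrays, with each component's length mixed in first:

import java.nio.ByteBuffer;
import java.security.MessageDigest;

public class StrongNameSketch {
    // Simplified illustration of a "strong name": MD5 over the content components,
    // with each component's length included so the hashed byte sequence is unambiguous.
    static String computeStrongNameSketch(byte[][] contents) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (byte[] content : contents) {
            md5.update(ByteBuffer.allocate(4).putInt(content.length).array());
            md5.update(content);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02X", b)); // uppercase hex, like the hash in the request
        }
        return hex.toString();
    }
}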
Eh, I hope that was at least a bit helpful ;) If this still doesn't answer your question (or you were looking for something different), look in trunk/user/src/com/google/gwt/user/client/rpc (start with RpcRequestBuilder.java).
As Igor said, GWT uses MD5 hashes of your application code to produce unique names for each permutation of each version of your application. The specific hash you referenced is a part of the GWT RPC request payload that identifies a .gwt.rpc serialization policy file on the server. That policy file says which Java objects can be serialized as part of the request, response, or thrown exceptions in the GWT RPC service.