Sign SHA-256 hash with RSA using the PKCS#11 API?

I'm currently using pyPkcs11 to sign files.
The following call works for signing ordinary files with RSA and SHA-256:
session.sign(privKey, toSign, Mechanism(CKM_SHA256_RSA_PKCS, None))
But some of my files are already hashed (SHA-256), and the signature needs to give the same output that this OpenSSL command would produce:
openssl pkeyutl -sign -pkeyopt digest:sha256 -in <inFilePath> -inkey <keyPath> -out <outFilePath>
I have tried the following call, which does not hash the file before signing:
session.sign(privKey, toSign, Mechanism(CKM_RSA_PKCS, None))
But the result is not the one I expected, and according to the first answer of this post, CKM_RSA_PKCS vs CKM_RSA_X_509 mechanisms in PKCS#11:
CKM_RSA_PKCS on the other hand also performs the padding as defined in the PKCS#1 standards. This padding is defined within EMSA-PKCS1-v1_5, steps 3, 4 and 5. This means that this mechanism should only accept messages that are 11 bytes shorter than the size of the modulus. To create a valid RSASSA-PKCS1-v1_5 signature, you need to perform steps 1 and 2 of EMSA-PKCS1-v1_5 yourself.
After some research, it appears that my file already contains the result of the first step of the signature described in RFC 3447, so the missing part is the second step, where the ASN.1 DigestInfo value is generated.
Can I force this operation with PKCS#11, and how?
The PKCS#11 documentation doesn't seem to contain any information about this.

I see two ways to do this; the appropriate one depends on the token (not all tokens/wrappers support all mechanisms).
As explained in this other answer, you could decorate the 32-octet SHA-256 hash that you start from into a full padded message representative. Basically, as explained in PKCS#1, if the RSA key is k octets (with, I assume, k ≥ 51+11 = 62 octets, that is a public modulus of at least 8⋅62-7 = 489 bits, which is a must for security anyway), you:
1. Append on the left the 19-octet string
   30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20
   yielding a 51-octet string, which really is the ASN.1 DER encoding for the hash type and value per
   DigestInfo ::= SEQUENCE {
       digestAlgorithm DigestAlgorithm,
       digest OCTET STRING
   }
2. Further append on the left the string of k-51 octets (of which k-51-3 are FF)
   00 01 FF FF FF FF..FF FF FF FF 00
   yielding a k-octet string.
3. Sign with mechanism CKM_RSA_X_509 (length k octets).
Or, alternatively: perform step 1, skip step 2, and in step 3 use mechanism CKM_RSA_PKCS (length 51 octets).
Disclaimer: I did not check, and have not used a PKCS#11 device lately.
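For illustration, here is a minimal PyKCS11 sketch of the alternative route (step 1 plus CKM_RSA_PKCS). The function name is mine, and I assume the CKM_* constants import directly from PyKCS11 and that the session is open and logged in as in the question; treat it as untested:
from PyKCS11 import Mechanism, CKM_RSA_PKCS

# 19-octet ASN.1 DER DigestInfo prefix for SHA-256 (step 1 above)
SHA256_DIGEST_INFO_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def sign_prehashed_sha256(session, privKey, sha256_digest):
    # sha256_digest is the 32-octet hash already stored in the file
    to_sign = SHA256_DIGEST_INFO_PREFIX + sha256_digest   # 51-octet DigestInfo
    # CKM_RSA_PKCS adds the 00 01 FF..FF 00 padding (step 2) itself
    return bytes(session.sign(privKey, to_sign, Mechanism(CKM_RSA_PKCS, None)))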
Note: While there is no known attack against proper implementations of it, use of PKCS#1 v1.5 signature padding is increasingly frowned upon; e.g. French authorities recommend:
RecomSignAsym-1. Il est recommandé d’employer des mécanismes de signature asymétrique disposant d’une preuve de sécurité.
Or, in English:
It is recommended to use asymmetric signature mechanisms featuring a security proof
They mention RSA-SSA-PSS (sic). As a bonus, the PKCS#11 implementation of that is mechanism CKM_RSA_PKCS_PSS, which accepts a hash rather than the data to sign, making what's asked trivial.

I'm afraid there is no PKCS#11 function that can do what you want.
The only solution that I am aware of would be to apply the PKCS#1 v1.5 padding manually to your hash and then sign the block using the CKM_RSA_X_509 (raw or textbook RSA) mechanism.
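A rough PyKCS11 sketch of that manual padding (the function name and the modulus-length parameter are my own assumptions, and I have not tested it against a token):
from PyKCS11 import Mechanism, CKM_RSA_X_509

SHA256_DIGEST_INFO_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def sign_raw_rsa(session, privKey, sha256_digest, modulus_octets):
    # modulus_octets: RSA modulus size in octets, e.g. 256 for a 2048-bit key
    digest_info = SHA256_DIGEST_INFO_PREFIX + sha256_digest
    # EMSA-PKCS1-v1_5: 00 01 FF..FF 00 || DigestInfo, padded to the modulus size
    padded = b"\x00\x01" + b"\xff" * (modulus_octets - len(digest_info) - 3) + b"\x00" + digest_info
    return bytes(session.sign(privKey, padded, Mechanism(CKM_RSA_X_509, None)))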

Rather than issuing the command,
openssl pkeyutl -sign -pkeyopt digest:sha256 -in <inFilePath> -inkey <keyPath> -out <outFilePath>
try
openssl dgst -sha256 -sign <keyPath> -out <signature file> <inFilePath>
This produces a 256-byte signature (for a 2048-bit key) that would probably be identical to
the one generated by the PKCS#11 C_Sign call.

Related

How to allow SHA1 hash function in chrony?

I use chrony for synchronization between two devices. When I try to create a key with the SHA1 function, the following error appears:
chronyc>keygen 73 SHA1 256
Unknown hash function SHA1
How can I set up the SHA1 hash function?
I ran the command "keygen 73 sha1 256"
and got the same error message. The fix is to type SHA1 in capital letters, and that made it work. Hope this helps someone.

Block Encoding format in PKCS

In the PKCS block encoding format for RSA, what is the difference between block type 01 and block type 02? And when are they used?
From https://www.rfc-editor.org/rfc/rfc2313#section-8.1 (Thanks, James):
8.1 Encryption-block formatting
A block type BT, a padding string PS, and the data D shall be
formatted into an octet string EB, the encryption block.
EB = 00 || BT || PS || 00 || D . (1)
The block type BT shall be a single octet indicating the structure of
the encryption block. For this version of the document it shall have
value 00, 01, or 02. For a private-key operation, the block type
shall be 00 or 01. For a public-key operation, it shall be 02.
It's worth noting that the phrase "block type" does not appear in the PKCS#1 v2.0 RFC (https://www.rfc-editor.org/rfc/rfc2437)
Later we see the verification half:
9.4 Encryption-block parsing
The encryption block EB shall be parsed into a block type BT, a
padding string PS, and the data D according to Equation (1).
It is an error if any of the following conditions occurs:
The encryption block EB cannot be parsed
unambiguously (see notes to Section 8.1).
The padding string PS consists of fewer than eight
octets, or is inconsistent with the block type BT.
The decryption process is a public-key operation
and the block type BT is not 00 or 01, or the decryption
process is a private-key operation and the block type is
not 02.
(emphasis mine)
Using the terminology from RFC2313:
Signature Generation is an encryption process which is a private-key operation (BT=01)
Signature Verification is a decryption process which is a public-key operation (BT=01)
Initiate Key Transfer / Enveloping is an encryption process which is a public-key operation (BT=02)
Receive Key Transfer is a decryption process which is a private-key operation (BT=02).
RFC 2437 (PKCS#1 2.0) replaced many of these terms, and got rid of the block type notion altogether. That byte is simply dictated to be 01 for PKCS#1 signature formatting, and dictated to be 02 for PKCS#1 encryption. That byte isn't fixed for either OAEP (encryption) or PSS (signing) padding schemes.
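To make the two layouts concrete, here is a small illustrative Python sketch (mine, not from the RFC) of the encryption block EB = 00 || BT || PS || 00 || D for both block types:
import os

def eb_block_type_01(data, k):
    # Signature-style block: BT = 01, PS is all FF octets (deterministic)
    ps = b"\xff" * (k - len(data) - 3)
    return b"\x00\x01" + ps + b"\x00" + data

def eb_block_type_02(data, k):
    # Encryption-style block: BT = 02, PS is nonzero pseudorandom octets
    # (the modulo trick below is slightly biased, which is fine for illustration)
    ps = bytes((b % 255) + 1 for b in os.urandom(k - len(data) - 3))
    return b"\x00\x02" + ps + b"\x00" + data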

SignatureValue calculation for XML-DSIG

I am trying to write a method that returns a signature of an XML element for XMLDSIG using .NET framework components (RSACryptoServiceProvider) in C++/CLI. Could someone please explain this excerpt from the XMLDSIG spec (http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/) in simpler words? I have very little programming and maths background and therefore have trouble understanding it. Or could you provide an excerpt from real code as an example where this is implemented?
The SignatureValue content for an RSA signature is the base64 [MIME]
encoding of the octet string computed as per RFC 2437 [PKCS1, section
8.1.1: Signature generation for the RSASSA-PKCS1-v1_5 signature scheme]. As specified in the EMSA-PKCS1-V1_5-ENCODE function RFC 2437
[PKCS1, section 9.2.1], the value input to the signature function MUST
contain a pre-pended algorithm object identifier for the hash
function, but the availability of an ASN.1 parser and recognition of
OIDs is not required of a signature verifier. The PKCS#1 v1.5
representation appears as: CRYPT (PAD (ASN.1 (OID, DIGEST (data))))
Note that the padded ASN.1 will be of the following form: 01 | FF*
| 00 | prefix | hash where "|" is concatenation, "01", "FF", and "00"
are fixed octets of the corresponding hexadecimal value, "hash" is the
SHA1 digest of the data, and "prefix" is the ASN.1 BER SHA1 algorithm
designator prefix required in PKCS1 [RFC 2437], that is, hex 30 21
30 09 06 05 2B 0E 03 02 1A 05 00 04 14 This prefix is included to make
it easier to use standard cryptographic libraries. The FF octet MUST
be repeated the maximum number of times such that the value of the
quantity being CRYPTed is one octet shorter than the RSA modulus.
In other words, if I have the hash value for a certain XML element (not encoded in base64, is that right?), what do I do with it before sending it to the SignHash function (in RSACryptoServiceProvider)?
I know it's in the text, but I have trouble understanding it.
I don't understand "CRYPT (PAD (ASN.1 (OID, DIGEST (data))))" at all, although I understand parts of it... I don't understand how to get the OID, then the ASN.1, and how to pad it...
Let me try to explain the components, and see if this gets you any closer:
DIGEST(data) is the hash-value you already computed
OID is a globally unique identifier representing the hash-algorithm used. For SHA1 this is 1.3.14.3.2.26
ASN.1 means ASN.1-encoding of the OID and the hash-value as an ASN.1 sequence. This means the hex values listed in the reference, followed by the actual hash.
PAD means concatenating 01 FF* 00 with the ASN.1-encoded prefix and the hash to get the desired length (FF* means FF repeated an appropriate number of times; the RFC gives the details)
CRYPT is the RSA-encryption-function
However, I believe the signHash-function does all of this for you, you just provide the OID and the hash-value.
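For reference, a minimal Python sketch (illustrative only, not the .NET call) of the PAD(ASN.1(OID, DIGEST(data))) part described above; the final CRYPT step is the private-key operation that SignHash or the key container performs for you:
import hashlib

# ASN.1 DigestInfo prefix for SHA-1, as quoted from the spec above
SHA1_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15_sha1(data, modulus_octets):
    digest_info = SHA1_PREFIX + hashlib.sha1(data).digest()   # ASN.1(OID, DIGEST(data))
    ps = b"\xff" * (modulus_octets - len(digest_info) - 3)    # FF* padding
    return b"\x00\x01" + ps + b"\x00" + digest_info           # PAD(...)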

Base64 Encoding and Decoding

I would appreciate if someone could please explain this to me.
I came across this post (not important, just for reference) and saw a token encoded with base64 which the poster decoded.
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= -> t3+(:APPMOBI
I then tried to encode t3+(:APPMOBI again using base64 to see if I would get the same result, but was very surprised to get:
t3+(:APPMOBI - > dDMrKDpBUFBNT0JJ
A completely different token.
I then tried to decode the original token EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= and got t3+(:APPMOBI with random characters in between. (I got ◄ëtå╒3å+╪═┬(√:╚╚APPMOBI, which could be wrong, I quickly did it off the top of my head.)
What is the reason for the difference in tokens; were they not supposed to be the same?
The whole purpose of base64 encoding is to encode binary data into a text representation so that it can be transmitted over the network or displayed without corruption. But that is ironically what happened with the original post you were referring to:
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= does NOT decode to t3+(:APPMOBI
Instead, it contains some binary bytes (not random, by the way) that you correctly showed. So the problem was due to the original post, where either the author or the tool/browser that he/she used "cleaned up", or rather corrupted, the decoded binary data.
There is always a one-to-one relationship between encoded and decoded data (provided the same "base" is used, i.e. the same set of characters is used for the encoded text).
t3+(:APPMOBI indeed will be encoded into dDMrKDpBUFBNT0JJ
The problem is in the encoding that displayed the output to you, or in the encoding that you used to input the data to base64. This is actually the problem that base64 encoding was invented to help solve.
Instead of trying to copy and paste the non-ASCII characters, save the output as a binary file, then examine it. Then, encode the binary file. You'll see the same base64 string.
c:\TEMP>type b.txt
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=
c:\TEMP>base64 -d b.txt > b.bin
c:\TEMP>od -t x1 b.bin
0000000 11 89 74 86 d5 33 86 2b d8 cd c2 28 fb 3a c8 c8
0000020 41 50 50 4d 4f 42 49
c:\TEMP>base64 -e b.bin
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=
od is a tool (octal dump) that outputs binary data using hexadecimal notation, and shows each of the bytes.
EDIT:
You asked about a different string in your comments, dDMrKDpBUFBNT0JJ, and why does that decode to the same thing? Well, it doesn't decode to the same thing. It decodes to this string of bytes: 74 33 2b 28 3a 41 50 50 4d 4f 42 49. Your original string decoded to this string of bytes: 11 89 74 86 d5 33 86 2b d8 cd c2 28 fb 3a c8 c8 41 50 50 4d 4f 42 49.
Notice the differences: your original string decoded to 23 bytes, your second string decoded to only 12 bytes. The original string included non-ASCII bytes like 11, d5, d8, cd, c2, fb, c8, c8. These bytes don't print the same way on every system. You referred to them as "random bytes", but they're not. They're part of the data, and base64 is designed to make sure they can be transmitted.
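You can reproduce this check in a few lines of Python (the hex values in the comments come from the od output above):
import base64

original  = base64.b64decode("EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=")
reencoded = base64.b64decode("dDMrKDpBUFBNT0JJ")

print(original.hex())    # 11897486d533862bd8cdc228fb3ac8c84150504d4f4249  (23 bytes, some non-printable)
print(reencoded.hex())   # 74332b283a4150504d4f4249  (12 bytes, all printable ASCII)
print(base64.b64encode(b"t3+(:APPMOBI"))   # b'dDMrKDpBUFBNT0JJ'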
I think to understand why these strings are different, you need to first understand the nature of character data, what base64 is, and why it exists. Remember that computers work only on numbers, but people need to work with familiar concepts like letters and digits. So ASCII was created as an "encoding" standard that represents a little number (we call this little number a "byte") as a letter or a digit, so that we humans can read it. If we line up a group of bytes, we can spell out a message. 41 50 50 4d 4f 42 49 are the bytes that represent the word APPMOBI. We call a group of bytes like this a "string".
Every letter from A-Z and every digit from 0-9 has a number specified in ASCII that represents it. But there are many extra numbers that are not in the standard, and not all of those represent visible or sensible letters or digits. We say they're non-printable. Your longer message includes many bytes that aren't printable (you called them random.)
When a computer program like email is dealing with a string, if the bytes are printable ASCII characters, it's easy. The email program knows what to do with them. But if your bytes instead represent a picture, the bytes could have values that aren't ASCII, and various email programs won't know what to do with them. Base64 was created to take all kinds of bytes, both printable and non-printable bytes, and translate them into a string of bytes representing only printable letters. Because they're all printable, a program like email or a web server can easily handle them, even if it doesn't know that they actually contain a picture.
Here's the decode of your new string:
c:\TEMP>type c.txt
dDMrKDpBUFBNT0JJ
c:\TEMP>base64 -d c.txt
t3+(:APPMOBI
c:\TEMP>base64 -d c.txt > c.bin
c:\TEMP>od -t x1 c.bin
0000000 74 33 2b 28 3a 41 50 50 4d 4f 42 49
0000014
c:\TEMP>type c.bin
t3+(:APPMOBI
c:\TEMP>

PublicKey from iOS isn't supported by OpenSSL

I had created a key pair on iOS using SecKeyGeneratePair and then exported the keys to publicKey and privateKey using SecItemCopyMatching (Base64 encoded before exporting, of course). Now I have a problem encrypting data with the public key. I use the following OpenSSL command:
openssl rsautl -encrypt -inkey publicKey -pubin -in text.txt -out text.enc
I got an "unable to load Public Key" response from OpenSSL.
I have analyzed publicKey and noticed that it contains only the following content:
SEQUENCE(2 elem)
| INTEGER(1023 bit)
| INTEGER 65537
when public keys generated by OpenSSL contains additional info about algorithm like that sample which was created by OpenSSL:
SEQUENCE(2 elem)
| SEQUENCE(2 elem)
| | OBJECT IDENTIFIER 1.2.840.113549.1.1.1
| | NULL
| BIT STRING(1 elem)
| | SEQUENCE(2 elem)
| | | INTEGER(1024 bit)
| | | INTEGER 65537
The first question is: why does publicKey contain only 1023 bits for the key? OpenSSL's public key is 1024 bits long.
I tried to create the additional ASN.1 structure around the publicKey generated by iOS (using a hex editor and fixing the SEQUENCE length). Its format is correct (I have checked it here: http://lapo.it/asn1js/), but I still can't use it with OpenSSL. It looks like the public key returned by SecItemCopyMatching has a lost byte.
I also checked the content of privateKey, because it contains the publicKey inside. The length of the publicKey there is also 1023 bits.
Can you help me please? Thanks in advance. Here is a key pair which was generated on iOS device:
publicKey:
MIGIAoGAaXp7vlZ5WmCzaL1rrBKXC8rJuc7EpH7Us/0t4R3hJoDOtRJxywegPY6wm45Oiud7UDh+9loebAg4dcpUP1le5SkbxrC9Qp8XahmvYVMXUYVGDiLTWID3e3PdE7CwEM5/lz1c1vRRWjR+2GzvV4xf5gRwCzZW1tXvXCNWsraqwE8CAwEAAQ==
privateKey:
MIICWwIBAAKBgGl6e75WeVpgs2i9a6wSlwvKybnOxKR+1LP9LeEd4SaAzrUSccsHoD2OsJuOTorne1A4fvZaHmwIOHXKVD9ZXuUpG8awvUKfF2oZr2FTF1GFRg4i01iA93tz3ROwsBDOf5c9XNb0UVo0fths71eMX+YEcAs2VtbV71wjVrK2qsBPAgMBAAECgYBolCowc2hqdUosZPJmbyAXbv5HHXzWY3Hc6v8cHhXnqPpJiXoNhQgZQGpWMOgqzIv0467t7jgPgK8KCosxLBjqvQTVzBkHTsBpBAaJgxzgP04pD8EnJp6uwwx8fZcP3PQOwGkmtWf2KyAcBZD3A+snCxGTRMDOrEPzQe6kBapBwQJBASG9Go92pjIqTRMMam5A5oUt9R1/iNx0wHowStyf2KHik1GRidaENIYkobZEzjKEbskcq3LGJGna163uu/Y55l8CQF0yLFHBdMi9hYX49s8Abzkd+3sGI29hFkLrL01ZB2xV/WceNLQH7jxplRClri9Ccr1QFkMGcaXRv2X+eNu6DBECQQEdlTxZzhQwfBtuPB2nwNa2zL6+rZdj3Lxfc7xGTFQF9MNKcg6P3825rt+qPZWUm45rMpQXVBBOOkO+kAK6xwU3AkBIE8vPFy25K0qfSOOpSQ68QAIFLcQuGgpbiwU0bwycrwyiuevM6O1J7+aHz3udtWiEHfJ5t/whYM0ElwDl/0fhAkEAq0EWoY8mQjHAGPMIhIty48fDbJCeFWFPx8lR+gegR1KwcIzcCGrYnHt8ihrfPm9ySjXwWDLYhBx0A5m+IbRZaA==
OpenSSL requires the key in X.509 format (see RFC 3280):
SubjectPublicKeyInfo ::= SEQUENCE {
algorithm AlgorithmIdentifier,
subjectPublicKey BIT STRING }
AlgorithmIdentifier ::= SEQUENCE {
algorithm OBJECT IDENTIFIER,
parameters ANY DEFINED BY algorithm OPTIONAL }
The "subjectPublicKey" string depends on the algorithm. For RSA it is (RFC 3447):
RSAPublicKey ::= SEQUENCE {
modulus INTEGER, -- n
publicExponent INTEGER -- e
}
I don't think it's a problem that the key is 1023 and not 1024 bits. But you can try to generate a few more and see if they're all 1023 bits long.
What does OpenSSL say when you try to use your own creation (the updated ASN.1 structure)? Can you post it here?
Also, OpenSSL expects it in PEM format with "-----BEGIN RSA PUBLIC KEY-----" and "-----END RSA PUBLIC KEY-----" around the Base64 data.
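A minimal Python sketch of that wrapping, assuming (as this answer suggests) that your OpenSSL accepts the PKCS#1 "RSA PUBLIC KEY" PEM form; the output file name is arbitrary:
import textwrap

# the Base64 string exported from iOS via SecItemCopyMatching (shortened here; use the full value)
ios_public_key_b64 = "MIGIAoGAaXp7vlZ5WmCzaL1rrBKXC8rJuc7EpH7Us/0t4R3hJoDOtRJxywegPY6wm45Oiud7UDh+9loebAg4dcpUP1le5SkbxrC9Qp8XahmvYVMXUYVGDiLTWID3e3PdE7CwEM5/lz1c1vRRWjR+2GzvV4xf5gRwCzZW1tXvXCNWsraqwE8CAwEAAQ=="

pem = ("-----BEGIN RSA PUBLIC KEY-----\n"
       + "\n".join(textwrap.wrap(ios_public_key_b64, 64))
       + "\n-----END RSA PUBLIC KEY-----\n")

with open("publicKey.pem", "w") as f:
    f.write(pem)
If that form is still rejected, recent OpenSSL versions can convert it into the usual SubjectPublicKeyInfo ("BEGIN PUBLIC KEY") form with: openssl rsa -RSAPublicKey_in -pubout -in publicKey.pem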