Use a 512-bit asymmetric encryption key to avoid 'contains encryption' in the app - iPhone

I have an app that contains a SQLite DB in which some data is encrypted with a public/private key pair. I generated this pair from the distribution provisioning certificate in Keychain Access (right-click and save as .cer, then again as .p12 with a password).
The app is ready to be submitted to Apple, and I found out that if any encryption is used, I'll have to submit documents for ERN authorization. Reading through the documentation, it mentions that if your asymmetric key length does not exceed 512 bits, you are exempt:
(iii) your app uses, accesses, implements or incorporates encryption with key lengths not exceeding 56 bits symmetric, 512 bits asymmetric and/or 112 bits elliptic curve
(iv) your app is a mass market product with key lengths not exceeding 64 bits symmetric, or if no symmetric algorithms, not exceeding 768 bits asymmetric and/or 128 bits elliptic curve.
Now my problem is that if I create a certificate signing request with a 512-bit key size, I cannot create a certificate from the developer portal with that request.
Is there a way to get around this, other than switching to a symmetric-key algorithm? I would like to avoid rewriting that portion. Basically, I would like to create a .cer/.p12 pair using a 512-bit key instead of the standard 2048-bit one. I also need something that supports UTF-8; the one I can create manually on the Mac only supports ASCII.

In case anyone is ever confused about this: I changed it to a symmetric key and Apple approved the app; I didn't have to submit any additional documents.
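For anyone in the same situation, the switch itself is small in code terms. Below is a minimal sketch of a symmetric round trip, written with Node's crypto module purely for illustration (the original app would use an iOS API such as CommonCrypto or CryptoKit), and the key length you pick still determines how you answer the export questions:
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';
// One secret key instead of a public/private pair. How this key is stored or
// derived is up to the app; 32 random bytes = AES-256 shown here as an example.
const key = randomBytes(32);
function encrypt(plaintext: string): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12);                          // fresh nonce per record
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}
function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv('aes-256-gcm', key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString('utf8');
}
console.log(decrypt(encrypt('row data from the SQLite DB')));  // UTF-8 round trip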

Related

Symmetric Variant of the BFV scheme

Is it possible to encrypt a plaintext using the secret key in SEAL?
Does the symmetric variant help in increasing the noise budget of the ciphertext, or improve the homomorphic evaluation in some other way?
No symmetric-key primitives are implemented in SEAL 3.2, but the symmetric variant does have some benefits:
Smaller initial noise;
Possibility to replace half of a freshly encrypted ciphertext with a random seed, resulting in ~ 50% reduction in message expansion (but only in fresh ciphertexts). This can be significant.
The only problem with symmetric-key schemes is that the ciphertexts can't easily be re-randomized, since without the public key there is no easy way to create fresh encryptions of zero. As a result, it might be hard or impossible to create provably secure protocols where the computation depends on private data coming from sources other than the secret key owner (through multiply_plain and add_plain).
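For intuition about where the seed trick comes from, here is the rough shape of the two encryption procedures in textbook BFV notation, with Δ = ⌊q/t⌋ (signs and scaling conventions vary between papers and SEAL versions, so treat this as a sketch):
Symmetric: ct = (c_0, c_1) with c_1 = a sampled uniformly from R_q and c_0 = [-(a·s + e) + Δ·m]_q
Public-key: ct = ([p_0·u + e_1 + Δ·m]_q, [p_1·u + e_2]_q) with pk = (p_0, p_1) = ([-(a·s + e)]_q, a)
In the symmetric case c_1 is a uniformly random ring element, independent of s and m, so it can be expanded from a short PRNG seed and only the seed needs to be stored or sent; that is where the ~50% size reduction for fresh ciphertexts comes from. In the public-key case c_1 depends on the public key and the encryption randomness u, e_2, so the same replacement is not available for a fresh public-key ciphertext.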

RSA verification time

How many clock cycles does RSA-1024 with a public exponent of 65537 need to verify a message?
Sure, times will differ from processor to processor; that's why I asked for the number of clock cycles.
Run openssl speed rsa to get some idea of the speed on different machines.
Number of clock cycles will differ from one machine to another, even if you discount software differences.
Verification is performed using a public key; if the public exponent is small, verification will be fast. If the public exponent is large it may take much longer.
Some CPUs even have a Montgomery multiplier (e.g. the Sun Niagara-based processors) to speed up RSA operations.
In other words: it depends.
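If you want a quick empirical number for your own hardware, a rough wall-clock measurement is easy to make; the sketch below uses Node's crypto module (which wraps OpenSSL) only because it is convenient, measures the whole verification including hashing and padding, and reports time rather than cycles - multiply by your clock frequency for a crude cycle estimate:
import { generateKeyPairSync, sign, verify, randomBytes } from 'crypto';
// RSA-1024 with e = 65537, as in the question. Numbers vary widely by
// machine, OpenSSL version and padding scheme.
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 1024,
  publicExponent: 65537,
});
const message = randomBytes(32);
const signature = sign('sha256', message, privateKey);
const runs = 10000;
const start = process.hrtime.bigint();
for (let i = 0; i < runs; i++) {
  verify('sha256', message, publicKey, signature);
}
const elapsedNs = process.hrtime.bigint() - start;
console.log(`${Number(elapsedNs) / runs / 1000} µs per verification (average)`);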
I found this website that contains benchmarks for all cryptographic hashes

Crypto - Express.js is PBKDF2 HMAC-SHA1 enough?

Using the Express.js framework and crypto to hash a password with pbkdf2, I read that the default algorithm is HMAC-SHA1, but I don't understand why it hasn't been upgraded to one of the other families of SHA.
crypto.pbkdf2(password, salt, iterations, keylen, callback)
Is the keylen that we provide the variant of SHA we want, like SHA-256, SHA-512, etc.?
Also how does HMAC change the output?
And lastly is it strong enough when SHA1 is broken?
Sorry if I am mixing things up.
Is the keylen that we provide the variant of SHA we want, like SHA-256, SHA-512, etc.?
As you state you're hashing a password in particular, @CodesInChaos is right - keylen (i.e. the length of the output you request from PBKDF2) should be at most the native output size of your HMAC's hash function.
For SHA-1, that's 160 bits (20 bytes)
For SHA-256, that's 256 bits (32 bytes), etc.
The reason for this is that if you ask for more output (keylen) than the native hash function provides, the first native-length block of output is identical either way, so an attacker only needs to attack that first block's worth of bits, while you pay for the extra blocks. This is the problem 1Password found and fixed when the Hashcat team found it.
Example as a proof:
Here's 22 bytes' worth of PBKDF2-HMAC-SHA-1 - that's one native hash size + 2 more bytes (costing a total of 8192 iterations' worth of HMAC work: the first 4096 iterations generate the first 20 bytes, then another 4096 iterations produce the block after that):
pbkdf2 sha1 "password" "salt" 4096 22
4b007901b765489abead49d926f721d065a429c12e46
And here's just getting the first 20 bytes of PBKDF2-HMAC-SHA-1 - i.e. exactly one native hash output size (taking a total of 4096 iterations)
pbkdf2 sha1 "password" "salt" 4096 20
4b007901b765489abead49d926f721d065a429c1
Even if you store 22 bytes of PBKDF2-HMAC-SHA-1, an attacker only needs to compute 20 bytes... which takes about half the time, as to get bytes 21 and 22, another entire set of HMAC values is calculated and then only 2 bytes are kept.
Yes, you're correct; 21 bytes takes twice the time 20 does for PBKDF2-HMAC-SHA-1, and 40 bytes takes just as long as 21 bytes in practical terms. 41 bytes, however, takes three times as long as 20 bytes, since 41/20 is between 2 and 3, exclusive.
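Since the question is about Node's crypto module anyway, the same numbers can be reproduced directly with crypto.pbkdf2Sync; the 20-byte value is the familiar 4096-iteration test vector from RFC 6070 (the synchronous API is used here only for brevity):
import { pbkdf2Sync } from 'crypto';
// One native SHA-1 output (20 bytes): one block of 4096 HMAC iterations.
console.log(pbkdf2Sync('password', 'salt', 4096, 20, 'sha1').toString('hex'));
// -> 4b007901b765489abead49d926f721d065a429c1
// Two more bytes (22 total): a second full block of 4096 iterations is
// computed and then truncated to 2 bytes -- twice the defender's work.
console.log(pbkdf2Sync('password', 'salt', 4096, 22, 'sha1').toString('hex'));
// -> 4b007901b765489abead49d926f721d065a429c12e46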
Also how does HMAC change the output?
HMAC RFC2104 is a way of keying hash functions, particularly those with weaknesses when you simply concatenate key and text together. HMAC-SHA-1 is SHA-1 used in an HMAC; HMAC-SHA-512 is SHA-512 used in an HMAC.
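To make "keying the hash" concrete, here is HMAC-SHA-1 built by hand next to Node's built-in createHmac; the two digests agree (a sketch with an arbitrary example key and message; the block size is 64 bytes for SHA-1, and a key longer than the block is hashed first):
import { createHash, createHmac } from 'crypto';
// HMAC(K, m) = H((K' xor opad) || H((K' xor ipad) || m)),
// where K' is the key padded with zeros to the hash's block size.
function hmacSha1(key: Buffer, message: Buffer): Buffer {
  const blockSize = 64;  // SHA-1 block size in bytes
  const k = key.length > blockSize ? createHash('sha1').update(key).digest() : key;
  const kPadded = Buffer.concat([k, Buffer.alloc(blockSize - k.length)]);
  const ipad = Buffer.from(kPadded.map(b => b ^ 0x36));
  const opad = Buffer.from(kPadded.map(b => b ^ 0x5c));
  const inner = createHash('sha1').update(ipad).update(message).digest();
  return createHash('sha1').update(opad).update(inner).digest();
}
const key = Buffer.from('secret key');
const msg = Buffer.from('some text');
console.log(hmacSha1(key, msg).equals(createHmac('sha1', key).update(msg).digest())); // true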
And lastly is it strong enough when SHA1 is broken?
If you have enough iterations (upper tens of thousands to lower hundreds of thousands or more in 2014) then it should be all right. PBKDF2-HMAC-SHA-512 in particular has the advantage that it runs much worse on current graphics cards (i.e. many attackers) than it does on current CPUs (i.e. most defenders).
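Note that in current Node versions the digest is an explicit, required argument (the implicit SHA-1 default from the era of this question was deprecated long ago), so moving to PBKDF2-HMAC-SHA-512 is a one-argument change. A sketch, with illustrative parameter choices rather than recommendations:
import { pbkdf2, randomBytes } from 'crypto';
const password = 'correct horse battery staple';
const salt = randomBytes(16);   // per-password random salt, stored alongside the hash
const iterations = 210_000;     // tune to your own hardware; treat this figure as illustrative
const keylen = 64;              // native SHA-512 output size, per the advice above
pbkdf2(password, salt, iterations, keylen, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  // store salt, iterations, 'sha512' and derivedKey together
  console.log(derivedKey.toString('hex'));
});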
For the gold standard, see the answer Thomas Pornin gave in "Is SHA-1 secure for password storage?", a tiny part of which is: "The known attacks on MD4, MD5 and SHA-1 are about collisions, which do not impact preimage resistance. It has been shown that MD4 has a few weaknesses which can be (only theoretically) exploited when trying to break HMAC/MD4, but this does not apply to your problem. The 2^106 second preimage attack in the paper by Kelsey and Schneier is a generic trade-off which applies only to very long inputs (2^60 bytes; that's a million terabytes -- notice how 106+60 exceeds 160; that's where you see that the trade-off has nothing magic in it)."
SHA-1 is broken, but that does not mean it is unsafe to use; SHA-256 (SHA-2) is more or less for future-proofing and as a long-term substitute. Broken only means faster than brute force, but not necessarily feasible or practically possible (yet).
See also this answer: https://crypto.stackexchange.com/questions/3690/no-sha-1-collision-yet-sha1-is-broken
A function getting broken often only means that we should start migrating to other, stronger functions, and not that there is practical danger yet. Attacks only get stronger, so it's a good idea to consider alternatives once the first cracks begin to appear.

The maximum length of a message that can be hashed with WHIRLPOOL

I'm just wondering what is the maximum length. I read from Wikipedia that it takes a message of any length less than 2^256 bits. Does that mean 2 to the power of 256? Also, would it be more secure to hash a password multiple times? Example:
WHIRLPOOL(WHIRLPOOL(WHIRLPOOL(WHIRLPOOL("passw0rd"))))
Or does that increase the risk of collisions?
Yes, this does mean 2^256 bits. Of course, as there are 2^3 bits in a byte, that gives a maximum message size of 2^253 bytes. Nothing to worry about.
Yes, it is better to hash multiple times. No, you don't have to worry about "cycles" (much). Many pseudo-random number generators use hashes the same way. Hash algorithms should not lose too much information and should not have short cycles.
Password hashes should, however, be calculated using password-based key derivation functions (PBKDFs). The derived "key" is then stored. PBKDFs may use hashes (e.g. PBKDF2) or keyed block ciphers (bcrypt). Most KDFs use a message authentication code (HMAC) instead of the underlying hash algorithm or block cipher directly.
The inputs to a PBKDF include a salt and an iteration count. The iteration count is used to make it harder for an attacker to brute force the system by trying out all kinds of passwords. It's basically the same as what you did above with WHIRLPOOL, except that the iteration count is normally somewhere between one and ten thousand, and more data is normally mixed in at each iteration as well.
More importantly, the (password-specific) salt is used to make sure that duplicate passwords cannot be detected and to defeat attacks using rainbow tables. Normally the salt is about 64 to 128 bits. The salt and iteration count should be stored together with the "hash".
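To connect this back to the question, here is the nested-hash idea written as a loop with a salt mixed into every round. SHA-512 stands in for WHIRLPOOL because the hashes Node exposes depend on the OpenSSL build, and this is only an illustration, not a replacement for a vetted PBKDF:
import { createHash, randomBytes } from 'crypto';
// Naive salted, iterated hash: digest = H(salt || digest || password), repeated.
// Purely illustrative; use PBKDF2/bcrypt/scrypt/Argon2 in real systems.
function iteratedHash(password: string, salt: Buffer, iterations: number): Buffer {
  let digest = Buffer.alloc(0);
  for (let i = 0; i < iterations; i++) {
    digest = createHash('sha512')   // WHIRLPOOL stand-in
      .update(salt)                 // salt mixed in every round
      .update(digest)
      .update(password)
      .digest();
  }
  return digest;
}
const salt = randomBytes(16);       // 128-bit salt, stored with the result
console.log(iteratedHash('passw0rd', salt, 10_000).toString('hex'));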
Finally, it is probably better to use a NIST vetted hash algorithm such as SHA-512 instead of WHIRLPOOL.

Aes key length significance/implications

I am using an AES algorithm in my application to encrypt plain text. I am trying to use a key which is a six-digit number, but per the AES spec the key should be at least sixteen bytes in length. I am planning to prepend leading zeros to my six-digit number to make it 16 bytes and then use this as the key.
Would this have any security implications? I mean, will it make my ciphertext more prone to attacks?
Please help.
You should use a key derivation function; in particular, PBKDF2 is state of the art for obtaining an AES key from a password or PIN.
In particular, PBKDF2 makes a key search more difficult because it:
randomizes the derived key (via the salt), making precomputed password dictionaries useless;
increases the computational cost of testing each candidate password, increasing the total time required to find the key.
As an additional remark, I would say that 6 digits correspond to only about 20 bits of password entropy, which is definitely too little. Increase your password length.
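A hedged sketch of the difference, using Node's crypto module purely for illustration and example parameter values (iteration count and salt size are not recommendations): the zero-padded PIN and the PBKDF2-derived key are both 16 bytes, i.e. a valid AES-128 key length, but only the second makes each guess expensive and resistant to precomputation.
import { pbkdf2Sync, randomBytes } from 'crypto';
const pin = '123456';   // the six-digit secret from the question
// What the question proposes: zero-pad the PIN to 16 bytes. The key space is
// still only the 10^6 possible PINs (~20 bits), just written in a longer form.
const paddedKey = Buffer.from(pin.padStart(16, '0'), 'utf8');
// What the answer recommends: derive the AES key with PBKDF2 and a random salt.
// The salt must be stored next to the ciphertext; it does not need to be secret.
const salt = randomBytes(16);
const derivedKey = pbkdf2Sync(pin, salt, 100_000, 16, 'sha256');   // 16 bytes = AES-128 key
console.log(paddedKey.length, derivedKey.length);   // both 16, very different cost to attack
The derivation slows each guess down and the salt defeats precomputed tables, but as noted above it cannot add entropy that the PIN itself does not have.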