I ran some tests and it turned out that RSA is a lot slower than DSA.
What is the usual time complexity of DSA?
Size    RSA [ms]   DSA [ms]
1 KiB       1125        218
2 KiB       1047        188
4 KiB        594         17
8 KiB        641        234
16 KiB      2938        406
32 KiB      9063        937
64 KiB     39344       3406
RSA and DSA both use modular exponentiation to generate the signature. This is what costs the most time, so they basically have the same complexity. The difference is the key length.
In cryptography you try to choose keys as small as possible, but big enough to get the security you want.
RSA needs quite long keys, something like 2048 bits or bigger.
DSA has one short key (about 256 bits) and one long key (about 2048 bits). The exponent will be no bigger than the short key.
So for DSA you have to raise a 2048-bit number to the power of a 256-bit number (modulo another number), while for RSA you have to raise a 2048-bit number to the power of another 2048-bit number.
This is why RSA is much slower than DSA.
Note: if you chose a 2048-bit short key for DSA, it would be as slow as RSA.
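You can check this argument directly with Python's built-in modular exponentiation. This is a rough illustrative benchmark, not real key material; the random values and iteration count are arbitrary:

```python
import secrets
import timeit

# Illustrative benchmark (not real key material): compare the cost of a
# 256-bit exponent (DSA-style) vs a 2048-bit exponent (RSA-style),
# both modulo a 2048-bit number.
modulus = secrets.randbits(2048) | (1 << 2047) | 1   # odd 2048-bit modulus
base = secrets.randbits(2048) % modulus
short_exp = secrets.randbits(256) | (1 << 255)       # DSA-style exponent
long_exp = secrets.randbits(2048) | (1 << 2047)      # RSA-style exponent

t_short = timeit.timeit(lambda: pow(base, short_exp, modulus), number=20)
t_long = timeit.timeit(lambda: pow(base, long_exp, modulus), number=20)
print(f"256-bit exponent:  {t_short:.4f} s")
print(f"2048-bit exponent: {t_long:.4f} s")   # roughly 8x slower
```

Square-and-multiply does one squaring per exponent bit, so the 2048-bit exponent costs about eight times as much as the 256-bit one.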
My current setup is I have an intermediate CA with a 1024 bit RSA key.
I will be utilising this intermediate CA to sign client certificates in my enterprise, I wish to give these certificates a SHA2 RSA signature.
A colleague advised that using a 1024 bit private key to perform a SHA2 RSA signature isn't possible.
However I can't seem to find any documentation online concerning this.
Is this possible?
1024-bit RSA can produce SHA-256 signatures with both PKCS#1 v1.5 padding and PSS padding.
PSS padding (with the conventional salt length equal to the hash length) requires at least 2 * hashSize + 2 bytes, i.e. 2 * hashSize + 16 bits. So for a SHA-256 signature (256-bit output) the minimum RSA key size is 512 + 16 => 528 bits.
PKCS#1 v1.5 signature padding requires a minimum of tLen + 11 bytes, where tLen is the hash size (in bytes) plus a fixed overhead that differs based on the algorithm (tLen is the length of T, the DER-encoded DigestInfo containing the hash). For SHA-256 tLen is 51, so the minimum key size is (51 + 11) * 8 => (62 * 8) => 496-bit.
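As a quick sanity check, plugging SHA-256's sizes into the two formulas above:

```python
# Minimum RSA modulus size for a SHA-256 signature under each padding
# scheme, using the formulas above.
hash_bytes = 32                      # SHA-256 output size in bytes

# PSS with salt length equal to the hash length: 2*hLen + 2 bytes.
pss_min_bits = (2 * hash_bytes + 2) * 8
print(pss_min_bits)                  # 528

# PKCS#1 v1.5: tLen + 11 bytes, where tLen is the 19-byte DER
# DigestInfo prefix for SHA-256 plus the 32-byte hash.
t_len = 19 + hash_bytes              # 51 for SHA-256
pkcs1_min_bits = (t_len + 11) * 8
print(pkcs1_min_bits)                # 496
```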
Security Considerations:
496-bit and 528-bit are both considered atrociously low in 2019.
1024-bit is also considered to be on the low side of the scale (it provides roughly 80 bits of security).
NIST SP 800-57 Part 3 recommends RSA-2048 for CA operations.
I will be signing a token with SHA-256 and I am wondering about the length of the secret I should use. Does a secret key longer than 256 bits have any benefit if I am using SHA-256? So if my key is 300 bits long, is this more secure?
The key should be at most 512 bits (64 bytes), because that is the block size of SHA-256; HMAC hashes any longer key down to 256 bits before using it. Within that limit, a longer random key gives a brute-force attacker more to search, so a 512-bit key is the most you can usefully have.
So to answer your question: yes, a 300-bit key is more secure against brute force than one of 256 bits.
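The 512-bit (64-byte) limit can be seen with Python's hmac module: a key longer than one block is hashed down before use, so it behaves exactly like its 256-bit digest. The message and key bytes below are made up for illustration:

```python
import hashlib
import hmac

msg = b"token payload"
long_key = b"k" * 100                             # 800 bits, over one 64-byte block
hashed_key = hashlib.sha256(long_key).digest()    # 32 bytes

# HMAC-SHA-256 first hashes any key longer than the 64-byte block size,
# so the long key and its 256-bit digest produce identical MACs.
mac1 = hmac.new(long_key, msg, hashlib.sha256).hexdigest()
mac2 = hmac.new(hashed_key, msg, hashlib.sha256).hexdigest()
print(mac1 == mac2)  # True
```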
Using the Express.js framework and crypto to hash a password with pbkdf2, I read that the default algorithm is HMAC-SHA1, but I don't understand why it hasn't been upgraded to one of the other SHA families.
crypto.pbkdf2(password, salt, iterations, keylen, callback)
Is the keylen that we provide the variation of the SHA we want, like SHA-256, SHA-512, etc.?
Also how does HMAC change the output?
And lastly, is it strong enough when SHA-1 is broken?
Sorry if I am mixing things up.
Is the keylen that we provide the variation of the SHA we want, like SHA-256, SHA-512, etc.?
As you state you're hashing a password in particular, CodesInChaos is right: keylen (i.e. the length of the output from PBKDF2) should be at most the native output size of your HMAC's hash function.
For SHA-1, that's 160 bits (20 bytes)
For SHA-256, that's 256 bits (32 bytes), etc.
The reason for this is that if you ask for a longer output (keylen) than the hash function natively produces, the first native-hash-length's worth of output is identical either way, so an attacker only needs to attack that many bits. This is the problem 1Password found and fixed when the Hashcat team discovered it.
Example as a proof:
Here's 22 bytes' worth of PBKDF2-HMAC-SHA-1 - that's one native hash size plus 2 more bytes (taking a total of 8192 iterations: the first 4096 iterations generate the first 20 bytes, then another 4096 iterations produce the block the last 2 bytes come from):
pbkdf2 sha1 "password" "salt" 4096 22
4b007901b765489abead49d926f721d065a429c12e46
And here's just the first 20 bytes of PBKDF2-HMAC-SHA-1 - i.e. exactly one native hash output size (taking a total of 4096 iterations):
pbkdf2 sha1 "password" "salt" 4096 20
4b007901b765489abead49d926f721d065a429c1
Even if you store 22 bytes of PBKDF2-HMAC-SHA-1, an attacker only needs to compute 20 bytes... which takes about half the time, as to get bytes 21 and 22, another entire set of HMAC values is calculated and then only 2 bytes are kept.
Yes, you're correct; 21 bytes takes twice the time 20 does for PBKDF2-HMAC-SHA-1, and 40 bytes takes just as long as 21 bytes in practical terms. 41 bytes, however, takes three times as long as 20 bytes, since 41/20 is between 2 and 3, exclusive.
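The same behaviour can be reproduced with Python's hashlib; the 20-byte value below is the well-known RFC 6070 test vector used in the example above:

```python
import hashlib

# The derived key is produced in hash-sized blocks, so a longer request
# is just the shorter one with extra bytes appended from the next block.
dk22 = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 4096, dklen=22)
dk20 = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 4096, dklen=20)
print(dk20.hex())            # 4b007901b765489abead49d926f721d065a429c1
print(dk22[:20] == dk20)     # True: the first 20 bytes are identical
```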
Also how does HMAC change the output?
HMAC (RFC 2104) is a way of keying hash functions, particularly those with weaknesses when you simply concatenate key and text. HMAC-SHA-1 is SHA-1 used in an HMAC; HMAC-SHA-512 is SHA-512 used in an HMAC.
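As a sketch, the RFC 2104 construction instantiated with SHA-256 looks like this (checked against the standard library's hmac module):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)), per RFC 2104."""
    block = 64                                    # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()        # long keys are hashed first
    key = key.ljust(block, b"\x00")               # then zero-padded to one block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Matches the standard library implementation:
print(hmac_sha256(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.sha256).digest())  # True
```

The two pads ensure the key enters the computation twice, which is what protects against the length-extension problems of naive key-then-message hashing.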
And lastly is it strong enough when SHA1 is broken?
If you have enough iterations (upper tens of thousands to lower hundreds of thousands or more in 2014) then it should be all right. PBKDF2-HMAC-SHA-512 in particular has an advantage that it does much worse on current graphics cards (i.e. many attackers) than it does on current CPU's (i.e. most defenders).
For the gold standard, see the answer ThomasPornin gave in Is SHA-1 secure for password storage?, a tiny part of which is "The known attacks on MD4, MD5 and SHA-1 are about collisions, which do not impact preimage resistance. It has been shown that MD4 has a few weaknesses which can be (only theoretically) exploited when trying to break HMAC/MD4, but this does not apply to your problem. The 2^106 second-preimage attack in the paper by Kelsey and Schneier is a generic trade-off which applies only to very long inputs (2^60 bytes; that's a million terabytes -- notice how 106+60 exceeds 160; that's where you see that the trade-off has nothing magic in it)."
SHA-1 is broken, but that does not mean it's unsafe to use; SHA-256 (SHA-2) is more or less future-proofing and a long-term substitute. "Broken" only means faster than brute force, not necessarily feasible or practically possible (yet).
See also this answer: https://crypto.stackexchange.com/questions/3690/no-sha-1-collision-yet-sha1-is-broken
A function getting broken often only means that we should start
migrating to other, stronger functions, and not that there is
practical danger yet. Attacks only get stronger, so it's a good idea
to consider alternatives once the first cracks begin to appear.
I know the lengths of some small encrypted strings, such as 160, 196 ..
What determines the size?
The size in bytes of a single encrypted "block" is the same as the key size, which is the same as the size of the modulus. The private exponent is normally about the same size, but may be smaller. The public exponent can be up to the key size, but is normally much smaller to allow for more efficient encryption or verification. Most of the time it is the fourth Fermat number, 65537.
Note that the ciphertext is therefore the same size as the modulus. The plain data must be padded. PKCS#1 v1.5 leaves at most the key size minus 11 bytes for the plaintext, i.e. the padding takes at least 11 bytes. It is certainly smart to keep a higher margin though, say at least 19 bytes of padding (a 16-byte random instead of an 8-byte random for padding).
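A quick sketch of the resulting sizes for some common modulus lengths:

```python
# Ciphertext size equals the modulus size; PKCS#1 v1.5 encryption needs
# at least 11 bytes of padding, leaving the rest for the plaintext.
for bits in (1024, 2048, 4096):
    k = bits // 8                # modulus / ciphertext size in bytes
    max_plain = k - 11           # absolute maximum plaintext per block
    print(f"{bits}-bit RSA: ciphertext {k} bytes, plaintext up to {max_plain} bytes")
```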
For this reason, and because it is expensive to perform RSA encryption/decryption, RSA is mostly used in combination with a symmetric primitive such as AES - in the case of AES a random AES symmetric secret key is encrypted instead of the plain text. That key is then used to encrypt the plain text.
Is there a cryptographically secure hashing algorithm which gives a message digest of 60 bits?
I have a unique string (id + timestamp), I need to generate a 60 bit hash from it. What will be the best algorithm to create such a hash?
You can always take a hash algorithm with a larger output size, e.g. sha256, and truncate it to 60 bits. Whether that is appropriate for your needs I cannot say without much more information. 60 bits is generally considered way too short for most security needs.
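A minimal sketch of that truncation approach; the id and timestamp values below are made up for illustration:

```python
import hashlib

# Truncate SHA-256 to 60 bits: take the first 8 bytes (64 bits) and
# drop the low 4 bits. The input string is a hypothetical id|timestamp.
data = b"user-42|2019-06-01T12:00:00Z"
full = hashlib.sha256(data).digest()
h60 = int.from_bytes(full[:8], "big") >> 4
print(f"{h60:015x}")         # 60 bits fit in exactly 15 hex digits
assert h60 < 2 ** 60
```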
There is no standard hash algorithm with a 60-bit output; output sizes are generally powers of 2 or multiples of 32 bits.
I suggest using sha1 to create the hash. Its output is 160 bits:
hash=sha1(id + timestamp)
If you must (not recommended) compress this, use substring to reduce the hex digest to 64 bits:
smallHash=substr(hash, 0, 16)
(16 hex characters = 8 bytes = 64 bits)
Any hashing algorithm that has a 60-bit output size can at maximum provide only 30 bits of collision resistance (by the birthday paradox). 30 bits is much too short to be useful in security nowadays.