RSA verification time

How many clock cycles does RSA-1024 with a public exponent of 65537 need to verify a message?
Sure, times will differ according to the processor; that's why I asked for the number of clock cycles.

Run openssl speed rsa to get some idea of the speed on different machines.
Number of clock cycles will differ from one machine to another, even if you discount software differences.
Verification is performed using a public key; if the public exponent is small, verification will be fast. If the public exponent is large, it may take much longer.
Some CPUs even have a Montgomery multiplier (e.g. the Sun Niagara-based processors) to speed up RSA operations.
In other words: it depends.
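As a rough illustration, here's a sketch in Python using the pyca/cryptography package (the library choice is my assumption; the question doesn't name one) that times RSA-1024 verification with e = 65537. Dividing the measured time by your CPU's clock rate gives an approximate cycle count for your particular machine:

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical benchmark: RSA-1024, public exponent 65537
key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
pub = key.public_key()
message = b"benchmark me"
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

N = 10000
start = time.perf_counter()
for _ in range(N):
    pub.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
elapsed = time.perf_counter() - start
print("%.1f us per verification" % (elapsed / N * 1e6))
# cycles ~= seconds per verification * CPU clock rate (Hz)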

I found this website that contains benchmarks for all cryptographic hashes


Number of independent AES 256 CBC decryption operations per second with AES-NI or GPU acceleration

AES-NI seems to be optimized for encrypting/decrypting big chunks of data. However, I'm trying to decrypt a password, and I have many very small inputs to try (IV + first CBC block, 32 bytes in total).
I'm using OpenSSL at the moment, calling EVP_DecryptInit_ex and EVP_DecryptUpdate for every attempt (and EVP_CIPHER_CTX_init once per thread).
I can do this around 2 million times per second on a single core.
I assume this is the sort of performance I can expect using AES-NI instructions and I shouldn't worry about optimising this further. Is this correct?
Does anyone have any idea how much faster this might be on a high-end GPU or a not-too-expensive FPGA?
FPGA: You can convert an input block to an output block on any reasonable FPGA with a 2-cycle throughput at several hundred MHz, with a latency of 16 cycles. So, possibly 256 Mblocks/s pipelined, or maybe 32 Mblocks/s not pipelined. You could get maybe 5 of these on a reasonably cheap FPGA, or 30+ on an expensive one. YMMV.
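For a CPU ballpark, here is a sketch of the question's loop in Python with the pyca/cryptography package (which wraps OpenSSL and uses AES-NI where available; the library and key size are my assumptions). Per-call overhead dominates for 32-byte inputs, just as with EVP_DecryptInit_ex/EVP_DecryptUpdate in C:

import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # AES-256 key (an assumption; the question doesn't say)
iv = os.urandom(16)     # the IV half of the 32-byte blob
block = os.urandom(16)  # the first CBC ciphertext block

N = 200000
start = time.perf_counter()
for _ in range(N):
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    dec.update(block)   # one 16-byte block per attempt; finalize() skipped for brevity
elapsed = time.perf_counter() - start
print("%.0f decryptions/s on one core" % (N / elapsed))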

CRC32 vs CRC32C?

What is the difference between CRC32 and CRC32C? I have known CRC32 for a long time, but I just heard of CRC32C today. Are they basically the same method (i.e. do both produce the same hash for a given input)?
The CRC32 found in zip and a lot of other places uses the polynomial 0x04C11DB7; its reversed form 0xEDB88320 is perhaps better known, being often found in little-endian implementations.
CRC32C uses a different polynomial (0x1EDC6F41, reversed 0x82F63B78), but otherwise the computation is the same. The results are different, naturally. It is also known as the Castagnoli CRC32, and it is most conspicuously found in newer Intel CPUs, which can compute a full 32-bit CRC step in 3 cycles. That is the reason CRC32C is becoming more popular: it allows advanced implementations that effectively process one 32-bit word per cycle despite the 3-cycle latency, by processing 3 streams of data in parallel and using linear algebra to combine the results.
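To make the difference concrete, here is a minimal Python sketch: zlib's built-in CRC32 next to a bit-at-a-time CRC32C (slow, but it shows that only the polynomial differs; the standard check input "123456789" gives the well-known check values):

import zlib

def crc32c(data):
    # Bitwise CRC32C: reflected, init and final XOR 0xFFFFFFFF,
    # reversed Castagnoli polynomial 0x82F63B78.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

data = b"123456789"
print(hex(zlib.crc32(data)))  # 0xcbf43926 (CRC32 check value)
print(hex(crc32c(data)))      # 0xe3069283 (CRC32C check value)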

Crypto - Express.js: is PBKDF2-HMAC-SHA1 enough?

Using the Express.js framework and crypto to hash a password with pbkdf2, I read that the default algorithm is HMAC-SHA1, but I don't understand why it hasn't been upgraded to one of the other SHA families.
crypto.pbkdf2(password, salt, iterations, keylen, callback)
Is the keylen that we provide the variation of SHA we want, like SHA-256, SHA-512, etc.?
Also, how does HMAC change the output?
And lastly, is it strong enough when SHA-1 is broken?
Sorry if I am mixing things up.
Is the keylen that we provide the variation of SHA we want, like SHA-256, SHA-512, etc.?
As you state you're hashing a password in particular, @CodesInChaos is right: keylen (i.e. the length of the output from PBKDF2) should be at most the number of output bits of your HMAC's native hash function.
For SHA-1, that's 160 bits (20 bytes)
For SHA-256, that's 256 bits (32 bytes), etc.
The reason for this is that if you ask for a longer output (keylen) than the hash function natively produces, the first native-length block is identical, so an attacker only needs to attack that first block's worth of bits. This is the problem 1Password found and fixed after the Hashcat team discovered it.
Example as a proof:
Here's 22 bytes' worth of PBKDF2-HMAC-SHA-1 - that's one native hash size plus 2 more bytes, taking a total of 8192 iterations: the first 4096 iterations generate the first 20 bytes, then another 4096 iterations generate the block after that:
pbkdf2 sha1 "password" "salt" 4096 22
4b007901b765489abead49d926f721d065a429c12e46
And here's just the first 20 bytes of PBKDF2-HMAC-SHA-1 - i.e. exactly one native hash output size (taking a total of 4096 iterations):
pbkdf2 sha1 "password" "salt" 4096 20
4b007901b765489abead49d926f721d065a429c1
Even if you store 22 bytes of PBKDF2-HMAC-SHA-1, an attacker only needs to compute 20 bytes, which takes about half the time: to get bytes 21 and 22, another entire set of HMAC iterations is calculated, and then only 2 bytes are kept.
Yes, you're correct: 21 bytes takes twice the time 20 does for PBKDF2-HMAC-SHA-1, and 40 bytes takes just as long as 21 bytes in practical terms. 41 bytes, however, takes three times as long as 20 bytes, since the output is produced in 20-byte blocks and 41 bytes requires three of them.
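The prefix property is easy to reproduce with Python's hashlib (same inputs as the vectors above):

import hashlib

dk20 = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 4096, dklen=20)
dk22 = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 4096, dklen=22)
print(dk20.hex())         # 4b007901b765489abead49d926f721d065a429c1
print(dk22.hex())         # same 20 bytes, plus 2 more from a second block
assert dk22[:20] == dk20  # an attacker need only match the first block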
Also, how does HMAC change the output?
HMAC (RFC 2104) is a way of keying hash functions, particularly those with weaknesses when you simply concatenate key and message together. HMAC-SHA-1 is SHA-1 used in an HMAC; HMAC-SHA-512 is SHA-512 used in an HMAC.
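For instance, with Python's hmac module (the key and message here are illustrative values of my choosing), swapping the hash function changes both the construction's internals and the output size:

import hmac, hashlib

key, msg = b"secret key", b"message"
print(hmac.new(key, msg, hashlib.sha1).hexdigest())    # 160-bit output
print(hmac.new(key, msg, hashlib.sha512).hexdigest())  # 512-bit output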
And lastly, is it strong enough when SHA-1 is broken?
If you use enough iterations (upper tens of thousands to lower hundreds of thousands or more, as of 2014), then it should be all right. PBKDF2-HMAC-SHA-512 in particular has the advantage that it runs much more slowly on current graphics cards (i.e. for many attackers) than it does on current CPUs (i.e. for most defenders).
For the gold standard, see the answer @Thomas Pornin gave in "Is SHA-1 secure for password storage?", a tiny part of which is: "The known attacks on MD4, MD5 and SHA-1 are about collisions, which do not impact preimage resistance. It has been shown that MD4 has a few weaknesses which can be (only theoretically) exploited when trying to break HMAC/MD4, but this does not apply to your problem. The 2^106 second-preimage attack in the paper by Kelsey and Schneier is a generic trade-off which applies only to very long inputs (2^60 bytes; that's a million terabytes -- notice how 106+60 exceeds 160; that's where you see that the trade-off has nothing magic in it)."
SHA-1 is broken, but that does not mean it's unsafe to use; SHA-256 (SHA-2) is more or less for future-proofing and as a long-term substitute. "Broken" only means faster than brute force, but not necessarily feasible or practically possible (yet).
See also this answer: https://crypto.stackexchange.com/questions/3690/no-sha-1-collision-yet-sha1-is-broken
A function getting broken often only means that we should start migrating to other, stronger functions, and not that there is practical danger yet. Attacks only get stronger, so it's a good idea to consider alternatives once the first cracks begin to appear.

What is the maximum number of SHA-1 hashes?

Clearly, since SHA-1 hashing always produces 40 hex characters, there is a finite number of possible hashes; does anyone know exactly how many?
SHA-1 hashes have 160 bits, so there are 2^160 of them.
(2^160 = 1461501637330902918203684832716283019655932542976 ~= 1.46 x 10^48)
Note that since you have a much larger message space than possible hashes, collisions are bound to occur.
Also note that the probability of a collision is much higher than you might think. At just 2^80 messages, the probability of a collision is 50%, thanks to the birthday paradox (just as, with only 23 people, the probability that two share a birthday is 50%).
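The 50% figure follows from the standard birthday approximation n ~= sqrt(2 ln 2 * N); a quick Python check:

import math

bits = 160
n_50 = math.sqrt(2 * math.log(2) * 2**bits)  # hashes needed for ~50% collision odds
print("~2^%.1f hashes" % math.log2(n_50))    # ~2^80.2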
SHA-1 produces 160-bit outputs, and it should be able to produce just about any sequence of 160 bits. There are 2^160 such sequences, i.e. close to 1461 billions of billions of billions of billions of billions. That's kind of big.
However, we have no proof that every single one of them is reachable. It would be bad for SHA-1's security if the number of possible outputs were significantly lower than 2^160; for instance, if only 1/4 of them were reachable (2^158), security against preimage attacks would be divided by 4, and security against collisions would be halved. No such issue is currently known with SHA-1 (there are known weaknesses in SHA-1's collision resistance, but not that one).
It is possible (but would be at least mildly surprising) that a few 160-bit outputs cannot be reached, and it is expected that this will remain unknowable. To some extent, being able to prove that SHA-1's possible outputs cover the whole 160-bit space would be worrisome: such a proof would require a good deal of analysis of the mathematical structure of SHA-1, and the security of SHA-1 largely relies on such analysis being intractable.
A SHA-1 hash is made up of five 32-bit integers.
That's 4294967296^5, or 2^160,
or 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976 possibilities.
To put that into perspective
Total Possible SHA-1 Values: 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976
Total gallons of Water on Earth: 365,904,000,000,000,000,000
That includes every ocean, sea, lake, swimming pool, bath tub, etc.
The possibility of a collision is only theoretical at this point; we're still waiting to hear of one.

When generating a SHA256 / 512 hash, is there a minimum 'safe' amount of data to hash?

I have heard that when creating a hash, if small files or amounts of data are used, the resulting hash is more likely to suffer from a collision. If that is true, is there a minimum "safe" amount of data that should be used to ensure this doesn't happen?
I guess the question could also be phrased as:
What is the smallest amount of data that can be safely and securely hashed?
A hash function accepts inputs of arbitrary (or at least very high) length, and produces a fixed-length output. There are more possible inputs than possible outputs, so collisions must exist. The whole point of a secure hash function is that it is "collision resistant", which means that while collisions must mathematically exist, it is very very hard to actually compute one. Thus, there is no known collision for SHA-256 and SHA-512, and the best known methods for computing one (by doing it on purpose) are so ludicrously expensive that they will not be applied soon (the whole US federal budget for a century would buy only a ridiculously small part of the task).
So, if it cannot be realistically done on purpose, you can expect not to hit a collision out of (bad) luck.
Moreover, if you limit yourself to very short inputs, there is a chance that there is no collision at all. E.g., if you consider 12-byte inputs: there are 2^96 possible sequences of 12 bytes. That's huge (more than can be enumerated with today's technology). Yet, SHA-256 will map each input to a 256-bit value, i.e. values in a much wider space (of size 2^256). We cannot prove it formally, but chances are that all those 2^96 hash values are distinct from each other. Note that this has no practical consequence: there is no measurable difference between not finding a collision because there is none, and not finding a collision because it is extremely improbable to hit one.
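A back-of-the-envelope version of that argument, in Python: by the birthday approximation, the expected number of colliding pairs among 2^96 inputs hashed into a 2^256 space is roughly n^2 / (2N), which is vanishingly small:

import math

n = 2.0**96   # distinct 12-byte inputs
N = 2.0**256  # possible SHA-256 outputs
expected_pairs = n * n / (2 * N)
print("expected colliding pairs: 2^%.0f" % math.log2(expected_pairs))  # 2^-65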
Just to illustrate how low the risk of a SHA-256 collision is: consider your risk of being mauled by a gorilla escaped from a local zoo or private owner. Unlikely? Yes, but it still may conceivably happen: it seems that a gorilla escaped from the Dallas zoo in 2004 and injured four people; another gorilla escaped from the same zoo in 2010. Assuming that there is only one rampaging gorilla every 6 years on the whole Earth (not only in the Dallas area) and that you happen to be the unlucky chap in its path, out of a human population of 6.5 billion, the risk of grievous-bodily-harm-by-gorilla can be estimated at about 1 in 2^43.7 per day. Now, take ten thousand PCs and have them work on finding a collision for SHA-256. The chance of hitting a collision is close to 1 in 2^75 per day -- more than a billion times less probable than the angry-ape thing. The conclusion is that if you fear SHA-256 collisions but do not keep a loaded shotgun with you at all times, then you are getting your priorities wrong. Also, do not mess with Texas.
There is no minimum input size. The SHA-256 algorithm is effectively a random mapping, and the collision probability doesn't depend on input length. Even a 1-bit input is 'safe'.
Note that the input is padded to a multiple of 512 bits (64 bytes) for SHA-256 (a multiple of 1024 bits for SHA-512). Taking a 12-byte input (as Thomas used in his example), when using SHA-256, there are 2^96 possible padded sequences of length 64 bytes.
As an example, the 12-byte input Hello There! (0x48656c6c6f20546865726521) will be padded with a one bit, followed by 351 zero bits, followed by the 64-bit representation of the length of the input in bits, which is 0x0000000000000060, to form a 512-bit padded message. This 512-bit message is used as the input for computing the hash.
More details can be found in RFC 4634, "US Secure Hash Algorithms (SHA and HMAC-SHA)": http://www.ietf.org/rfc/rfc4634.txt
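The padding step is simple enough to check by hand in Python (using the same "Hello There!" example as above):

msg = b"Hello There!"                       # 12 bytes = 96 bits
bit_len = len(msg) * 8                      # 96 = 0x60
padded = msg + b"\x80"                      # a one bit, then 7 zero bits
padded += b"\x00" * (64 - 8 - len(padded))  # 43 zero bytes (344 bits; 351 zero bits total)
padded += bit_len.to_bytes(8, "big")        # 0x0000000000000060
assert len(padded) == 64                    # one 512-bit block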
No, message length does not affect the likelihood of a collision.
If it did, the algorithm would be broken.
You can try it yourself by running SHA against all one-byte inputs, then against all two-byte inputs, and so on, and see whether you get a collision. Probably not, because no one has ever found a collision for SHA-256 or SHA-512 (or at least they kept it a secret from Wikipedia).
The hash is 256 bits long, so collisions must exist among inputs longer than 256 bits.
You cannot compress something into a smaller thing without having collisions; that would defy mathematics.
Yes, because of the algorithm and the 2^256 possible outputs there are a lot of different hashes, but they are not collision-free; that is impossible.
Depends very much on your application: if you were simply hashing "YES" and "NO" strings to send across a network to indicate whether you should give me a $100,000 loan, it would be a pretty big failure -- the domain of answers can't be that large, so someone could easily check observed hashes on the wire against a database of 'small input' hash outputs.
If you were to include the date, the time, my name, my tax ID, and the amount requested, the amount of data being hashed probably still wouldn't be much, but the chances of that data being in precomputed hash tables would be pretty slim.
But I know of no research to point you to beyond my instincts. Sorry.
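To illustrate that instinct with a toy Python sketch (the loan scenario above; the candidate strings are mine): when the input domain is tiny, an observer just precomputes every possible hash and reverses yours by lookup:

import hashlib

candidates = [b"YES", b"NO"]  # the whole (tiny) input domain
table = {hashlib.sha256(m).hexdigest(): m for m in candidates}

observed = hashlib.sha256(b"YES").hexdigest()  # hash seen on the wire
print(table[observed])                         # b'YES' -- the domain was too small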