From what I've read, the MD5 hashing algorithm (and others, such as SHA-1) is broken. I was just curious as to how broken it is.
Is there any way to take the hash, and reverse it to the original text? Or is it just that occasionally collisions occur?
Under what circumstances would it truly be bad to use MD5 (or any other broken algorithm)? File change detection? Password security? Finding duplicated files?
You can find two inputs that produce the same MD5 hash in about an hour (or so) on a not particularly fast computer. As far as I know, though, there is no currently known practical attack (a preimage attack) that will let you find an input producing one particular, pre-specified result.
It is not (and never will be) possible to recover the original string from a hash alone: MD5 maps inputs of any length to 128 bits, so infinitely many inputs share every hash value, and there is no way to tell which of them was the "original", except (possibly) by accident.
For finding duplicated files, MD5 is just as good as it ever was. For finding changed files, pretty much the same. In either case, as long as you don't have somebody deliberately trying to attack MD5 to produce collisions, the fact that it's broken from a cryptographic viewpoint is pretty much irrelevant.
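For illustration, here's a minimal sketch of duplicate-file detection along these lines, using Python's standard hashlib; the directory path at the bottom is a placeholder:

```python
import hashlib
import os
from collections import defaultdict

def file_md5(path, chunk_size=65536):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    """Group files under `root` by MD5 digest; groups of 2+ are duplicates."""
    by_digest = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_digest[file_md5(path)].append(path)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

print(find_duplicates("/some/directory"))  # hypothetical path
```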
For anything related to security, I would avoid MD5. There are much better alternatives easily available, and no reason to favor MD5 over the others.
Related
I am a noob at algorithms and not really so smart. But I have a question in my mind. There are a lot of hashing algorithms available, some perhaps 10 times more complex than what I wrote, yet almost all of them are predictable these days. Recently, I read that writing my own hashing function is not a good idea. But why? I was wondering how a program or programmer could break my logic that (for example) creates a unique hash for each string in 5+ steps.

Suppose someone successfully injected a SQL query into my server and got all the stored hashes. How would a program (like hashcat) help them crack those hashes? I can see a strong side of my own algorithm in this case: it is known by no one, and the attacker has no idea how it was implemented. On the other hand, well-known algorithms (like SHA-1) are not unpredictable anymore; there are websites capable of efficiently breaking those hashes.

So, my simple question is: why do smart people not recommend using self-written hashing algorithms?
Security by obscurity can be an advantage, but you should never rely on it. You would be relying on your code staying secret; as soon as it becomes known (shared hosting, backups, source control, ...), the stored passwords are probably not safe anymore.
Inventing a new safe algorithm is extremely difficult, even for cryptographers. There are many points to consider, like correct salting or key stretching, and making sure that similar inputs do not produce outputs that reveal their similarity, and so on. Only algorithms that have withstood years of attacks by other cryptographers are regarded as safe.
There is a better alternative to inventing your own scheme. By inventing an algorithm, you are really just adding a secret to the hashing (your code): only with knowledge of this code can an attacker start brute-forcing the passwords. A better way to add a secret is:
Hash the passwords with a known proven algorithm (BCrypt, SCrypt, PBKDF2).
Encrypt the resulting hash with a secret server-side key (two-way encryption).
This way you still add a secret (the server-side key), but only an attacker with privileges on the server can know the key, and in that case they would also know your algorithm. This scheme also allows you to exchange the key when necessary; exchanging the hash algorithm would be much more difficult.
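A minimal sketch of that two-step scheme, assuming the third-party bcrypt and cryptography packages; key storage and rotation are left out:

```python
import bcrypt                           # pip install bcrypt
from cryptography.fernet import Fernet  # pip install cryptography

server_key = Fernet.generate_key()  # in practice, load from secure storage
f = Fernet(server_key)

def protect(password: bytes) -> bytes:
    # Step 1: hash with a proven, slow algorithm (bcrypt salts internally).
    hashed = bcrypt.hashpw(password, bcrypt.gensalt())
    # Step 2: encrypt the hash with the secret server-side key.
    return f.encrypt(hashed)

def verify(password: bytes, token: bytes) -> bool:
    hashed = f.decrypt(token)
    return bcrypt.checkpw(password, hashed)

token = protect(b"correct horse battery staple")
assert verify(b"correct horse battery staple", token)
```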
I am developing an "open distributed cloud storage system".
By open I mean that anyone can participate in hosting of files.
My current design uses a SHA-1 hash of the file's content as the global file id.
It is given that the client already knows this hash value and receives the file from a "bandwidth donor".
The client now needs to verify that the file indeed is the correct one, by generating the hash and comparing it to the expected value.
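Concretely, the verification I have in mind looks like this sketch (Python's standard hashlib, with the known id as a hex string):

```python
import hashlib

def verify_file(path: str, expected_id: str) -> bool:
    """Recompute the SHA-1 of the received file and compare to the known id."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_id
```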
However, my concern is that someone could deliberately modify a file to produce the same hash. As far as I know this is easily doable for hashes of the CRC family. Some googling around revealed a lot of claims that the same would be easy for MD5.
Now my question is: is there a hashing algorithm which satisfies the criteria of being
fast for big amounts of data
well distributed in the hashing range (aka "unique")
has a sufficient target range ("bit length")
is resistant to deliberate collision attacks
All other means that I can think of achieving a setup that serves my needs involve a secret component, for example a secret openssl key or a shared secret salt for a hash function.
Unfortunately I cannot work with that.
What you are asking for is a one-way function, whose existence is a major open problem.
With cryptographic hash functions, the specific attack you wanted to avoid is called the "second pre-image attack".
That should help you when googling for what you want; as far as I know, there is actually no known practical second-preimage attack on MD5.
First of all, you probably found that it is easy to find two arbitrary files that have the same hash, and to find two different such pairs every time you try.
But it is difficult to generate a file that disguises itself as some specific file; in other words, it is unlikely that one of the aforementioned "two arbitrary files" actually belongs to a non-malicious agent in your storage.
If you're still not satisfied, you might want to try something like SHA-1 or SHA-2 or GOST.
First of all, a hash value can never identify a file, as there will always be collisions.
Having said that, what you are looking for is called a cryptographic hash. These are designed so that it is not feasible (other than by brute force) to modify the data while keeping the hash, or to produce new data with a given hash.
As such, the SHA family is ok.
For the moment, SHA1 is adequate. No collisions are known.
It would help a lot to know the average size of the things you are hashing. But most likely, if your platforms are predominantly 64-bit, SHA-512 is your best choice; you can truncate the hash and use only 256 bits of it. If your platforms are predominantly 32-bit, SHA-256 is your best choice.
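Truncating as described is straightforward; note that plain truncation is not the same as the separately standardized SHA-512/256 variant (which uses different initial values), so this is a sketch of the suggestion above, not of that standard:

```python
import hashlib

data = b"example payload"
full = hashlib.sha512(data).digest()  # 64 bytes
truncated = full[:32]                 # keep only the first 256 bits
print(truncated.hex())
```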
We have a storage of files and the storage uniquely identifies a file on the basis of size appended to crc32.
I wanted to know if this checksum (CRC32 + size) would be good enough for identifying files, or should we consider some other hashing technique like MD5/SHA-1?
CRC is more an error-detection method than a serious hash function. It helps identify corrupted files rather than uniquely identify them.
So your choice should be between MD5 and SHA-1.
If you don't have strong security needs, you can choose MD5, which should be faster (but remember that MD5 is vulnerable to collision attacks).
If you need more security, you had better use SHA-1 or even something from the SHA-2 family.
CRC-32 is not good enough; it is trivial to build collisions, i.e. two files (of the same length if you wish it so) which have the same CRC-32. Even in the absence of a malicious attacker, collisions will happen randomly once you have about 65000 distinct files with the same length.
A hash function is designed to avoid collisions. With MD5 or SHA-1, you will not get random collisions. If your setup is security-related (i.e. there is someone, somewhere, who may actively try to create collisions), then you need a secure hash function. MD5 is not secure anymore (creating collisions with MD5 is easy) and SHA-1 is somewhat weak in that respect (no actual collisions were computed, but a method for creating one is known and, while expensive, it is much less expensive than what it ought to be). The usual recommendation is to use SHA-256 or SHA-512 (SHA-256 is enough for security; SHA-512 may be a tad faster on big, 64-bit systems, but file reading bandwidth will be more limiting than hashing speed).
Note: when using a cryptographic hash function, there is no need to store and compare the file lengths; the hash is sufficient to disambiguate files.
In a non-security setup (i.e. you only fear random collisions), then MD4 can be used. It is thoroughly "broken" as a cryptographic hash function, but it still is a very good checksum, and it is really fast (on some ARM-based platforms, it is even faster than CRC-32, for a much better resistance to random collisions). Basically, you should not use MD5: if you have security issues, then MD5 must not be used (it is broken; use SHA-256); and if you do not have security issues then MD4 is faster than MD5.
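A sketch of a streaming checksum helper along these lines; note that Python's hashlib only exposes MD4 when the underlying OpenSSL build provides it (newer builds may not), so the sketch falls back to MD5:

```python
import hashlib

def checksum(path, algorithm="md4", chunk_size=1 << 20):
    """Non-security checksum of a file; MD4 where available, else MD5."""
    try:
        h = hashlib.new(algorithm)
    except ValueError:  # algorithm not offered by this OpenSSL build
        h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```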
The space that would be used by CRC32+size gives you enough room for a bigger CRC, which would be a much better choice, provided you are not worried about malicious collisions; if you are, Thomas' answer applies.
You didn't specify a language, but in C++, for example, Boost.CRC gives you a CRC of the size you want (or can afford to store).
As others have said, CRC doesn't guarantee absence of collisions. However, your problem can be solved simply by giving the files incrementing 64-bit numbers. These are guaranteed never to collide (unless you want to keep a gazillion files in one directory, which is not a good idea anyway).
I get conflicting answers when I google for this: these algorithms are constantly improving, new exploits are being found, and new issues come up all the time... a lot of advice on which algorithm to use is simply old, or carries over ideas from a time when they were the best choice.
I want to be very clear here: I'm not talking about passwords. I'm talking about message digests, not cryptographic hashes.
I could go ahead and use MD5 as my first inkling for a message digest (it's right in the name), but then I remembered there are more known collisions for it than for more modern algorithms. But then, what makes these newer algorithms more suitable for the message digest of a file or short string?
So that's my question, what's the modern message digest algo that should be used?
From that perspective, depending on the amount of data you are working with, SHA-1 should do fine. If you will be working with larger amounts of data, a SHA-2 algorithm such as SHA-256 might be more suitable, as the fear of collisions in SHA-1 is rising due to a flaw in its algorithm, though that flaw isn't extremely serious when working with smallish amounts of data.
MD5 has been shown to be too vulnerable to collisions: there have been attacks that used MD5 collisions to create a forged SSL certificate, so I'd stay away from it. Also, depending on your application, MD5 is not FIPS 140 compliant, if that is of any importance to you.
SHA-1 is preferable to MD5 because it is safer (MD5 is risky to use), and SHA-1 has better performance than SHA-2 in most common circumstances. The SHA-2 algorithms are by no means slow, but SHA-1 has an edge. However, SHA-1 is slightly riskier because you've probably locked yourself into using it: if collisions start to be found, it might be hard for you to change, so it might be better to invest in a SHA-2 algorithm up front. The penalty for using SHA-256 over SHA-1 is very small, depending on how you will be using the algorithm. SHA-2 algorithms produce a larger output than SHA-1, but with the benefit of reduced chances of a collision.
So which one is right? It depends on what you are looking for and what your use case is. Hopefully now you can make a decision.
When in doubt, use SHA-256. The other SHA-2 functions are fine too; however, SHA-384 and SHA-512 may suffer from a non-negligible performance degradation on small (32-bit only) platforms. This may matter for some specific applications.
For non-security related usages (e.g. first pass of indexing in a hash table, or detection of accidental, non-malicious data alteration -- the kind of job where you could use a CRC), consider MD4, a predecessor to MD5. MD4 is even more broken than MD5, but also simpler to implement (with shorter code) and faster (actually, it has been measured to be faster than CRC32 on some ARM platforms).
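If speed is the deciding factor, it is easy to measure on your own hardware rather than rely on dated advice; a rough benchmark sketch using only guaranteed hashlib algorithms:

```python
import hashlib
import time

payload = b"x" * (64 * 1024 * 1024)  # 64 MiB of dummy data

for name in ("md5", "sha1", "sha256", "sha512", "blake2b"):
    h = hashlib.new(name)
    start = time.perf_counter()
    h.update(payload)
    h.digest()
    elapsed = time.perf_counter() - start
    print(f"{name:8s} {len(payload) / elapsed / 1e6:8.1f} MB/s")
```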
If someone is purposely trying to modify two files to have the same hash, what are ways to stop them? Can md5 and sha1 prevent the majority case?
I was thinking of writing my own, and I figure that even if I don't do a good job, as long as the user doesn't know my hash he may not be able to fool it.
What's the best way to prevent this?
MD5 is generally considered insecure if hash collisions are a major concern. SHA-1 is likewise no longer considered acceptable by the US government. There was a competition under way to find a replacement hash algorithm, but the recommendation at the moment is to use the SHA-2 family: SHA-256, SHA-384 or SHA-512. [Update 2012-10-02: NIST has chosen Keccak as the SHA-3 algorithm.]
You can try to create your own hash — it would probably not be as good as MD5, and 'security through obscurity' is likewise not advisable.
If you want security, hash with multiple hash algorithms. Being able to simultaneously create files that have hash collisions using a number of algorithms is excessively improbable. [And, in the light of comments, let me make it clear: I mean publish both the SHA-256 and the Whirlpool values for the file — not combining hash algorithms to create a single value, but using separate algorithms to create separate values. Generally, a corrupted file will fail to match any of the algorithms; if, perchance, someone has managed to create a collision value using one algorithm, the chance of also producing a second collision in one of the other algorithms is negligible.]
The Public TimeStamp uses an array of algorithms. See, for example, sqlcmd-86.00.tgz for an illustration.
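As a sketch of publishing separate values from separate algorithms: BLAKE2b stands in here for Whirlpool, which Python's standard hashlib does not guarantee to provide:

```python
import hashlib

def independent_digests(data: bytes) -> dict:
    """Two separate values from unrelated algorithms, published side by side."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "blake2b": hashlib.blake2b(data).hexdigest(),
    }

print(independent_digests(b"release-1.0.tar.gz contents here"))
```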
If the user doesn't know your hashing algorithm he also can't verify your signature on a document that you actually signed.
The best option is to use a cryptographic one-way hash algorithm that generates a long digest. SHA-256 creates a 256-bit hash, so a forger would have to try 2^255 different documents (on average) before creating one that matched a given document, which is pretty secure. If that's still not secure enough for you, there's SHA-512.
Also, I think it's worth mentioning that a good low-tech way to protect yourself against forged digitally-signed documents is to simply keep a copy of anything you sign. That way, if it comes down to a dispute, you can show that the original document you signed was altered.
There is a hierarchy of difficulty (for an attacker) here. It is easier to find two files with the same hash than to generate one to match a given hash, and easier to do the latter if you don't have to respect form/content/length restrictions.
Thus, if it is possible to use a well-defined document structure and length, you can make an attacker's life a bit harder no matter what underlying hash you use.
Why are you trying to create your own hash algorithm? What's wrong with SHA1HMAC?
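For reference, keyed hashing with HMAC-SHA1 is built into Python's standard hmac module; the key below is a placeholder:

```python
import hashlib
import hmac

key = b"server-side secret"  # placeholder; keep out of source control
msg = b"message to authenticate"

tag = hmac.new(key, msg, hashlib.sha1).hexdigest()
# Verify with a constant-time comparison to avoid timing leaks.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
```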
Yes, there are repeats for hashes.
Any hash that is shorter than the plaintext necessarily contains less information. That means there will be some repeats. The key property for hashes is that the repeats are hard to reverse-engineer.
Consider CRC32, commonly used as a hash. It's a 32-bit quantity. Because there are more than 2^32 messages in the universe, there will be repeats with CRC32.
The same idea applies to other hashes.
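The pigeonhole argument is easy to verify experimentally for CRC32: with only 2^32 possible values, a birthday-style search finds two distinct inputs with the same checksum after roughly 2^16 random tries. A sketch:

```python
import os
import zlib

seen = {}
tries = 0
while True:
    tries += 1
    msg = os.urandom(16)          # random 16-byte message
    c = zlib.crc32(msg)
    if c in seen and seen[c] != msg:
        print(f"collision after {tries} tries:")
        print(seen[c].hex(), msg.hex(), "-> crc32", hex(c))
        break
    seen[c] = msg
```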
This is called a "hash collision", and the best way to avoid it is to use a strong hash function. With MD5 it is relatively easy to artificially build colliding files. Similarly, a relatively efficient method for computing colliding SHA-1 files is known, although in this case "relatively efficient" still means hundreds of hours of compute time.
Generally, MD5 and SHA1 are still expensive to crack, but not impossible. If you're really worried about it, use a stronger hash function, like SHA256.
Writing your own isn't actually a good idea unless you're a pretty expert cryptographer. Most of the simple ideas have been tried, and there are well-known attacks against them.
If you really want to learn more about it, have a look at Schneier's Applied Cryptography.
I don't think coming up with your own hash algorithm is a good choice.
Another good option is salted MD5. For example, the input to your MD5 hash function could have the string "acidzom!##" appended before being passed to the MD5 function.
There is also a good read on this at Slashdot.
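For what it's worth, a literal sketch of the salted-MD5 suggestion above, using that answer's example suffix; as earlier answers note, BCrypt/SCrypt/PBKDF2 remain the proper tools for passwords:

```python
import hashlib

SALT = "acidzom!##"  # the fixed suffix from the answer above

def salted_md5(s: str) -> str:
    """Append the salt to the input, then take the MD5 hex digest."""
    return hashlib.md5((s + SALT).encode("utf-8")).hexdigest()

print(salted_md5("hunter2"))
```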