Why can't you just reverse the algorithm like you could reverse a math function? How is it possible to make an algorithm that isn't reversible?
And if you use a rainbow table, why does a salt make it impossible to crack? If you build a rainbow table by brute force, it generates every possible plaintext value (up to some length), which would end up including every possible salt combined with every possible password (the salt and the password would just come together as a single piece of text).
MD5 is designed to be cryptographically irreversible. In this case, the most important property is that it is computationally infeasible to find the reverse of a hash, but it is easy to find the hash of any data. For example, let's think about just operating on numbers (a binary file, after all, can be interpreted as just a very long number).
Let's say we have the number "7", and we want to take the hash of it. Perhaps the first thing we try as our hash function is "multiply by two". As we'll see, this is not a very good hash function, but we'll try it, to illustrate a point. In this case, the hash of the number will be "14". That was pretty easy to calculate. But now, if we look at how hard it is to reverse it, we find that it is also just as easy! Given any hash, we can just divide it by two to get the original number! This is not a good hash, because the whole point of a hash is that it is much harder to calculate the inverse than it is to calculate the hash (this is the most important property in at least some contexts).
Now, let's try another hash. For this one, I'm going to have to introduce the idea of clock arithmetic. On a clock, there isn't an infinite supply of numbers; it just goes from 0 to 11 (remember, 0 and 12 are the same on a clock). So if you "add one" to 11, you just get zero. You can extend the ideas of multiplication, addition, and exponentiation to a clock. For example, 8+7=15, but 15 on a clock is really just 3! So on a clock, you would say 8+7=3. Likewise 6*6=36, but on a clock, 36=0, so 6*6=0. For powers you can do the same thing: 2^4=16, but 16 is just 4, so 2^4=4. Now, here's how it ties into hashing. How about we try the hash function f(x)=5^x, but with clock arithmetic? As you'll see, this leads to some interesting results. Let's try taking the hash of 7 as before.
We see that 5^7=78125 but on a clock, that's just 5 (if you do the math, you see that we've wrapped around the clock 6510 times). So we get f(7)=5. Now, the question is, if I told you that the hash of my number was 5, would you be able to figure out that my number was 7? Well, it's actually very hard to calculate the reverse of this function in the general case. People much smarter than me have proved that in certain cases, reversing this function is way harder than calculating it forward. (EDIT: Nemo has pointed out that this in fact has not been "proven"; in fact, the only guarantee you get is that a lot of smart people have tried a long time to find an easy way to do so, and none of them have succeeded.) The problem of reversing this operation is called the "Discrete Logarithm Problem". Look it up for more in depth coverage. This is at least the beginning of a good hash function.
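If you want to play with this, here's a tiny PHP sketch of that clock-arithmetic hash (the function name and the modulus of 12 are purely illustrative; real systems use enormous moduli):

// Toy "clock arithmetic" hash: f(x) = 5^x mod 12.
// Computing it forward is easy; undoing it is the discrete logarithm problem
// (trivial on a 12-hour clock, but believed hard when the modulus is huge).
function clock_hash($x, $modulus = 12) {
    $result = 1;
    for ($i = 0; $i < $x; $i++) {
        $result = ($result * 5) % $modulus;   // stay on the clock after every step
    }
    return $result;
}

echo clock_hash(7);   // prints 5, because 5^7 = 78125 = 6510 * 12 + 5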
With real-world hash functions, the idea is basically the same: you find some function that is hard to reverse. People much smarter than me have engineered MD5 and other hashes with the goal of making them hard to reverse in practice (though, as noted above, this is not actually proven).
Now, perhaps the thought has already occurred to you: "it would be easy to calculate the inverse! I'd just take the hash of every number until I found the one that matched!" For the case where the numbers are all less than twelve, this is feasible. But for the analog of a real-world hash function, imagine all the numbers involved are huge. The idea is that it is still relatively easy to calculate the hash of these large numbers, but searching through all possible inputs gets harder much more quickly. What you've stumbled upon is still a very important idea, though: searching through the input space for an input which gives a matching output. Rainbow tables are a more complex variation on this idea, which use precomputed tables of input-output pairs in smart ways in order to make it possible to quickly search through a large number of possible inputs.
Now let's say that you are using a hash function to store passwords on your computer. The idea is this: the computer stores only the hash of the correct password. When a user tries to log in, you compare the hash of the entered password to the stored hash of the correct password. If they match, you assume the user has the correct password. The reason this is advantageous is that if someone steals your computer, they still don't have your password, just the hash of it. Because the hash function was designed by smart people to be hard to reverse, they can't easily retrieve your password from it.
An attacker's best bet is a brute-force attack, where they try a bunch of passwords. Just like you might try the numbers less than 12 in the previous problem, an attacker might try all the passwords composed of just numbers and letters and less than 7 characters long, or all words which show up in the dictionary. The important thing here is that he can't try all possible passwords, because there are way too many possible 16-character passwords, for example, to ever test. So the point is that an attacker has to restrict the possible passwords he tests, otherwise he will never even check a small percentage of them.
Now, as for a salt, the idea is this: what if two users had the same password? They would have the same hash. If you think about it, the attacker doesn't really have to crack every user's password individually. He simply goes through every possible input password and compares the hash to all the hashes. If it matches one of them, then he has found a new password. What we'd really like to force him to do is calculate a new hash for every user+password combination he wants to check. That's the idea of a salt: you make the hash function slightly different for every user, so he can't reuse a single set of precomputed values for all users.

The most straightforward way to do this is to tack on some random string to each user's password before you take the hash, where the random string is different for each user. So, for example, if my password is "shittypassword", my hash might show up as MD5("6n93nshittypassword"), and if your password is "shittypassword", your hash might show up as MD5("fa9elshittypassword"). This little bit tacked on at the front is called the "salt", and it's different for every user: my salt is "6n93n" and yours is "fa9el". The salt is simply stored on the computer alongside the hash. When you try to log in with the password X, the computer can just calculate MD5("fa9el"+X) and see if it matches the stored hash.
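A rough sketch of that store-and-check flow in PHP (random_bytes assumes PHP 7; a real application today would use password_hash()/password_verify() rather than raw MD5, but the mechanics are the same):

// Storing a password: pick a random salt per user, keep the salt and the salted hash.
$password = "shittypassword";
$salt = bin2hex(random_bytes(4));                 // different for every user, e.g. "6e393a0c"
$stored = array("salt" => $salt, "hash" => md5($salt . $password));

// Later, checking a login attempt:
$attempt = "shittypassword";
$login_ok = (md5($stored["salt"] . $attempt) === $stored["hash"]);   // true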
So the basic mechanics of logging in remain unchanged, but an attacker now faces a more daunting challenge: rather than a list of bare MD5 hashes, they have a list of MD5 hashes and salts. They essentially have two options:
They can ignore the fact that the hashes are salted and try to crack the passwords with their lookup table as is. However, the chance that they'll actually crack a password is much reduced. For example, even if "shittypassword" is on their list of inputs to check, most likely "fa9elshittypassword" isn't. To get even a small fraction of the success probability they had before, they'll need to test orders of magnitude more possible passwords.
They can recalculate the hashes on a per-user basis. So rather than calculating MD5(passwordguess), for each user X they calculate MD5(Salt_of_user_X + passwordguess). Not only does this force them to calculate a new hash for each user they want to crack, but, most importantly, it prevents them from using precalculated tables (like rainbow tables, for example), because they can't know what Salt_of_user_X is beforehand, so they can't precalculate the hashes to test.
So basically, if they are trying to use precalculated tables, using a salt effectively greatly increases the possible inputs they have to test in order to crack the password, and even if they aren't using precalculated tables, it still slows them down by a factor of N, where N is the number of passwords you are storing.
Hopefully this answers all your questions.
Think of 2 numbers from 1 to 9999. Add them. Now tell me the final digit.
I can't, from that information, deduce which numbers you originally thought of. That is a very simple example of a one-way hash.
Now, I can think of two numbers which give the same result, and this is where this simple example differs from a 'proper' cryptographic hash like MD5 or SHA1. With those algorithms, it should be computationally difficult to come up with an input which produces a specific hash.
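A throwaway PHP version of that toy hash, just to show how cheap it is to compute and how freely different inputs collide:

// "Hash" of two numbers: the final digit of their sum.
function digit_hash($a, $b) {
    return ($a + $b) % 10;
}

echo digit_hash(1234, 4321);   // 5
echo digit_hash(2, 9993);      // also 5 -- many different inputs give the same digit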
One big reason you can't reverse the hash function is because data is lost.
Consider a simple example function: 'OR'. If you apply that to your input data of 1 and 0, it yields 1. But now, if you know the answer is '1', how do you back out the original data? You can't. It could have been 1,1 or maybe 0,1, or maybe 1,0.
As for salting and rainbow tables: yes, theoretically, you could have a rainbow table which would encompass all possible salts and passwords, but practically, that's just too big. If you tried every possible combination of lowercase letters, uppercase letters, numbers, and twelve punctuation symbols, up to 50 characters long, that's roughly (26+26+10+12)^50 = 2.9 x 10^93 different possibilities. That's more than the number of atoms in the visible universe.
The idea behind rainbow tables is to calculate the hash for a bunch of possible passwords in advance, and passwords are much shorter than 50 characters, so it's possible to do so. That's why you want to add a salt in front: if you prepend '57sjflk43380h4ljs9flj4ay' to the password, someone may have already computed the hash for "pa55w0rd", but no one will have already calculated the hash for '57sjflk43380h4ljs9flj4aypa55w0rd'.
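Concretely, in PHP (using the example salt from above):

echo md5("pa55w0rd"), "\n";
echo md5("57sjflk43380h4ljs9flj4ay" . "pa55w0rd"), "\n";
// The second digest will not appear in any table precomputed from bare passwords.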
MD5 doesn't keep the whole input in its result, so you can't work backwards to find the original thing that was MD5-ed.
MD5 is 128 bits; that's 3.4*10^38 possible values.
The total number of eight-character passwords:
only lowercase characters and numbers: 36^8 = 2.8*10^12
lower- and uppercase characters and numbers: 62^8 = 2.18*10^14
You have to store 8 bytes for the password and 16 for the MD5 value, so 24 bytes total per entry.
So you need approximately 67,000 GB or 5,200,000 GB of storage, respectively, for your rainbow table.
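A quick back-of-the-envelope check of those numbers in PHP (this assumes the naive full-table layout of 24 bytes per entry described above, not an actual compressed rainbow-table format):

// Number of 8-character passwords and the naive lookup-table sizes.
$lower_and_digits  = pow(36, 8);          // ~2.8e12 entries
$mixed_case_digits = pow(62, 8);          // ~2.18e14 entries
$bytes_per_entry   = 8 + 16;              // password + 16-byte MD5 = 24 bytes

printf("%.0f GB\n", $lower_and_digits  * $bytes_per_entry / 1e9);   // ~67,000 GB
printf("%.0f GB\n", $mixed_case_digits * $bytes_per_entry / 1e9);   // ~5,200,000 GB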
The only reason that it's actually possible to figure out passwords is because people use obvious ones.
Hash-functions always create an output with a fixed length, even though the input can be infinitely large.
So how is it possible that no information is lost here? Shouldn't some inputs result in the same output, then?
Yes. Two inputs can produce the same output; this is called a hash collision.
Hashes are designed so that hashing text is very easy, but reversing the process is difficult. The point of hashing isn't to store information. Instead, hashes are commonly used in security (and also data structures).
For instance, websites will hash users' passwords and store the hashes instead of the passwords themselves. This way, if the website's security is breached, the attacker only obtains the hashes, which still doesn't let them log in, as it is very difficult to reverse-engineer a password from its hash.
The hash set is another application of hashing. By hashing an object and storing only the hashes, you can check whether an object is present or not present in the set in constant time. You only have to search through all of the objects in the hash set that have the same hash as the object that you are checking. As the size of the hash set grows, so does the chance of a hash collision.
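In PHP the usual way to get that constant-time membership check is an associative array keyed by the hash; a minimal sketch:

// A tiny "hash set": membership tests cost O(1) on average.
$set = array();
$set[md5("apple")]  = true;
$set[md5("banana")] = true;

var_dump(isset($set[md5("apple")]));    // bool(true)
var_dump(isset($set[md5("cherry")]));   // bool(false)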
So how is it possible that no information is lost here?
It's not possible, and lots of information is lost.
In the case of a perfect hash there are no collisions, and we could even argue that information isn't really lost (it's just not contained in the system alone), because we know all possible inputs and know there are no collisions in the hashes produced. The hashes can be used as an index in a way that isn't possible, or isn't as convenient, with the input data, so they are useful.
In the case of a hash-based collection we use a hash code to (hopefully) have few collisions so we get close to O(1) lookup, but have some means to handle it if a collision does happen.
In the case of a cryptographic hash, collisions exist, but deliberately finding one is extremely hard, for (roughly speaking) similar reasons to why it is hard to break modern cryptography. So while two passwords could have the same hash, you couldn't find such a pair easily (especially if you aren't willing to use, say, a password several thousand pages of text long).
In the case of a checksum hash we could have collisions, but because they're unlikely, corrupted data will almost certainly fail to match the stored hash.
I understand that hash functions like MD5 can be used to tell whether two files (or sets of data) are the same or not. Even changing a single bit changes the hash value of a file. Apart from that, is there any other information you can get by comparing two hashes, like to what degree the two files differ, or the location of the changes? Are there any hash functions that can be used to get this information?
None if the hash function is cryptographically secure.
If you are presented with two hashes coming from two files, the only thing you can tell is if the files are exactly, bit for bit, identical (same hash) or not.
Some properties of a hash function are that any final bit of the hash depends on multiple bits of the message, and that a change in a single bit of the message will result in a completely different hash, to the extent that this second hash cannot be distinguished from any other possible hash.
Even with a somewhat vulnerable hash function like MD5, the main thing an attacker could do is create a second document hashing to the same final hash (a collision), not infer the relatedness of two documents. For inferring relatedness to be possible, the hash function would have to be quite weak.
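You can see that single-bit sensitivity directly; here's a sketch (the exact digests aren't shown because the point is only that they share nothing recognizable):

// Two inputs differing in one character produce completely unrelated digests.
echo md5("The quick brown fox jumps over the lazy dog"), "\n";
echo md5("The quick brown fox jumps over the lazy cog"), "\n";
// Roughly half the bits of the two digests differ, so neither the size nor the
// location of the change can be read off the hashes.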
If someone is purposely trying to modify two files to have the same hash, what are ways to stop them? Can MD5 and SHA-1 prevent this in the majority of cases?
I was thinking of writing my own, and I figure that even if I don't do a good job, if the user doesn't know my hash he may not be able to fool it.
What's the best way to prevent this?
MD5 is generally considered insecure if hash collisions are a major concern. SHA-1 is likewise no longer considered acceptable by the US government. There was a competition under way to find a replacement hash algorithm, but the recommendation at the moment is to use the SHA-2 family - SHA-256, SHA-384 or SHA-512. [Update 2012-10-02: NIST has chosen Keccak as the SHA-3 algorithm.]
You can try to create your own hash — it would probably not be as good as MD5, and 'security through obscurity' is likewise not advisable.
If you want security, hash with multiple hash algorithms. Being able to simultaneously create files that have hash collisions using a number of algorithms is excessively improbable. [And, in the light of comments, let me make it clear: I mean publish both the SHA-256 and the Whirlpool values for the file — not combining hash algorithms to create a single value, but using separate algorithms to create separate values. Generally, a corrupted file will fail to match any of the algorithms; if, perchance, someone has managed to create a collision value using one algorithm, the chance of also producing a second collision in one of the other algorithms is negligible.]
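For instance, to publish two independent digests of a release file (the file name is hypothetical; Whirlpool support depends on the algorithms compiled into your PHP hash extension):

// A forger would have to produce a simultaneous collision under every algorithm.
$data = file_get_contents("release.tgz");
echo "SHA-256:   ", hash("sha256", $data), "\n";
echo "Whirlpool: ", hash("whirlpool", $data), "\n";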
The Public TimeStamp uses an array of algorithms. See, for example, sqlcmd-86.00.tgz for an illustration.
If the user doesn't know your hashing algorithm he also can't verify your signature on a document that you actually signed.
The best option is to use one-way hashing algorithms that generate the longest hash. SHA-256 creates a 256-bit hash, so a forger would have to try 2^255 different documents (on average) before they created one that matched a given document, which is pretty secure. If that's still not secure enough for you, there's SHA-512.
Also, I think it's worth mentioning that a good low-tech way to protect yourself against forged digitally-signed documents is to simply keep a copy of anything you sign. That way, if it comes down to a dispute, you can show that the original document you signed was altered.
There is a hierarchy of difficulty (for an attacker) here. It is easier to find two files with the same hash than to generate one to match a given hash, and easier to do the latter if you don't have to respect form/content/length restrictions.
Thus, if it is possible to use a well-defined document structure and length, you can make an attacker's life a bit harder no matter what underlying hash you use.
Why are you trying to create your own hash algorithm? What's wrong with SHA1HMAC?
Yes, there are repeats for hashes.
Any hash that is shorter than the plaintext necessarily carries less information. That means there will be some repeats. The key property for hashes is that the repeats are hard to reverse-engineer.
Consider CRC32, commonly used as a hash. It's a 32-bit quantity. Because there are more than 2^32 messages in the universe, there will be repeats with CRC32.
The same idea applies to other hashes.
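Because CRC32 has only 2^32 possible outputs, a birthday-style search finds a repeat quickly; a small PHP sketch (the candidate strings are just arbitrary random data):

// Brute-force a CRC32 collision: hash random strings until two distinct ones match.
// By the birthday bound this usually takes on the order of 2^16 attempts.
$seen = array();
while (true) {
    $candidate = bin2hex(random_bytes(8));   // a random 16-character string
    $crc = crc32($candidate);
    if (isset($seen[$crc]) && $seen[$crc] !== $candidate) {
        echo "'{$seen[$crc]}' and '$candidate' both have CRC32 $crc\n";
        break;
    }
    $seen[$crc] = $candidate;
}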
This is called a "hash collision", and the best way to avoid it is to use a strong hash function. MD5 is relatively easy to artificially build colliding files, as seen here. Similarly, it's known there is a relatively efficient method for computing colliding SH1 files, although in this case "relatively efficient" still takes hunreds of hours of compute time.
Generally, MD5 and SHA1 are still expensive to crack, but not impossible. If you're really worried about it, use a stronger hash function, like SHA256.
Writing your own isn't actually a good idea unless you're a pretty expert cryptographer. Most of the simple ideas have been tried and there are well-known attacks against them.
If you really want to learn more about it, have a look at Schneier's Applied Cryptography.
I don't think coming up with your own hash algorithm is a good choice.
Another option is salted MD5. For example, the string "acidzom!##" is appended to the input before it is passed to the MD5 function.
There is also a good reading at Slashdot.
One concept I've always wondered about is the use of cryptographic hash functions and values. I understand that these functions can generate a hash value that is unique and virtually impossible to reverse, but here's what I've always wondered:
If on my server, in PHP I produce:
md5("stackoverflow.com") = "d0cc85b26f2ceb8714b978e07def4f6e"
When you run that same string through an MD5 function, you get the same result on your PHP installation. A process is being used to produce some value, from some starting value.
Doesn't this mean that there is some way to deconstruct what is happening and reverse the hash value?
What is it about these functions that makes the resulting strings impossible to retrace?
The input material can be of arbitrary length, while the output is always 128 bits long. This means that an infinite number of input strings will generate the same output.
If you pick a random number and divide it by 2 but only write down the remainder, you'll get either a 0 or 1 -- even or odd, respectively. Is it possible to take that 0 or 1 and get the original number?
If hash functions such as MD5 were reversible, it would be a watershed event in the history of data compression algorithms! It's easy to see that if MD5 were reversible, then arbitrary chunks of data of arbitrary size could be represented by a mere 128 bits without any loss of information. Thus you would be able to reconstruct the original message from a 128-bit number regardless of the size of the original message.
Contrary to what the most upvoted answers here emphasize, the non-injectivity (i.e. that there are several strings hashing to the same value) of a cryptographic hash function caused by the difference between large (potentially infinite) input size and fixed output size is not the important point – actually, we prefer hash functions where those collisions happen as seldom as possible.
Consider this function (in PHP notation, as the question):
function simple_hash($input) {
return bin2hex(substr(str_pad($input, 16), 0, 16));
}
This appends some spaces, if the string is too short, and then takes the first 16 bytes of the string, then encodes it as hexadecimal. It has the same output size as an MD5 hash (32 hexadecimal characters, or 16 bytes if we omit the bin2hex part).
print simple_hash("stackoverflow.com");
This will output:
737461636b6f766572666c6f772e636f
This function also has the same non-injectivity property as highlighted by Cody's answer for MD5: We can pass in strings of any size (as long as they fit into our computer), and it will output only 32 hex-digits. Of course it can't be injective.
But in this case, it is trivial to find a string which maps to the same hash (just apply hex2bin to your hash, and you have it). If your original string had length exactly 16, you even get that original string back (our 17-character example gives back its first 16 characters). Nothing of this kind should be possible for MD5, even if you know the length of the input was quite short (other than by trying all possible inputs until we find one that matches, i.e. a brute-force attack).
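For instance (this reuses the simple_hash function defined above):

$hash = simple_hash("stackoverflow.com");    // "737461636b6f766572666c6f772e636f"
$preimage = hex2bin($hash);                  // "stackoverflow.co", the first 16 bytes
var_dump(simple_hash($preimage) === $hash);  // bool(true) -- a preimage, found instantly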
The important assumptions for a cryptographic hash function are:
it is hard to find any string producing a given hash (preimage resistance)
it is hard to find any different string producing the same hash as a given string (second preimage resistance)
it is hard to find any pair of strings with the same hash (collision resistance)
Obviously my simple_hash function fulfills none of these conditions. (Actually, if we restrict the input space to "16-byte strings", then my function becomes injective, and thus is even provably second-preimage resistant and collision resistant.)
There now exist collision attacks against MD5 (e.g. it is possible to produce a pair of strings, even with a given common prefix, which have the same hash, with quite some work, though not impossibly much), so you shouldn't use MD5 for anything critical.
There is not yet a preimage attack, but attacks will get better.
To answer the actual question:
What is it about these functions that makes the resulting strings impossible to retrace?
What MD5 (and other hash functions built on the Merkle-Damgård construction) effectively does is apply an encryption algorithm with the message as the key and some fixed value as the "plain text", using the resulting ciphertext as the hash. (Before that, the input is padded and split into blocks; each of these blocks is used to encrypt the output of the previous block, XORed with its input to prevent reverse calculations.)
Modern encryption algorithms (including the ones used in hash functions) are made in a way to make it hard to recover the key, even given both plaintext and ciphertext (or even when the adversary chooses one of them).
They do this generally by doing lots of bit-shuffling operations in a way that each output bit is determined by each key bit (several times) and also each input bit. That way you can only easily retrace what happens inside if you know the full key and either input or output.
For MD5-like hash functions and a preimage attack (with a single-block hashed string, to make things easier), you only have input and output of your encryption function, but not the key (this is what you are looking for).
Cody Brocious's answer is the right one. Strictly speaking, you cannot "invert" a hash function because many strings are mapped to the same hash. Notice, however, that either finding one string that gets mapped to a given hash, or finding two strings that get mapped to the same hash (i.e. a collision), would be major breakthroughs for a cryptanalyst. The great difficulty of both these problems is the reason why good hash functions are useful in cryptography.
MD5 does not create a unique hash value; the goal of MD5 is to quickly produce a value that changes significantly based on a minor change to the source.
E.g.,
"hello" -> "1ab53"
"Hello" -> "993LB"
"ZR#!RELSIEKF" -> "1ab53"
(Obviously those aren't actual MD5 hashes.)
Most hashes (if not all) are also non-unique; rather, they're unique enough, so a collision is highly improbable, but still possible.
A good way to think of a hash algorithm is to think of resizing an image in Photoshop... say you have an image that is 5000x5000 pixels and you then resize it to just 32x32. What you have is still a representation of the original image, but it is much, much smaller and has effectively "thrown away" certain parts of the image data to make it fit the smaller size. So if you were to resize that 32x32 image back up to 5000x5000, all you'd get is a blurry mess. However, because a 32x32 image is not that large, it is theoretically conceivable that another, completely different image could be downsized to produce the exact same pixels!
That's just an analogy but it helps understand what a hash is doing.
A hash collision is much more likely than you would think. Take a look at the birthday paradox to get a greater understanding of why that is.
As the number of possible input files is larger than the number of 128-bit outputs, it's impossible to uniquely assign an MD5 hash to each possible input.
Cryptographic hash functions are used for checking data integrity or digital signatures (the hash being signed for efficiency). Changing the original document should therefore mean the original hash doesn't match the altered document.
These criteria are sometimes used:
Preimage resistance: for a given hash function and given hash, it should be difficult to find an input that has the given hash for that function.
Second preimage resistance: for a given hash function and input, it should be difficult to find a second, different, input with the same hash.
Collision resistance: for a given hash function, it should be difficult to find two different inputs with the same hash.
These criteria are chosen to make it difficult to find a document that matches a given hash, otherwise it would be possible to forge documents by replacing the original with one that matched by hash. (Even if the replacement is gibberish, the mere replacement of the original may cause disruption.)
The third criterion (collision resistance) implies the second (second preimage resistance).
As for MD5 in particular, it has been shown to be flawed:
How to break MD5 and other hash functions.
But this is where rainbow tables come into play.
Basically it is just a large number of values hashed separately, with the results saved to disk. Then the "reversing" step is "just" a lookup in a very large table.
Obviously this is only feasible for a subset of all possible input values but if you know the bounds of the input value it might be possible to compute it.
Researchers have found ways, such as "chosen-prefix collisions", to produce two different strings with the same MD5 hash.
Here is an example: http://www.win.tue.nl/hashclash/fastcoll_v1.0.0.5.exe.zip
The source code: http://www.win.tue.nl/hashclash/fastcoll_v1.0.0.5_source.zip
The best way to understand what all the most voted answers mean is to actually try to reverse the MD5 algorithm. I remember trying to reverse the MD5crypt algorithm some years ago, not to recover the original message, because that is clearly impossible, but just to generate a message that would produce the same hash as the original hash. This, at least theoretically, would have given me a way to log in to a Linux device that stored user:password entries in the /etc/passwd file, using the generated message (password) instead of the original one. Since both messages would have the same resulting hash, the system would recognize my password (generated from the original hash) as valid. That didn't work at all. After several weeks, if I remember correctly, the use of salt in the initial message killed me. I had to produce not only a valid initial message, but a salted valid initial message, which I was never able to do. But the knowledge that I got from this experiment was nice.
As most have already said, MD5 was designed to hash variable-length data streams to a fixed-length chunk of data, so a single hash is shared by many input data streams.
However, if you ever do need to find the original data behind a hash, for example if you have the hash of a password and need the original password, it's often quicker to just google (or use whatever search engine you prefer) the hash than to brute-force it. I have successfully found a few passwords using this method.
Nowadays MD5 hashes, or any other hashes for that matter, are precomputed for huge numbers of strings and stored for easy access. Though in theory MD5 is not reversible, using such databases you may find out which text produced a particular hash value.
For example, try looking up the following hash at http://gdataonline.com/seekhash.php to find out what text I used to compute it:
aea23489ce3aa9b6406ebb28e0cda430
f(x) = 1 is irreversible. Hash functions aren't irreversible in that sense.
This is actually required for them to fulfill their function of determining whether someone possesses an uncorrupted copy of the hashed data. This brings susceptibility to brute force attacks, which are quite powerful these days, particularly against MD5.
There's also confusion here and elsewhere among people who have mathematical knowledge but little cipherbreaking knowledge. Several ciphers simply XOR the data with the keystream, and so you could say that a ciphertext corresponds to all plaintexts of that length because you could have used any keystream.
However, this ignores that a reasonable plaintext produced from the seed password is much, much more likely than another produced by the seed Wsg5Nm^bkI4EgxUOhpAjTmTjO0F!VkWvysS6EEMsIJiTZcvsh#WI$IH$TYqiWvK!%&Ue&nk55ak%BX%9!NnG%32ftud%YkBO$U6o to the extent that anyone claiming that the second was a possibility would be laughed at.
In the same way, if you're trying to decide between the two potential passwords password and Wsg5Nm^bkI4EgxUO, it's not as difficult to do as some mathematicians would have you believe.
By definition, a cryptographic hash function should not be invertible and should have the least collisions possible.
Regarding your question: it is a one-way hash. The input (irrespective of length) generates a fixed-size output; internally, the input is padded to a block boundary defined by the algorithm (a 512-bit boundary for MD5). Information is compressed (lost) in the process, and it is practically impossible to recover the input by reversing the transforms.
Additional info on MD5: it is vulnerable to collisions. I have gone through this article recently,
http://www.win.tue.nl/hashclash/Nostradamus/
Open source code for crypto hash implementations (MD5 and SHA) can be found in Mozilla's code (the freebl library).
I like all the various arguments.
It is obvious the real value of hashed values is simply to provide human-unreadable placeholders for strings such as passwords.
It has no specific enhanced security benefit.
Assuming an attacker gained access to a table with hashed passwords, he/she can:
Hash a password of his/her own choice and place the result in the password table, if he/she has write/edit rights to the table.
Generate hashed values of common passwords and test the existence of similar hashed values in the password table.
In this case weak passwords cannot be protected by the mere fact that they are hashed.
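The second attack above is just a dictionary lookup over unsalted hashes; a minimal PHP sketch (the "stolen" table here is built in place for the demo):

$stolen_hashes = array(md5("qwerty"), md5("letmein"));      // pretend these came from a breach
$common_passwords = array("password", "123456", "qwerty", "letmein");

foreach ($common_passwords as $guess) {
    if (in_array(md5($guess), $stolen_hashes)) {
        echo "Cracked: $guess\n";    // prints qwerty and letmein
    }
}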