Hash function collision analysis

How do I ensure that a hash function has few collisions? Or, to ask the question another way, how do I analyze how good a hash generator function is?

Related

Hashing a string of bounded size

Assume I have a bounded input string of at most 64 characters drawn from [0-9, a-z, A-Z]. Given the following code using a SHA-1 hash:
var hash = sha1(str).substring(0,n)
I want to minimize the integer n while still acceptably avoiding collisions.
How do I calculate the probability of a collision given n and an input set of size x?
There is no length that guarantees that there won't be any collisions. Even the full 20-byte SHA-1 does not guarantee that there are no collisions: it is computationally expensive to craft a collision, but it has been done. Even a 64-byte SHA-512 value does not give a mathematical guarantee that there are no collisions, but the best known ways to find one require more energy than is available in the solar system.
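As for calculating the probability itself, the usual tool is the birthday bound. A minimal sketch in Python, assuming the prefix is kept as n hexadecimal characters (so there are 16^n possible values); the function name and the example numbers are illustrative only:
import math

def collision_probability(n_hex_chars, x_items):
    """Approximate probability that at least two of x_items inputs
    share the same n_hex_chars-character hash prefix (birthday bound)."""
    buckets = 16 ** n_hex_chars   # number of distinct n-character hex prefixes
    # Standard approximation: 1 - exp(-x(x-1) / (2 * buckets))
    return 1.0 - math.exp(-x_items * (x_items - 1) / (2.0 * buckets))

# Example: a 12-hex-character prefix (48 bits) and one million inputs
print(collision_probability(12, 10**6))   # roughly 0.0018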
If you want a practical guarantee that there are no collisions (even in the face of hostile input), you can use a cryptographic hash that has not been broken, such as SHA-256.
But if this is for indexing rather than security, hashes are usually not a practical way to ensure the absence of collisions. Use a non-cryptographic hash instead. Non-cryptographic hashes make it easy to craft collisions, but they are faster to compute. If there is a collision, use a secondary hash, a binary search in a sorted data structure or a linear search to resolve the ambiguity. This is how data structures such as hash tables work.
There is one case where you can ensure that there are no collisions: when you're working with a fixed data set. In that case, you can calculate a perfect hash function from the data.
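As a rough illustration of that last point: for a small fixed key set you can brute-force a salt so that a salted hash maps every key to a distinct slot. This sketch (Python, with hypothetical names) is not how production tools such as gperf build perfect hashes, but it shows the idea:
import hashlib

def find_perfect_hash(keys):
    """Brute-force a salt so that every key lands in a distinct slot of a
    table exactly as large as the key set (a minimal perfect hash)."""
    size = len(keys)
    for salt in range(1_000_000):
        slots = {int(hashlib.sha1(f"{salt}:{k}".encode()).hexdigest(), 16) % size
                 for k in keys}
        if len(slots) == size:   # no two keys share a slot
            return salt
    raise ValueError("no salt found; enlarge the search or the table")

keys = ["apple", "banana", "cherry", "date"]
salt = find_perfect_hash(keys)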
Alternatively, hashing may be the wrong tool for the job. Maybe you should keep a central database of indexes instead.

Comparing hashes to test for collisions

I wish to compare hashes to check for collisions (yes, I know it is time-consuming, but never mind that). In checking for collisions, hashes need to be compared. Is the best method to have a single hash in a variable to compare against, or to have a list of all hashes previously generated and compare the latest hash to each item in the list?
I would prefer the first option because it is much faster, but is there a recommended method? Are you less likely to find a collision by using the first method?
Is the best method to have a single hash in a variable to compare against, or to have a list of all hashes previously generated and compare the latest hash to each item in the list?
Neither.
I would prefer the first option because it is much faster, but is there a recommended method?
I don't understand why you think the first method might work, but then you haven't fully explained your situation. Still, if you want to detect hash values that repeat, you do indeed need to keep track of already-seen hash values. To do that you don't want to search linearly through a list; you should use a set container to store seen hashes: a hash table, as suggested in a comment by gnasher729 a few hours back, would give O(1) performance (e.g. in C++, if your hashes are 64-bit, std::unordered_set<uint64_t>), or a balanced binary tree would give O(log N) performance (e.g. C++ std::set<uint64_t>).
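A sketch of that approach in Python, whose built-in set plays the role of std::unordered_set; stream_of_hashes is a stand-in for however you produce the values:
def find_repeated_hashes(stream_of_hashes):
    """Report every hash value that has been seen before.
    Each membership test against the set is expected O(1)."""
    seen = set()
    repeats = []
    for h in stream_of_hashes:
        if h in seen:
            repeats.append(h)   # a collision (or a repeated input)
        else:
            seen.add(h)
    return repeats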
Are you less likely to find a collision by using the first method?
You're very likely to miss collisions.
All that said, you may want to reexamine your premise. The chance of a good (cryptographic-quality) hash function producing collisions closely approaches the odds described by the "birthday paradox". As a rule of thumb, if you have 2^N distinct values to hash, you're statistically unlikely to see collisions if your hashes are comfortably more than 2*N bits wide: if you allow enough "comfort", you're more likely to be hit on the noggin by a meteor than to have your program see a collision. You mentioned MD5, so I'd expect 128 bits: unless you're storing on the order of a quadrillion values or more (literally), it's pretty safe to ignore the potential for collisions.
Do note one important use of hash values where collisions happen more often for a different reason, and that's in hash tables, where even non-colliding hash values may collide at the same bucket index after they're "wrapped" - often a la h % N when N is the number of buckets. In general, it's impractical to ignore the potential for collisions in a hash table, and very unwise to try.
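A tiny illustration of that wrapping in Python (the specific numbers are arbitrary, chosen only so the two hashes differ but agree modulo the bucket count):
num_buckets = 1024
h1 = 0x1234                      # two distinct hash values...
h2 = 0x1234 + 7 * num_buckets
assert h1 != h2
assert h1 % num_buckets == h2 % num_buckets   # ...that land in the same bucket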

Is there a solution to creating a perfect hash table for non-finite inputs?

So hash tables are really cool for constant-time lookups of data in sets, but as I understand it they are limited by possible hashing collisions, which add small amounts of extra time complexity.
It seems to me like any hashing function that supports a non-finite range of inputs is really a heuristic for reducing collisions. Are there any absolute limitations to creating a perfect hash table for any range of inputs, or is it just something that no one has figured out yet?
I think this depends on what you mean by "any range of inputs."
If your goal is to create a hash function that can take in anything and never produce a collision, then there's no way to do what you're asking. This is a consequence of the pigeonhole principle - if you have n objects that can be hashed, you need at least n distinct outputs for your hash function or you're forced to get at least one hash collision. If there are infinitely many possible input objects, then no finite hash table could be built that will always avoid collisions.
On the other hand, if your goal is to build a hash table where lookups are worst-case O(1) (that is, you only have to look at a fixed number of locations to find any element), then there are many different options available. You could use a dynamic perfect hash table or a cuckoo hash table, which supports worst-case O(1) lookups and expected O(1) insertions and deletions. These hash tables work by using a variety of different hash functions rather than any one fixed hash function, which helps circumvent the above restriction.
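Here is a toy sketch of the cuckoo-hashing idea in Python, assuming keys are hashable and never None; a real table would also grow and rehash when an insertion cycles, rather than raising:
import random

class CuckooSet:
    """Toy cuckoo hash set: every lookup probes exactly two slots."""

    def __init__(self, capacity=64):
        self.size = capacity
        self.t1 = [None] * capacity
        self.t2 = [None] * capacity
        # Hypothetical seeds; a real table would use stronger hashing.
        self.seed1, self.seed2 = random.random(), random.random()

    def _h1(self, key):
        return hash((self.seed1, key)) % self.size

    def _h2(self, key):
        return hash((self.seed2, key)) % self.size

    def __contains__(self, key):
        # Worst-case O(1): only two slots can ever hold this key.
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key

    def add(self, key, max_kicks=32):
        if key in self:
            return
        for _ in range(max_kicks):
            i = self._h1(key)
            key, self.t1[i] = self.t1[i], key   # place key, evict old occupant
            if key is None:
                return
            j = self._h2(key)
            key, self.t2[j] = self.t2[j], key   # evicted key tries its other slot
            if key is None:
                return
        raise RuntimeError("insertion cycled; a real table would rehash or grow")

s = CuckooSet()
for word in ["alpha", "beta", "gamma"]:
    s.add(word)
print("beta" in s, "delta" in s)   # True False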
Hope this helps!

Why Does a Bloom Filter Need Multiple Hash Functions?

I don't really understand why a bloom filter requires multiple hash functions (say, SHA and MD5).
Why not just make a bigger SHA hash, for example, and then break it up into multiple parts and treat them as separate hashes? Isn't that more efficient in terms of speed?
The idea is to use several different but simple hash functions. If you're going to use a cryptographic hash function like SHA or MD5, then you could just vary the input to it. Whether it's more efficient depends on how complex your hash functions are.
This technique is known as double (or triple) hashing: two base hash values h1 and h2 are combined as h1 + i*h2 to simulate several independent hash functions. Setting several bit positions per element is what drives the false-positive rate down; the improvement is not simply proportional to the number of functions, and the optimal count depends on the ratio of filter bits to stored elements.
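A sketch of exactly that "one big digest, split into pieces" idea, combined with double hashing (Python; the filter size, the count of five index functions, and the choice of SHA-256 are arbitrary assumptions here):
import hashlib

class BloomFilter:
    """Toy Bloom filter: k index functions derived from one SHA-256 digest
    via double hashing, index_i = (h1 + i * h2) mod m."""

    def __init__(self, num_bits=8192, num_hashes=5):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _indexes(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1   # force odd step
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

bf = BloomFilter()
bf.add("hello")
print("hello" in bf, "goodbye" in bf)   # True, and almost certainly False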

Explanation about hashing and its use for data compression

I am working with an application that uses hashing, but I still cannot figure out how it works. Here is my problem: hashing is used to generate indexes, those indexes are used to access different tables, and the values retrieved from every table are then added together to produce the final value. This is done to reduce the memory requirements. The input to the hashing function is the XOR of a random constant and some parameters from the application.
Is this a typical hashing application? The thing I do not understand is how using hashing can reduce the memory requirements. Can anyone clarify this?
Thank you
Hashing alone doesn't have anything to do with memory.
What it is often used for is a hashtable. Hashtables work by computing the hash of what you are keying off of, which is then used as an index into a data structure.
Hashing allows you to reduce the key (string, etc.) into a more compact value like an integer or set of bits.
That might be the memory savings you're referring to--reducing a large key to a simple integer.
Note, though, that hashes are not unique! A good hashing algorithm minimizes collisions, but hashes are not intended to reduce to a unique value--doing so isn't possible (e.g., if your hash outputs a 32-bit integer, it can only take 2^32 distinct values).
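A minimal sketch of how that key-to-index reduction is used by a hash table (Python; collisions within a bucket are resolved here by chaining, one of several possible strategies):
class ChainedHashTable:
    """Toy hash table: the key's hash is wrapped to a bucket index,
    and keys that share a bucket live together in a small list."""

    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)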
Is it a bloom filter you are talking about? This uses hash functions to get a space efficient way to test membership of a set. If so then see the link for an explanation.
Most good hash implementations are memory-inefficient; making them memory-efficient would require more computation, and that would be exactly missing the point of hashing.
Hash implementations are used for processing efficiency, as they'll provide you with constant running time for operations like insertion, removal and retrieval.
You can think about the useful quality of hashing this way: all your data, no matter what type or size, is always represented in a single fixed-length form.
This could be explained if the hashing being done isn't to build a true hash table, but is to just create an index in a string/memory block table. If you had the same string (or memory sequence) 20 times in your data, and you then replaced all 20 instances of that string with just its hash/table index, you could achieve data compression in that way. If there's an actual collision chain contained in that table for each hash value, however, then what I just described is not what's going on; in that case, the reason for the hashing would most likely be to speed up execution (by providing quick access to stored values), rather than compression.
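A sketch of that dictionary/index idea in Python (the hash only locates each string quickly, via the dict; the actual compression comes from storing each distinct string once and repeating only small indexes):
def compress(strings):
    """Replace repeated strings with indexes into a table of unique strings."""
    table, index_of, encoded = [], {}, []
    for s in strings:
        if s not in index_of:          # dict lookup is itself hash-based
            index_of[s] = len(table)
            table.append(s)
        encoded.append(index_of[s])
    return table, encoded

def decompress(table, encoded):
    return [table[i] for i in encoded]

data = ["foo", "bar", "foo", "foo", "bar"]
table, encoded = compress(data)   # table = ['foo', 'bar'], encoded = [0, 1, 0, 0, 1]
assert decompress(table, encoded) == data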