Anyone understand the quality of hash? - perl

The "quality" of a hash is defined as the total number of comparisons needed to access every element once, relative to the expected number needed for a random hash. The value can go over 100%.
The total number of comparisons is equal to the sum of the squares of the number of entries in each bucket. For a random hash of <n> keys into <k> buckets, the expected value is:
n + n(n-1)/(2k)
What exactly is the quality of hash??

It is a measure for how "evenly distributed" the hash is. Ideally, the hash function would place everything into its own bucket, but that does not happen because you cannot have that many buckets (and even then there are hash collisions, so that distinct values still end up in the same bucket).
The performance of the hash (ideally just going straight to a bucket and looking at the single element in there) degrades when you have buckets with many elements in them: if that happens, you have to go through all of them linearly.
A quality of 100% is what you would expect for a hash filled with random data. In that case, all buckets should be equally full. If you have more than 100%, your data is unevenly hashed, and lookups take more time.
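To make that concrete, here is a minimal sketch (hash_quality and the bucket counts are made up for illustration) that computes the quality figure from per-bucket entry counts using the formula above - the sum of squared bucket occupancies divided by n + n(n-1)/(2k):

#include <cstddef>
#include <iostream>
#include <vector>

// "Quality": total comparisons (sum of squared bucket occupancies) relative to
// the expectation for a random hash of n keys into k buckets: n + n(n-1)/(2k).
double hash_quality(const std::vector<std::size_t>& bucket_counts)
{
    double n = 0, total_comparisons = 0;
    for (std::size_t c : bucket_counts) {
        n += c;
        total_comparisons += static_cast<double>(c) * c;
    }
    double k = static_cast<double>(bucket_counts.size());
    return total_comparisons / (n + n * (n - 1) / (2 * k));   // 1.0 == "100%"
}

int main()
{
    std::cout << hash_quality({2, 1, 0, 1, 2, 1, 0, 1}) << "\n";  // ~1.04: evenly spread
    std::cout << hash_quality({8, 0, 0, 0, 0, 0, 0, 0}) << "\n";  // ~5.57: everything in one bucket
}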

Related

Don't you get a random number after doing modulo on a hashed number?

I'm trying to understand hash tables, and from what I've seen the modulo operator is used to select which bucket a key will be placed in. I know that hash algorithms are supposed to minimize the same result for different inputs, however I don't understand how the same results for different inputs can be minimal after the modulo operation. Let's just say we have a near-perfect hash function that gives a different hashed value between 0 and 100,000, and then we take the result modulo 20 (in our example we have 20 buckets), isn't the resulting number very close to a random number between 0 and 19? Meaning roughly the probability that the final result is any of a number between 0 and 19 is about 1 in 20? If this is the case, then the original hash function doesn't seem to ensure minimal collisions because after the modulo operation we end up with something like a random number? I must be wrong, but I'm thinking that what ensures minimal collisions the most is not the original hash function but how many buckets we have.
I'm sure I'm misunderstanding this. Can someone explain?
Don't you get a random number after doing modulo on a hashed number?
It depends on the hash function.
Say you have an identity hash for numbers - h(n) = n - then if the keys being hashed are generally incrementing numbers (perhaps with an occasional omission), then after hashing they'll still generally hit successive buckets (wrapping at some point from the last bucket back to the first), with low collision rates overall. Not very random, but it works out well enough. If the keys are random, it still works out pretty well - see the discussion of random-but-repeatable hashing below. The problem is when the keys are neither roughly-incrementing nor close-to-random - then an identity hash can provide terrible collision rates. (You might think "this is a crazy bad example hash function, nobody would do this"; actually, most C++ Standard Library implementations' hash functions for integers are identity hashes.)
On the other hand, if you have a hash function that, say, takes the address of the object being hashed, and they're all 8-byte aligned, then if you take the mod and the bucket count is also a multiple of 8, you'll only ever hash to every 8th bucket, getting 8 times more collisions than you might expect. Not very random, and doesn't work out well. But, if the number of buckets is a prime, then the addresses will tend to scatter much more randomly over the buckets, and things will work out much better. This is the reason the GNU C++ Standard Library tends to use prime numbers of buckets (Visual C++ uses power-of-two sized buckets so it can utilise a bitwise AND for mapping hash values to buckets, as AND takes one CPU cycle and MOD can take e.g. 30-40 cycles - depending on your exact CPU - see here).
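For a feel of how much this matters, here is a minimal sketch using simulated 8-byte-aligned addresses fed through an identity hash, with 61 and 64 as stand-ins for a prime and a power-of-two bucket count:

#include <cstdint>
#include <cstdio>
#include <set>

int main()
{
    std::set<std::uint64_t> used_prime, used_pow2;
    for (std::uint64_t i = 0; i < 1000; ++i) {
        std::uint64_t address = 0x1000 + 8 * i;   // simulated 8-byte-aligned object address
        used_prime.insert(address % 61);          // prime bucket count: every bucket reachable
        used_pow2.insert(address & (64 - 1));     // power-of-two bucket count: only every 8th bucket
    }
    std::printf("prime (61) buckets hit: %zu\n", used_prime.size());        // 61
    std::printf("power-of-two (64) buckets hit: %zu\n", used_pow2.size());  // 8
}

The AND-masked version only ever reaches 8 of the 64 buckets, while the prime modulus reaches all 61.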
When all the inputs are known at compile time, and there aren't too many of them, then it's generally possible to create a perfect hash function (the GNU gperf software is designed specifically for this), which works out the number of buckets you'll need and a hash function that avoids any collisions, but that hash function may take longer to run than a general-purpose one.
People often have a fanciful notion - also seen in the question - that a "perfect hash function", or at least one that has very few collisions, into some large numerical hashed-to range will provide minimal collisions in actual usage in a hash table; indeed, this stackoverflow question is about coming to grips with the falsehood of that notion. It's just not true if there are still patterns and probabilities in the way the keys map into that large hashed-to range.
The gold standard for a general purpose high-quality hash function for runtime inputs is to have a quality that you might call "random but repeatable", even before the modulo operation, as that quality will apply to the bucket selection as well (even using the dumber and less forgiving AND bit-masking approach to bucket selection).
As you've noticed, this does mean you'll see collisions in the table. If you can exploit patterns in the keys to get fewer collisions than this random-but-repeatable quality would give you, then by all means make the most of that. If not, the beauty of hashing is that with random-but-repeatable hashing your collisions are statistically related to your load factor (the number of stored elements divided by the number of buckets).
As an example, for separate chaining - when your load factor is 1.0, 1/e (~36.8%) of buckets will tend to be empty, another 1/e (~36.8%) have one element, 1/(2e) or ~18.4% two elements, 1/(3!e) about 6.1% three elements, 1/(4!e) or ~1.5% four elements, 1/(5!e) ~0.3% five, etc. - the average chain length for non-empty buckets is ~1.58 no matter how many elements are in the table (i.e. whether there are 100 elements and 100 buckets, or 100 million elements and 100 million buckets), which is why we say lookup/insert/erase are O(1) constant time operations.
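If you want to check those figures yourself, they fall out of the Poisson distribution with mean 1.0 (the load factor); a minimal sketch:

#include <cmath>
#include <cstdio>

int main()
{
    // Separate chaining at load factor 1.0: P(a bucket holds k elements) ~= e^-1 / k!
    double factorial = 1.0;
    for (int k = 0; k <= 5; ++k) {
        if (k > 0) factorial *= k;
        std::printf("%d element(s): %5.1f%%\n", k, 100.0 * std::exp(-1.0) / factorial);
    }
    // Average chain length over non-empty buckets: 1 / (1 - e^-1) ~= 1.58
    std::printf("average non-empty chain: %.2f\n", 1.0 / (1.0 - std::exp(-1.0)));
}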
I know that hash algorithms are supposed to minimize the same result for different inputs, however I don't understand how the same results for different inputs can be minimal after the modulo operation.
This is still true post-modulo. Minimising the same result means each post-modulo value has (about) the same number of keys mapping to it. We're particularly concerned about in-use keys stored in the table, if there's a non-uniform statistical distribution to the use of keys. With a hash function that exhibits the random-but-repeatable quality, there will be random variation in post-modulo mapping, but overall they'll be close enough to evenly balanced for most practical purposes.
Just to recap, let me address this directly:
Let's just say we have a near-perfect hash function that gives a different hashed value between 0 and 100,000, and then we take the result modulo 20 (in our example we have 20 buckets), isn't the resulting number very close to a random number between 0 and 19? Meaning roughly the probability that the final result is any of a number between 0 and 19 is about 1 in 20? If this is the case, then the original hash function doesn't seem to ensure minimal collisions because after the modulo operation we end up with something like a random number? I must be wrong, but I'm thinking that what ensures minimal collisions the most is not the original hash function but how many buckets we have.
So:
random is good: if you get something like the random-but-repeatable hash quality, then your average hash collisions will statistically be capped at low levels, and in practice you're unlikely to ever see a particularly horrible collision chain, provided you keep the load factor reasonable (e.g. <= 1.0)
that said, your "near-perfect hash function...between 0 and 100,000" may or may not be high quality, depending on whether the distribution of values has patterns in it that would produce collisions. When in doubt about such patterns, use a hash function with the random-but-repeatable quality.
What would happen if you took a random number instead of using a hash function? Then doing the modulo on it? If you call rand() twice you can get the same number - a proper hash function doesn't do that I guess, or does it? Even hash functions can output the same value for different input.
This comment shows you grappling with the desirability of randomness - hopefully with earlier parts of my answer you're now clear on this, but anyway the point is that randomness is good, but it has to be repeatable: the same key has to produce the same pre-modulo hash so the post-modulo value tells you the bucket it should be in.
As an example of random-but-repeatable, imagine you used rand() to populate a uint32_t a[256][8] array; you could then hash any 8-byte key (including e.g. a double) by using each of its bytes to select a random number and XORing them together:
// assumes uint32_t a[256][8] filled with random numbers, plus <cstring>/<cstdint>
auto h(double d) {
    uint8_t i[8];
    memcpy(i, &d, 8);   // reinterpret the double's 8 bytes as table indices
    return a[i[0]][0] ^ a[i[1]][1] ^ a[i[2]][2] ^ a[i[3]][3] ^
           a[i[4]][4] ^ a[i[5]][5] ^ a[i[6]][6] ^ a[i[7]][7];
}
This would produce a near-ideal (rand() isn't a great quality pseudo-random number generator) random-but-repeatable hash, but having a hash function that needs to consult largish chunks of memory can easily be slowed down by cache misses.
Following on from what [Mureinik] said, assuming you have a perfect hash function, say your array/buckets are 75% full, then doing modulo on the hashed function will probably result in a 75% collision probability. If that's true, I thought they were much better. Though I'm only learning about how they work now.
The 75%/75% thing is correct for a high quality hash function, assuming:
closed hashing / open addressing, where collisions are handled by finding an alternative bucket, or
separate chaining, when 75% of buckets have one or more elements linked from them (which very likely means the load factor - the number many people think of when you talk about how "full" the table is - is already significantly more than 75%)
Regarding "I thought they were much better." - that's actually quite ok, as evidenced by the percentages of colliding chain lengths mentioned earlier in my answer.
I think you have the right understanding of the situation.
Both the hash function and the number of buckets affect the chance of collisions. Consider, for example, the worst possible hash function - one that returns a constant value. No matter how many buckets you have, all the entries will be lumped into the same bucket, and you'd have a 100% chance of collision.
On the other hand, if you have a (near) perfect hash function, the number of buckets would be the main factor for the chance of collision. If your hash table has only 20 buckets, the minimal chance of collision will indeed be 1 in 20 (over time). If the hash values weren't uniformly spread, you'd have a much higher chance of collision in at least one of the buckets. The more buckets you have, the lower the chance of collision. On the other hand, having too many buckets will take up more memory (even if they are empty), and ultimately reduce performance, even if there are fewer collisions.
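As a throwaway experiment along those lines (the constant and modulo hash functions below are just illustrative stand-ins), compare the largest bucket you end up with in each case:

#include <algorithm>
#include <array>
#include <cstdio>

int main()
{
    const int num_buckets = 20;
    std::array<int, num_buckets> constant_hash{}, modulo_hash{};
    for (int key = 0; key < 1000; ++key) {
        ++constant_hash[42 % num_buckets];   // worst case: every key lands in the same bucket
        ++modulo_hash[key % num_buckets];    // near-perfect for sequential keys: 50 per bucket
    }
    std::printf("constant hash, largest bucket: %d\n",
                *std::max_element(constant_hash.begin(), constant_hash.end()));   // 1000
    std::printf("modulo hash, largest bucket:   %d\n",
                *std::max_element(modulo_hash.begin(), modulo_hash.end()));       // 50
}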

The hash table probability

I'm still confused about how to work out hash table probabilities. I have a hash table of size 20 with open addressing that uses the hash function
hash(int x) = x % 20
How many elements need to be inserted in the hash table so that the probability of the next element hitting a collision exceeds 50%?
I used the birthday paradox to find it (https://en.wikipedia.org/wiki/Birthday_problem) and seem to get an incorrect answer. Where is my mistake?
calculating
1/2 = 1 - e^(-n^2/(2*20))
ln(1/2) = -n^2/40
-0.69314718 = -n^2/40
n = sqrt(27.725887) = 5.265538
How many elements need to be inserted in the hash table so that the probability of the next element hitting a collision exceeds 50%?
Well, it depends on a few things.
The simple case is that you've already performed 11 inserts with distinct and effectively random integer keys, such that 11 of the buckets are in use, and your next insertion uses another distinct and effectively random key so it will hash to any bucket with equal probability: clearly there's only a 9/20 chance of that bucket being unused which means your chance of a collision during that 12th insertion exceeds 50% for the first time. This is the answer most formulas, textbooks, people etc. will give you, as it's the most meaningful for situations where hash tables are used with strong hash functions and/or prime numbers of buckets etc. - the scenarios where hash tables shine and are particularly elegant.
Another not-uncommon scenario is that you're putting say customer ids for a business into the hash table, and you're assigning the customers incrementing id numbers starting at 1. Even if you've already inserted customers with ids 1 to 19, you know they're in buckets [1] to [19] with no collisions - your hash just passes the keys through without the mod kicking in. You can then insert customer 20 into bucket [0] (after the mod operation) without a collision. Then, the 21st customer has 100% chance of a collision. (But, if your data's like this, please use an array and index directly using the customer id, or customer_id - 1 if you don't want to waste bucket [0].)
There are many other possible patterns in the keys that can affect when you exceed a 50% probability of a collision: e.g. all the keys being odd or multiples of some value, or being say ages or heights with a particular distribution curve.
The mistake with your use of the Birthday Paradox is thinking it answers your question. When you put "1/2" and "20" into the formula, it's telling you the point at which your cumulative probability of having had a collision reaches 1/2, but your question is about the probability of the next element hitting a collision exceeding 50% (emphasis on "next").
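To put numbers on that distinction, a minimal sketch (assuming 20 buckets and distinct, effectively random keys) printing both figures side by side:

#include <cstdio>

int main()
{
    const int buckets = 20;
    double p_no_collision_so_far = 1.0;
    for (int inserted = 0; inserted <= 12; ++inserted) {
        // If the first `inserted` keys all landed in distinct buckets,
        // the next insert collides with probability inserted / buckets.
        double p_next_collides = static_cast<double>(inserted) / buckets;
        // Cumulative (birthday-style) probability of at least one collision
        // among the first `inserted` inserts.
        double p_any_so_far = 1.0 - p_no_collision_so_far;
        std::printf("after %2d inserts: next collides %.0f%%, any so far %.1f%%\n",
                    inserted, 100 * p_next_collides, 100 * p_any_so_far);
        p_no_collision_so_far *= static_cast<double>(buckets - inserted) / buckets;
    }
}

The cumulative (birthday-style) figure crosses 50% after about 6 inserts, while the "next insert collides" figure only crosses 50% on the 12th insert - and the latter is what the question actually asks.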

Hash tables - complexity of insert, search, and delete

I've been given two homework problems on the complexity of hash tables, but I'm struggling to understand the difference between them.
They are as follows:
Consider a hash function which is to take n inputs and map them to a table of size m.
Write the complexity of insert, search, and deletion for the hash function which distributes all n inputs evenly over the buckets of the hash table.
Write the complexity of insert, search, and deletion for the (supposedly perfect but unrealistic) hash function which will never hash two items to the same bucket, i.e. this hash function will never result in a collision.
These two questions seem quite similar to me and I'm not really sure of their differences.
For question one, since the n inputs are distributed evenly we can assume there will be zero or one items in each bucket, so all of insert, search and delete will be O(1). Is this correct?
How then does question two differ in any way? If the function never results in a collision then all the items will be spread evenly so wouldn't this result in O(1) for each operation?
Is my thinking correct for these problems or am I missing something?
EDIT:
I believe I've identified where I've gone wrong. O(1) is correct for every operation in question 2, because that hash function is ideal and never results in a collision.
However, for question 1, the items being spread evenly DOES NOT MEAN there is only 1 item in each bucket - every bucket could have 20 items in a linked list, for example. So insertion would still be O(1).
But what about search? It would be O(1) + cost of searching the linked list. But we don't know the size, only know it's spread evenly. Can we get an expression for the length in terms of n (number of inputs) and m (size of table)?
Your edit is on the right track.
Can we get an expression for the length in terms of n (number of inputs) and m (size of table)?
For 1, if the hash table sizing is inhibited in some way, such that the load factor (i.e. the number of items per bucket) n/m is greater than 1 and neither constant nor within constant bounds, then you can postulate a relationship m = f(n); the load factor is then n/f(n), and so is the complexity of each operation: O(n/f(n)). For example, if the table were limited to m = sqrt(n) buckets, each operation would cost O(n/sqrt(n)) = O(sqrt(n)).
In the second case, the complexity is always O(1).

Making a hash table

So it's time for me to index my database file format, and after looking at various methods I decided that a hash table would be my best option. Since I've only familiarized myself with the inner workings of a hash table today though, here's my understanding of it, so please correct me if I'm wrong:
A hash table has a constant size that is equivalent of the maximum value storable in its hash function output size * key value pair size * bucket size + overflow bucket size. So for example, if the hash function makes 16 bit hashes and the bucket size is 4 and the values are 32bit then it would be 2^16 * 4 * 6 = 1572864 or 1.5MB plus overflow.
That in essence would make the hash table a sort of compressed lookup table. If the hash function changes, the whole table has to be reevaluated. Otherwise it just adds stuff to empty slots. Also the hash table can contain the maximum of units that its hash size could address (so for a 16bit hash its 65536) but to perform well without many collisions it would have to be much less.
Ok and here's what I'm trying to index: (up to) 100 million pairs with 64-bit integer keys and a 96-bit value. The keys are object IDs (that mostly come in short sequences but can be all over the place) and the values are the object location + length. Reads/writes are equally important and very frequent.
The other options I looked into were various trees, but the reason I didn't like them is that it seems to me I would have to do a lot of sparse reads/writes to look up the data or to restructure the tree each time I go in.
So here are my questions:
It seems to me that I need a hash with a weird number of bits in it, I'm thinking up to ~38 since it would be just about the maximum I can store on a single disk and should be comfy enough for the 100 million. Is the weird bit amount unheard of? I'm thinking I'll probably bottleneck on disk activity way before CPU.
Are there any articles out there on how to design a good hash function for my particular case? Googling gave me an overview of the common methods but I'm looking for explanations behind them.
Any other general tips/pitfalls I should know of?
A hash table has a constant size
...not necessarily - a hash table can support resizing, but that tends to be done in fairly dramatic and invasive chunks where you can reason about the hash table as if it were constant size both before and after.
...that is equivalent of the maximum value storable in its hash function output size * key value pair size * bucket size + overflow bucket size. So for example, if the hash function makes 16 bit hashes and the bucket size is 4 and the values are 32bit then it would be 2^16 * 4 * 6 = 1572864 or 1.5MB plus overflow.
Not at all. A better way to calculate size is to say there are N values of a certain size, and you want to maintain a capacity:size ratio somewhere between, say, 3:1 and 5:4; the table memory usage is then N * sizeof(Value) * ratio.
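For instance, with the 100 million entries mentioned in the question (8-byte keys plus 12-byte values stored inline) and, say, a 5:4 ratio, a rough sketch of that calculation:

#include <cstdio>

int main()
{
    // Assumptions from the question: 100 million entries, 64-bit keys (8 bytes)
    // and 96-bit values (12 bytes) stored inline, with a 5:4 capacity:size ratio.
    // Back-of-the-envelope only: ignores per-bucket overheads, chaining pointers etc.
    const double entries = 100e6;
    const double bytes_per_entry = 8 + 12;
    const double ratio = 5.0 / 4.0;
    std::printf("approx. table size: %.1f GB\n", entries * bytes_per_entry * ratio / 1e9);  // ~2.5 GB
}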
The number of bits in the hash value is only relevant in that it indicates the maximum number of distinct buckets you can hash to: if you try to have a bigger table then you'll get more collisions than you would with a hash function generating wider-bit hash values. If you have more bits from your hash function than you need it is not a problem, you e.g. take the modulus with the current table size to find your bucket: hashed_to_bucket = hash_value % num_buckets.
That in essence would make the hash table a sort of compressed lookup table.
That's a good way to look at a hash table.
If the hash function changes, the whole table has to be reevaluated. Otherwise it just adds stuff to empty slots.
Definitely reevaluated/regenerated: if you kept the old contents, lookups with the new hash function would be looking in the wrong buckets - carrying on "adding stuff to empty slots" is but one of the undesirable consequences.
Also the hash table can contain the maximum of units that its hash size could address (so for a 16bit hash its 65536) but to perform well without many collisions it would have to be much less.
As above, that (e.g. 65536) is not a hard maximum, but "to perform well without collisions" going over that should be avoided. To perform well it does not have to be much less: anything right up to 65536 is perfectly fine if it's a good quality 16-bit hash function.
Ok and here's what I'm trying to index: (up to) 100 million pairs with 64-bit integer keys and a 96-bit value. The keys are object IDs (that mostly come in short sequences but can be all over the place) and the values are the object location + length. Reads/writes are equally important and very frequent.
The other options I looked into were various trees, but the reason I didn't like them is that it seems to me I would have to do a lot of sparse reads/writes to look up the data or to restructure the tree each time I go in.
Could be... a lot depends on your access patterns. For example, if you happen to try to access the keys following the "short sequences" then a data organisation model that tends to put them nearby in memory/disk helps. Some types of tree structures do that nicely, and you can sometimes hack your hash function to do it too (but need to balance that up against collision proneness).
It seems to me that I need a hash with a weird number of bits in it, I'm thinking up to ~38 since it would be just about the maximum I can store on a single disk and should be comfy enough for the 100 million. Is the weird bit amount unheard of? I'm thinking I'll probably bottleneck on disk activity way before CPU.
Not so... you have 64 bit integer keys - a 64 bit or larger hash would be desirable. That said, a 32 bit hash may well be fine too - that generates 4 billion distinct values which is greater than your 100 million keys.
Are there any articles out there on how to design a good hash function for my particular case? Googling gave me an overview of the common methods but I'm looking for explanations behind them.
Not that I'm aware of.
Any other general tips/pitfalls I should know of?
For tips... I'd say start simple (e.g. with the hash function returning the key unchanged and using modulus with a hash table capacity that's a prime number, OR using any common hash if you're picking up a hash table implementation that uses e.g. power-of-2 numbers of buckets) and measure your collision rates: that tells you how much effort it's worth putting into improving your hashing.
One very simple way to get "ideal, randomised" hashing in your case is to have 8 tables of 256 32-bit integers - initialised with hardcoded random numbers (you can google for random number download websites). Given any 64-bit key, just slice it into 8 bytes then use each byte as a key in the successive tables, XORing the 32-bit values you look up. A single bit of difference in any of the 64 input bits will then impact all 32 bits in the hash value with equal probability.
uint32_t table[8][256] = { ...add some random numbers... };
uint32_t h(uint64_t n)
{
uint32_t result = 0;
unsigned char* p = (unsigned char*)&n;
for (int i = 0; i < 8; ++i)
result ^= table[i][*p++];
return result;
}
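If you'd rather generate the "...add some random numbers..." part at startup than paste in downloaded constants, one option is a PRNG with a fixed seed, which keeps the table (and therefore the hash values) identical from run to run - important if the hashes end up in your on-disk index. A sketch (init_table is just a made-up helper name):

#include <cstdint>
#include <random>

std::uint32_t table[8][256];   // defined without an initialiser; filled by init_table() instead

void init_table()
{
    std::mt19937 gen(12345u);              // fixed seed => deterministic, repeatable table
    for (auto& row : table)
        for (std::uint32_t& cell : row)
            cell = static_cast<std::uint32_t>(gen());
}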

Choosing a minimum hash size for a given allowable number of collisions

I am parsing a large amount of network trace data. I want to split the trace into chunks, hash each chunk, and store a sequence of the resulting hashes rather than the original chunks. The purpose of my work is to identify identical chunks of data - I'm hashing the original chunks to reduce the data set size for later analysis. It is acceptable in my work that we trade off the possibility that collisions occasionally occur in order to reduce the hash size (e.g. 40 bit hash with 1% misidentification of identical chunks might beat 60 bit hash with 0.001% misidentification).
My question is, given a) number of chunks to be hashed and b) allowable percentage of misidentification, how can one go about choosing an appropriate hash size?
As an example:
1,000,000 chunks to be hashed, and we're prepared to have 1% misidentification (1% of hashed chunks appear identical when they are not identical in the original data). How do we choose a hash with the minimal number of bits that satisfies this?
I have looked at materials regarding the Birthday Paradox, though this is concerned specifically with the probability of a single collision. I have also looked at materials which discuss choosing a size based on an acceptable probability of a single collision, but have not been able to extrapolate from this how to choose a size based on an acceptable probability of n (or fewer) collisions.
Obviously, the quality of your hash function matters, but some easy probability theory will probably help you here.
The question is what exactly you are willing to accept: is it good enough that the expected number of collisions is only 1% of the data? Or do you demand that the probability of the number of collisions going over some bound be below some threshold? If it's the first, then a back-of-the-envelope style calculation will do:
The expected number of pairs out of your set that hash to the same value is (1,000,000 C 2) * P(any two chunks collide). Let's assume that second number is 1/d, where d is the number of possible hash values. (Note: expectations are linear, so I'm not cheating very much so far.) Now, you say you want 1% collisions, so that is 10,000 total. Well, you have (1,000,000 C 2)/d = 10,000, so d = (1,000,000 C 2)/10,000, which according to Google is about 50,000,000.
So, you need 50 million-ish possible hash values. That is less than 2^26, so you will get your desired performance with somewhere around 26 bits of hash (depending on the quality of the hashing algorithm). I probably have a factor of 2 mistake in there somewhere, so you know, it's rough.
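If you want to redo that envelope calculation for other chunk counts or collision budgets (interpreting the budget as an expected number of colliding pairs, as above), a minimal sketch:

#include <cmath>
#include <cstdio>

int main()
{
    const double n = 1e6;                 // chunks to be hashed
    const double budget = 0.01 * n;       // acceptable expected number of colliding pairs
    const double pairs = n * (n - 1) / 2; // n choose 2
    const double d = pairs / budget;      // required number of distinct hash values
    std::printf("need ~%.0f hash values, i.e. about %.0f bits\n", d, std::ceil(std::log2(d)));
}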
If this is an offline task, you can't be that space-constrained.
Sounds like a fun exercise!
Someone else might have a better answer, but I'd go the brute force route, provided that there's ample time:
Run the hashing calculation using incremental hash size and record the collision percentage for each hash size.
You might want to use binary search to reduce the search space.
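A sketch of what that measurement might look like (collision_rate and the toy data are made up; substitute the full-width hashes you already compute per chunk):

#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Fraction of chunks whose hash, truncated to `bits` bits, is shared with at
// least one other chunk. `full_hashes` holds whatever wide hash you already
// compute for each chunk.
double collision_rate(const std::vector<std::uint64_t>& full_hashes, int bits)
{
    const std::uint64_t mask =
        (bits >= 64) ? ~std::uint64_t(0) : ((std::uint64_t(1) << bits) - 1);
    std::unordered_map<std::uint64_t, std::size_t> counts;
    for (std::uint64_t h : full_hashes)
        ++counts[h & mask];
    std::size_t colliding = 0;
    for (const auto& kv : counts)
        if (kv.second > 1)
            colliding += kv.second;
    return static_cast<double>(colliding) / full_hashes.size();
}

int main()
{
    std::vector<std::uint64_t> hashes = {0x00001234, 0xABCD1234, 0x00009999, 0x7777FFFF};  // toy data
    for (int bits = 16; bits <= 32; bits += 8)
        std::printf("%2d bits -> %.1f%% of chunks share a truncated hash\n",
                    bits, 100 * collision_rate(hashes, bits));
}

You could then wrap this in the binary search mentioned above to home in on the smallest bit count that meets your budget.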