What would be a good hashing function for US phone numbers, which are basically 10-digit numbers? It seems to me that a simplistic
(p1 * (areaCode + p2 * exchangeCode) + extensionCode) % r;
where p1 and p2 are some prime numbers and 'r' is the reduced range, should be fast as well as have good hashing properties.
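For concreteness, a minimal C sketch of the scheme proposed above; the prime choices P1 = 31 and P2 = 37 and the exact field split are assumptions for illustration, not fixed recommendations:

#include <stdint.h>

/* Hash a US phone number split into its three fields.
   P1 and P2 are arbitrary small primes; r is the table size. */
#define P1 31u
#define P2 37u

uint32_t phone_hash(uint32_t area_code,      /* first 3 digits  */
                    uint32_t exchange_code,  /* middle 3 digits */
                    uint32_t extension_code, /* last 4 digits   */
                    uint32_t r)              /* reduced range (table size) */
{
    return (P1 * (area_code + P2 * exchange_code) + extension_code) % r;
}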
Why not just use the last digit and a 10-slot hash table? IMHO this should give a fairly uniform distribution.
For the hash function h(k) = k mod m:
I understand that m = 2^n just gives the n least significant bits of k. I also understand that m = 2^p - 1, when k is a string converted to an integer using radix 2^p, gives the same hash value for every permutation of the characters in k. But why exactly is "a prime not too close to an exact power of 2" a good choice? What if I choose 2^p - 2 or 2^p - 3? Why are these choices considered bad?
Following is the text from CLRS:
"A prime not too close to an exact power of 2 is often a good choice for m. For
example, suppose we wish to allocate a hash table, with collisions resolved by
chaining, to hold roughly n D 2000 character strings, where a character has 8 bits.
We don’t mind examining an average of 3 elements in an unsuccessful search, and
so we allocate a hash table of size m D 701. We could choose m D 701 because
it is a prime near 2000=3 but not near any power of 2."
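As a quick C illustration of the m = 2^n point from the question (the key value here is arbitrary): reduction mod a power of two keeps only the n least significant bits, so the hash ignores the rest of the key.

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t k = 0xDEADBEEF;        /* arbitrary key */
    uint32_t n = 8;
    uint32_t m = 1u << n;           /* m = 2^8 = 256 */
    /* k mod 2^n is just a bit mask of the low n bits. */
    assert(k % m == (k & (m - 1)));
    return 0;
}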
Suppose we work with radix 2^p.
The 2^p - 1 case:
Why is it a bad idea to use m = 2^p - 1? Let us see:
k = ∑ a_i · 2^(ip)
and since 2^p ≡ 1 (mod 2^p - 1), reducing mod 2^p - 1 we just get
k mod (2^p - 1) = (∑ a_i · 2^(ip)) mod (2^p - 1) = (∑ a_i) mod (2^p - 1)
so, as addition is commutative, we can permute the digits and get the same result.
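To make this concrete, here is a small C demonstration assuming p = 8 (bytes as radix-256 digits), so m = 2^8 - 1 = 255; the input strings are arbitrary:

#include <stdio.h>
#include <stdint.h>

/* Interpret a byte string as a radix-256 integer and reduce it mod 255.
   Because 256 ≡ 1 (mod 255), this collapses to the sum of the bytes. */
static uint32_t hash_mod_255(const unsigned char *s, int len)
{
    uint32_t h = 0;
    for (int i = 0; i < len; i++)
        h = (h * 256 + s[i]) % 255;
    return h;
}

int main(void)
{
    /* "abc" and its permutation "cba" collide, as the argument predicts. */
    printf("%u %u\n", hash_mod_255((const unsigned char *)"abc", 3),
                      hash_mod_255((const unsigned char *)"cba", 3));
    return 0;
}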
The 2^p - b case:
Quote from CLRS:
A prime not too close to an exact power of 2 is often a good choice for m.
Since 2^p ≡ b (mod 2^p - b), we get
k mod (2^p - b) = (∑ a_i · 2^(ip)) mod (2^p - b) = (∑ a_i · b^i) mod (2^p - b)
So changing the least significant digit by one changes the hash by one, and changing the second least significant digit by one changes the hash by b. To really change the hash we would need to change digits of greater significance. So, for small b we face a problem similar to the case where m is a power of 2, namely that we depend on the distribution of the least significant digits.
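Continuing the p = 8 example, a small C check with b = 2 (so m = 2^8 - 2 = 254); the byte values are invented for illustration. Changing the least significant digit by one moves the hash by exactly one:

#include <stdio.h>
#include <stdint.h>

/* Reduce a radix-256 byte string mod m. With m = 256 - b, the digits are
   effectively weighted by powers of b, so low digits barely move the hash. */
static uint32_t hash_mod(const unsigned char *s, int len, uint32_t m)
{
    uint32_t h = 0;
    for (int i = 0; i < len; i++)
        h = (h * 256 + s[i]) % m;
    return h;
}

int main(void)
{
    unsigned char x[] = { 10, 20, 30 };
    unsigned char y[] = { 10, 20, 31 };  /* least significant digit + 1 */
    uint32_t m = 256 - 2;                /* b = 2 */
    printf("%u %u\n", hash_mod(x, 3, m), hash_mod(y, 3, m));  /* 110 111 */
    return 0;
}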
I want to determine the space complexity of the go-to example of a simple streaming algorithm.
If you get a permutation of n-1 different numbers and have to detect the one missing number, you calculate the sum of all numbers from 1 to n using the formula n(n + 1) / 2 and then subtract each incoming number. The result is your missing number. I found a German Wikipedia article stating that the space complexity of this algorithm is O(log n). (https://de.wikipedia.org/wiki/Datenstromalgorithmus)
What I do not understand is: the number of bits needed to store a number n is log2(n), OK. But I do have to calculate the sum, though. So n(n + 1)/2 is larger than n and therefore needs more space than just log(n), right?
Can someone help me with this? Thanks in advance!
If integer A requires Na bits in binary and integer B requires Nb bits, then A*B requires no more than Na + Nb bits (not Na * Nb). So the expression n(n+1)/2 requires no more than log2(n) + log2(n+1) ≈ 2·log2(n) = O(log n) bits.
Even more, you may raise n to any fixed power i and it will still use O(log2(n)) space. n itself, n^10, n^500, n^10000000 all require O(log(n)) bits of storage.
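For what it's worth, a minimal C sketch of the algorithm under discussion; the stream contents are invented for the example. The running total is the algorithm's entire state, and it fits in O(log n) bits:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t n = 10;
    uint64_t total = n * (n + 1) / 2;   /* needs about 2*log2(n) bits */
    /* A permutation of 1..10 with 7 missing. */
    uint64_t stream[] = { 3, 1, 9, 4, 10, 2, 8, 5, 6 };
    for (int i = 0; i < 9; i++)
        total -= stream[i];             /* subtract each incoming number */
    printf("missing: %llu\n", (unsigned long long)total);  /* prints 7 */
    return 0;
}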
This is my first thread here and I would like to ask a couple of questions about universal hashing of integers.
A universal hashing algorithm is supposed to use this:
h(x) = ((a*x + b) mod p) mod m
a = random number from 1 to p-1
b = random number from 0 to p-1
x = the key
p = a prime number >= m
m = the size of the array
I know the numbers I am going to hash are in the range 1-2969.
But I cannot understand how to use this equation to get as few collisions as possible.
Since a and b are random, I cannot do anything about them.
My question is how I am supposed to pick the prime if I have more than one choice; the primes I can use range from 2 to 4999.
I tried to pick the first available prime that meets the requirements for the function, but sometimes it returns negative numbers. I have searched on Google and Stack Overflow but I could not figure out what I am doing wrong.
I am coding in C. Also, I can use only universal hashing.
Thank you for your time.
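For what it's worth, here is a hedged C sketch of the scheme described in the question. Doing the arithmetic in unsigned 64-bit integers sidesteps the negative results mentioned above (in C, % applied to negative signed operands can yield a negative result). p = 4999 is one valid choice from the stated range, since it is prime and larger than both m and the largest key (2969); the struct and helper names are illustrative assumptions:

#include <stdint.h>
#include <stdlib.h>

#define P 4999u  /* prime, >= m and > max key */

typedef struct { uint64_t a, b, m; } uhash;

/* Draw a and b once per table; rand() has a slight modulo bias,
   which is fine for a sketch. */
static uhash uhash_init(uint64_t m)
{
    uhash h;
    h.a = 1 + (uint64_t)rand() % (P - 1);  /* a in [1, p-1] */
    h.b = (uint64_t)rand() % P;            /* b in [0, p-1] */
    h.m = m;
    return h;
}

static uint64_t uhash_apply(const uhash *h, uint64_t x)
{
    return ((h->a * x + h->b) % P) % h->m;  /* always non-negative */
}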
I am implementing the Cooley-Tukey FFT (radix-2 DIF/DIT) algorithm in MATLAB. For the bit reversal step I want the reverse of a binary number, so can anyone suggest how I can get the reverse of a binary number (like 100111 -> 111001)? Anyone who has worked on an FFT implementation could also help me with the algorithm.
Topic: How to do bit reversal in Matlab?
If you're using double precision floating point ('double') numbers which are integers, you can do this:
dr = bin2dec(fliplr(dec2bin(d,n))); % Bits in dr are in reverse order
where n is the number of bits to be reversed and where 0 <= d < 2^n. You will experience no precision problems at all as long as the integers are no more than 52 bits long.
And
Re: How to do bit reversal in Matlab?
How large will the numbers be that you need to reverse? May I ask what the purpose of it is? Maybe there is a more efficient way to solve the whole problem. If the numbers are large you can just store the bits as a string. To reverse it, just read the string backwards! Or use fliplr().
(There may be better places to ask).
If it were VHDL I'd suggest an alias with 'REVERSE'RANGE.
Taken from the help section:
Y = swapbytes(X) reverses the byte ordering of each element in array X, converting little-endian values to big-endian (and vice versa). The input array must contain all full, noncomplex, numeric elements.
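Note that swapbytes reverses byte order, not bit order, so on its own it does not perform the bit reversal asked about. Outside MATLAB, reversing the lowest n bits of an integer (e.g. the 100111 -> 111001 example from the question) is a short loop; a minimal C sketch:

#include <stdio.h>
#include <stdint.h>

/* Reverse the lowest n bits of x, as used for radix-2 FFT reordering. */
static uint32_t bit_reverse(uint32_t x, int n)
{
    uint32_t r = 0;
    for (int i = 0; i < n; i++) {
        r = (r << 1) | (x & 1);  /* shift the LSB of x into r */
        x >>= 1;
    }
    return r;
}

int main(void)
{
    /* 100111b is 39; reversed over 6 bits it is 111001b, i.e. 57. */
    printf("%u\n", bit_reverse(39, 6));
    return 0;
}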
I have two lists of fractions;
say A = [ 1/212, 5/212, 3/212, ... ]
and B = [ 4/143, 7/143, 2/143, ... ].
If we define A' = a[0] * a[1] * a[2] * ... and B' = b[0] * b[1] * b[2] * ...
I want to calculate a normalised value of A' and B'
i.e. specifically the values of A' / (A'+B') and B' / (A'+B').
My trouble is that A and B are both quite long and each value is small, so calculating the products causes numerical underflow very quickly...
I understand turning the product into a sum through logarithms can help me determine which of A' or B' is greater
ie max( log(a[0])+log(a[1])+..., log(b[0])+log(b[1])+... )
and that using logs I can calculate the value of A' / B', but how do I do A' / (A'+B')?
My best bet to date is to keep the number representations as fractions, ie A = [ [1,212], [5,212], [3,212], ... ] and implement my own arithmetic, but it's getting clumsy and I have a feeling there is a (simple) way with logarithms I'm just missing....
The numerators for A and B don't come from a sequence. They might as well be random for the purpose of this question. If it helps the denominators for all values in A are the same, as are all the denominators for B.
Any ideas most welcome!
( ps. I asked a similar question 24 hours ago regarding the ratio A'/B' but it was actually the wrong question to ask. I'm actually after A'/(A'+B'). Sorry, my mistake. )
I see a few ways here.
First of all you can notice that
A' / (A'+B') = 1 / (1 + B'/A')
and you know how to calculate B'/A' with logarithms.
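As a C sketch of that first approach, using the (truncated) example lists from the question, so the printed value is illustrative only:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a[] = { 1.0/212, 5.0/212, 3.0/212 };
    double b[] = { 4.0/143, 7.0/143, 2.0/143 };
    double log_a = 0.0, log_b = 0.0;
    for (int i = 0; i < 3; i++) {
        log_a += log(a[i]);  /* log A' = sum of log a[i] */
        log_b += log(b[i]);  /* log B' = sum of log b[i] */
    }
    /* B'/A' = exp(log B' - log A'); the difference of logs stays in range
       even when A' and B' individually underflow. */
    double ratio = exp(log_b - log_a);
    printf("A'/(A'+B') = %g\n", 1.0 / (1.0 + ratio));
    return 0;
}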
Another way is to implement your own rational arithmetic, but you don't need to go far with it. Since you know that the denominators are the same across each array, that immediately gives you
numerator(A') = numerator(a[0]) * numerator(a[1]) * ...
denominator(A') = denominator(a[0]) ^ A.length
All you now need to do is to sum A' and B', which is easy, and then multiply A' by 1/(A'+B'), which is also easy. The hardest part here is to reduce the resulting fraction, which is done with a gcd (the Euclidean algorithm, i.e. repeated modulo) and is straightforward.
Alternatively, since you are most likely using some popular scripting language, many of them have classes for rational arithmetic built in; Python and Ruby have them for sure.
My best bet to date is to keep the number representations as fractions and implement my own arithmetic but it's getting clumsy
What language are you using? If you can overload operators, it should be really easy to make up a Fraction class that you can treat as a number pretty much everywhere.
As an example, determining whether a fraction A / B is larger than C / D is basically comparing whether A * D is larger than B * C.
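That comparison as a small C sketch (assuming positive denominators; widening to 64 bits avoids overflow for 32-bit inputs):

#include <stdint.h>

/* a/b > c/d  iff  a*d > b*c, when b and d are positive. */
static int frac_greater(int32_t a, int32_t b, int32_t c, int32_t d)
{
    return (int64_t)a * d > (int64_t)b * c;
}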
Both A and B have the same denominator in each fraction you mention. Is that true for every term in the list? If so, why don't you factor it out when you calculate the product? The denominator will simply be X^n, where X is the common denominator and n is the number of terms in the list.
If you do that, you'll have the opposite problem: overflow in the numerator. You know that the product can't be larger than max(X)^n, where max(X) is the maximum numerator value and n is the number of terms in the list. If you can calculate that, you can see whether your computer will have a problem. You can't put 10 pounds of anything in a 5 pound bag.
Unfortunately, the properties of logarithms limit you to the following simplifications:
log(x*y) = log(x) + log(y)
and
log(x/y) = log(x) - log(y)
So you're stuck with:
log(A' / (A'+B')) = log(A') - log(A'+B')
and the logarithm of the sum A'+B' does not simplify further.
If you're using a language that supports arbitrary-precision numbers (e.g., Java's BigDecimal), it might make your life a little easier. But there's still a good argument for doing some thinking before you compute. Why use brute force when you can be elegant?
Well ... if you know A'/(A'+B'), then B'/(A'+B') should be one minus that. I personally would not use logarithms. I would use the actual fractions. I would also use some sort of BigInt class to represent the numerator and denominator. Which language are you using? Python can be a good fit.