Hashing - M should be a power of two

I have heard that M should be a power of two in the Knuth multiplicative hash, and that a power of two is generally a good choice in any case. Could somebody please tell me in an easy way why this is more efficient?

For context, the general form of the Knuth multiplicative hash is:
h(K) = floor(M * ((A * K mod w) / w))
If w = 2^32 and M = 2^bits, then this simplifies to
h(K) = (A * K) >> (32 - bits)
Which is obviously really nice. The trick is to leave the division by w for later: take A * K mod w (which is automatic, since 32-bit multiplication wraps around modulo 2^32), then extract from the top however many bits we would have gotten out if it were done the normal way (this corresponds to the division by w, the scaling back by M, and the floor - all at once).
But that trick relies on w and M being powers of two. If M is not a power of two, there would be another fixed-point multiplication (instead of just a right shift) to map the intermediate result from [0 .. 2^32-1] into [0 .. M-1], and since M would not divide 2^32, that would also introduce a bias into the distribution.
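To make that concrete, here is a minimal MATLAB sketch of the power-of-two case (hypothetical values; A = 2654435769 is the usual Knuth multiplier, roughly 2^32/φ, and uint64 is used because MATLAB's 32-bit integer arithmetic saturates instead of wrapping modulo 2^32):
A = uint64(2654435769);                    % Knuth's multiplier, ~2^32 / golden ratio
K = uint64(123456789);                     % the key to hash
bits = 10;                                 % table size M = 2^bits
prod32 = bitand(A * K, uint64(2^32 - 1));  % A*K mod w with w = 2^32 (automatic in C)
h = bitshift(prod32, -(32 - bits));        % keep the top bits: h is in [0, M-1]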


Circular convolution of binary vectors (mod 2) using NTT

Let x, y be vectors of length n, with entries either 1 or 0. I want to efficiently compute the circular convolution
(x * y) mod 2
where each component of the result is taken mod 2.
I know how to do it using a Fast Fourier Transform
(multiply the Fourier transforms of x and y, transform back, then take the result mod 2).
However, this uses floating point calculations to solve a discrete problem and for large n (I'm interested in n ~ 10^7) it might lead to rounding errors. I expect there is a better way to do this using the number theoretic transform (NTT) but unfortunately I'm not familiar with number theory or NTT.
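For reference, a minimal MATLAB sketch of that FFT route at a toy size (the round() call is exactly the step whose reliability worries you at n ~ 10^7):
n = 2^10;                          % toy size, not the 10^7 target
x = double(rand(1, n) > 0.5);      % random 0/1 vectors
y = double(rand(1, n) > 0.5);
z = ifft(fft(x) .* fft(y));        % circular convolution via FFT
z = mod(round(real(z)), 2);        % round away floating-point noise, then mod 2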
I looked at this website. Following the procedure there, let's say n = 10^7. I need:
- a modulus M (use 10^7),
- a prime N = kn + 1 for some k (use N = 3 * 10^7 + 1),
- a root ω ≡ g^k mod N, where g is a generator (e.g. ω = 2744).
Then do the transform, etc.
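To illustrate that procedure at a toy size (not the real n = 10^7 case), here is a hypothetical MATLAB example with n = 4, N = 5 = 1*n + 1 and ω = 2, using the naive O(n^2) transform rather than a fast NTT:
n = 4;  N = 5;  w = 2;                     % N = k*n + 1 with k = 1; w has order n mod N
x = [1 0 1 1];  y = [1 1 0 0];             % binary input vectors
F = mod(w .^ ((0:n-1)' * (0:n-1)), N);     % forward transform matrix, F(j,k) = w^(j*k) mod N
Z = mod(mod(F*x', N) .* mod(F*y', N), N);  % pointwise product of the two transforms
wi = 3;  ni = 4;                           % inverses mod 5: 2*3 = 6 ≡ 1 and 4*4 = 16 ≡ 1
Fi = mod(wi .^ ((0:n-1)' * (0:n-1)), N);   % inverse transform matrix
z = mod(ni * mod(Fi * Z, N), N);           % circular convolution of x and y: [2 1 1 2]'
mod(z, 2)'                                 % the mod-2 result: [0 1 1 0]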
Question
This seems promising. However, would I really need 32-bit integers to store each bit during this calculation?
Also, this is not making use of the fact that I only need results modulo 2.
Is there a way to make use of this to simplify the procedure?
Since I don't know the number theory, this is not obvious to me.
I'm not asking for a full solution, only for an argument if my "mod 2" significantly simplifies the implementation (both in terms of difficulty to implement the necessary algorithms as well as computational resources).
Another question: If it's not possible to simplify using "mod 2", do you think it would still pay off to use NTT, as opposed to just throwing a well-known FFT library at the floating point problem?
For the NTT, your procedure looks correct. Yes, you would need 32-bit integers for each bit in your original vector. Unfortunately, there's not a lot you can do to exploit the fact that the end result is mod 2, since you need a root of order 10^7. You may be able to shrink that number by a couple of factors of two (doing the standard DFT for a few base levels of the recursion), but it wouldn't change much, relatively speaking.
Note, for your FFT implementation, I believe you could use integer arithmetic since it's mod 2, but I'm not convinced it would be at all efficient. See this math stackexchange answer for details.

What's a floating-point operation and how to count them (in MATLAB)?

I have an assignment where I basically need to count the number of floating point operations in a simple program, which involves a loop, a matrix, and operations such as *, + and ^.
From my understanding, a floating-point operation is an operation that involves floating-point numbers, and we may be interested in counting these operations because they may be more expensive for the computer. Feel free to add more details on this part.
My problem is that I have no way of knowing exactly which operations involve floating-point numbers, unless I use functions such as isfloat. In that case, is it enough for just one of the operands to be floating point for the operation to count as a floating-point operation? If not, why not? Can you add more details on this?
For example, suppose I have the following simple function:
function [r, n] = naive(c, x)
% c is the vector of coefficients of the polynomial.
% The coefficients should be given as follows:
%   c(1) = coefficient of x^0 (or 1)
%   c(length(c)) = coefficient of the largest power of x
% x is the point to evaluate the polynomial at.
% r is the result of the evaluation.
% (Assumes that the entries are integers.)
r = c(1);
n = 0;
for i = 2:length(c)
    r = r + c(i) * x^(i - 1);
    n = n + 2 + (i - 1);
end
end
which basically calculates a normal polynomial evaluated at x given the coefficients in a vector c.
As you can see from the code, n is actually keeping track of the floating-point operations. But I'm counting every mathematical operation (except the assignments) as a floating-point operation, which of course might not be right - or is it? Either way, why?
Both the coefficients c and the point x might be floating-point numbers. So, instead of counting every operation as a floating-point operation, should we first check with, for example, isfloat whether the numbers are floating point, and only then increment n?
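A minimal sketch of that check inside the loop, assuming you only want to count an operation when at least one operand is a floating-point type:
if isfloat(c) || isfloat(x)    % count only if an operand is floating point
    n = n + 2 + (i - 1);
end
(Note that plain numeric values in MATLAB are double by default, so isfloat returns true unless you explicitly created integer types such as int32.)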
Note, I'm aware of the function flops, which, from what I understand, should count the floating-point operations, but it's deprecated; mostly I would like to understand these concepts better, and therefore try to count them manually.

Find the inverse of a Matrix in MATLAB, is inv(A) or A\eye(size(A)) more precise? [duplicate]

This question already has answers here:
Why is Matlab's inv slow and inaccurate?
(3 answers)
Closed 7 years ago.
The title explains it already. If I need to find an inverse of a matrix, is there any reason I should use A\eye(size(A)) instead of inv(A)?
And before you ask: Yes, I really need the inverse, not only for calculations.
PS:
isequal(inv(A), A\eye(size(A)))
ans =
0
So which one is more precise?
UPDATE: This question was closed as it appeared to be a duplicate of the question "Why is MATLAB's inv slow and inaccurate?". This question differs significantly by addressing neither the speed nor the accuracy of the function inv, but rather the difference between inv and A\eye(size(A)) for calculating the true inverse of a matrix.
Let's disregard performance (speed) and best practice for a bit.
eps(n) is a command that returns the distance from n to the next larger double-precision number. So eps(1) = 2.2204e-16 means that the first representable number after 1 is 1 + 2.2204e-16. Similarly, eps(3000) = 4.5475e-13. Now, let's look at the precision of your calculations:
n = 100;
A = rand(n);
inv_A_1 = inv(A);
inv_A_2 = A \ eye(n);
max(max(abs(inv_A_1-inv_A_2)))
ans =
1.6431e-14
eps(127) = 1.4211e-14
eps(128) = 2.8422e-14
In other words, the largest integer whose floating-point spacing is still finer than the max difference between your two matrices is 127.
Now, let's check the accuracy when we try to recreate the identity matrix from the two inverse matrices.
error_1 = max(max(abs((A\eye(size(A))*A) - eye(size(A)))))
error_1 =
3.1114e-14
error_2 = max(max(abs((inv(A)*A) - eye(size(A)))))
error_2 =
2.3176e-14
The largest integer whose floating-point spacing is still finer than the larger of these two errors is 255.
In summary, inv(A) is more accurate, but once you start using the inverse matrices, they are for all intended purposes identical.
Now, let's have a look at the performance of the two approaches:
n = fix(logspace(1,3,40));
for i = 1:numel(n)
    A = rand(n(i));
    t1(i) = timeit(@() inv(A));
    t2(i) = timeit(@() A \ eye(n(i)));
end
loglog(n, [t1; t2])
It appears that which of the two approaches is fastest is dependent on the matrix size. For instance, using inv is slower for n = 255, but faster for n = 256.
In summary, choose approach based on what's important to you. For most intended purposes, the two approaches are identical.
Note that svd and pinv may be of interest if you're working with badly scaled matrices. If it's really really important you should consider the Symbolic toolbox.
I know you said that you "actually need the inverse", but I can't let this go unsaid: Using inv(A)*b is never the best approach for solving a linear equation! I won't explain further as I think you know this already.
If you need the inverse, you should use inv.
The inverse is calculated via LU decomposition, whereas the backslash operator mldivide calculates the solution to your linear system using different methods depending on the properties of your matrix A (see https://scicomp.stackexchange.com/a/1004), which can yield less accurate results for the inverse.
It should be noted that if you want to solve a linear system, the calculation is likely going to be much faster and more accurate using mldivide(\). The MATLAB documentation of inv is basically one big warning not to use inv to solve linear systems.
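A small illustrative sketch of that point, comparing the two ways of solving A*x = b (made-up data):
A = rand(500);  b = rand(500, 1);
x1 = A \ b;         % solve the system directly (preferred)
x2 = inv(A) * b;    % form the inverse explicitly (discouraged)
norm(A*x1 - b)      % residuals: x1's is typically the smaller one
norm(A*x2 - b)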
Just one way of trying to check this (not sure if it's completely conclusive): multiply your inverse-matrix result back with its original version and check the deviation from the identity matrix:
A = rand(111);
E1 = abs((A \ eye(size(A)) * A) - eye(size(A)));
E2 = abs((inv(A) * A) - eye(size(A)));
mean(E1(:))
mean(E2(:))
inv seems to be more accurate than I would have expected. Maybe somebody can re-evaluate this. ;)

How can I make all-in-one polynomial from multi-polynomial?

I'm not an expert in math, so I don't know where to start.
I found an article like this and am just following its description, but it is not easy for me.
I'm not sure how to make just one polynomial equation (or something like that) from the above 4 polynomial equations. Is this possible?
If yes, would you please help me get such a polynomial (or a similar equation)? If not, would you let me know why?
UPDATE
I'd like to try as following
clear all ;
clc
ab = (H' * H)\H' * y;
y2 = H*ab;
Finally I get some numbers like this.
So, what does this mean?
As you can see from the red curve, something is wrong.
What did I miss?
All the article says is "you can combine multiple data sets into one to get a single polynomial".
You can also go in the other direction: subdivide your data set into pieces and get as many separate ones as you wish. (This is called n-fold validation.)
You start with a collection of n points (x, y). (Keep it simple by having only one independent variable x and one dependent variable y.)
Your first step should be to plot the data, look at it, and think about what kind of relationship between the two would explain it well.
Your next step is to assume some form for the relationship between the two. People like polynomials because they're easy to understand and work with, but other, more complex relationships are possible.
One polynomial might be:
y = c0 + c1*x + c2*x^2 + c3*x^3
This is your general relationship between the dependent variable y and the independent variable x.
You have n points (x, y). Your function can't go through every point. In the example I gave there are only four coefficients. How do you calculate the coefficients for n >> 4?
That's where the matrices come in. You have n equations:
y(1) = c0 + c1*x(1) + c2*x(1)^2 + c3*x(1)^3
....
y(n) = c0 + c1*x(n) + c2*x(n)^2 + c3*x(n)^3
You can write these as a matrix:
y = H * c
where H is the n-by-4 matrix whose i-th row is [1 x(i) x(i)^2 x(i)^3], and c is the unknown vector of coefficients.
Premultiply both sides by H', where the prime denotes transpose:
H' * y = H' * H * c
Do a standard matrix inversion or LU decomposition to solve for the unknown vector of coefficients c. These particular coefficients minimize the sum of squares of differences between the function evaluated at each point x and your actual value y.
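A minimal MATLAB sketch of the whole fit described above, with made-up data:
x = linspace(0, 10, 50)';                                  % n = 50 sample points
y = 2 + 0.5*x - 0.3*x.^2 + 0.1*x.^3 + 0.1*randn(size(x)); % noisy synthetic data
H = [ones(size(x)) x x.^2 x.^3];                           % n-by-4 design matrix
c = (H' * H) \ (H' * y);                                   % solve the normal equations for c
plot(x, y, 'o', x, H*c, '-')                               % data vs. fitted cubic
In practice, c = H \ y is usually preferred over forming H'*H explicitly, since the normal equations square the condition number of the problem.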
Update:
I don't know where this fixation with those polynomials comes from.
Your y vector? Wrong. Your H matrix? Wrong again.
If you must insist on using those polynomials, here's what I'd recommend: You have a range of x values in your plot. Let's say you have 100 x values, equally spaced between 0 and your max value. Those are the values to plug into your H matrix.
Use the polynomials to synthesize sets of y values, one for each polynomial.
Combine all of them into a single large problem and solve for a new set of coefficients, as sketched below. If you want a 3rd-order polynomial, you'll end up with just four coefficients - one polynomial. It'll represent the least-squares best approximation of all the synthesized data you created with your four polynomials.
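A hypothetical sketch of that synthesize-and-refit idea, with made-up coefficients for the four cubics (each row of P is [c0 c1 c2 c3]):
P = [1 2 0 1; 0 1 1 0; 2 0 3 1; 1 1 1 1];  % made-up coefficients, one row per polynomial
x = linspace(0, 5, 100)';                  % common grid of 100 x values
H = [ones(size(x)) x x.^2 x.^3];           % shared design matrix
Y = H * P';                                % 100x4: one synthesized y column per polynomial
c = repmat(H, 4, 1) \ Y(:)                 % one combined least-squares fit over all 400 points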

Matlab determinant function has gone awry

The following is an excerpt from a program of mine:
function P = abc(M, f)
if det(M) ~= 1, disp('Matrix M should have determinant 1')
I allow the option for the user not to enter a value for 'f'.
When I run abc([2 1; 1 1]), the program works fine and it does what it's supposed to. But when I run abc([6 13; 5 11]) I am told "Matrix M should have determinant 1".
What on Earth is going on?
EDIT: In the command window, I entered the following:
M = [6 13; 5 11];
if det(M) ~= 1, disp('Im broken');
end
Matlab then told me itself that it's broken.
Welcome to the wonderfully wacky world of floating-point arithmetic. MATLAB computes the determinant using an LU decomposition, i.e., linear algebra. It does so because computing the determinant by the textbook cofactor expansion is wildly inefficient for arrays of even mild size.
A consequence of that LU decomposition is that the determinant is computed as a floating-point number. This is not an issue, UNLESS you have entered a problem as trivially simple as you have - the determinant of a 2x2 matrix composed only of small integers. In that case, the determinant itself will also be a (reasonably) small integer, so you could resolve the issue by simply computing the determinant of the 2x2 matrix yourself, using the textbook formula:
D = A(1,1)*A(2,2) - A(1,2)*A(2,1);
This will be exactly correct for small integer matrices A, although even this may show some loss of precision for SOME matrices. For example, consider the simple, 2x2 matrix A:
>> A = [1e8 1;1 1e8];
We know that the determinant of this matrix is 1e16-1.
>> det(A)
ans =
1e+16
Of course, MATLAB displays this as 1e16. But in fact, the number generated by the det function in MATLAB is actually 9999999999999998, i.e. 1e16 - 2. Just as bad, had I used the formula I gave above for the 2x2 determinant, it would have returned 10000000000000000, which is still incorrect. Both results are off by 1. You can learn more about these issues by looking at the help for eps.
My point is, there are some 2x2 matrices where computation of the determinant will simply be problematic, even though they are integer matrices.
Once your matrices become non-integer, then things really do become true floating point numbers, not integers. This means you simply MUST use comparisons with tolerances on them rather than a test for exact unity. This is a good rule anyway. Always use a tolerance when you make a test for equality, at least until you have learned enough to know when to disobey that rule!
So, you might choose a test like this:
if abs(det(A) - 1) < 10*eps(1)
    warning('The sky is falling! det has failed me.')
end
Note that I've used eps(1), since we are comparing things to 1. The fact that I multiplied eps by 10 allows a wee bit of slop in the computation of the determinant.
Finally, you should know that whatever test you are using the determinant for here, it is often a BBBBBBBBBBAAAAAAAAAADDDDDDDD thing to do! Yes, maybe your teacher told you to do this, or you found something in a textbook. But the determinant is just a bad thing to use for numerical computations. There are almost always alternatives to the determinant. Again, this is called judgement, knowing when that which you are told to use is actually the wrong thing to do.
You are running into the standard problems that occur due to the limitations of floating-point numbers. The result of the det function is probably something like 1.000000001.
General rule-of-thumb: Never test floating-point values for equality.
To give you an insight: det is not computed using the old formula you studied in linear algebra, but using more efficient algorithms.
For example, using Gaussian elimination you can transform M into an equivalent upper triangular matrix and then compute the determinant as the product of the main diagonal (the lower triangle being all zeros):
M = [6 13; 5 11]
G = M - [0 0; M(2,1)/M(1,1) * M(1,:)];
Theoretically det(M) is equal to det(G), which is 6 * 1/6 = 1, but since G is a floating-point matrix rather than a real-number matrix, G(1,1)*G(2,2) ~= 1!
In fact, G(1,1) and G(2,2) are not exactly 6 and 1/6; they carry a very small relative error (see eps, which on most machines is around 2.22e-16). Their actual values will be around 6*(1+eps) and 1/6*(1+eps), so their product will have a small error too.
I'm not sure whether MATLAB uses Gaussian elimination or the closely related LU decomposition.