Here's some simple code that shows what I've been seeing:
A = randn(1,5e6)+1i*randn(1,5e6);
B = randn(1,5e6)+1i*randn(1,5e6);
sum(A.*conj(B)) - A*B'
sum(A.*conj(B)) - mtimes(A,B')
A*B' - mtimes(A,B')
Now, the three methods shown on the bottom are supposed to do the same thing, so the answers should be zero, right? Wrong! The differences are small, though not small enough that I would consider them negligible. In addition, the error increases as the length of A and B increases.
Does anyone know what the actual difference between these methods is? I understand that there are probably shortcuts written into the code, but I would like to quantify that if possible. Does Matlab post the differences anywhere? I've looked around, but have not found anything.
It probably has something to do with the order in which the operations are performed. For example,
sum(A.*conj(B)) - fliplr(A)*fliplr(B)'
gives a result different than
sum(A.*conj(B)) - A*B'
Or, more strikingly,
A*B' - fliplr(A)*fliplr(B)'
gives a nonzero result, of the same order as your tests.
So my bet is that depending on the method (sum or *) Matlab internally does the operations in a different order, and that may well account for the different roundoff errors you observed.
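You can see this order-dependence directly with plain summation (a minimal sketch; this is not MATLAB's actual internal algorithm, just an illustration that accumulation order matters):
A = randn(1,5e6) + 1i*randn(1,5e6);
s_forward  = sum(A);            % accumulate left to right
s_backward = sum(fliplr(A));    % same terms, reversed order
s_forward - s_backward          % typically small but nonzero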
Given the magnitude of each number, the rounding error from a single operation on it is of the order of 10^-14.
You have 5*10^6 numbers, so if you are really unlucky the accumulated rounding error can become anything up to 5*10^-8.
Your observed error is of size 10^-10, which is well within the expected range.
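Spelled out as a back-of-the-envelope calculation (an order-of-magnitude estimate, not a rigorous bound):
per_op = 1e-14;            % rounding error per elementary operation at this magnitude
N = 5e6;                   % number of terms combined
worst_case = N * per_op    % 5e-8, so an observed difference near 1e-10 is unsurprising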
Note that the difference is not caused by the complex transpose, but by how the product is evaluated: an elementwise multiply followed by sum versus a single matrix product.
A = randn(1,5e6)+1i*randn(1,5e6);
B = randn(1,5e6)+1i*randn(1,5e6);
B1 = conj(B);
B2 = B';
isequal(B1(:),B2(:)) % This returns true
A*transpose(conj(B)) - A*B' % Hence this returns zero
sum(A.*transpose(B')) - A*B' % But this returns something like 1e-10
A similar effect occurs for non-complex A and B:
N = 1e6;
A = 1:N;
B = 1:N;
(N * (N + 1) * (2*N + 1))/6 % This will give exactly the right answer
A*B'
fliplr(A)*fliplr(B)'
Note that the two lowest answers only vary by a few hundred from each other, whilst they are actually more than 2000 away from the correct answer. If this is a problem, consider using the Symbolic Math Toolbox, which allows you to calculate with arbitrary precision.
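For example, a minimal sketch using the Symbolic Math Toolbox (assuming it is installed) to evaluate the closed-form sum exactly:
N = sym(1e6);                 % exact symbolic integer, no double roundoff
exact = N*(N+1)*(2*N+1)/6     % 333333833333500000, computed with exact integer arithmetic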
Obviously, float comparison is always tricky. I have a lot of assert checks in my (scientific) code, so very often I have to check that sums are equal to one, and similar issues.
Is there a quick-and-easy / best-practice way of performing those checks?
The easiest way I can think of is to build a custom function for fixed-tolerance float comparison, but that seems quite ugly to me. I'd prefer a built-in solution, or at least something that is extremely clear and straightforward to understand.
I think it's most likely going to have to be a function you write yourself. I use three things pretty constantly for running computational vector tests so to speak:
Maximum absolute error
return max(abs(result(:) - expected(:))) < tolerance
This calculates maximum absolute error point-wise and tells you whether that's less than some tolerance.
Maximum excessive error count
return sum( abs(result(:) - expected(:)) >= tolerance )
This returns the number of points that fall outside your tolerance range. It's also easy to modify to return percentage.
Root mean squared error
return norm(result(:) - expected(:)) < rmsTolerance
Since these and many other criteria exist for comparing arrays of floats, I would suggest writing a function which would accept the calculation result, the expected result, the tolerance and the comparison method. This way you can make your checks very compact, and it's going to be much less ugly than trying to explain what it is that you're doing in comments.
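A minimal sketch of such a helper (the function name and method strings here are illustrative, not built-ins):
function out = compareFloats(result, expected, tol, method)
%COMPAREFLOATS Compare two float arrays using one of several criteria.
d = result(:) - expected(:);
switch lower(method)
    case 'maxabs'   % is the maximum absolute error within tolerance?
        out = max(abs(d)) < tol;
    case 'count'    % how many points fall outside the tolerance?
        out = sum(abs(d) >= tol);
    case 'rms'      % is the root-mean-squared error within tolerance?
        out = norm(d)/sqrt(numel(d)) < tol;
    otherwise
        error('Unknown comparison method: %s', method);
end
end
For example, compareFloats(sqrt(2)^2, 2, 1e-12, 'maxabs') returns true.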
Any fixed tolerance will fail if you put in very large or very small numbers; the simplest solution is to use eps to get the local spacing of double-precision values:
abs(A-B)<eps(A)*4
The 4 is a totally arbitrary number, which is sufficient in most cases.
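For example, a quick illustration of why a fixed tolerance breaks down at large magnitudes (values chosen purely for illustration):
A = 1e16; B = 1e16 + 2;   % adjacent doubles near 1e16 are spaced 2 apart
abs(A-B) < 1e-6           % false, even though A and B are as close as doubles allow
abs(A-B) < eps(A)*4       % true: the scale-aware tolerance still behaves sensibly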
I don't know of any special built-in solution. Maybe something using the eps function?
For example, as you probably know, this will give false (i.e. 0) as a result:
>> 0.1 + 0.1 + 0.1 == 0.3
ans =
0
But with eps you could do the following and the result is as expected:
>> (0.1+0.1+0.1) - 0.3 < eps
ans =
1
I have had good experience with xUnit, a unit test framework for Matlab. After installing it, you can use:
assertVectorsAlmostEqual(a,b) (checks for normwise closeness between vectors; configurable absolute/relative tolerance and sane defaults)
assertElementsAlmostEqual(a,b) (same check, but elementwise on every single entry -- so [1 1e-12] and [1 -1e-9] will compare equal with the former but not with the latter).
They are well-tested, fast to use and clear enough to read. The function names are quite long, but with any decent editor (or the Matlab one) you can write them as assertV<tab>.
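A hypothetical usage sketch, using only the two-argument form mentioned above (the tolerance options are configurable but omitted here):
computed = sqrt(2)^2;                    % equals 2 + eps(2), not exactly 2
assertElementsAlmostEqual(computed, 2);  % should pass with the default tolerances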
For those who understand both MATLAB and Python (NumPy), it may be useful to check the code of the following Python functions, which do the job:
numpy.allclose(a, b, rtol=1e-05, atol=1e-08)
numpy.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)
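For reference, NumPy's rule is abs(a - b) <= atol + rtol*abs(b). A rough MATLAB equivalent using NumPy's default tolerances (the handle names are just illustrative) would be:
isclose  = @(a,b) abs(a-b) <= 1e-8 + 1e-5.*abs(b);   % elementwise, like numpy.isclose
allclose = @(a,b) all(isclose(a(:), b(:)));          % scalar verdict, like numpy.allclose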
I would like to partition a number into an almost equal number of values in each partition. The only criterion is that each partition must be between 60 and 80.
For example, if I have a value = 300, this means that 75 * 4 = 300.
I would like to know a method to get this 4 and 75 in the above example. In some cases, all partitions don't need to be of equal value, but they should be between 60 and 80. Any constraints can be used (addition, subtraction, etc.). However, the outputs must not be floating point.
Also, the total doesn't have to be exactly 300 as in this case; the partitions can sum to a maximum of +40 over the total, so for the case of 300 the numbers can sum up to 340 if required.
Assuming only addition, you can formulate this problem as a linear programming problem. You would choose an objective function that maximizes the sum of all of the factors chosen to generate that number for you. Therefore, your objective function would be:
maximize x_1 + x_2 + ... + x_n
In this case, n would be the number of factors you are using to try and decompose your number into. Each x_i is a particular factor in the overall sum of the value you want to decompose. I'm also going to assume that none of the factors can be floating point, and can only be integer. As such, you need to use a special case of linear programming called integer programming where the constraints and the actual solution to your problem are all in integers. In general, the integer programming problem is formulated thusly:
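minimize    f'*x
subject to  x(intcon) are integers
            A*x   <= b
            Aeq*x  = beq
            lb <= x <= ub
(This is the standard form solved by MATLAB's intlinprog, introduced below.)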
You are actually trying to minimize this objective function, such that you produce a parameter vector of x that are subject to all of these constraints. In our case, x would be a vector of numbers where each element forms part of the sum to the value you are trying to decompose (300 in your case).
You have inequalities, equalities and also boundaries of x that each parameter in your solution must respect. You also need to make sure that each parameter of x is an integer. As such, MATLAB has a function called intlinprog that will perform this for you. However, this function assumes that you are minimizing the objective function, and so if you want to maximize, simply minimize on the negative. f is a vector of weights to be applied to each value in your parameter vector, and with our objective function, you just need to set all of these to -1.
Therefore, to formulate your problem in an integer programming framework, you are actually doing:
minimize    -(x_1 + x_2 + ... + x_n)
subject to  x_1 + x_2 + ... + x_n = V
            60 <= x_i <= 80,  x_i integer,  i = 1, ..., n
V would be the value you are trying to decompose (so 300 in your example).
The standard way to call intlinprog is in the following way:
x = intlinprog(f,intcon,A,b,Aeq,beq,lb,ub);
f is the vector that weights each parameter of the solution you want to solve for, and intcon denotes which of your parameters need to be integer. In this case, you want all of them to be integer, so you would supply an increasing vector from 1 to n, where n is the number of factors you want to decompose the number V into (same as before). A and b are the matrix and vector that define your inequality constraints; because you have no inequalities here, you'd set both to empty ([]). Aeq and beq are the same as A and b, but for equality constraints. Because you only have one constraint here, Aeq would simply be a matrix of 1 row where each value is set to 1, and beq would be a single value which denotes the number you are trying to factorize. lb and ub are the lower and upper bounds for each value in the parameter vector, so these would be 60 and 80 respectively, and you'd have to specify a vector to ensure that each parameter is bounded between these two values.
Now, because you don't know how many factors will evenly decompose your value, you'll have to loop over a given set of factors (like between 1 to 10, or 1 to 20, etc.), place your results in a cell array, then you have to manually examine yourself whether or not an integer decomposition was successful.
num_factors = 20; %// Number of factors to try and decompose your value
V = 300;
results = cell(1, num_factors);
%// Try to solve the problem for a number of different factors
for n = 1 : num_factors
x = intlinprog(-ones(n,1),1:n,[],[],ones(1,n),V,60*ones(n,1),80*ones(n,1));
results{n} = x;
end
You can then go through results and see which value of n was successful in decomposing your number into that said number of factors.
One small problem here is that we also don't know how many factors we should check up to. That unfortunately I don't have an answer to, and so you'll have to play with this value until you get good results. This is also an unconstrained parameter, and I'll talk about this more later in this post.
However, intlinprog was only released in recent versions of MATLAB. If you want to do the same thing without it, you can use linprog, which is the floating point version of integer programming... actually, it's just the core linear programming framework itself. You would call linprog this way:
x = linprog(f,A,b,Aeq,beq,lb,ub);
All of the variables are the same, except that intcon is not used here... which makes sense, as linprog may generate floating point numbers as part of its solution. Because linprog can generate floating point solutions, the way to check whether a given value of n produced an integer result is to loop over your results, take the floor of each solution, subtract it from the solution itself, and sum the absolute differences. If you get a value of 0, you had a completely integer result. Therefore, you'd have to do something like:
num_factors = 20; %// Number of factors to try and decompose your value
V = 300;
results = cell(1, num_factors);
%// Try to solve the problem for a number of different factors
for n = 1 : num_factors
x = linprog(-ones(n,1),[],[],ones(1,n),V,60*ones(n,1),80*ones(n,1));
results{n} = x;
end
%// Loop through and determine which decompositions were successful integer ones
out = cellfun(@(x) sum(abs(floor(x) - x)), results);
%// Determine which values of n were successful in the integer composition.
final_factors = find(~out);
final_factors will contain which number of factors you specified that was successful in an integer decomposition. Now, if final_factors is empty, this means that it wasn't successful in finding anything that would be able to decompose the value into integer factors. Noting your problem description, you said you can allow for tolerances, so perhaps scan through results and determine which overall sum best matches the value, then choose whatever number of factors that gave you that result as the final answer.
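A minimal sketch of that fallback scan (this assumes the results cell array and V from the code above; an infeasible n leaves an empty cell, which is skipped):
sums = cellfun(@(x) sum(round(x)), results);   % total after rounding each candidate to integers
sums(cellfun(@isempty, results)) = Inf;        % ignore infeasible decompositions
[~, best_n] = min(abs(sums - V));              % factor count whose total is closest to V
best_partition = round(results{best_n})        % rounded factors for that choice of n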
Now, noting from my comments, you'll see that this problem is very unconstrained. You don't know how many factors are required to get an integer decomposition of your value, which is why we had to semi-brute-force it. In fact, this is a variant of the subset sum problem, which is NP-complete. Basically, this means it is not known whether there is a polynomial-time algorithm that can solve this kind of problem, and the only known way to get a valid solution is to brute-force each possible candidate and check whether it satisfies the problem. Brute-forcing usually requires exponential time, which is intractable for large problems.
Another interesting fact is that modern cryptography relies on this kind of computational intractability. Such schemes are banking on the fact that the only way to determine the right key that was used to encrypt your plain text is to check all possible keys, which is an intractable problem... especially with 128-bit encryption! That means checking 2^128 possibilities, and, assuming a moderately fast computer, the worst-case time to find the right key exceeds the current age of the universe. Check out this cool Wikipedia post for more details on intractability with regard to key breaking in cryptography.
In fact, NP-complete problems are very popular and there have been many attempts to determine whether there is or there isn't a polynomial-time algorithm to solve such problems. An interesting property is that if you can find a polynomial-time algorithm that will solve one problem, you will have found an algorithm to solve them all.
The Clay Mathematics Institute has what are known as Millennium Problems where if you solve any problem listed on their website, you get a million dollars.
Also, that's for each problem, so one problem solved == 1 million dollars!
The P versus NP question is among the seven problems up for solving. If I recall correctly, only one of the problems has been solved so far, and they were first announced to the public in the year 2000 (hence "millennium"...). So... it has been about 14 years and only one problem has been solved. Don't let that discourage you though! If you want to invest some time and try to solve one of the problems, please do!
Hopefully this will be enough to get you started. Good luck!
My Question - part 1: What is the best way to test if a floating point number is an "integer" (in Matlab)?
My current solution for part 1: Obviously, isinteger is out, since this tests the type of an element, rather than the value, so currently, I solve the problem like this:
abs(round(X) - X) <= sqrt(eps(X))
But perhaps there is a more native Matlab method?
My Question - part 2: If my current solution really is the best way, then I was wondering if there is a general tolerance that is recommended? As you can see from above, I use sqrt(eps(X)), but I don't really have any good reason for this. Perhaps I should just use eps(X), or maybe 5 * eps(X)? Any suggestions would be most welcome.
An Example: In Matlab, sqrt(2)^2 == 2 returns False. But in practice, we might want that logical condition to return True. One can achieve this using the method described above, since sqrt(2)^2 actually equals 2 + eps(2) (ie well within the tolerance of sqrt(eps(2)). But does this mean I should always use eps(X) as my tolerance, or is there good reason to use a larger tolerance, such as 5 * eps(X), or sqrt(eps(X))?
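For reference, the sqrt(2)^2 claim can be checked directly at the command line:
sqrt(2)^2 - 2                                               % 4.4409e-16, i.e. eps(2)
sqrt(2)^2 == 2 + eps(2)                                     % true
abs(round(sqrt(2)^2) - sqrt(2)^2) <= sqrt(eps(sqrt(2)^2))   % true with the tolerance above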
UPDATE (2012-10-31): @FakeDIY pointed out that my question is partially a duplicate of this SO question (apologies, not sure how I missed it in my initial search). Given this I'd like to emphasize the "tolerance" part of the question (which is not covered in that link), i.e. is eps(X) a sensible tolerance, or should I use something larger, like 5 * eps(X), and if so, why?
UPDATE (2012-11-01): Thanks everyone for the responses. I've +1'ed all three answers as I feel they all contribute meaningfully to various aspects of the question. I'm giving the answer tick to Eric Postpischil as that answer really nailed the tolerance part of the question well (and it has the most upvotes at this point in time).
No, there is no general tolerance that is recommended, and there cannot be.
The difference between a computed result and a mathematically ideal result is a function of the operations that produced the computed result. Because those operations are specific to each application, there is no general rule for testing any property of a computed result.
To design a proper test, you must determine what errors may have occurred during computation, determine bounds on the resulting error in the computed result, and test whether the computed result differs from the ideal result (perhaps the nearest integer) by less than those bounds. You must also decide whether those bounds are sufficiently small to satisfy your application’s requirements. (Using a relaxed test that accepts as an integer something that is not an integer decreases false negatives [incorrect rejections of a result as an integer where the ideal result would be an integer] but increases false positives [incorrect acceptances of a result as an integer where the ideal result would not be an integer].)
(Note that it can even be the case that testing as if the error bounds were zero produces false positives: it is possible for a computation to produce a result that is exactly an integer when the ideal result is not an integer, so any error tolerance, even zero, will falsely report this result as an integer. If this is unacceptable for your application, then, in such a case, the computations must be redesigned.)
It is not only not possible to state, without specific knowledge of the application, a numerical tolerance that may be used, it is impossible to state whether the tolerance should be absolute, should be relative to the computed value or to a target value, should be measured in ULPs (units of least precision), or should be set in some other manner. This is because errors may be introduced into computations in a variety of ways. For example, if there is a small relative error in a and a and b are close in value, then a-b has a large relative error. Additionally, if c is large, then (a-b)*c has a large absolute error.
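A small numeric sketch of the cancellation point (the values here are illustrative, not from the original answer):
a = 1 + 1e-15;                 % representable only to within eps(1)/2, about 1.1e-16
b = 1;
d = a - b;                     % the subtraction is exact, but d inherits a's representation error
(eps(1)/2) / abs(d)            % relative uncertainty in d: roughly 0.1, i.e. about 10%
c = 1e20;
(eps(1)/2) / abs(d) * abs(d*c) % the same uncertainty as an absolute error in (a-b)*c: about 1e4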
It's probably not the most efficient method, but I would use mod for this:
a = 15.0000000000;
b = mod(a,1.0)
c = 15.0000000001;
d = mod(c,1.0)
returns b = 0 and d = 1.0000e-010
There are a number of other alternatives suggested here:
How do I test for integers in MATLAB?
I like the idea of comparing (x == floor(x)) too.
1) I have historically used your method with a simple tolerance, eps(X). The mod methods interested me though, so I benchmarked a couple of approaches using Steve Eddins' timeit function.
f = @() abs(X - round(X)) <= eps(X);
g = @() X == round(X);
h = @() ~mod(X,1);
For single values, like X = 1.0, yours appears to be the fastest:
timeit(f) = 7.3635e-006
timeit(g) = 9.9677e-006
timeit(h) = 9.9214e-006
For vectors though, like X = 1:0.01:100, the other methods are faster (though round still beats mod):
timeit(f) = 0.00076636
timeit(g) = 0.00028182
timeit(h) = 0.00040539
2) The error bound is really problem dependent. Other answers cover this much better than I am able to.
I have two arrays of data that I'm trying to amalgamate. One contains actual latencies from an experiment in the first column (e.g. 0.345, 0.455... never more than 3 decimal places), along with other data from that experiment. The other contains what is effectively a 'look up' list of latencies ranging from 0.001 to 0.500 in 0.001 increments, along with other pieces of data. Both data sets are X-by-Y doubles.
What I'm trying to do is something like...
for i = 1:length(actual_latency)
row = find(predicted_data(:,1) == actual_latency(i))
full_set(i,1:4) = [actual_latency(i) other_info(i) predicted_data(row,2) ...
predicted_data(row,3)];
end
...in order to find the relevant row in predicted_data where the look-up latency corresponds to the actual latency. I then use this to create an amalgamated data set, full_set.
I figured this would be really simple, but the find function keeps failing by throwing up an empty matrix when looking for an actual latency that I know is in predicted_data(:,1) (as I've double-checked during debugging).
Moreover, if I replace find with a for loop to do the same job, I get a similar error. It doesn't appear to be systematic - using different participant data sets throws it up in different places.
Furthermore, during debugging mode, if I use find to try and find a hard-coded value of actual_latency, it doesn't always work. Sometimes yes, sometimes no.
I'm really scratching my head over this, so if anyone has any ideas about what might be going on, I'd be really grateful.
You are likely running into a problem with floating point comparisons when you do the following:
predicted_data(:,1) == actual_latency(i)
Even though your numbers appear to only have three decimal places of precision, they may still differ by very small amounts that are not being displayed, thus giving you an empty matrix since FIND can't get an exact match.
One feature of floating point numbers is that certain values can't be represented exactly, because they don't have a finite binary (base-2) representation. This happens with numbers such as 0.1 and 0.001. If you repeatedly add or multiply one of these numbers you can see some unexpected behavior. Amro pointed out one example in his comment: 0.3 is not exactly equal to 3*0.1. This can also be illustrated by creating your look-up list of latencies in two different ways. You can use the normal colon syntax:
vec1 = 0.001:0.001:0.5;
Or you can use LINSPACE:
vec2 = linspace(0.001,0.5,500);
You'd think these two vectors would be equal to one another, but think again!:
>> isequal(vec1,vec2)
ans =
0 %# FALSE!
This is because the two methods create the vectors by performing successive additions or multiplications of 0.001 in different ways, giving ever so slightly different values for some entries in the vector. You can take a look at this technical solution for more details.
When comparing floating point numbers, you should therefore do your comparisons using some tolerance. For example, this finds the indices of entries in the look-up list that are within 0.0001 of your actual latency:
tolerance = 0.0001;
for i = 1:length(actual_latency)
row = find(abs(predicted_data(:,1) - actual_latency(i)) < tolerance);
...
The topic of floating point comparison is also covered in this related question.
You may try to do the following:
row = find(abs(predicted_data(:,1) - actual_latency(i)) < eps);
eps is the floating-point relative accuracy (the spacing from 1.0 to the next larger double).
Have you tried using a tolerance rather than == ?