If I perform a single-precision operation with these values, the result ends with 8:
>> single(single(6.500001e+02)*single(-64.1775131)*single(0.65)*single(2))
ans = -5.4230008e+004
Then, if I perform any operation in double precision and afterwards repeat the same single-precision operation as before, the result differs from the first run:
>> double(6.5000012e+02)*double(-64.1775131)*double(0.65)*double(2)
ans = -5.423000858119204e+004
>> single(single(6.500001e+02)*single(-64.1775131)*single(0.65)*single(2))
ans = -5.4230004e+004
This problem happens in MATLAB 2008a (32-bit). It does not occur in MATLAB 2012b (64-bit).
Any thoughts on how to avoid this problem?
Thank you.
I could not test this, but from what I could find on MATLAB Central, it seems to be a bug in the global workspace of the R2008* releases. So, to avoid the problem:
Don't execute code from the Command Window;
Stick to double precision unless you are under severe memory constraints (it's even faster, because double is the default type);
Work in functions rather than scripts (apparently the function-local workspace is not affected by this bug; see the sketch after this list);
Use an R2009a or later MATLAB release, which seems to have fixed the bug.
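As an illustration of the third point, the computation from the question could be wrapped in a function file (a minimal sketch; the file name singleProductDemo.m is my own invention):
function y = singleProductDemo()
% Runs the product in a function-local workspace,
% which reportedly is not affected by the R2008* bug
y = single(single(6.500001e+02)*single(-64.1775131)*single(0.65)*single(2));
end
Save it as singleProductDemo.m and call it from the Command Window instead of typing the expression directly.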
Related
When writing numbers to a .csv file in MATLAB, the data seems to be modified.
This is alarming to me; I have never seen anything like it before.
>> csvwrite('FirstCol.csv',[201210;201211])
>> twodates =csvread('FirstCol.csv')
twodates =
201210
201210
Now compare with xlswrite:
>> xlswrite('FirstCol.xls',[201210;201211])
>> aa=xlsread('FirstCol.xls')
aa =
201210
201211
Could the reason be some automatic formatting applied under the hood to date-like numbers? (My explanation is just mysticism.)
From the csvwrite documentation
csvwrite writes a maximum of five significant digits. If you need greater precision, use dlmwrite with a precision argument.
So doing:
csvwrite('FirstCol.csv',[201210;201211])
csvread('FirstCol.csv')
you do indeed lose the final digit.
But by using dlmwrite, you can do
dlmwrite('FirstCol.csv',[201210;201211],'precision',6)
dlmread('FirstCol.csv')
which does indeed result in the correct output.
I am using a Mac and I can't use xlswrite, but obviously that is a reasonable method as well.
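Since the values here are integers, note also that the dlmwrite documentation says the precision argument accepts a C-style format string, so (untested on my machine) this should write them exactly as well:
dlmwrite('FirstCol.csv',[201210;201211],'precision','%d')
dlmread('FirstCol.csv')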
I am trying to solve a non-linear system of equations using the Newton-Raphson iterative method, and in order to explore the parameter space of my variables, it is useful to store the previous solutions and use them as my first initial guess so that I stay in the basin of attraction.
I currently save my solutions in a structure array stored in a .mat file, roughly like this:
load('solutions.mat','sol');
str = struct('a',Param1,'b',Param2,'solution',SolutionVector);
sol=[sol;str];
save('solutions.mat','sol');
Now, I do another run, in which I need the above solution for different parameters NewParam1 and NewParam2. If Param1 = NewParam1 - deltaParam1 and Param2 = NewParam2 - deltaParam2, then
load('solutions.mat','sol');
index = [sol.a] == NewParam1 - deltaParam1 & [sol.b] == NewParam2 - deltaParam2;
% logical index to find solution from first block
SolutionVector = sol(index).solution;
I sometimes get an error message saying that no such solution exists. The problem lies in the double precision of my parameters: comparisons like 0.3 - 0.2 == 0.1 can evaluate to false in MATLAB, but I can't seem to find an alternative way to achieve the same result. I have tried converting the numerical parameters to strings when saving, but then I ran into problems with logical indexing on strings.
Ideally, I would like to avoid multiplying my parameters by a power of 10 to make them integers as this will make the code quite messy to understand due to the number of parameters. Other than that, any help will be greatly appreciated. Thanks!
You should never use == when comparing double-precision numbers in MATLAB. The reason is, as you state in the question, that some numbers can't be represented precisely in binary, the same way 1/3 can't be written precisely in decimal.
What you should do is something like this:
index = abs([sol.a] - (NewParam1 - deltaParam1)) < 1e-10 & ...
abs([sol.b] - (NewParam2 - deltaParam2)) < 1e-10;
I actually recommend not using eps, as it's so small that it might fail in some situations. You can, however, use a number smaller than 1e-10 if you need a very high level of accuracy (but how often do we work with numbers smaller than 1e-10?).
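If several parameters need this kind of comparison, a small helper keeps the indexing readable (a sketch; the name approxEqual and the default tolerance are my own choices; save it as approxEqual.m):
function tf = approxEqual(x, y, tol)
% APPROXEQUAL True where x and y agree to within an absolute tolerance
if nargin < 3
    tol = 1e-10; % default absolute tolerance
end
tf = abs(x - y) < tol;
end
The lookup then becomes:
index = approxEqual([sol.a], NewParam1 - deltaParam1) & ...
        approxEqual([sol.b], NewParam2 - deltaParam2);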
I stumbled upon the following strange behavior while using MATLAB's Symbolic Math Toolbox:
>> syms e
>> y=11111111111111111^e
y =
11111111111111112^e
It seems there is a limitation when working with large numbers. Can this be solved without switching to a completely different system, like Sage?
I think the problem is that MATLAB parses the number into a double before it converts it to a symbolic expression. As a double has a 52-bit mantissa, you get approximately 16 significant digits, but your number is longer.
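You can verify the spacing of representable doubles at that magnitude with eps:
>> eps(1.1111111111111111e16)
ans =
2
Adjacent doubles near 1.1e16 are 2 apart, so only even integers are representable there, and the odd input gets rounded to its even neighbor.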
As an alternative, you could try to create the number directly from a string:
y=sym('11111111111111111')^e
Unfortunately, I do not have Matlab available right now, so this answer is untested.
It's not a bug, it's a feature. And it's called "round-off error".
MATLAB uses the double format to store ordinary variables, just like the double type in the C programming language and many other languages such as C++.
Actually, the "bug" has nothing to do with the ^e operation, as we can see:
>> clear
>> syms y
>> format bank
>> y=11111111111111111
y =
11111111111111112.00
Even a simple assignment triggers the "bug".
We can also see how a double variable is really stored in memory by inspecting it in Visual Studio's debug mode: there, both a and b are stored as the same bytes, "2ea37c58cccccccc", in memory, which means the computer can't tell one from the other. And that's the reason for the "bug" you found.
To avoid this, you can use a symbolic constant instead:
>> y=sym('11111111111111111')
y =
11111111111111111
This way, the computer stores y in memory in a different format, which avoids the round-off error at the cost of more memory.
I am experiencing problems when I compare results from different runs of my Matlab software with the same input. To narrow the problem, I did the following:
save all relevant variables using Matlab's save() method
call a method which calculates something
save all relevant output variables again using save()
Without changing the called method, I did another run in which I:
load the variables saved above and compare them with the current input variables using isequal()
call my method again with the current input variables
load the output variables saved above and compare.
To my surprise, the comparison in the last step does detect slight differences. The calculations involve both single- and double-precision numbers; the error is on the order of 1e-10 (the output is a double).
The only possible explanations I can imagine are that either MATLAB loses some precision when saving the variables (which I consider very unlikely, as I use the default binary MATLAB format), or that the code contains calculations like a = b + c + d, which can be evaluated as a = (b + c) + d or a = b + (c + d) and might therefore lead to numerical differences.
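The latter effect is easy to demonstrate, since floating-point addition is not associative:
>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
ans =
0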
Do you know what might be the reason for the observations described above?
Thanks a lot!
It really seems to have been caused by the single/double mix in the calculations. Since I switched to double precision only, the problem has not occurred anymore. Thanks to everybody for your thoughts.
These could be rounding errors. You can find the floating-point accuracy of your system like so:
>> eps('single')
ans =
1.1921e-07
On my system this reports about 1e-07, which would explain discrepancies of the order you observe.
To ensure reproducible results, especially if you are using any random generating functions (either directly or indirectly), you should restore the same state at the beginning of each run:
%# save state (do this once)
defaultStream = RandStream.getDefaultStream;
savedState = defaultStream.State;
save rndStream.mat savedState
%# load state (do this before each run)
load rndStream.mat savedState
defaultStream = RandStream.getDefaultStream();
defaultStream.State = savedState;
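In R2011a and newer releases, the same can be done with the simpler rng interface (a sketch using the documented rng function):
%# save state (do this once)
s = rng;
save rndState.mat s
%# load state (do this before each run)
load rndState.mat s
rng(s)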
I spent part of yesterday and today tracking down a bug in some Matlab code. I had thought my problem was indexing (with many structures that I didn't define and am still getting used to), but it turned out to be an overflow bug. I missed this for a very specific reason:
>> uint8(2) - uint8(1)
ans =
1
>> uint8(2) - uint8(2)
ans =
0
>> uint8(2) - uint8(3)
ans =
0
I would have expected the last one to be something like -1 (or 255). In the middle of a big vector, the erroneous 0s were difficult to detect, but a 255 would have stood out easily.
Any tips on how to detect these problems easily in the future? (Ideally, I'd like to turn off the overflow checking to make it work like C.) Changing to double works, of course, but if I don't realize it's a uint8 to begin with, that doesn't help.
You can start by turning on integer warnings:
intwarning('on')
This will give you a warning when integer arithmetic overflows.
Beware though: as outlined here, this does slow down integer arithmetic, so only use it while debugging.
As of release R2010b, the function INTWARNING has been removed, along with these warning messages for integer math and conversion:
MATLAB:intConvertNaN
MATLAB:intConvertNonIntVal
MATLAB:intConvertOverflow
MATLAB:intMathOverflow
So using INTWARNING is no longer a viable option for determining when integer overflows occur. An alternative is to use the CLASS function to test the class of your data and recast it accordingly before performing the operation. Here's an example:
if strcmp(class(data),'uint8')  %# Check if data is a uint8
    data = double(data);        %# Convert data to a double
end
You could also use the ISA function as well:
if ~isa(data,'single')   %# Check if data is not a single
    data = single(data); %# Convert data to a single
end
See the INTWARNING function to control warnings on integer operations:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/intwarning.html