The strange thing in Matlab code [duplicate] - matlab

This question already has answers here:
Is this a Matlab bug? Do you have the same issue? [duplicate]
(3 answers)
Closed 9 years ago.
I tried to run the following code.
x = 0
while x <= 1
x = x+0.02
end
The last value of x that I expect to get is 1.02 (which makes the condition in the while loop false).
However, I am very surprised that I always get a last value of x equal to 1, and the while loop stops. I don't know what's wrong with my code.
Could anyone please help me find out?

0.02 cannot be represented exactly as a binary floating point number (a double), so you get some rounding error. The result is that x never hits 1 exactly, but instead ends up at a number slightly larger than 1.
Try appending disp(x-1) after your code to see that x is not exactly 1.
This site shows how 0.02 is represented in IEEE754 double precision floating point: http://www.binaryconvert.com/result_double.html?decimal=048046048050
The essential thing here is that it is a little bit larger than 0.02
One way to solve this is to use an integer loop variable. It's still of type double, but it only takes integer values, which have no rounding issues unless you use very large numbers (above 2^53).
for n = 0:100
    x = n/100;   % x steps through 0, 0.01, ..., 1 without accumulating rounding error
end
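If you want to keep the while-loop form instead, compare against the limit with a small tolerance rather than exactly. A minimal sketch (the tolerance value 1e-9 is an arbitrary choice; it just needs to be much larger than the rounding error and much smaller than the step):
x = 0;
tol = 1e-9;
while x <= 1 + tol   % values within tol of 1 still count as <= 1
    x = x + 0.02;
end
% x now ends at roughly 1.02, as originally expected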

Try format long before running and you will get this:
x = 0.920000000000000
x = 0.940000000000001
x = 0.960000000000001
x = 0.980000000000001
x = 1.000000000000000
Note the 1 at the very end. 0.02 cannot be represented exactly in floating point arithmetic, so although x displays as 1, it is actually slightly larger; the comparison x <= 1 therefore comes up false and the loop exits.
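You can make the excess visible by printing more digits than format long shows, for example with fprintf after the loop finishes:
fprintf('%.17g\n', x)      % full-precision view of the stored value, just above 1
fprintf('%.3g\n', x - 1)   % the tiny positive leftover that makes x <= 1 false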

Related

MATLAB showing 0.82-0.80 not equal to 0.02 and similar error in inverting a matrix [duplicate]

This question already has answers here:
Why is 24.0000 not equal to 24.0000 in MATLAB?
(6 answers)
Closed 7 years ago.
I've written the following code*:
function val = newton_backward_difference_interpolation(x,y,z)
% sample input : newton_backward_difference_interpolation([0.70 0.72 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88],[0.8423 0.8771 0.9131 0.9505 0.9893 1.0296 1.0717 1.1156 1.1616 1.2097],0.71)
% sample output:
% X-Input must be equidistant
% -1.1102e-16
n = numel(x);      % number of elements in x (also equal to the number of elements in y)
h = x(2) - x(1);   % the constant difference
% error = 1e-5;    % allowance for the computer's rounding error in the differences [a workaround where I tested abs(h1-h) >= error]
if n >= 3          % if there are 3 or more elements, check that they are equidistant
    for i = 3:n
        h1 = x(i) - x(i-1);
        if (h1 ~= h)
            disp('X-Input must be equidistant');
            disp(h1-h);
            return;
        end
    end
end
...
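(A tolerance-based version of that equidistance check, in the spirit of the commented-out workaround above, avoids the spurious failure. A sketch, where the relative tolerance 1e-10 is an assumption to be scaled to your data:
h = x(2) - x(1);
tol = 1e-10 * max(abs(x));                 % allow for rounding error in the subtractions
if n >= 3 && any(abs(diff(x) - h) > tol)   % diff(x) gives all consecutive spacings at once
    disp('X-Input must be equidistant');
    return;
end
)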
I also wrote a function for calculating the inverse of a matrix**, where this:
a=[3 2 -5;1 -3 2;5 -1 4];disp(a^(-1)-inverse(a));
displays this:
1.0e-16 *
        0   -0.2082    0.2776
        0    0.5551         0
        0    0.2776         0
Why does this small difference of about 1e-16 occur? Am I right in saying that this is because computer calculations (especially divisions) are not exact? Even if they are inexact, why does a^(-1) provide a more accurate/exact solution? (Sorry if this question is not programming related in your view.)
Can this be avoided/removed? How?
*for newton backward difference interpolation: (whole code at this place), the expected output is 0.85953 or something near that.
**using Gauss-Jordan Elimination: (whole code at this place)
That's the nature of floating point calculations. There have been many questions about this topic here already.
E.g.
Floating point inaccuracy examples
What range of numbers can be represented in a 16-, 32- and 64-bit IEEE-754 systems?
https://www.quora.com/Why-is-0-1+0-2-not-equal-to-0-3-in-most-programming-languages
The problem is that most non-integer numbers have no exact representation as a 64-bit double or a 32-bit float; they wouldn't even in a 10000-bit floating point format, if such a thing existed.
0.5, 0.25, and all powers of 2 (including those with negative exponents; 0.5 is 2^-1) can be stored exactly, and many other numbers as well. Floating point numbers are stored as 3 separate parts: a sign (1 bit), a mantissa (the 'number'), and an exponent for base 2. Every possible combination of mantissa and exponent yields a precise value, e.g. 0.5 has a mantissa 'value' of 1 and exponent -1. The exact formula is
number = (sign ? -1:1) * 2^(exponent) * 1.(mantissa bits)
from
http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html
Try this formula and you'll see for example that you can't create a precise 0.1
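You can see this directly in MATLAB by asking for more digits than the default display shows:
>> sprintf('%.20f', 0.1)
ans =
0.10000000000000000555
The double closest to 0.1 is slightly larger than 0.1, and offsets of exactly this kind are what break exact comparisons like 0.82-0.80 == 0.02.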

exponential function with large argument in matlab [duplicate]

This question already has answers here:
How to compute an exponent in matlab without getting inf?
(3 answers)
Closed 7 years ago.
I've got one problem for a longer time and I'd be really grateful if you could help me somehow...
I have some code in MATLAB (version R2012a) where I compute exponential functions using MATLAB's function exp. This means I have something like this:
y = exp(x);
However, when this "x" is larger than a certain number, the result ("y") is infinity; when "x" is more negative than a certain number, the result is 0. As stated on the MathWorks website:
Numerical exceptions may happen, when the absolute value of the real
part of a floating-point argument x is large. If ℜ(x) < -7.4*10^8,
then exp(x) may return the truncated result 0.0 (protection against
underflow). If ℜ(x) > 7.4*10^8, then exp(x) may return the
floating-point equivalent RD_INF of infinity.
My problem is quite obvious - my "x" values are pretty large in magnitude, so I receive infinities and zeros instead of the results I need. My question is - how do I get the real results? Thanks in advance for your help!
Use vpa with a string input:
>> exp(1000)
ans =
Inf
>> vpa('exp(1000)')
ans =
1.9700711140170469938888793522433*10^434
Note the result of vpa is of class sym.
A variable in any language is stored in a certain number of bits in the computer's memory. The more bits used to hold a variable's type, the more precise a result that variable can hold. If you are using integers, the biggest type uses 64 bits (8 bytes) and is uint64. This is an unsigned integer (meaning it can only be positive) that ranges from 0 to 18,446,744,073,709,551,615. If you need decimals, try using vpa.
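If all you need is the magnitude of exp(x) rather than an exact symbolic value, a cheap alternative (a sketch, not the only approach) is to stay in log space and split the result into a mantissa and a decimal exponent using ordinary doubles:
x = 1000;
log10y = x / log(10);   % log10(exp(x)) = x / ln(10)
e10 = floor(log10y);    % decimal exponent
m = 10^(log10y - e10);  % mantissa in [1, 10)
fprintf('exp(%g) is about %.6f * 10^%d\n', x, m, e10)
For x = 1000 this gives roughly 1.970071 * 10^434, matching the vpa result above.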

two values are seemingly the same, yet matlab says they aren't [duplicate]

This question already has an answer here:
What is -0.0 in matlab?
(1 answer)
Closed 9 years ago.
I cannot explain this. I have two variables having the same value, yet they are not identical. Can anybody tell me what I'm missing here?
>> y
y =
3.4000
>> x
x =
3.4000
>> y==x
ans =
0
>> whos x
  Name      Size            Bytes  Class     Attributes
  x         1x1                 8  double
>> whos y
  Name      Size            Bytes  Class     Attributes
  y         1x1                 8  double
It's really puzzling for me and I swear it's not a joke.
It's because of floating point precision. Try
format long g
and then look at x and y again. It's better to compare abs(x-y) against some small tolerance than to do an equality test on floating point numbers.
You're comparing float values, an activity that doesn't work quite the way you might expect in basically any language, due to how floating-point values are handled by computers.
The solution is generally to test instead whether the difference between the values is less than some small threshold.
See here for a Matlab-related discussion of this.
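In code, the threshold test looks something like this (the tolerance value is an assumption; pick one that suits the scale of your data):
tol = 1e-12;
isEqual = abs(x - y) < tol;                            % absolute tolerance
isEqualRel = abs(x - y) < tol * max(abs(x), abs(y));   % scale-aware variant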

Compute 4^x mod 2π for large x

I need to compute sin(4^x) with x > 1000 in MATLAB, which is basically sin(4^x mod 2π). Since the values inside the sin function become very large, MATLAB returns Inf for 4^1000. How can I compute this efficiently?
I prefer to avoid large data types.
I think that a transformation to something like sin(n*π+z) could be a possible solution.
You need to be careful, as there will be a loss of precision. The sin function is periodic, but 4^1000 is a big number. So effectively, we subtract off a multiple of 2*pi to move the argument into the interval [0,2*pi).
4^1000 is roughly 1e600, a really big number. So I'll do my computations using my high precision floating point tool in MATLAB. (In fact, one of my explicit goals when I wrote HPF was to be able to compute a number like sin(1e400). Even if you are doing something for the fun of it, doing it right still makes sense.) In this case, since I know that the power we are interested in is roughly 1e600, then I'll do my computations in more than 600 digits of precision, expecting that I'll lose 600 digits by the subtractive cancellation. This is a massive subtractive cancellation issue. Think about it. That modulus operation is effectively a difference between two numbers that will be identical for the first 600 digits or so!
X = hpf(4,1000);
X^1000
ans =
114813069527425452423283320117768198402231770208869520047764273682576626139237031385665948631650626991844596463898746277344711896086305533142593135616665318539129989145312280000688779148240044871428926990063486244781615463646388363947317026040466353970904996558162398808944629605623311649536164221970332681344168908984458505602379484807914058900934776500429002716706625830522008132236281291761267883317206598995396418127021779858404042159853183251540889433902091920554957783589672039160081957216630582755380425583726015528348786419432054508915275783882625175435528800822842770817965453762184851149029376
What is the nearest multiple of 2*pi that does not exceed this number? We can get that by a simple operation.
twopi = 2*hpf('pi',1000);
twopi*floor(X^1000/twopi)
ans =
114813069527425452423283320117768198402231770208869520047764273682576626139237031385665948631650626991844596463898746277344711896086305533142593135616665318539129989145312280000688779148240044871428926990063486244781615463646388363947317026040466353970904996558162398808944629605623311649536164221970332681344168908984458505602379484807914058900934776500429002716706625830522008132236281291761267883317206598995396418127021779858404042159853183251540889433902091920554957783589672039160081957216630582755380425583726015528348786419432054508915275783882625175435528800822842770817965453762184851149029372.6669043995793459614134256945369645075601351114240611660953769955068077703667306957296141306508448454625087552917109594896080531977700026110164492454168360842816021326434091264082935824243423723923797225539436621445702083718252029147608535630355342037150034246754736376698525786226858661984354538762888998045417518871508690623462425811535266975472894356742618714099283198893793280003764002738670747
As you can see, the first 600 digits were the same. Now, when we subtract the two numbers,
X^1000 - twopi*floor(X^1000/twopi)
ans =
3.333095600420654038586574305463035492439864888575938833904623004493192229633269304270385869349155154537491244708289040510391946802229997388983550754583163915718397867356590873591706417575657627607620277446056337855429791628174797085239146436964465796284996575324526362330147421377314133801564546123711100195458248112849130937653757418846473302452710564325738128590071680110620671999623599726132925263826
This is why I referred to it as a massive subtractive cancellation issue. The two numbers were identical for many digits. Even carrying 1000 digits of accuracy, we lost many digits. When you subtract the two numbers, even though we are carrying a result with 1000 digits, only the highest order 400 digits are now meaningful.
HPF is able to compute the trig function of course. But as we showed above, we should only trust roughly the first 400 digits of the result. (On some problems, the local shape of the sin function might cause us to lose more digits than that.)
sin(X^1000)
ans =
-0.1903345812720831838599439606845545570938837404109863917294376841894712513865023424095542391769688083234673471544860353291299342362176199653705319268544933406487071446348974733627946491118519242322925266014312897692338851129959945710407032269306021895848758484213914397204873580776582665985136229328001258364005927758343416222346964077953970335574414341993543060039082045405589175008978144047447822552228622246373827700900275324736372481560928339463344332977892008702220160335415291421081700744044783839286957735438564512465095046421806677102961093487708088908698531980424016458534629166108853012535493022540352439740116731784303190082954669140297192942872076015028260408231321604825270343945928445589223610185565384195863513901089662882903491956506613967241725877276022863187800632706503317201234223359028987534885835397133761207714290279709429427673410881392869598191090443394014959206395112705966050737703851465772573657470968976925223745019446303227806333289071966161759485260639499431164004196825
So am I right, and we cannot trust all of these digits? I'll do the same computation, once in 1000 digits of precision, then a second time in 2000 digits. Compute the absolute difference, then take the log10. The 2000 digit result will be our reference as essentially exact compared to the 1000 digit result.
double(log10(abs(sin(hpf(4,[1000 0])^1000) - sin(hpf(4,[2000 0])^1000))))
ans =
-397.45
Ah. So of those 1000 digits of precision we started out with, we lost 602 digits. The last 602 digits in the result are non-zero, but still complete garbage. This was as I expected. Just because your computer reports high precision, you need to know when not to trust it.
Can we do the computation without recourse to a high precision tool? Be careful. For example, suppose we use a powermod style of computation, i.e., compute the desired power while taking the modulus at every step. Done in double precision:
X = 1;
for i = 1:1000
    X = mod(X*4, 2*pi);
end
sin(X)
ans =
0.955296299215251
Ah, but remember that the true answer was -0.19033458127208318385994396068455455709388...
So there is essentially nothing of significance remaining. We have lost all our information in that computation. As I said, it is important to be careful.
What happened was after each step in that loop, we incurred a tiny loss in the modulus computation. But then we multiplied the answer by 4, which caused the error to grow by a factor of 4, and then another factor of 4, etc. And of course, after each step, the result loses a tiny bit at the end of the number. The final result was complete crapola.
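A rough back-of-envelope (an estimate, not a rigorous bound) makes the same point: an initial rounding error on the order of eps(2*pi) multiplied by 4 a thousand times grows by a factor of 4^1000.
log10(4)*1000 + log10(eps(2*pi))
% about 587: the accumulated error bound is near 1e587, astronomically
% larger than the interval [0, 2*pi) the answer must land in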
Let's look at the operation for a smaller power, just to convince ourselves of what happened. For example, try the 20th power. Using double precision,
mod(4^20,2*pi)
ans =
3.55938555711037
Now, use a loop in a powermod computation, taking the mod after every step. Essentially, this discards multiples of 2*pi after each step.
X = 1;
for i = 1:20
    X = mod(X*4, 2*pi);
end
X
X =
3.55938555711037
But is that the correct value? Again, I'll use hpf to compute the correct value, showing the first 20 digits of that number. (Since I've done the computation in 50 total digits, I'll absolutely trust the first 20 of them.)
mod(hpf(4,[20,30])^20,2*hpf('pi',[20,30]))
ans =
3.5593426962577983146
In fact, while the results in double precision agree to the last digit shown, those double results were both actually wrong past the 5th significant digit. As it turns out, we STILL need to carry more than 600 digits of precision for this loop to produce a result of any significance.
Finally, to fully kill this dead horse, we might ask if a better powermod computation can be done. That is, we know that 1000 can be decomposed into a binary form (use dec2bin) as:
512 + 256 + 128 + 64 + 32 + 8
ans =
1000
Can we use a repeated squaring scheme to expand that large power with fewer multiplications, and so cause less accumulated error? Essentially, we might try to compute
4^1000 = 4^8 * 4^32 * 4^64 * 4^128 * 4^256 * 4^512
We would do this by repeatedly squaring 4, taking the mod after each operation. This fails, however, since the modulo operation will only remove integer multiples of 2*pi. After all, mod really is designed to work on integers. So look at what happens. We can express 4^2 as:
4^2 = 16 = 3.43362938564083 + 2*(2*pi)
Can we just square the remainder and then take the mod again? NO!
mod(3.43362938564083^2,2*pi)
ans =
5.50662545075664
mod(4^4,2*pi)
ans =
4.67258771281655
We can understand what happened when we expand this form:
4^4 = (4^2)^2 = (3.43362938564083 + 2*(2*pi))^2
What will you get when you remove INTEGER multiples of 2*pi? You need to understand why the direct loop allowed me to remove integer multiples of 2*pi, but the above squaring operation does not. Of course, the direct loop failed too because of numerical issues.
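To spell out the failure: write 4^2 = r + 2*(2*pi) with r = 3.43362938564083. Squaring gives
(r + 4*pi)^2 = r^2 + 8*pi*r + 16*pi^2
The cross term 8*pi*r = 2*pi*(4*r) is not an integer multiple of 2*pi (4*r is about 13.73), and neither is 16*pi^2 = 2*pi*(8*pi), so taking mod(r^2, 2*pi) discards the wrong amount. The direct loop, by contrast, only ever multiplies the remainder by the integer 4: 4*(r + 2*pi*k) = 4*r + 2*pi*(4*k), and 4*k is an integer, so the modulus there removes exactly a whole number of periods.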
I would first redefine the question as follows: compute 4^1000 modulo 2pi. So we have split the problem in two.
Use some math trickery:
(a + 2*pi*K)*(b + 2*pi*L) = a*b + 2*pi*(garbage)
Hence, you can just multiply 4 by itself many times, computing mod 2*pi at every stage. The real question to ask, of course, is what the precision of this approach is. That needs careful mathematical analysis; it may or may not be total crap.
Following Pavel's hint with mod, I found a mod function for high powers on mathworks.com.
bigmod(number,power,modulo) can NOT compute 4^4000 mod 2π, because it works only with integers as the modulo, not with decimals.
The following statement is therefore not correct: sin(4^x) is sin(bigmod(4,x,2*pi)).

Matlab floor bug? [duplicate]

This question already has answers here:
Why is 24.0000 not equal to 24.0000 in MATLAB?
(6 answers)
Closed 5 years ago.
I think I found a bug in Matlab. My only explanation is that MATLAB internally calculates with values other than the ones it displays:
K>> calc(1,11)
ans =
4.000000000000000
K>> floor(ans)
ans =
3
The displayed code is output from the MATLAB console. calc(x,y) is just an array of double values.
MATLAB uses the standard IEEE floating point form to store a double.
See that if we subtract a tiny amount from 4, MATLAB still displays 4 as the result.
>> format long g
>> 4 - eps(2)
ans =
4
In fact, MATLAB stores the number in a binary form. We can see the decimal version of that number as:
>> sprintf('%.55f',4-eps(2))
ans =
3.9999999999999995559107901499373838305473327636718750000
Clearly MATLAB should not display that entire mess of digits, but by rounding the result to 15 digits, we get 4 for the display.
Clearly, the value in calc(1,11) is such a number: represented internally as just a hair less than 4, so that it rounds to 4 on display, but it is NOT exactly 4.
NEVER trust the least significant displayed digit of a result in floating point arithmetic.
Edit:
You seem to think that 3.9999999999999999 in MATLAB should be less than 4. Logically, this makes sense. But what happens when you supply that number? Ah yes, the granularity of a floating point double is larger than that. MATLAB cannot represent it as a number less than 4; it rounds the number UP to EXACTLY 4 internally.
>> sprintf('%.55f',3.9999999999999999)
ans =
4.0000000000000000000000000000000000000000000000000000000
What you got was a value really close to, but lower than, 4; even with format long, MATLAB has to round to the 15th digit to display the number.
Try this:
format long
asd = 3.9999999999999997   % first non-9 digit in the 16th decimal place
will print 4.000000000000000. Anyone who doesn't know the actual value of asd and goes by what is displayed would guess it is at least 4, but running
asd >= 4
gives 0, and so floor(asd) returns 3.
So it is a matter of how MATLAB rounds the displayed output; the true value stored in the variable is less than 4.
UPDATE:
if you go further with the digits, e.g. eighteen 9s:
>> asd = 3.999999999999999999
asd =
4
>> asd == 4
ans =
1
asd becomes exactly 4! (Notice it doesn't display 4.000000000000000 anymore.) But that's another story; it is no longer about rounding the number to get a prettier output, but about how floating point arithmetic works... Real numbers can be represented only up to a certain relative precision: in this case, the number you typed is so close to 4 that it becomes 4 itself. Take a look at the Python link posted in the comment by #gokcehan, or here.
I won't go over the problem; instead I will offer a solution: use the single function:
B = single(A) converts the matrix A to single precision, returning that value in B. A can be any numeric object (such as a double). If A is already single precision, single has no effect. Single-precision quantities require less storage than double-precision quantities, but have less precision and a smaller range.
This is only meant to be a fix to this particular issue, so you would do:
K>> calc(1,11)
ans =
4.000000000000000
K>> floor(single(ans))
ans =
4
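Note that the single trick works here only because rounding to single precision happens to land exactly on 4; it will also change other values in ways you may not want. A more targeted sketch (assuming the values are expected to be near-integers; the tolerance value is an arbitrary choice) is to snap to the nearest integer before flooring:
x = calc(1,11);
tol = 1e-9;
if abs(x - round(x)) < tol
    result = round(x);    % x was an integer up to rounding error
else
    result = floor(x);
end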