Matlab vs Excel differences in computations

I encountered a problem while using Matlab. I'm doing some computations concerning OTC instruments (pricing, constructing a discount curve, etc.), first in Excel and then in Matlab (for comparison). While I'm 100% sure that the computations in Excel are good (they match market data), Matlab seems to be producing some differences (e.g. -4.18e-05). The Matlab algorithm looks fine. I was wondering - maybe it is because Matlab is rounding some computations? I heard a little bit about that. I tried to convert the double numbers with the vpa() function, but it doesn't seem to work with doubles. Any other ideas?

Excel uses 64-bit double-precision floating point numbers compliant with the IEEE 754 floating point specification.
The way that Excel treats results like =1/5 and appears to compute them exactly (despite this example not being a dyadic rational) is purely down to formatting. It handles =1/3 + 1/3 + 1/3 similarly. It's quite smart really if you think about it: the implementers of Excel had no real choice given that the average Excel user is not au fait with the finer points of floating point arithmetic and would simply scorn a spreadsheet package that "couldn't even get 1/5 correct".
That all said, you're very unlucky if you get a difference of -4.18e-05 between the two systems, because double-precision floating point is accurate to around 15 significant figures. Your algorithms would have to be implemented very poorly indeed for the error terms to bubble up to that magnitude if you're consistently using double-precision floating point types.
Most likely (and I too work in finance), the difference will be in the way you're interpolating your discount curve. That's where I would look first if I were you.
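As a hedged illustration (the pillar maturities, discount factors, and query point below are all made up), two common interpolation schemes applied to the same curve pillars can easily disagree at roughly the magnitude you quote:
t  = [0.5 1 2 5];                    % pillar maturities in years (illustrative)
df = [0.990 0.978 0.952 0.880];      % pillar discount factors (illustrative)
tq = 1.5;                            % query maturity
dfLin = interp1(t, df, tq, 'linear');            % linear in the discount factor
dfLog = exp(interp1(t, log(df), tq, 'linear'));  % log-linear interpolation
dfLin - dfLog                        % differs on the order of 1e-5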

Given the size of the error relative to the default display settings, this is almost certainly a result of using the default format short and comparing the command-line output to the actual stored value.
x = 5.4444418
Output:
x =
5.4444
Then:
x-5.4444
Output:
ans =
4.1800e-05
The value stored in x remains 5.4444418; it is only the representation printed to the command line that is truncated.
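You can confirm this with format long (a quick sketch reusing the same value):
format long
x = 5.4444418      % now displays as 5.444441800000000
x - 5.4444         % full-precision display of the ~4.18e-05 difference
format short       % restore the default display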

Related

Double precision and quadruple precision in MATLAB

I want to convert data (double precision, 15 decimal digits) to data of another type (quadruple precision, 34 decimal digits). So I used the vpa function like this:
data = sin(2*pi*frequency*time);
quad_data = vpa(data,34);
But the type of the result is sym, not double. And when I checked each cell of the sym data, a 1x1 sym had been created in each cell. I tried to use the fft function on quad_data, but it didn't work. Is there any way to increase the precision of a double from 15 to 34 decimal digits?
The only numeric floating point types that MATLAB currently supports are double, single, and half. Extended-precision types can be achieved via the Symbolic Math Toolbox (e.g., vpa) or third-party code (e.g., John D'Errico's File Exchange submission, the High Precision Floating point (HPF) class). But even then, only a subset of floating point functions will typically be supported. If the function you are trying to use doesn't support the variable type, then you would have to supply your own function.
Also, you are not building vpa objects properly in the first place. Typically you would convert the operands to vpa first and then do arithmetic on them. Doing the arithmetic in double precision first, as you are doing with data, and then converting to extended-precision vpa just adds garbage to the values. E.g., set digits first and then use vpa('pi') to get the full extended-precision version of pi as a vpa variable.
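A minimal sketch of that workflow (assuming the Symbolic Math Toolbox is installed; the frequency and time values are made up):
digits(34);                          % work with 34 significant digits
f = vpa('50');                       % convert operands before any arithmetic
t = vpa('0.001');
quad_data = sin(2*vpa('pi')*f*t);    % every operation stays in 34-digit precision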
There is a commercial 3rd-party toolbox for this purpose, called the Multiprecision Computing Toolbox for MATLAB.
This tool implements many of the mathematical operations you would expect from double inputs, and according to benchmarks on the website, it's much faster than vpa.
Disclosure: I am not affiliated with the creators of this tool in any way, however I can say that we had a good experience with this tool for one of our lab's projects.
The other suggestion I can give is to do the high-precision arithmetic in another language/environment to which MATLAB provides interfaces (e.g., C, Python, Java) and which has a quad data type implemented.

Numerical convergence and minimum number size

I have a program which calculates probability values (p-values), but it is entering a very large negative number into the exp function: exp(-626294.830) evaluates to zero instead of the very small positive number that it should be. How can I get this to evaluate as a very small floating point number? I have tried Math::BigFloat, bignum, and bigrat, but all have failed.
Wolfram Alpha says that exp(-626294.830) is 4.08589×10^-271997... zero is a pretty close approximation to that ;-) Although you've edited and removed the context from your question, do you really need to work with such tiny numbers, or perhaps there is some way you could optimize your algorithm or scale your numbers?
Anyway, you are correct that code like Math::BigFloat->new("-626294.830")->bexp seems to take quite some time, even with the support of use Math::BigFloat lib => 'GMP';.
The only alternative I can offer at the moment is Math::Prime::Util::GMP's expreal, although you need to specify a precision to it.
use Math::Prime::Util::GMP qw/expreal/;
use Math::BigFloat;
my $e = Math::BigFloat->new(expreal(-626294.830,272000));
print $e->bnstr,"\n";
__END__
4.086e-271997
But on my machine, even that still takes ~20s to run, which brings us back to the question of potential optimization in other places.
Floating point numbers do not have infinite precision. Assuming the number is represented as an IEEE 754 double, we have 52 bits for the fraction, 11 bits for the exponent, and one bit for the sign. Due to the way exponents are encoded, the smallest positive normalized number that can be represented is 2^-1022 (subnormals extend the range down to 2^-1074, at the cost of precision).
If we look at your number e^-626294.830, we can do a change of base and see that it equals 2^(log_2 e · -626294.830) = 2^-903552.445, which is significantly smaller than 2^-1022. Approximating your number as zero is therefore correct.
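You can check these limits directly (shown in MATLAB for consistency with the rest of this page):
realmin         % 2.2251e-308 == 2^-1022, smallest positive normalized double
realmin/2^52    % 4.9407e-324 == 2^-1074, smallest positive subnormal
eps             % 2.2204e-16  == 2^-52, relative spacing of doubles near 1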
Instead of calculating this value using arbitrary-precision numerics, you are likely better off solving the necessary equations by hand, then coding this in a way that does not require extreme precision. For example, it is unlikely that you need the exact value of e^-626294.830, but perhaps just its magnitude. In that case, you can work with the logarithm instead of calling exp().
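A common pattern is to stay in the log domain throughout, e.g. with the log-sum-exp trick when such quantities must be normalized (a MATLAB sketch with made-up values):
x = [-626294.830 -626300.0 -626310.5];   % hypothetical log-probabilities
m = max(x);
logZ = m + log(sum(exp(x - m)));         % finite even though exp(x) underflows
w = exp(x - logZ)                        % normalized weights that sum to 1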

How to turn off denormal number support in MATLAB?

I am trying to turn off denormal number support in Matlab, so that basically any computation that would result in a denormal number would instead just result in zero (DAZ, FTZ).
I've researched several sites include the one below, but I haven't found anything about doing this.
http://blogs.mathworks.com/cleve/2014/07/21/floating-point-denormals-insignificant-but-controversial-2/
I've never heard of such an option in Matlab. It would likely require deep manipulation of a lot of the floating-point math, effectively requiring a new datatype to be supported if this were to be an easily toggleable option in Matlab. You could write your own MEX C code to do this for an individual function.
And of course you can get something like this with one line of Matlab – here's an example:
a = [1e-300 1e-310 1e-310];              % second and third entries are subnormal
b = [1e-301 1e-311 1e-310];
x = a - b;                               % the differences may themselves be subnormal
x(abs(x(:)) < realmin(class(x))) = 0;    % flush anything below realmin to zero
where realmin is the smallest normalized floating-point number. However, the floating point math is still performed using the extended denormal/subnormal values in a. It's just the output that's clipped to zero.
Unless you're doing this for fun and experimentation, or possibly running code on an embedded platform, I'd really recommend against disabling denormals as a form of optimization. Instead, focus on why your values are so small and on how you might rescale your problem to avoid the issue entirely.

Matlab precision: simple subtraction is not zero

I compute this simple sum on Matlab:
2*0.04-0.5*0.4^2 = -1.387778780781446e-017
but the result is not zero. What can I do?
Aabaz and Jim Clay have good explanations of what's going on.
It's often the case that, rather than exactly calculating the value of 2*0.04 - 0.5*0.4^2, what you really want is to check whether 2*0.04 and 0.5*0.4^2 differ by an amount that is small enough to be within the relevant numerical precision. If that's the case, then rather than checking whether 2*0.04 - 0.5*0.4^2 == 0, you can check whether abs(2*0.04 - 0.5*0.4^2) < thresh. Here thresh can either be some arbitrary small number, or an expression involving eps, which gives the precision of the numerical type you're working with.
EDIT:
Thanks to Jim and Tal for suggested improvement. Altered to compare the absolute value of the difference to a threshold, rather than the difference.
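A minimal sketch of such a tolerance check (the factor of 10 is an arbitrary safety margin):
x = 2*0.04;
y = 0.5*0.4^2;
thresh = 10*eps(max(abs(x), abs(y)));   % tolerance scaled to the operand size
abs(x - y) < thresh                     % true, even though x - y is not exactly 0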
Matlab uses double-precision floating-point numbers to store real numbers. These are numbers of the form m*2^e where m is an integer between 2^52 and 2^53 (the mantissa) and e is the exponent. Let's call a number a floating-point number if it is of this form.
All numbers used in calculations must be floating-point numbers. Often, this can be done exactly, as with 2 and 0.5 in your expression. But for other numbers, most notably most numbers with digits after the decimal point, this is not possible, and an approximation has to be used. What happens in this case is that the number is rounded to the nearest floating-point number.
So, whenever you write something like 0.04 in Matlab, you're really saying "get me the floating-point number that is closest to 0.04". In your expression, there are two numbers that need to be approximated: 0.04 and 0.4.
In addition, the exact result of operations like addition and multiplication on floating-point numbers may not be a floating-point number. Although it is always of the form m*2^e, the mantissa may be too large. So you get an additional error from rounding the results of operations.
At the end of the day, a simple expression like yours will be off by about 2^-52 times the size of the operands, or about 10^-17.
In summary: the reason your expression does not evaluate to zero is two-fold:
Some of the numbers you start out with are approximations of the exact numbers you provided.
The intermediate results may also be approximations of the exact results.
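You can see the first effect directly by printing more digits than the default display shows (the digits below are what an IEEE 754 double actually stores):
fprintf('%.20f\n', 0.04)   % 0.04000000000000000083 -- not exactly 0.04
fprintf('%.20f\n', 0.4)    % 0.40000000000000002220 -- not exactly 0.4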
What you are seeing is quantization error. Matlab uses doubles to represent numbers, and while they are capable of a lot of precision, they still cannot represent all real numbers because there are an infinite number of real numbers. I'm not sure about Aabaz's trick, but in general I would say there isn't anything you can do, other than perhaps massaging your inputs to be double-friendly numbers.
I do not know if it is applicable to your problem but often the simplest solution is to scale your data.
For example:
a = 0.04;
b = 0.2;
a - 0.2*b                 % ans = -6.9389e-018
c = a/min(abs([a b]));    % rescale both values by the smallest magnitude
d = b/min(abs([a b]));
c - 0.2*d                 % ans = 0
EDIT: of course I did not mean to give a universal solution to this kind of problem, but it is still a good practice that can help you avoid a few problems in numerical computation (curve fitting, etc.). See Jim Clay's answer for the reason why you are experiencing these problems.
I'm pretty sure this is a case of ye olde floating point accuracy issues.
Do you need 1e-17 accuracy? Is this merely a case of wanting 'pretty' output?
In that case, you can just use a formatted sprintf to display the number of significant digits you want.
Realize that this is not a matlab problem, but a fundamental limitation of how numbers are represented in binary.
For fun, work out what .1 is in binary...
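Picking up the sprintf suggestion above, a short sketch:
v = 2*0.04 - 0.5*0.4^2;   % about -1.39e-17 internally
sprintf('%.10f', v)       % '-0.0000000000' -- the noise vanishes at 10 digits
sprintf('%.20f', 0.1)     % '0.10000000000000000555' -- 0.1 is not stored exactly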
Some references:
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
http://www.mathworks.com/support/tech-notes/1100/1108.html

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this in plain text in source code. Do I have to wrap it in some function or a directive in order to feed it correctly to the compiler? Or is it fine to type in such small numbers as text?
The problem is with floating point arithmetic, not with writing literals in source code. It is not designed to be exact. The best way around it is to not use the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final useful figure after rounding.
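A sketch of that integer approach (written in MATLAB for consistency with the rest of this page; the 1e9 scale factor is an assumption matching nine fractional digits):
scale = int64(1e9);                           % nine fractional digits
verySmall = int64(1);                         % represents 0.000000001 exactly
x = verySmall * int64(10);                    % exact integer arithmetic
fprintf('%.9f\n', double(x)/double(scale))    % prints 0.000000010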
Standard floating point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly small precision levels. But fast.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. This is precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer. There are bits at the bottom, and their number is limited.