Differences between format long and format long g - matlab

I searched for the differences between format long and format long g in MATLAB, but I'm not sure exactly what the differences are. Does the difference between them affect accuracy?
Thank you...

The commands format X in MATLAB affect only how numbers are displayed, not how they are stored or computed.
So format short will display 4 digits after the decimal point. This is the default display option. The command format long will display more digits after the decimal point. Adding a G tells MATLAB to additionally display in scientific notation whenever that is more compact.
There is no difference in terms of computed values or accuracy, just display. From the doc for format, or the Display Format MATLAB article:
format does not affect how MATLAB computations are done. Computations on float variables, namely single or double, are done in appropriate floating point precision, no matter how those variables are displayed. Computations on integer variables are done natively in integer.
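
For example (the outputs below are what a recent MATLAB release displays; only the display changes, never the stored value):

format short
pi          % ans = 3.1416
format long
pi          % ans = 3.141592653589793
format longG
pi          % ans = 3.14159265358979
1e10*pi     % ans = 31415926535.8979   (fixed notation, being more compact)
format long
1e10*pi     % ans = 3.141592653589793e+10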

format long is basically a fixed-decimal format. It always shows 15 digits after the decimal point for double precision and 7 digits for single precision.
format longG shows up to 15 significant digits for double precision and 7 for single precision, but trailing zeros are dropped and scientific notation is used when it is more compact. So, longG is more compact.
Neither affects the accuracy of a calculation. Accuracy depends on whether you compute in single or double precision.
Look at my example below: if I save a as 12.999012 and then call for pi, MATLAB will respond differently under format longG and format long.
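Reconstructed here for illustration (exact spacing may differ slightly between MATLAB versions):

format longG
a       % a   = 12.999012
pi      % ans = 3.14159265358979

format long
a       % a   = 12.999012000000000
pi      % ans = 3.141592653589793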
If you are also asking about format short and format shortG, the difference between them is the same, just with 4 digits after the decimal point instead.
Cheers!

Related

Matlab mantissa base

I've been using matlab to solve some boundary value problems lately, and I've noticed an annoying quirk. Suppose I start with the interval [0,1], and I want to search inside it. Naturally, one would perform a binary search, so I would subdivide the interval into [0,0.5] and [0.5,1]. Excellent: let's now suppose we narrow down our search to [0.5,1]. Now we divide the interval [0.5,0.75] and [0.75,1]. No apparent problem yet. However, as we keep going, representation of powers of 2 in base 10 becomes less and less natural. For example, 2^-22 in binary is just 22 bits, while in decimal it is 16 digits. However, keep in mind that each digit of decimal is really encoding ~ 4 bits. In other words, representing these fractions as decimal is extremely inefficient.
Matlab's precision only extends to 16 digit decimal floats, so a binary search going to 2^-22 is as good as you can do. However, 2^-22 ~ 10^-7, which is much bigger than 10^-16, so the best search strategy in matlab seems to be a decimal search! In any case, this is what I have done so far: to take full advantage of the 16 digit precision, I've had to subdivide the interval [0,1] into 10 pieces.
Hopefully I've made my problem clear. So, my question is: how do I make matlab count in native binary? I want to work with 64 bit binary floats!
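
As a quick aside, MATLAB doubles already are 64-bit binary floats (IEEE 754 binary64), and format hex lets you inspect the stored bits directly. An illustrative check:

format hex
0.5         % ans = 3fe0000000000000   (exactly 2^-1)
2^-22       % ans = 3e90000000000000   (exactly representable too)
0.1         % ans = 3fb999999999999a   (not exact: a repeating binary fraction)
eps         % ans = 3cb0000000000000   (2^-52, the relative spacing of doubles)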

Matlab vs Excel differences in computations

I encountered a problem while using MATLAB. I'm doing some computations concerning OTC instruments (pricing, constructing a discount curve, etc.), first in Excel and then in MATLAB (for comparison). While I'm 100% sure that the computations in Excel are good (compared to market data), it seems that MATLAB is producing some differences (e.g. -4.18E-05). The MATLAB algorithm looks fine. I was wondering whether it is because MATLAB is rounding some computations - I have heard a little bit about that. I am trying to convert double numbers with the function vpa(), but it does not seem to work with doubles. Any other ideas?
Excel uses 64 bit double precision floating point numbers compliant with IEEE 754 floating point specification.
The way that Excel treats results like =1/5 and appears to compute them exactly (despite this example not being a dyadic rational) is purely down to formatting. It handles =1/3 + 1/3 + 1/3 similarly. It's quite smart really if you think about it: the implementers of Excel had no real choice given that the average Excel user is not au fait with the finer points of floating point arithmetic and would simply scorn a spreadsheet package that "couldn't even get 1/5 correct".
That all said, you're very unlucky if you get a difference of -4.18E-05 between the two systems. That's because double floating point is accurate to around 15 significant figures. Your algorithms would be implemented very poorly indeed for the error terms to bubble up to that magnitude if you're consistently using double precision floating point types.
Most likely (and I too work in finance), the difference will be in the way you're interpolating your discount curve. That's where I would look first if I were you.
Given the size of the error compared to the default format settings, this is almost certainly because of using the default format short and comparing the output on the command line to the real value. For example:
x = 5.4444418
Output:
x =
5.4444
Then:
x-5.4444
Output:
ans =
4.1800e-05
The value stored in x remains 5.4444418; it is only the value displayed on the command line that changes.

Eliminating common factor in display of MATLAB matrices

What I am trying to do is probably relatively simple.
I have a matrix that is being displayed with a common factor pulled out, so that all entries are shown as multiples of 1.0e+04. I want to disable this feature and see the numbers as they "really" are.
The format command can fix this for you. From the docs:
FORMAT SHORT   Scaled fixed point format with 5 digits.
FORMAT LONG    Scaled fixed point format with 15 digits for double and 7 digits for single.

These are the defaults, and what you want is:

FORMAT SHORTG  Best of fixed or floating point format with 5 digits.
FORMAT LONGG   Best of fixed or floating point format with 15 digits for double and 7 digits for single.
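
For example, with some made-up values (output as printed by a recent MATLAB release):

A = [81470.1 20000.2; 12700.3 91340.4];
format short
A
% A =
%    1.0e+04 *
%     8.1470    2.0000
%     1.2700    9.1340
format shortG
A
% A =
%         81470        20000
%         12700        91340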

Matlab MATPOWER Output in long format

I've been trying to use MATPOWER to do a power flow analysis for a network I have, but all the outputs come out with only 2 decimal places. Is there a way to configure the output to use the long format?
Trota,
You can modify the number of decimal places printed on screen or forwarded to the output text file.
If you browse the printpf.m file, simply change the relevant %9.xf format specifiers, where x is the number of digits printed after the decimal point. For bus data, these can be found at line 412 onwards.
Hope this helps.
You cannot change the number of decimal places used in the pretty-printed output of MATPOWER, but all of the results are available in full precision in the results struct returned by runpf() or runopf(). You can display these results with whatever precision you choose using standard Matlab commands such as disp() or fprintf().
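For example (using case9, one of the standard cases shipped with MATPOWER; column 8 of the bus matrix holds the voltage magnitudes in MATPOWER's bus data format):

results = runpf('case9');              % results struct holds full precision
format longG
disp(results.bus(:, 8))                % bus voltage magnitudes, full precision
fprintf('%.10f\n', results.bus(:, 8))  % or pick your own precision explicitly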

iPhone and floating point math

I have the following code:
float totalSpent;                   // running total, single-precision float
int intBudget;                      // budget in whole dollars
float moneyLeft;

totalSpent += Amount;               // Amount is presumably a float set elsewhere
moneyLeft = intBudget - totalSpent; // intBudget is converted to float here
And this is how it looks in debugger: http://www.braginski.com/math.tiff
Why is moneyLeft, as calculated by the code above, off by .02 compared to the same expression evaluated by the debugger?
The expression window is correct, yet the code above produces a result that is wrong by .02. It only happens for very large numbers (yet way below the int limit).
thanks
A single-precision float has 23 bits of precision. That means that every calculation is rounded to 23 binary digits. This means that if you have a computation that, say, adds a very small number to a very large number, rounding may result in strange results.
Imagine that you are doing decimal math in scientific notation by hand, under the rule that you may only keep four significant figures. Let's say I ask you to write twelve in scientific notation, with four significant figures. Remembering junior high school, you write:
1.200 × 10^1
Now I say compute the square of 12, and then add 0.5. That is easy enough:
1.440 × 10^2 + 0.005 × 10^2 = 1.445 × 10^2
How about twelve cubed plus 0.75:
1.728 × 10^3 + 0.00075 × 10^3 = 1.72875 × 10^3
But remember, I only gave you room for four significant digits, so you must round; then we get:
1.728 × 10^3 + 7.5 × 10^-1 = 1.729 × 10^3
See? The lack of precision can make the computation come out with unexpected results.
In your example, you've got 999999 in a calculation where you're trying to be precise to 0.01. log2(999999) = 19.93 and log2(0.01) = -6.64. The difference is more than 23; therefore you would need more than 23 binary digits to perform this calculation accurately.
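You can reproduce this with any IEEE 754 single-precision type; in MATLAB, for instance:

(single(999999) + single(0.01)) - 999999   % ans = 0      (the 0.01 was rounded away)
(999999 + 0.01) - 999999                   % ans = 0.0100 (double has bits to spare)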
Because floating point mathematics rounds off precision by its very nature, it is usually a bad choice for currency computation, where you must be accurate to the last cent. But are you really concerned with fractions of a cent in your application? If not, then why not do away with the decimal point altogether, and simply store cents (instead of dollars) in a 64-bit integer? 2^64 ¢ is more than the GDP of the entire planet.
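A minimal sketch of that idea (amounts invented; MATLAB int64 shown here, but an int64_t on the iPhone behaves the same):

budgetCents = int64(150000);             % $1500.00 held as whole cents
spentCents  = int64(99999);              % $999.99
leftCents   = budgetCents - spentCents;  % exact integer result: 50001
fprintf('$%d.%02d\n', idivide(leftCents, int64(100)), mod(leftCents, 100))
% prints $500.01 -- no rounding anywhere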
Floating point will always produce strange results with money-type calculations.
The golden rule is that floating point is good for things you measure (litres, yards, light-years, bushels, etc.) but not for things you count (sheep, beans, buttons, etc.).
Most money calculations come down to counting pennies, so use integer math and you won't get the strange results. Either use a fixed-decimal arithmetic library (which would probably be overkill on an iPhone) or store your amounts as whole numbers of cents and only convert to dollars and cents on display.