Float to text behavior of MATLAB's fprintf() - matlab

When using fprintf to convert floats to text in a decimal representation, the output is a series of decimal digits (potentially beginning with 0).
How does this representation work?
>> fprintf('%tu\n',pi)
1078530011
>> fprintf('%bu\n',pi)
04614256656552045848
Apologies if this is very trivial; I can't find an answer elsewhere, in part because searches are swamped by the various decimal data types available.
Note that the %t and %b flags are two of the differences from C's fprintf(). According to the documentation, they print a single-precision or double-precision value, respectively, "rather than an unsigned integer." The o, x, and u conversion characters select octal, hexadecimal, or decimal output.

This representation is the binary IEEE 754 floating point representation of the number, printed as an unsigned integer.
The IEEE 754 Converter website tells us that the IEEE 754 single-precision representation of Pi (approximately 3.1415927) is 40490FDB hexadecimal, which is 1078530011 decimal (the number that you saw printed). The '%bu' format specifier works similarly but outputs the double-precision representation.
The purpose of these format specifiers is to allow you to store a bit-exact representation of a floating-point value to a text file. The alternative approach of printing the floating-point value in human-readable form requires a lot of care if you want to guarantee bit-exact storage, and there might be some edge cases (denormalized values...?) that you won't be able to store precisely at all.
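As a rough sketch of that round trip (with sprintf and hex2num standing in for the actual file I/O):
x = pi;
s = sprintf('%bx', x);   % the 16 hex digits of the double's bit pattern
y = hex2num(s);          % reinterpret those digits as an IEEE 754 double
isequal(x, y)            % returns 1: the round trip is bit-exact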

If you were to print the number as hexadecimals:
>> fprintf('%bx\n', pi)
400921fb54442d18
>> fprintf('%tx\n', single(pi))
40490fdb
then the formatters '%bx' and '%tx' are simply equivalent to using NUM2HEX:
>> num2hex( pi )
400921fb54442d18
>> num2hex( single(pi) )
40490fdb
Another way is to simply set the default output format to hexadecimals using:
>> format hex
>> pi
400921fb54442d18
>> single(pi)
40490fdb
On a related note, there was a recent article on Loren's "Art of MATLAB" blog:
"How Many Digits to Write?"
which looks at how many decimal digits you need to write out in order to retain the number's full precision when it is re-read into MATLAB.
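As a quick check of that idea (not taken from the article itself), 17 significant digits are always enough to round-trip a double:
x = pi/3;
s = sprintf('%.17g', x);     % decimal text with 17 significant digits
isequal(x, str2double(s))    % returns 1: the value survives the round trip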

Related

Precision of double values in Spark

I am reading some data from a CSV file, and I have custom code to parse string values into different data types. For numbers, I use:
val format = NumberFormat.getNumberInstance()
which returns a DecimalFormat, and I call parse function on that to get my numeric value. DecimalFormat has arbitrary precision, so I am not losing any precision there. However, when the data is pushed into a Spark DataFrame, it is stored using DoubleType. At this point, I am expecting to see some precision issues, however I do not. I tried entering values from 0.1, 0.01, 0.001, ..., 1e-11 in my CSV file, and when I look at the values stored in the Spark DataFrame, they are all accurately represented (i.e. not like 0.099999999). I am surprised by this behavior since I do not expect a double value to store arbitrary precision. Can anyone help me understand the magic here?
Cheers!
There are probably two issues here: the number of significant digits that a Double can represent in its mantissa; and the range of its exponent.
Roughly, a Double has about 16 (decimal) digits of precision, and the exponent can cover the range from about 10^-308 to 10^+308. (The actual limits are set by the binary representation used by the IEEE 754 format.)
When you try to store a number like 1e-11, it can be accurately approximated within the 53 bits available in the mantissa. Accuracy issues appear when you subtract two numbers that are so close together that they differ only in a few of the least significant bits (once their mantissas have been aligned, i.e. shifted so that their exponents are the same).
For example, if you try (1e20 + 2) - (1e20 + 1), you'd hope to get 1, but actually you'll get zero. This is because a Double does not have enough precision to represent the 20 (decimal) digits needed. However, (1e100 + 2e90) - (1e100 + 1e90) is computed to be almost exactly 1e90, as it should be.
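Since the rest of this page is about MATLAB, note that the same arithmetic behaves identically there (or in any language using IEEE 754 doubles):
(1e20 + 2) - (1e20 + 1)           % gives 0: the +2 and +1 are lost in the 53-bit mantissa
(1e100 + 2e90) - (1e100 + 1e90)   % gives roughly 1e90, as described above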

TI Basic Numeric Standard

Do numeric variables follow a documented standard on TI calculators?
I was really surprised to notice on my TI-83 Premium CE that this test actually returns true (i.e. 1):
0.1 -> X
0.1 -> Y
0.01 -> Z
X*Y=Z
I was expecting this to fail, assuming my calculator would use something like the IEEE 754 standard to represent floating-point numbers.
On the other hand, calculating 2^50+3-2^50 returns 0, which suggests that large integers do use such a standard: here the big number clearly has a limited mantissa.
TI-BASIC's = is a tolerant comparison
Try 1+10^-12=1 on your calculator. Those numbers aren't represented equally (1+10^-12-1 gives 1E-12), but you'll notice the comparison returns true: that's because = has a certain amount of tolerance. AFAICT from testing on my calculator, if the numbers are equal when rounded to ten significant digits, = will return true.
Secondarily,
TI-BASIC uses a proprietary BCD float format
TI floats are a BCD format that is nine bytes long, with one byte for sign and auxiliary information and 14 digits (7 bytes) of precision. The ninth byte is used for extra precision so numbers can be rounded properly.
See a source linked to by @doynax here for more information.
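A rough MATLAB model of that tolerant comparison (the 10-significant-digit rounding is the empirical observation above, not official TI documentation, and tol_eq is just an illustrative name):
tol_eq = @(a, b) round(a, 10, 'significant') == round(b, 10, 'significant');
tol_eq(0.1 * 0.1, 0.01)   % returns 1 (true), like X*Y=Z on the calculator
tol_eq(1 + 1e-12, 1)      % also true, even though the underlying values differ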

Maximum double value (float) possible in MATLAB (64-bit)

I'm aware that double is the default data-type in MATLAB.
When you compare two double numbers that have no fractional part, MATLAB is accurate up to the 17th digit place in my testing.
a=12345678901234567 ; b=12345678901234567; isequal(a,b) --> TRUE
a=123456789012345671; b=123456789012345672; isequal(a,b) --> printed as TRUE
I have found a conservative estimate to be to use (non-fractional) numbers only up to the 13th digit, as other functions can become unreliable beyond that (such as ismember, or the MEX function ismembc, etc.).
Is there a similar cutoff for floating values? E.g., if I use shares-outstanding for a company which can be very very large with decimal places, when do I start losing decimal accuracy?
a = 1234567.89012345678 ; b = 1234567.89012345679 ; isequal(a,b) --> printed as TRUE
a = 123456789012345.678 ; b = 123456789012345.677 ; isequal(a,b) --> printed as TRUE
isequal may not be the right tool for comparing such numbers. I'm more concerned with how many decimal places I can trust once the integer part of a number starts growing.
It's usually not a good idea to test the equality of floating-point numbers. The behavior of binary floating-point numbers can differ drastically from what you may expect from base-10 decimals. Consider the example:
>> isequal(0.1, 0.3/3)
ans =
0
Ultimately, you have 53 bits of precision. This means that integers can be represented exactly (with no loss in accuracy) up to 2^53 (which is a little over 9 x 10^15). After that, well:
>> (2^53 + 1) - 2^53
ans =
0
>> 2^53 + (1 - 2^53)
ans =
1
For non-integers, you are almost never going to be representing them exactly, even for simple-looking decimals such as 0.1 (as shown in that first example). However, it still guarantees you at least 15 significant figures of precision.
This means that if you take any number and round it to the nearest number representable as a double-precision floating point, then this new number will match your original number at least up to the first 15 digits (regardless of where these digits are with respect to the decimal point).
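For instance (output shown as on a recent MATLAB; the trailing digits come from the binary value nearest to 0.1):
>> fprintf('%.20f\n', 0.1)
0.10000000000000000555
The stored value is not exactly 0.1, but it agrees with 0.1 to well over 15 significant digits.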
You might want to use variable-precision arithmetic (vpa) in MATLAB. It computes expressions to a given number of significant digits, which may be quite large. See the vpa documentation.
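A minimal sketch, assuming the Symbolic Math Toolbox is installed:
digits(50)    % work with 50 significant digits from here on
vpa(pi)       % pi printed to 50 digits
vpa(1)/3      % 0.333... carried to 50 digits instead of 53 bits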
Check out the MATLAB function flintmax, which tells you the largest consecutive integer that can be stored in either double or single precision. From that page:
flintmax returns the largest consecutive integer in IEEE® double precision, which is 2^53. Above this value, double-precision format does not have integer precision, and not all integers can be represented exactly.
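For reference, the values it returns (in the default short display format):
>> flintmax              % 2^53 for double precision
ans = 9.0072e+15
>> flintmax('single')    % 2^24 for single precision
ans = 16777216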

How to stop matlab truncating long numbers

These two long numbers are the same except for the last digit.
test = [];
test(1) = 33777100285870080;
test(2) = 33777100285870082;
but the last digit is lost when the numbers are put in the array:
unique(test)
ans = 3.3777e+16
How can I prevent this? The numbers are ID codes and losing the last digit is screwing everything up.
MATLAB uses a 64-bit floating-point representation by default for numbers. That gives (more or less) 16 decimal digits of precision, and your numbers exceed that.
Use something like uint64 to store your numbers:
> test = [uint64(33777100285870080); uint64(33777100285870082)];
> disp(test(1));
33777100285870080
> disp(test(2));
33777100285870082
This is really a rounding error, not a display error. To get the correct strings for output purposes, use int2str, because, again, num2str uses a 64-bit floating point representation, and that has rounding errors in this case.
To add more explanation to @rubenvb's solution, your values are greater than flintmax for IEEE 754 double-precision floating point, i.e., greater than 2^53. Beyond that point not all integers can be exactly represented as doubles. See also this related question.
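As a quick check, re-running the original unique call with uint64 values (not part of the answer above):
test = [uint64(33777100285870080); uint64(33777100285870082)];
unique(test)   % both IDs survive, because uint64 stores them exactly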

num2hex vs dec2hex in MATLAB

I don't understand the difference between hex2dec and hex2num and their opposites in MATLAB.
Say I had a hex value, 3FD3B502C055FE00. When I use hex2dec, I get 4.5992e+018. When I use hex2num, I get 0.3079. What's going on?
These functions work very differently, as you noticed. hex2dec converts the hexadecimal string into the integer value that its digits represent, which is presumably the number you were expecting. hex2num, on the other hand, treats the sixteen hex digits as the raw bit pattern of an IEEE double-precision number.
The IEEE 754 double-precision format consists of a one-bit sign, an 11-bit exponent, and a 52-bit fraction. hex2num parses the hexadecimal digits into those fields, yielding a very different result from hex2dec.
hex2dec -
Convert hexadecimal number string to decimal number
Description
d = hex2dec('hex_value') converts hex_value to its floating-point integer representation. The argument hex_value is a hexadecimal integer stored in a MATLAB string. The value of hex_value must be smaller than hexadecimal 10,000,000,000,000.
If hex_value is a character array, each row is interpreted as a hexadecimal string.
hex2num -
Convert hexadecimal number string to double-precision number
Description
n = hex2num(S), where S is a 16 character string representing a hexadecimal number, returns the IEEE® double-precision floating-point number n that it represents. Fewer than 16 characters are padded on the right with zeros. If S is a character array, each row is interpreted as a double-precision number.
NaNs, infinities and denorms are handled correctly.
Note that 3FD3B502C055FE00 is bigger than hexadecimal 10,000,000,000,000, so it falls outside hex2dec's documented range.
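To see the two interpretations of those 16 hex digits side by side (num2hex is the inverse of hex2num; hex2dec may warn or lose precision above flintmax, depending on your MATLAB version):
hex2num('3FD3B502C055FE00')              % about 0.3079: digits read as IEEE 754 double bits
num2hex(hex2num('3FD3B502C055FE00'))     % '3fd3b502c055fe00': the inverse conversion
hex2dec('3FD3B502C055FE00')              % about 4.5992e+18: digits read as a plain integer,
                                         % above flintmax so the low digits are not exact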