Large integers and arbitrary/multi-precision floats for Vala

Is there a way to use big integers or arbitrary-precision types in Vala?

Apparently, no one has made a binding yet (see http://live.gnome.org/Vala/BindingsStatus), although there has been some discussion about GMP and Vala operator overloading.
You'll have to bind one of the bignum libraries yourself (see http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic).
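For reference, a Vala binding would ultimately wrap GMP's C API, which looks roughly like this (a minimal C sketch; compile with -lgmp):
#include <stdio.h>
#include <gmp.h>
int main(void) {
    mpz_t a, b, product;
    mpz_init_set_str(a, "123456789012345678901234567890", 10);  /* base-10 string */
    mpz_init_set_str(b, "987654321098765432109876543210", 10);
    mpz_init(product);
    mpz_mul(product, a, b);        /* exact product, no overflow */
    gmp_printf("%Zd\n", product);
    mpz_clear(a);
    mpz_clear(b);
    mpz_clear(product);
    return 0;
}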

Related

Double precision and quadruple precision in MATLAB

I want to convert data (double precision, 15 decimal digits) to data of another type (quadruple precision, 34 decimal digits). So I used the vpa function like this:
data = sin(2*pi*frequency*time);
quad_data = vpa(data,34);
But the type of the result is sym, not double, and when I checked each cell of the sym data, a 1x1 sym had been created in each cell. I tried to run fft on quad_data, but it didn't work. Is there any way to extend the precision of a double from 15 to 34 decimal digits?
The only numeric floating-point types that MATLAB currently supports are double, single, and half. Extended-precision types can be achieved via the Symbolic Math Toolbox (e.g., vpa) or third-party code (e.g., John D'Errico's File Exchange submission, the High Precision Floating-point (HPF) class). But even then, only a subset of floating-point functions will typically be supported. If the function you are trying to use doesn't support the variable type, you would have to supply your own implementation.
Also, you are not building vpa objects properly in the first place. Typically you would convert the operands to vpa first and then do arithmetic on them. Doing the arithmetic in double precision first, as you are doing with data, and then converting the result to extended-precision vpa just adds garbage to the values. E.g., set digits first (e.g., digits(34)) and then use vpa('pi') to get the full extended-precision value of pi as a vpa variable.
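The effect is easy to demonstrate in plain C, using float and double as stand-ins for double and vpa (a sketch of the principle, not MATLAB code):
#include <stdio.h>
#include <math.h>
int main(void) {
    double pi = acos(-1.0);          /* pi to full double precision */
    float narrow = (float)pi;        /* rounded to ~7 significant digits */
    double widened = (double)narrow; /* widening does NOT restore the lost digits */
    printf("full:    %.17g\n", pi);
    printf("widened: %.17g\n", widened); /* digits past the 7th are garbage */
    return 0;
}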
There is a commercial 3rd-party toolbox for this purpose, called the Multiprecision Computing Toolbox for MATLAB.
This tool implements many of the mathematical operations you would expect from double inputs, and according to benchmarks on the website, it's much faster than vpa.
Disclosure: I am not affiliated with the creators of this tool in any way; however, I can say that we had a good experience with it in one of our lab's projects.
The other suggestion I can give is to do the high-precision arithmetic in another language/environment to which MATLAB provides interfaces (e.g., C, Python, Java) and which has a quad type implemented.
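For example, GCC exposes quadruple precision in C as __float128 via libquadmath (a non-portable sketch; assumes GCC and linking with -lquadmath):
#include <stdio.h>
#include <quadmath.h>
int main(void) {
    __float128 x = 0.1Q;                     /* quad-precision literal (GNU extension) */
    __float128 s = sinq(2 * M_PIq * 50 * x); /* sine evaluated in quad precision */
    char buf[128];
    quadmath_snprintf(buf, sizeof buf, "%.34Qg", s); /* 34 significant digits */
    printf("%s\n", buf);
    return 0;
}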

Why are there 'Int's **and** 'Double's? Why not just have one class?

So I know this may be a very naive question, but I'm new to Scala and other similar languages, and this legitimately befuddles me.
Why does Scala (and other languages) have two distinct classes for integers and doubles? Since integers are in floats (1.0 = 1), why bother with an Int class?
I sort of understand that maybe you want to make sure that some integers are never changed into floats, or that maybe you want to guard against occurrences like 1.00000000002 != 1, which may be confusing when you only see the first few digits, but is there some other underlying justification that I'm missing?
Thanks!
Integers are important to the internal workings of software, because many things that you wouldn't necessarily think of as "numbers" are internally represented as integers. For example, memory addresses are generally represented as integers; individual eight-bit bytes are generally treated as one-byte integers; and characters (such as ASCII and Unicode characters) are usually identified by integer-valued codepoints. (In all of these cases, incidentally, in the rare event that we want to display them, we generally use hexadecimal notation, which is convenient because it uses exactly two characters per eight-bit byte.)
Another important case, one that is usually thought of as numeric even outside programming, is array indices; even in math, an array of length three will have elements numbered 1, 2, and 3 (though in programming, many languages, including Scala, use the indices 0, 1, and 2 instead, sometimes because of the underlying scheme for mapping indices to memory addresses, and sometimes simply because older languages did so).
More generally, many things in computing (and in the real world) are strictly quantized; it doesn't make sense to speak of "2.4 table rows" or "2.4 loop iterations", so it's convenient to have a data type where arithmetic is exact and represents exact integer quantities.
But you're right that the distinction is not absolutely essential; a number of scripting languages, such as Perl, JavaScript, and Tcl, have mostly dispensed with the explicit distinction between integers and floating-point numbers (though the distinction is still often drawn in the workings of the interpreters, with conversions occurring implicitly when needed). For example, in JavaScript, typeof 3 and typeof 3.0 are both 'number'.
Integers are generally much easier to work with given that they have exact representations. This is true not only at the software level, but at the hardware level as well. For example, looking at this source describing the x86 architecture, floating point addition generally takes 4X longer than integer addition. As such, it is advantageous to separate the two types of operations for performance reasons as well as usability reasons.
To add to the other two answers: integers are not actually "in" floats. That is, there are some integers that can't be represented exactly by a float:
scala> Int.MaxValue.toFloat == (Int.MaxValue - 1).toFloat
res0: Boolean = true
Same for longs and doubles.
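The same check can be written in C (assuming a 32-bit int and an IEEE 754 single-precision float):
#include <stdio.h>
#include <limits.h>
int main(void) {
    /* A float's 24-bit significand can't hold 31 significant bits, so
       INT_MAX and INT_MAX - 1 round to the same float value, 2^31. */
    float a = (float)INT_MAX;
    float b = (float)(INT_MAX - 1);
    printf("%d\n", a == b); /* prints 1 */
    return 0;
}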

double_t in C99

I just read that C99 has double_t, which should be at least as wide as double. Does this imply that it gives more precision digits after the decimal place, more than the usual 15 digits for double?
Secondly, how do I use it? Is including
#include <float.h>
enough? I read that one has to set FLT_EVAL_METHOD to 2 for long double. How do I do that? As I work with numerical methods, I would like maximum precision without using an arbitrary-precision library.
Thanks a lot...
No. double_t is at least as wide as double; i.e., it might be the same as double. Footnote 190 in the C99 standard makes the intent clear:
The types float_t and double_t are intended to be the implementation's most efficient types at least as wide as float and double, respectively.
As Michael Burr noted, you can't set FLT_EVAL_METHOD.
If you want the widest floating-point type on any system available using only C99, use long double. Just be aware that on some platforms it will be the same as double (and could even be the same as float).
Also, if you "work with numerical methods", you should be aware that for many (even most) numerical methods, the approximation error of the method is vastly larger than the rounding error of double precision, so there's often no benefit to using wider types. Exceptions exist, of course. What type of numerical methods are you working on, specifically?
Edit: seriously, either (a) just use long double and call it a day, or (b) take a few weeks to learn how floating point is actually implemented on the platforms you're targeting and what the actual accuracy requirements are for the algorithms you're implementing.
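To see what your implementation actually gives you, print FLT_EVAL_METHOD and the type sizes; note that float_t and double_t are declared in <math.h>, while FLT_EVAL_METHOD comes from <float.h> (a minimal C99 sketch):
#include <stdio.h>
#include <float.h> /* FLT_EVAL_METHOD */
#include <math.h>  /* float_t, double_t */
int main(void) {
    printf("FLT_EVAL_METHOD     = %d\n", FLT_EVAL_METHOD);
    printf("sizeof(float_t)     = %zu\n", sizeof(float_t));
    printf("sizeof(double_t)    = %zu\n", sizeof(double_t));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}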
Note that you don't get to set FLT_EVAL_METHOD; it is set by the compiler's headers to let you determine how the implementation does certain things with floating point.
If your code is very sensitive to exactly how floating point operations are performed, you can use the value of that macro to conditionally compile code to handle those differences that might be important to you.
So, for example, you know that double_t will always be at least as wide as double. If you want your code to do something different when double_t is a long double, it can test whether FLT_EVAL_METHOD == 2 and act accordingly.
Note that if FLT_EVAL_METHOD is something other than 0, 1, or 2 you'll need to look at the compiler's documentation to know exactly what type double_t is.
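A sketch of that kind of conditional compilation:
#include <stdio.h>
#include <float.h>
#include <math.h>
int main(void) {
#if FLT_EVAL_METHOD == 2
    /* double_t is long double: intermediates carry extra precision. */
    printf("evaluating in long double\n");
#elif FLT_EVAL_METHOD == 0 || FLT_EVAL_METHOD == 1
    /* double_t is plain double. */
    printf("evaluating in double\n");
#else
    /* Negative or other values are implementation-defined; check the docs. */
    printf("implementation-defined evaluation method\n");
#endif
    return 0;
}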
double_t may be defined by typedef double double_t;. Of course, if you plan to rely on implementation specifics, you need to look at your own implementation.

double precision in Ada?

I'm very new to Ada and was trying to see if it offers a double-precision type. I see that we have Float, and
Put( Integer'Image( Float'digits ) );
on my machine gives a value of 6, which is not enough for numerical computations.
Does Ada have double and long double types as in C?
Thanks a lot...
It is a wee bit more complicated than that.
The only predefined floating-point type that compilers have to support is Float. Compilers may optionally support Short_Float and Long_Float. You should be able to look in Appendix F of your compiler documentation to see what it supports.
In practice, your compiler almost certainly defines Float as a 32-bit IEEE float and Long_Float as a 64-bit one. Note that C pretty much works this way too with its float and double; C doesn't actually define the sizes of those types either.
If you absolutely must have a certain precision (e.g., you are sharing the data with something external that must use IEEE 64-bit), then you should probably define your own float type with exactly that precision. That ensures your code is either portable to any platform or compiler you move it to, or produces a compile-time error so you can fix the issue.
You can create any size of Float you like. For a long float it would be:
type My_Long_Float is digits 11; -- requests at least 11 decimal digits of precision
Wikibooks is a good reference for things like this.

Does scientific notation affect Perl's precision?

I encountered weird behaviour in Perl. The following subtraction should yield zero as its result (and it does in Python):
print 7.6178E-01 - 0.76178
-1.11022302462516e-16
Why does this occur, and how can I avoid it?
P.S. The effect appears on "v5.10.0 built for x86_64-linux-gnu-thread-multi" (Ubuntu 9.04) and "v5.8.9 built for darwin-2level" (Mac OS X 10.6).
It's not that scientific notation affects the precision so much as the limitations of representing floating-point numbers in binary. See these answers in perlfaq4; this is a problem for any language that relies on the underlying architecture for number storage:
Why am I getting long decimals (eg, 19.9499999999999) instead of the numbers I should be getting (eg, 19.95)?
Why is int() broken?
If you need better number handling, check out the bignum pragma.
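The effect is not Perl-specific; the classic demonstration works in C (or almost any language that uses IEEE 754 binary doubles):
#include <stdio.h>
int main(void) {
    /* 0.1, 0.2 and 0.3 have no exact binary representation, so the
       rounded sum differs from the rounded literal by one ulp. */
    printf("%d\n", 0.1 + 0.2 == 0.3); /* prints 0 */
    printf("%.17g\n", 0.1 + 0.2);     /* prints 0.30000000000000004 */
    return 0;
}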