As any secondary math student can attest, pi is irrational.
And yet:
Welcome to Racket v5.3.6.
> pi
3.141592653589793
> (rational? pi)
#t
Is this because the representation of pi, in the underlying machine's floating-point format, is of limited precision and can therefore always be expressed as some p/q, where q is 10^n and n is the representational precision?
If so, how could any number thrown about by Racket (or any similarly behaving Scheme) ever be considered anything but rational? And hence, why bother with the rational? function?
UPDATE: Even (rational? (sqrt 3)) reports #t
The number returned by pi is rational because the documentation says so. Specifically it says:
All numbers are complex numbers. Some of them are real numbers, and all of the real numbers that can be represented are also rational numbers, except for +inf.0 (positive infinity), +inf.f (single-precision variant), -inf.0 (negative infinity), -inf.f (single-precision variant), +nan.0 (not-a-number), and +nan.f (single-precision variant). Among the rational numbers, some are integers, because round applied to the number produces the same number.
So your hunch is right. All representable real numbers are indeed rational (except for the infinities and NaNs) because, yes, numbers are stored in fixed-size registers so the machine isn't going to store an irrational number.
As to why the Racket designers bothered with a rational? function, that is a good question. Many languages, like Julia and Clojure, have a dedicated rational datatype, and Racket in fact has one too: exact rationals such as 1/3, which the exact? predicate picks out. rational?, by contrast, also answers #t for every finite float, so, as you suspect, it does seem silly to define a near-complete subset of the reals as rationals.
But you know, it just may be convenient to have a way to talk about a non-NaN, non-Infinity value. I would have called it finite, but Racket calls it rational.
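A quick REPL session makes the boundary concrete (Racket shown; printed output may vary slightly by version):
> (rational? 3.141592653589793)
#t
> (rational? +inf.0)
#f
> (rational? +nan.0)
#f
> (rational? 1/3)
#t
> (exact? 1/3)
#t
> (exact? pi)
#f
If what you actually want to test is "is this one of Racket's exact fractions rather than a float?", the predicate for that is exact?, not rational?.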
I'm running the following code in Emacs Lisp Interaction mode:
(defun square (x) (* x x))
(square (square (square 1001)))
which gives me 1114476179152563777. However, ((1001^2)^2)^2 is actually 1008028056070056028008001.
How is this possible?
Barmar's answer is accurate for Emacs versions < 27.
In Emacs 27 bignum support has been added. NEWS says:
** Emacs Lisp integers can now be of arbitrary size.
Emacs uses the GNU Multiple Precision (GMP) library to support
integers whose size is too large to support natively. The integers
supported natively are known as "fixnums", while the larger ones are
"bignums". The new predicates 'bignump' and 'fixnump' can be used to
distinguish between these two types of integers.
All the arithmetic, comparison, and logical (a.k.a. "bitwise")
operations where bignums make sense now support both fixnums and
bignums. However, note that unlike fixnums, bignums will not compare
equal with 'eq', you must use 'eql' instead. (Numerical comparison
with '=' works on both, of course.)
Since large bignums consume a lot of memory, Emacs limits the size of
the largest bignum a Lisp program is allowed to create. The
nonnegative value of the new variable 'integer-width' specifies the
maximum number of bits allowed in a bignum. Emacs signals an integer
overflow error if this limit is exceeded.
Several primitive functions formerly returned floats or lists of
integers to represent integers that did not fit into fixnums. These
functions now simply return integers instead. Affected functions
include functions like 'encode-char' that compute code-points, functions
like 'file-attributes' that compute file sizes and other attributes,
functions like 'process-id' that compute process IDs, and functions like
'user-uid' and 'group-gid' that compute user and group IDs.
and indeed using my 27.0.50 build:
(defun square (x) (* x x))
square
(square (square (square 1001)))
1008028056070056028008001
Emacs Lisp doesn't implement bignums; it uses the machine's integer type. The range of integers it supports is between most-negative-fixnum and most-positive-fixnum. On a 64-bit system, most-positive-fixnum will be 2^61-1, which has 19 decimal digits.
See Integer Basics in the Elisp manual.
The correct result of your calculation has 25 digits, which is much larger than that. The calculation overflows and wraps around. It should be correct modulo 2^62.
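You can check the wrap-around with any bignum-capable system; for instance, in Racket:
> (modulo 1008028056070056028008001 (expt 2 62))
1114476179152563777
which is exactly the value the overflowing build returns.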
You could use floating point instead. It has a much larger range, although very large numbers lose precision.
(square (square (square 1001.0)))
1.008028056070056e+24
I have to run some computations in Racket that I have never used before.
How do I force it to calculate something in single-precision (or half-precision, if it has those) floats?
I figured out how to make it compute in big floats:
(bf/ (bf 1) (bf 7))
I know that the abbreviation for floats (double precision) is fl. I cannot figure out the right abbreviation for single floats though.
The 'bigfloat' package you refer to is for arbitrary-precision floating-point numbers. You're very unlikely to want these, as you point out.
It sounds like you're looking for standard IEEE 64-bit floating point numbers. Racket uses these by default, for all inexact arithmetic.
So, for instance:
(/ 1 pi)
produces
0.3183098861837907
One possible tripper-upper is that when you divide two exact integers, the result is again an exact rational. So, for instance,
(/ 12347728 298340194)
produces
6173864/149170097
You can force inexact arithmetic either by using exact->inexact (always works), or by ensuring that your literals contain a decimal point (unless you're using the htdp languages).
So, for instance:
(/ 12347728.0 298340194.0)
produces
0.04138808061511148
Let me know if this doesn't answer your question....
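P.S., on the single-precision part of your question: some Racket builds also support single-flonums (there is no half-precision type that I'm aware of). Assuming your build has them, real->single-flonum converts a number, and literals take an f exponent marker; the digits printed below are what I'd expect, though they may vary by version:
> (real->single-flonum 1/7)
0.14285715f0
> (* 2.5f0 2.0f0)
5.0f0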
This happens in both Common Lisp (CLISP and SBCL) and Scheme (Guile). While these are true:
(= 1/2 0.5)
(= 1/4 0.25)
This turns out to be false:
(= 1/5 0.2)
I checked the HyperSpec; it says that = should check for mathematical equivalence regardless of the types of the arguments. What the heck is going on?
The problem is that 0.2 really is not equal to 1/5. Floating-point numbers cannot represent 0.2 exactly, so the literal 0.2 is actually rounded to the nearest representable floating-point number (0.20000000298... for a single-float). After this rounding occurs, the computer has no way of knowing that your number was originally 0.2 and not another nearby non-representable number (such as 0.20000000002).
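Scheme can unmask the rounded value directly. In Racket, for example (Guile's inexact->exact behaves the same way), the double closest to 0.2 is:
> (inexact->exact 0.2)
3602879701896397/18014398509481984
> (= 1/5 (inexact->exact 0.2))
#f
In Common Lisp, (rational 0.2) does the same job, yielding 13421773/67108864 for the default single-float.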
As for why 1/2 and 1/4 work: it's because floating point is a base-2 encoding, which can exactly represent (negative) powers of two and their sums.
Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This actually depends on what is coerced to what. If you think about it, rational is the more precise type, so it makes sense to coerce to rational for comparison rather than to float. However, if you consciously want to compare the numbers as floats, you can force that by doing something like the following:
(declaim (inline float=))
(defun float= (a b)
  (= (coerce a 'float) (coerce b 'float)))
(float= 0.2 1/5) ; T
Actually... there's more to it, since floats provide you with things like not-a-number, positive-infinity and negative-infinity. The largest finite 64-bit float is only about 1.8*10^308, so there's nothing stopping you from creating a rational larger than every finite float (or smaller than every negative one!); if you want to be super precise, you'd need to consider those cases too. Likewise, a comparison to not-a-number must always give you NIL...
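For instance, in Racket it's easy to conjure an exact rational beyond the largest finite double:
> (> (expt 10 400) 1e308)
#t
> (exact->inexact (expt 10 400))
+inf.0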
However, in Scheme you have exact numbers, so you may ask (note the #e prefix, which means the number that follows is to be read exactly):
> (= 1/5 #e0.2)
#t
We can write a simple Rational Number class using two integers representing A/B with B != 0.
If we want to represent an irrational number class (storing and computing), the first thing that comes to my mind is to use floating point, which means the IEEE 754 standard (binary fractions), because an irrational number must be approximated.
Is there another way to write an irrational number class, other than using binary fractions (whether it conserves memory or not)?
I studied jsbeuno's solution using Python: Irrational number representation in any programming language?
He's still using the built-in floating point type for storage.
This is not homework.
Thank you for your time.
By a cardinality argument, there are many more irrational numbers than rational ones (and the number of IEEE 754 floating-point numbers is finite: at most 2^64 for 64-bit doubles).
You can represent numbers with something other than fractions (e.g. logarithmically).
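As a toy illustration of the logarithmic idea (the helper names are my own invention, not a library): store ln(x) instead of x, so that multiplication becomes addition and magnitudes far beyond double range stay representable until you convert back. In Racket:
; store a positive real by its natural log; products become sums
(define (num->log x) (log x))
(define (log-mul a b) (+ a b))
(define (log->num a) (exp a))
(num->log 1e200)                             ; => about 460.517
(log-mul (num->log 1e200) (num->log 1e200))  ; => about 921.034, representing 1e400
(log->num 921.034)                           ; => +inf.0, since 1e400 overflows a double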
jsbeuno is storing the number as a base and a radix and using those when doing calculations with other irrational numbers; he's only using the float representation for output.
If you want to get fancier, you can define the base and the radix as rational numbers (with two integers) as described above, or make them themselves irrational numbers.
To make something thoroughly useful, though, you'll end up replicating a symbolic math package.
You can always use symbolic math, where items are stored exactly as they are and calculations are deferred until they can be performed with precision above some threshold.
For example, say you performed two operations on a non-irrational number like 2, one to take the square root and then one to square that. With limited precision, you may get something like:
(√2)²
= 1.414213562²
= 1.999999999
However, storing symbolic math would allow you to store the result of √2 as √2 rather than an approximation of it, then realise that (√x)² is equivalent to x, removing the possibility of error.
Now that obviously involves a more complicated encoding than simple IEEE 754, but it's not impossible to achieve.
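Here is a minimal sketch of the symbolic idea in Racket (toy constructors of my own devising, not a real CAS):
; represent sqrt(x) as a tagged value instead of a rounded float
(define (make-sqrt x) (list 'sqrt x))
(define (sym-square v)
  (if (and (pair? v) (eq? (car v) 'sqrt))
      (cadr v)       ; (sqrt x)^2 simplifies exactly to x
      (* v v)))      ; ordinary numbers are squared numerically
(sym-square (make-sqrt 2))  ; => 2, no rounding error
(sym-square 1.414213562)    ; => 1.9999999989..., the float route
A real implementation would need many more forms and rewrite rules, which is exactly how you end up replicating a symbolic math package.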
I compute this simple sum on Matlab:
2*0.04-0.5*0.4^2 = -1.387778780781446e-017
but the result is not zero. What can I do?
Aabaz and Jim Clay have good explanations of what's going on.
It's often the case that, rather than exactly calculating the value of 2*0.04 - 0.5*0.4^2, what you really want is to check whether 2*0.04 and 0.5*0.4^2 differ by an amount that is small enough to be within the relevant numerical precision. If that's the case, then rather than checking whether 2*0.04 - 0.5*0.4^2 == 0, you can check whether abs(2*0.04 - 0.5*0.4^2) < thresh. Here thresh can either be some arbitrary smallish number, or an expression involving eps, which gives the precision of the numerical type you're working with.
EDIT:
Thanks to Jim and Tal for suggested improvement. Altered to compare the absolute value of the difference to a threshold, rather than the difference.
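The same doubles misbehave identically outside Matlab; here is the tolerance check transcribed into Racket (the threshold 1e-12 is an arbitrary choice):
(- (* 2 0.04) (* 0.5 (* 0.4 0.4)))                   ; => about -1.39e-17, not 0.0
(< (abs (- (* 2 0.04) (* 0.5 (* 0.4 0.4)))) 1e-12)   ; => #t, zero to within thresh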
Matlab uses double-precision floating-point numbers to store real numbers. These are numbers of the form m*2^e where m is an integer between 2^52 and 2^53 (the mantissa) and e is the exponent. Let's call a number a floating-point number if it is of this form.
All numbers used in calculations must be floating-point numbers. Often, this can be done exactly, as with 2 and 0.5 in your expression. But for other numbers, most notably most numbers with digits after the decimal point, this is not possible, and an approximation has to be used. What happens in this case is that the number is rounded to the nearest floating-point number.
So, whenever you write something like 0.04 in Matlab, you're really saying "Get me the floating-point number that is closest to 0.04." In your expression, there are two numbers that need to be approximated: 0.04 and 0.4.
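You can unmask those two approximations with any tool that prints exact values; for example in Racket, which uses the same IEEE 754 doubles:
> (inexact->exact 0.04)
5764607523034235/144115188075855872
> (inexact->exact 0.4)
3602879701896397/9007199254740992
Neither is exactly the decimal you typed, hence the leftover 1e-17.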
In addition, the exact result of operations like addition and multiplication on floating-point numbers may not be a floating-point number. Although it is always of the form m*2^e, the mantissa may be too large. So you get an additional error from rounding the results of operations.
At the end of the day, a simple expression like yours will be off by about 2^-52 times the size of the operands, or about 10^-17.
In summary: the reason your expression does not evaluate to zero is two-fold:
Some of the numbers you start out with are different (approximations) to the exact numbers you provided.
The intermediate results may also be approximations of the exact results.
What you are seeing is quantization error. Matlab uses doubles to represent numbers, and while they are capable of a lot of precision, they still cannot represent all real numbers because there are an infinite number of real numbers. I'm not sure about Aabaz's trick, but in general I would say there isn't anything you can do, other than perhaps massaging your inputs to be double-friendly numbers.
I do not know if it is applicable to your problem, but often the simplest solution is to scale your data.
For example:
a=0.04;
b=0.2;
a-0.2*b
ans=-6.9389e-018
c=a/min(abs([a b]));
d=b/min(abs([a b]));
c-0.2*d
ans=0
EDIT: Of course I did not mean to give a universal solution to these kinds of problems, but it is still a good practice that can help you avoid a few problems in numerical computation (curve fitting, etc.). See Jim Clay's answer for the reason why you are experiencing these problems.
I'm pretty sure this is a case of ye olde floating point accuracy issues.
Do you need 1e-17 accuracy? Is this merely a case of wanting 'pretty' output?
In that case, you can just use a formatted sprintf to display the number of significant digits you want.
Realize that this is not a Matlab problem, but a fundamental limitation of how numbers are represented in binary.
For fun, work out what .1 is in binary...
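(Spoiler: 0.1 comes out as the infinitely repeating binary fraction 0.000110011001100..., so no finite number of bits can store it exactly.)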
Some references:
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
http://www.mathworks.com/support/tech-notes/1100/1108.html