Why is round incorrect in CCL? - lisp

I have been learning Common Lisp for 2 months, and I've run into a puzzle. Here is the code:
CL-USER> (round 33.6)
34
-0.40000153
Can anyone explain it? Thanks.

I'm not sure I understand your problem. In Common Lisp, round rounds to the nearest integer (unless you specify a divisor). The nearest integer to 33.6 is 34, so that bit is right.
And since round returns both the quotient and remainder, it gives 34, with a remainder of -0.4. That bit's mostly right so I suspect you're wondering why it's only "mostly".
The reason it's not exactly -0.4 is almost certainly due to the limited precision of floating point numbers. The result of calculating the difference between a (seemingly exact) floating point number and an integer can be surprising:
CL-USER> (- 23.6 24)
-0.39999962
You'd expect in a perfect world for that to return -0.4 but it doesn't, for reasons mentioned above.
If you want to know why that's the case (i.e., how floating point works under the covers), you can check out this and this as examples.
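One way to see what is happening (a quick sketch, assuming the default single-float reader used by CCL and SBCL; the exact digits may vary by implementation) is to ask for the exact rational value of the float with the standard rational function:
CL-USER> (rational 33.6)          ; the exact value actually stored for 33.6
4404019/131072                    ; = 33.5999984741..., slightly below 33.6
CL-USER> (- (rational 33.6) 34)   ; the exact remainder that round reports
-52429/131072                     ; = -0.4000015258..., printed as -0.40000153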

Related

Define single precision variable in Racket

I have to run some computations in Racket, which I have never used before.
How do I force it to calculate something in single-precision (or half-precision, if it has those) floats?
I figured out how to make it compute in big floats:
(bf/ (bf 1) (bf 7))
I know that the abbreviation for floats (double precision) is fl. I cannot figure out the right abbreviation for single floats though.
The 'bigfloat' package you refer to is for arbitrary-precision floating point numbers. You're very unlikely to want these, as you point out.
It sounds like you're looking for standard IEEE 64-bit floating point numbers. Racket uses these by default, for all inexact arithmetic.
So, for instance:
(/ 1 pi)
produces
0.3183098861837907
One possible tripper-upper is that when dividing two rational numbers, the result will again be a rational number. So, for instance,
(/ 12347728 298340194)
produces
6173864/149170097
You can force inexact arithmetic either by using exact->inexact (which always works), or by ensuring that your literals contain a decimal point (unless you're using the HtDP languages).
So, for instance:
(/ 12347728.0 298340194.0)
produces
0.04138808061511148
Let me know if this doesn't answer your question....

When can I compare NetLogo values as integers?

Is there official documentation to resolve the apparent conflict between these two statements from the NetLogo 5.0.5 Programming Guide:
"A patch's coordinates are always integers" (from the Agents section)
"All numbers in NetLogo are stored internally as double precision floating point numbers" (from the Math section on the same page.)
Here's why I ask: if the integer patch coordinates are stored as floating point numbers that are very close to integer values then I should avoid comparisons for equality. For example, if there are really no integers, instead of
if pxcor = pycor...
I should use the usual tolerance-checking, like
if abs (pxcor - pycor) < 0.1 ...
Is there some official word that the more complicated code is unnecessary?
The Math section also seems to imply the absence of integer literals: "No distinction is made between 3 and 3.0". So is the official policy to avoid comparisons for equality with constants? For example, is there official sanction for writing code like
if pxcor = 3...
?
Are sliders defined somewhere to produce floating point values? If so, it seems invalid to compare slider values for equality, also. That is, if so, one should avoid writing code like
if pxcor = slider-value
even when the minimum, maximum, and increment values for the slider look like integers.
The focus on official sources in this question arises because I'm not just trying to write a working program. Rather, I'm seeking to tell students how they should program. I'd hate to mislead them, so thanks for any good advice.
NetLogo isn't the only language that works this way, with all numbers stored internally as double precision floating point. The best known other such language is JavaScript.
Math in NetLogo follows IEEE 754, so what follows isn't actually specific to NetLogo, but applies to IEEE 754 generally.
There's no contradiction in the User Manual because mathematically, some floating point numbers are integers, exactly. If the fractional part is exactly zero, then mathematically, it's an integer, and IEEE 754 guarantees that arithmetic and comparison operations will behave as you would expect. If you add 2 and 2 you'll always get 4, never 3.999... or 4.00...01. Integers in, integers out. That holds for comparison, addition, subtraction, multiplication, and divisions that divide evenly. (It may not hold for other operations, so e.g. log 1000 10 isn't exactly 3, and cos 90 isn't exactly 0.)
Therefore if pxcor = 3 is completely valid, correct code. pxcor never has a fractional part, and 3 doesn't have one, either, so no issue of floating point imprecision can arise.
As for NetLogo sliders, if the slider's min, max, and increment are all integers, then there's nothing to worry about; the value of the slider is also always an integer.
(Note: I am the lead developer of NetLogo, and I wrote the sections of the User Manual that you are quoting.)
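Since this is general IEEE 754 behaviour rather than anything NetLogo-specific, the same "integers in, integers out" property can be checked from any language that uses double-precision floats. Here is a small Common Lisp sketch (not NetLogo code); the caveat about the 53-bit limit is an extra detail, not something from the answer above:
; Integer-valued doubles compare, add, subtract, and multiply exactly,
; as long as the values stay within the 53-bit significand (below 2^53).
(= (+ 2d0 2d0) 4)      ; => T
(= (* 3d0 7d0) 21)     ; => T
(= (- 10d0 3d0) 7)     ; => T
; Other operations are a different story, as noted above:
(cos (/ pi 2))         ; => about 6.1e-17, close to 0 but not exactly 0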
Just to stress what Seth writes:
Integers in, integers out. That holds for comparison, addition,
subtraction, multiplication, and divisions that divide evenly (emphasis added).
Here's a classic instance of floating point imprecision:
observer> show (2 + 1) / 10
observer: 0.3
observer> show 2 / 10 + 1 / 10
observer: 0.30000000000000004
For nice links that explain why, check out http://0.30000000000000004.com/

iOS - rounding a float with roundf() is not working properly

I am having an issue with rounding a float in an iPhone application.
float f = 4.845;
float s = roundf(f * 100.0) / 100;
NSLog(@"Output-1: %.2f", s);
s = roundf(484.5) / 100;
NSLog(@"Output-2: %.2f", s);
Output-1: 4.84
Output-2: 4.85
Let me know what the problem is here and how to solve it.
The problem is that you don't yet realise one of the inherent problems with floating point: the fact that most numbers cannot be represented exactly (a).
This means that 4.845 is likely to be, in reality, something like 4.8449999999999 which, when you round it, gives you 4.84 rather than what you expect, 4.85.
And what value you end up with also depends on how you calculate it, which is why you're getting a different result.
And, of course, no floating point "inaccuracy" answer would be complete on SO without the authoritative What Every Computer Scientist Should Know About Floating-Point Arithmetic.
(a) Only sums of exact powers of two, with exponents close enough together to fit in the mantissa, can be represented exactly in IEEE 754. So, for example, 484.5 is
256 + 128 + 64 + 32 + 4 + 0.5 (2^8 + 2^7 + 2^6 + 2^5 + 2^2 + 2^-1).
See this answer for a more detailed look into the IEEE754 format.
As to solving it, you have a few choices. One is to use double instead of float. That gives you more precision and a greater range of numbers, but only moves the problem further away rather than really solving it. Since 0.1 is a repeating fraction in binary (the base IEEE 754 uses), no number of bits (short of infinity) can represent it exactly.
Another choice is to use a custom library like a big decimal type, which can represent decimals of arbitrary precision (that's not infinite precision as some people are wont to suggest, since it's limited by memory). This will reduce the errors caused by the binary/decimal mismatch.
You may also want to look into NSDecimalNumber - this doesn't give you arbitrary precision but it does give a large range with accurate decimal representation.
There'll still be numbers you can't represent, like PI or the square root of 2 or any other irrational number, but it should cover most cases. If you really need to handle those other values, you need to switch to symbolic numeric representations.
Unlike 484.5, which can be represented exactly as a float*, 4.845 is represented as 4.8449998 (see this calculator if you wish to try other numbers). Multiplying by one hundred gives 484.49998, which correctly rounds to 484, not 485.
* An exact representation is possible because its fractional part 0.5 is a power of two (i.e. 2^-1).
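You can reproduce this in any language that uses IEEE 754 floats. For instance, a quick Common Lisp sketch (assuming default single-floats; the printed digits may differ slightly in other implementations):
CL-USER> (rational 4.845)     ; the exact value stored for the literal 4.845
10160701/2097152              ; = 4.8449998..., just below 4.845
CL-USER> (rational 484.5)     ; 484.5 itself is exactly representable
969/2
CL-USER> (* 4.845 100.0)      ; the product lands just below 484.5 ...
484.49997                     ; ... so rounding to nearest gives 484, hence 4.84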

(= 1/5 0.2) is false?

This happens in both Common Lisp (CLISP and SBCL) and Scheme (Guile). While these are true:
(= 1/2 0.5)
(= 1/4 0.25)
This turns out to be false:
(= 1/5 0.2)
I checked the HyperSpec; it says that = should check for mathematical equivalence regardless of the types of the arguments. What the heck is going on?
The problem is that 0.2 really is not equal to 1/5. Floating point numbers cannot represent 0.2 exactly, so the literal 0.2 is actually rounded to the nearest representable floating point number (0.200000001 or something like that). After this rounding occurs, the computer has no way of knowing that your number was originally 0.2 and not another nearby non-representable number (such as 0.20000000002).
As for why 1/2 and 1/4 work: it's because floating point is a base-2 encoding, and powers of two can be represented exactly.
Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
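You can see the rounded literal directly by converting it back to an exact rational (a small Common Lisp sketch, assuming the default single-float reader; the denominator will differ if your literals are read as doubles):
CL-USER> (rational 0.5)        ; 0.5 is a power of two, so it is stored exactly
1/2
CL-USER> (rational 0.2)        ; 0.2 is not; this is what actually got stored
13421773/67108864              ; = 0.20000000298..., not 1/5
CL-USER> (= 1/5 (rational 0.2))
NIL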
This actually depends on what is coerced to what. If you think about it, a rational is more precise, so it makes sense to coerce to rational for comparison rather than to float, and that is in fact what = does in Common Lisp. However, if you consciously want to compare the numbers as floats, you can force that by doing something like the following:
(declaim (inline float=))
(defun float= (a b)
  (= (coerce a 'float) (coerce b 'float)))
(float= 0.2 1/5) ; T
Actually... there's more to it, since floats also give you things like not-a-number and positive and negative infinity. The largest finite 64-bit float is only about 1.8 × 10^308, so there's nothing stopping you from creating a rational bigger than any finite float (or smaller than any finite negative float!); if you want to be really precise, you'd need to consider those cases too. Likewise, a comparison with not-a-number must always give you nil...
However, in Scheme you have exact numbers, so you may ask (note the #e prefix, which means the literal that follows is read as an exact number):
> (= 1/5 #e0.2)
#t

iPhone and floating point math

I have the following code:
float totalSpent;
int intBudget;
float moneyLeft;
totalSpent += Amount;
moneyLeft = intBudget - totalSpent;
And this is how it looks in debugger: http://www.braginski.com/math.tiff
Why is moneyLeft, as calculated by the code above, off by .02 compared to the same expression evaluated in the debugger?
The expression window is correct, yet the code above produces a result that is wrong by .02. It only happens for very large numbers (yet still way below the int limit).
Thanks
A single-precision float has 24 bits of precision: a 23-bit stored mantissa plus an implicit leading 1. That means that every calculation is rounded to 24 significant binary digits. So if you have a computation that, say, adds a very small number to a very large number, rounding may result in strange results.
Imagine that you are doing decimal math in scientific notation by hand, under the rule that you may only keep four significant figures. Let's say I ask you to write twelve in scientific notation, with four significant figures. Remembering junior high school, you write:
1.200 × 10^1
Now I say compute the square of 12, and then add 0.5. That is easy enough:
1.440 × 10^2 + 0.005 × 10^2 = 1.445 × 10^2
How about twelve cubed plus 0.75:
1.728 × 10^3 + 0.00075 × 10^3 = 1.72875 × 10^3
But remember, I only gave you room for four significant digits, so you must round; then we get:
1.728 × 10^3 + 7.5 × 10^-1 = 1.729 × 10^3
See? The lack of precision can make the computation come out with unexpected results.
In your example, you've got 999999 in a calculation where you're trying to be precise to 0.01. log2(999999) ≈ 19.93 and log2(0.01) ≈ -6.64; the difference is more than 24, so you would need more than 24 significant binary digits to perform this calculation exactly.
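A quick Common Lisp sketch of the same effect (single floats there follow the same IEEE 754 rules as C's float on iOS; exact printed values may vary by implementation):
; Adding a small number to a large one silently loses the small part:
CL-USER> (= 16777216.0 (+ 16777216.0 1.0))   ; 16777216 = 2^24
T
; The case from the question: near 999999, adjacent single floats are
; already 0.0625 apart, far coarser than the 0.01 you are aiming for:
CL-USER> (+ 999999.0 0.01)
999999.0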
Because floating point mathematics rounds off precision by its very nature, it is usually a bad choice for currency computation, where you must be accurate to the last cent. But are you really concerned with fractions of a cent in your application? If not, then why not do away with the decimal point altogether and simply store cents (instead of dollars) in a 64-bit integer? 2^64 cents is more than the GDP of the entire planet.
Floating point will always produce strange results with money-type calculations.
The golden rule is that floating point is good for things you measure (litres, yards, light-years, bushels, etc.) but not for things you count (sheep, beans, buttons, etc.).
Most money calculations are about counting pennies, so use integer math and you won't get the strange results. Either use a fixed-decimal arithmetic library (which would probably be overkill on an iPhone) or store your amounts as whole numbers of cents and only convert to dollars and cents on display.
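For illustration, a minimal Common Lisp sketch of that cents-as-integers approach (the variable names are made up for this example; the same idea works with a 64-bit integer type in Objective-C):
; Keep every amount as a whole number of cents; integers add and subtract exactly.
(let* ((budget-cents 100000000)                  ; $1,000,000.00
       (spent-cents   99999998)                  ; $  999,999.98
       (left-cents   (- budget-cents spent-cents)))
  (multiple-value-bind (dollars cents) (floor left-cents 100)
    (format t "$~D.~2,'0D left~%" dollars cents)))  ; prints "$0.02 left"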