Is there official documentation to resolve the apparent conflict between these two statements from the NetLogo 5.0.5 Programming Guide:
"A patch's coordinates are always integers" (from the Agents section)
"All numbers in NetLogo are stored internally as double precision floating point numbers" (from the Math section on the same page.)
Here's why I ask: if the integer patch coordinates are stored as floating point numbers that are very close to integer values then I should avoid comparisons for equality. For example, if there are really no integers, instead of
if pxcor = pycor...
I should use the usual tolerance-checking, like
if abs (pxcor - pycor) < 0.1 ...
Is there some official word that the more complicated code is unnecessary?
The Math section also seems to imply the absence of integer literals: "No distinction is made between 3 and 3.0". So is the official policy to avoid comparisons for equality with constants? For example, is there official sanction for writing code like
if pxcor = 3...
?
Are sliders defined somewhere to produce floating point values? If so, it seems invalid to compare slider values for equality, also. That is, if so, one should avoid writing code like
if pxcor = slider-value
even when the minimum, maximum, and increment values for the slider look like integers.
The focus on official sources in this question arises because I'm not just trying to write a working program. Rather, I'm seeking to tell students how they should program. I'd hate to mislead them, so thanks for any good advice.
NetLogo isn't the only language that works this way, with all numbers stored internally as double precision floating point. The best-known other such language is JavaScript.
Math in NetLogo follows IEEE 754, so what follows isn't actually specific to NetLogo, but applies to IEEE 754 generally.
There's no contradiction in the User Manual because mathematically, some floating point numbers are integers, exactly. If the fractional part is exactly zero, then mathematically, it's an integer, and IEEE 754 guarantees that arithmetic and comparison operations will behave as you would expect. If you add 2 and 2 you'll always get 4, never 3.999... or 4.00...01. Integers in, integers out. That holds for comparison, addition, subtraction, multiplication, and divisions that divide evenly. (It may not hold for other operations, so e.g. log 1000 10 isn't exactly 3, and cos 90 isn't exactly 0.)
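Since the guarantee comes from IEEE 754 rather than from NetLogo itself, you can sketch it in any IEEE 754 language. Here's a quick illustration in Swift (the exact digits of the inexact results may vary with the platform's math library):

import Foundation  // for log and cos

// Integer-valued doubles are exact under ==, +, -, *,
// and division that comes out even.
let a: Double = 2, b: Double = 2
print(a + b == 4)                   // true, always: integers in, integers out
print(12.0 / 4.0 == 3)              // true: this division divides evenly

// Transcendental operations need not return exact integers:
print(log(1000.0) / log(10.0))      // 2.9999999999999996, not exactly 3
print(cos(90.0 * Double.pi / 180))  // about 6.1e-17, not exactly 0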
Therefore if pxcor = 3 is completely valid, correct code. pxcor never has a fractional part, and 3 doesn't have one, either, so no issue of floating point imprecision can arise.
As for NetLogo sliders, if the slider's min, max, and increment are all integers, then there's nothing to worry about; the value of the slider is also always an integer.
(Note: I am the lead developer of NetLogo, and I wrote the sections of the User Manual that you are quoting.)
Just to stress what Seth writes:
Integers in, integers out. That holds for comparison, addition, subtraction, multiplication, and divisions that divide evenly (emphasis added).
Here's a classic instance of floating point imprecision:
observer> show (2 + 1) / 10
observer: 0.3
observer> show 2 / 10 + 1 / 10
observer: 0.30000000000000004
For nice links that explain why, check out http://0.30000000000000004.com/
I am writing a program in Swift that takes the multiplicative inverse of random bytes. Sometimes, the byte is 0, and when the multiplicative inverse is taken, it results in inf.
The multiplicative inverse is being determined using
powf(Float(byte), -1.0)
byte is of type UInt8. If byte is equal to 0, the result is inf as mentioned earlier. How would the multiplicative inverse of 0 be infinity? Wouldn't the multiplicative inverse also be 0 since 0/0's multiplicative inverse is 0/0?
Short answer: By definition. In Swift (and many other languages), floating point numbers are backed by the IEEE-754 definition of floats, which is implemented directly by the underlying hardware in most cases and is thus quite fast. And according to that standard, division by 0 for floats is defined to be Infinity, and Swift is merely returning that result back to you. (To be precise: 0/0 is defined to be NaN, any positive number divided by 0 is defined to be Infinity, and any negative number divided by 0 is defined to be -Infinity.)
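A quick Swift check of those definitions, reproducing the asker's powf call (which comes in via Foundation on Apple platforms):

import Foundation  // for powf

let byte: UInt8 = 0
print(powf(Float(byte), -1.0))   // inf: 0.0 raised to -1 is a division by zero

// The three division-by-zero cases, spelled out:
let zero: Float = 0
print(1.0 / zero)                // inf
print(-1.0 / zero)               // -inf
print(zero / zero)               // nan
print(1.0 / zero == .infinity)   // true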
An interesting question to ask might be "why?" Why does IEEE-754 define division by 0 to be Infinity for floats, when one could reasonably expect the machine to throw an error, or define it as NaN (not-a-number), or perhaps even 0? For an analysis of this, you should really read the notes on the matter by Kahan, the designer of the semantics behind IEEE-754. Starting on page 10 of the linked document, he discusses why the choice of Infinity is preferable for division by zero; the reasoning essentially boils down to the efficient implementation of numerical algorithms, since this convention allows iterative numerical methods to skip expensive tests. Read from there through the examples he discusses, which end at the top of page 14.
To sum up: Floating point division by 0 is defined to be Infinity by the IEEE-754 standard, and there are good reasons for making this choice. Of course, one can imagine different systems adopting a different answer as well, depending on their particular need or application area; but then they wouldn't be IEEE-754 compliant.
Plugging in 0 just means it is 0 divided by some positive number. Then, the multiplicative inverse will be dividing by 0. As you probably know, this is undefined in mathematics, but in Swift, it tries to calculate it. Essentially, it keeps subtracting 0 from the number, but never gets a result, so it will output infinity.
Edit: As Alias pointed out, Swift is not actually going through that process of continually subtracting 0. It will just return infinity anytime it is supposed to divide by 0.
I have a task to use the MARIE Simulator to calculate the area of a circle given its radius.
I know that in the MARIE language there is no multiplication operator, so we multiply by adding a number several times; if I wanted to multiply 2*3 I could write it as 3+3 or 2+2+2.
But the area of a circle involves pi, which is 3.14, and I can't imagine how to get that, so can anyone give me the algorithm or code for it?
Thanks in advance.
MARIE does not have floating point support.
So, you should refer to your course work or ask your instructors what to do, as it is not obvious.
It is, of course, possible to do floating point in software, but the complexity is extraordinary, so that's unlikely to be what they're looking for.
You could use fixed point arithmetic, fractions, or decimal.
Here's one solution that might be appropriate: multiply one of the numbers (the one having decimal places) by some fixed constant factor, do the arithmetic, then interpret the answers accordingly. For example, let's use 100 as the factor, so 3.14 is represented by 314. Let's say r is 9: we can square that (9x9=81), then multiply 81 x 314 = 25434. Now we know that value is 100x too large, so the real answer is 254.34. (You can choose to ignore the .34, or round it and then ignore it. 254 is still more accurate than the 243 we would get from 9x9x3.)
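The scheme is easy to test in a higher-level language before committing it to MARIE assembly. Here's a sketch of the same scaled-integer arithmetic in Swift (the names are illustrative; in MARIE each multiply would itself be a repeated-addition loop):

// Represent 3.14 as 314, i.e. scaled by 100, and work in plain integers.
let piTimes100 = 314
let r = 9

let scaledArea = r * r * piTimes100  // 81 * 314 = 25434, which is 100x too large
print(scaledArea / 100)              // 254: the integer part of the area
print(scaledArea % 100)              // 34: the ".34" (true area is ~254.47)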
Fixed point multiplies all numbers by the constant (usually a power of 2, so that the binary point is in the same bit position). Additions are relatively straightforward, but multiplications need to interpret results by factoring in (or out) that both sources are scaled, meaning the answer is doubly scaled.
If you need to measure the radius with decimal digits too, e.g. 9.5, then you could scale both 9.5 and 3.14 by 100. Then we need 950x950, multiplied by 314. The answer will be 100x100x100 too large, i.e. 1000000x too large. With this approach, the 16 bits that MARIE offers will overflow, so you would need at least 32-bit arithmetic (not trivial on a 16-bit machine).
You can also use two different scaling factors, e.g. 9.5 as 95 and 3.14 as 314. Take 95x95x314, which is 10000x too large, and interpret the answer accordingly. Still, this will overflow MARIE's 16 bits.
Fractions would maintain both a numerator and a denominator for every number. So, 3.14 could be 314/100 and 9.5 could be 95/10 (simplified: 157/50 and 19/2). To add, you have to find a common denominator, convert, then sum the numerators. To multiply, you multiply both numerators and denominators: numerator = 19x19x157, denominator = 2x2x50. That just fits in 16-bit unsigned arithmetic, but still overflows 16-bit signed arithmetic.
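A sketch in Swift of that fraction bookkeeping (illustrative names; again, MARIE would do each multiply as repeated addition):

// r = 9.5 as 19/2, pi = 3.14 as 157/50; area = r * r * pi.
let (rNum, rDen) = (19, 2)
let (piNum, piDen) = (157, 50)

let num = rNum * rNum * piNum     // 19 * 19 * 157 = 56677, fits in 16-bit unsigned
let den = rDen * rDen * piDen     // 2 * 2 * 50 = 200
print(Double(num) / Double(den))  // 283.385, the area for r = 9.5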
And finally, binary coded decimal is more like a string format, where numbers are stored one decimal digit per byte, or per nibble (packed decimal). Algorithms for addition and subtraction need to account for variable-length inputs.
Big integer forms use an approach similar to binary coded decimal, but compose numbers out of much larger elements than single decimal digits.
All of these approaches require some thought, and the more limitations you want to remove, the more work required. So, I'd suggest to go back to your course to find what they really want.
I am having an issue with rounding a float in an iPhone application.
float f = 4.845;
float s = roundf(f * 100.0) / 100;
NSLog(@"Output-1: %.2f", s);
s = roundf(484.5) / 100;
NSLog(@"Output-2: %.2f", s);
Output-1: 4.84
Output-2: 4.85
Let me know what the problem is here and how to solve it.
The problem is that you don't yet realise one of the inherent problems with floating point: the fact that most numbers cannot be represented exactly (a).
This means that 4.845 is likely to be, in reality, something like 4.8449999999999 which, when you round it, gives you 4.84 rather than what you expect, 4.85.
And what value you end up with also depends on how you calculate it, which is why you're getting a different result.
And, of course, no floating point "inaccuracy" answer would be complete on SO without the authoritative What Every Computer Scientist Should Know About Floating-Point Arithmetic.
(a) Only sums of exact powers of two, within a certain range, can be exactly represented in IEEE754. So, for example, 484.5 is
256 + 128 + 64 + 32 + 4 + 0.5 (2^8 + 2^7 + 2^6 + 2^5 + 2^2 + 2^-1).
See this answer for a more detailed look into the IEEE754 format.
As to solving it, you have a few choices. One is to use double instead of float. That gives you more precision and a greater range of numbers, but only moves the problem further away rather than really solving it. Since 0.1 is a repeating fraction in binary, no number of bits (short of infinity) can represent it exactly in IEEE754.
Another choice is to use a custom library like a big decimal type, which can represent decimals of arbitrary precision (that's not infinite precision as some people are wont to suggest, since it's limited by memory). This will reduce the errors caused by the binary/decimal mismatch.
You may also want to look into NSDecimalNumber - this doesn't give you arbitrary precision but it does give a large range with accurate decimal representation.
There'll still be numbers you can't represent, like PI or the square root of 2 or any other irrational number, but it should cover most cases. If you really need to handle those other values, you need to switch to symbolic numeric representations.
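For instance, in Swift the decimal idea surfaces as Decimal (the value type bridged to NSDecimalNumber); since it stores base-10 digits, 4.845 really is 4.845. A minimal sketch:

import Foundation

var value = Decimal(string: "4.845")!        // exact: no binary fraction involved
var rounded = Decimal()
NSDecimalRound(&rounded, &value, 2, .plain)  // round to 2 decimal places
print(rounded)                               // 4.85, matching decimal intuition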
Unlike 484.5, which can be represented exactly as a float*, 4.845 is represented as 4.8449998 (see this calculator if you wish to try other numbers). Multiplying by one hundred keeps the number just below 484.5 (about 484.49997), which correctly rounds down to 484.
* An exact representation is possible because its fractional part 0.5 is a power of two (i.e. 2^-1).
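You can watch this happen by printing extra digits; a quick Swift illustration (Float here is the same 32-bit format as the float in the question):

import Foundation  // for String(format:) and roundf

let f: Float = 4.845
print(String(format: "%.7f", f))        // 4.8449998: already below 4.845
print(String(format: "%.5f", f * 100))  // 484.49997: still below 484.5
print(roundf(f * 100) / 100)            // 4.84, not 4.85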
I have a quick question. Say I have a really big number, up to 15 digits, and I assign the input to two variables, one float and one double. If I were to compare the two numbers, how would I compare them? I think double has precision up to about 15 digits, and float up to about 8. So do I simply compare them while the float only contains 8 digits and pad the rest, or do I have the float print out all 15 digits and then make the comparison? Also, if I were asked to print out the float number, is the standard way to print it up to 8 digits, which is its maximum precision?
thanks
Most languages will do some form of type promotion to let you compare types that are not identical, but reasonably similar. For details, you would have to indicate what language you are referring to.
Of course, the real problem with comparing floating point numbers is that the results might be unexpected due to rounding errors. Most mathematical equivalences don't hold for floating point arithmetic, so two sequences of operations which SHOULD yield the same value might actually yield slightly different values (or even very different values if you aren't careful).
EDIT: as for printing, the "standard way" is based on what you need. If, for some reason, you are doing monetary computations in floating point, chances are that you'll only want to print 2 decimal digits.
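As for the comparison problem itself, a common workaround is to compare against a tolerance instead of using == directly. A sketch in Swift (nearlyEqual is an illustrative helper, and the right tolerance depends on your application):

// Treat two doubles as equal when they differ by less than a tolerance
// scaled to their magnitude.
func nearlyEqual(_ a: Double, _ b: Double, tolerance: Double = 1e-9) -> Bool {
    return abs(a - b) <= tolerance * max(abs(a), abs(b), 1.0)
}

let x = 0.1 + 0.2
print(x == 0.3)             // false: x is 0.30000000000000004
print(nearlyEqual(x, 0.3))  // true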
Thinking in terms of digits may be a problem here. Floats can range from negative infinity to positive infinity. In C#, for example, the range is ±1.5 × 10^-45 to ±3.4 × 10^38, with a precision of 7 digits.
Also, IEEE 754 defines floats and doubles.
Here is a link that might help http://en.wikipedia.org/wiki/IEEE_floating_point
Your question is the right one. You want to consider your approach, though.
Whether at 32 or 64 bits, the floating-point representation is not meant for comparing numbers for equality. For example, the assertion 2.0/7.0 == 60.0/210.0 may or may not be true in the CPU's view. Conceptually, floating-point is inherently imprecise.
If you wish to compare numbers for equality, use integers. Consider again the ratios of the last paragraph. The assertion that 2*210 == 7*60 is always true -- noting that those are the integral versions of the same four numbers as before, only related using multiplication rather than division. One suspects that what you are really looking for is something like this.
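In Swift, that exact integer test looks like this (a minimal sketch; production code should also watch for overflow and zero denominators):

// a/b equals c/d exactly when a*d == c*b (b and d nonzero).
func sameRatio(_ a: Int, _ b: Int, _ c: Int, _ d: Int) -> Bool {
    return a * d == c * b
}

print(sameRatio(2, 7, 60, 210))  // true: 2*210 == 7*60 == 420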
We can write a simple Rational Number class using two integers representing A/B with B != 0.
If we want to write an irrational number class (storing and computing), the first thing that comes to mind is to use floating point, which means using the IEEE 754 standard (binary fractions). This is because irrational numbers must be approximated.
Is there another way to write an irrational number class other than using binary fractions (whether it conserves memory space or not)?
I studied jsbeuno's solution using Python: Irrational number representation in any programming language?
He's still using the built-in floating point to store.
This is not homework.
Thank you for your time.
By a cardinality argument, there are many more irrational numbers than rational ones (and the set of IEEE754 floating point numbers is finite, probably fewer than 2^64 of them).
You can represent numbers with something else than fractions (e.g. logarithmically).
jsbeuno is storing the number as a base and a radix and using those when doing calculations with other irrational numbers; he's only using the float representation for output.
If you want to get fancier, you can define the base and the radix as rational numbers (with two integers) as described above, or make them themselves irrational numbers.
To make something thoroughly useful, though, you'll end up replicating a symbolic math package.
You can always use symbolic math, where items are stored exactly as they are and calculations are deferred until they can be performed with precision above some threshold.
For example, say you performed two operations on a non-irrational number like 2: first take its square root, then square the result. With limited precision, you may get something like:
(√2)²
= 1.414213562²
= 1.999999999
However, storing symbolic math would allow you to store the result of √2 as √2 rather than an approximation of it, then realise that (√x)² is equivalent to x, removing the possibility of error.
Now that obviously involves a more complicated encoding than simple IEEE754, but it's not impossible to achieve.
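As a toy illustration in Swift (nothing like a real computer algebra system, just enough to show the rewrite happening symbolically, and assuming non-negative operands for the square root):

// A tiny symbolic expression type: integers, square roots, squares.
indirect enum Expr {
    case integer(Int)
    case sqrt(Expr)
    case square(Expr)
}

// Rewrite (sqrt x)^2 to x exactly, with no rounding step anywhere.
func simplify(_ e: Expr) -> Expr {
    switch e {
    case .square(.sqrt(let inner)): return simplify(inner)
    case .square(let inner):        return .square(simplify(inner))
    case .sqrt(let inner):          return .sqrt(simplify(inner))
    case .integer:                  return e
    }
}

let expr = Expr.square(.sqrt(.integer(2)))  // (√2)²
print(simplify(expr))                       // integer(2): exact, no 1.999999999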