I am having an issue with rounding a float in an iPhone application.
float f=4.845;
float s= roundf(f * 100.0)/100;
NSLog(#"Output-1: %.2f",s);
s= roundf(484.5)/100;
NSLog(#"Output-2: %.2f",s);
Output-1: 4.84
Output-2: 4.85
Let me know what the problem is here and how to solve it.
The problem is that you don't yet realise one of the inherent problems with floating point: the fact that most numbers cannot be represented exactly (a).
This means that 4.845 is likely to be, in reality, something like 4.8449999999999 which, when you round it, gives you 4.84 rather than what you expect, 4.85.
And what value you end up with also depends on how you calculate it, which is why you're getting a different result.
And, of course, no floating point "inaccuracy" answer would be complete on SO without the authoritative What Every Computer Scientist Should Know About Floating-Point Arithmetic.
(a) Only sums of exact powers of two, within a certain range of each other, can be represented exactly in IEEE754. So, for example, 484.5 is
256 + 128 + 64 + 32 + 4 + 0.5 (2^8 + 2^7 + 2^6 + 2^5 + 2^2 + 2^-1).
See this answer for a more detailed look into the IEEE754 format.
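To see this concretely, here is a small C sketch (our illustration, not part of the original answer) that prints the values actually stored:

#include <stdio.h>

int main(void) {
    float f = 4.845f;
    printf("%.9f\n", f);        /* prints something like 4.844999790 */
    printf("%.9f\n", 484.5f);   /* prints exactly 484.500000000 */
    return 0;
}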
As to solving it, you have a few choices. One is to use double instead of float. That gives you more precision and greater range of numbers but only moves the problem further away rather than really solving it. Since 0.1 is a repeating fraction in IEEE754, no amount of bits (short of infinity) can exactly represent it.
Another choice is to use a custom library like a big decimal type, which can represent decimals of arbitrary precision (that's not infinite precision as some people are wont to suggest, since it's limited by memory). This will reduce the errors caused by the binary/decimal mismatch.
You may also want to look into NSDecimalNumber - this doesn't give you arbitrary precision but it does give a large range with accurate decimal representation.
There'll still be numbers you can't represent, like PI or the square root of 2 or any other irrational number, but it should cover most cases. If you really need to handle those other values, you need to switch to symbolic numeric representations.
Unlike 484.5, which can be represented exactly as a float*, 4.845 is represented as 4.8449998 (see this calculator if you wish to try other numbers). Multiplying by one hundred gives 484.49998, which correctly rounds to 484.
* An exact representation is possible because its fractional part 0.5 is a power of two (i.e. 2^-1).
I have a task to use the MARIE simulator to calculate the area of a circle, given its radius.
I know that in the MARIE language there is no multiplication operator, so we multiply by adding a number several times; if I wanted to multiply 2*3 I could write it as 3+3 or 2+2+2.
But the area of a circle involves pi, which is 3.14, and I can't imagine how to get that, so can anyone give me the algorithm or code for it?
Thanks in advance.
MARIE does not have floating point support.
So you should refer to your course work or ask your instructors what to do, as it is not obvious.
It is, of course, possible to do floating point in software, but the complexity is extraordinary, so that is unlikely to be what they're looking for.
You could use fixed point arithmetic, fractions, or decimal.
Here's one solution that might be appropriate: multiply one of the numbers (the one having decimal places) by some fixed constant factor, do the arithmetic, then interpret the answers accordingly. For example, let's use 100 as the factor, so 3.14 is represented by 314. Let's say r is 9, so we can square that (9x9=81), then multiply 81 x 314 = 25434. Now we know that value is 100x too large, so the real answer is 254.34. (You can choose to ignore the .34, or round it first and then ignore it. 254 is still more accurate than the 243 we would get from 9x9x3.)
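A sketch of that scaled-by-100 calculation in C (MARIE itself has no multiply, so this only illustrates the arithmetic, not MARIE code):

#include <stdio.h>

int main(void) {
    int r = 9;                        /* radius */
    int pi_x100 = 314;                /* 3.14 scaled by 100 */
    int area_x100 = r * r * pi_x100;  /* 81 * 314 = 25434, i.e. 100x the area */
    printf("%d.%02d\n", area_x100 / 100, area_x100 % 100);  /* prints 254.34 */
    return 0;
}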
Fixed point multiplies all numbers by a constant (usually a power of 2, so that the binary point stays in the same bit position). Additions are relatively straightforward, but multiplications need to interpret the result by factoring in that both sources are scaled, meaning the answer is doubly scaled.
If you need to measure the radius with decimal digits too, e.g. 9.5, then you could scale both 9.5 and 3.14 by 100. Then we need 950x950, multiplied by 314. The answer will be 100x100x100 too large, i.e. 1000000x too large. With this approach, the 16 bits that MARIE offers will overflow, so you would need at least 32-bit arithmetic (not trivial on a 16-bit machine).
You can use two different scaling factors, e.g. 9.5 as 95 and 3.14 as 314. Take 95x95x314, which is 10000x too large, and interpret the answer accordingly. Still, this will overflow MARIE's 16 bits.
Fractions would maintain both a numerator and denominator for all numbers. So, 3.14 could be 314/100 and 9.5 could be 95/10, which simplify to 157/50 and 19/2. To add, you have to find a common denominator, convert, then sum the numerators. To multiply, you multiply both numerators and denominators: numerator = 19x19x157, denominator = 2x2x50. That just fits in 16-bit unsigned arithmetic, but still overflows 16-bit signed arithmetic.
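Again as a C sketch (not MARIE code), the fraction approach for 9.5 squared times 3.14 looks like this:

#include <stdio.h>

int main(void) {
    unsigned num = 19 * 19 * 157;  /* 56677: just fits in 16-bit unsigned */
    unsigned den = 2 * 2 * 50;     /* 200 */
    /* 56677/200 = 283.385, the exact area for r = 9.5 */
    printf("%u.%03u\n", num / den, (num % den) * 1000 / den);
    return 0;
}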
And finally, binary coded decimal is more like a string format, where numbers are stored one decimal digit per byte, or per nibble (packed decimal). Algorithms for addition and subtraction need to account for variable-length inputs.
Big-integer forms use a similar idea to binary coded decimal, but compose much larger elements instead of single decimal digits.
All of these approaches require some thought, and the more limitations you want to remove, the more work is required. So I'd suggest going back to your course materials to find out what they really want.
I've been using MATLAB to solve some boundary value problems lately, and I've noticed an annoying quirk. Suppose I start with the interval [0,1], and I want to search inside it. Naturally, one would perform a binary search, so I would subdivide the interval into [0,0.5] and [0.5,1]. Excellent: let's now suppose we narrow down our search to [0.5,1]. Now we divide that interval into [0.5,0.75] and [0.75,1]. No apparent problem yet. However, as we keep going, the representation of powers of 2 in base 10 becomes less and less natural. For example, 2^-22 in binary is just 22 bits, while in decimal it is 16 significant digits. Keep in mind, though, that each decimal digit really encodes only ~3.3 bits (log2 10), so representing these fractions in decimal is extremely inefficient.
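(For example, a one-line C program, our illustration rather than the asker's, makes the decimal expansion of 2^-22 explicit:)

#include <stdio.h>

int main(void) {
    /* 2^-22 is exact in binary but needs 22 decimal places to write out */
    printf("%.22f\n", 1.0 / (1 << 22));  /* 0.0000002384185791015625 */
    return 0;
}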
MATLAB's precision only extends to 16-digit decimal floats, so a binary search going to 2^-22 is as good as you can do. However, 2^-22 ~ 10^-7, which is much bigger than 10^-16, so the best search strategy in MATLAB seems to be a decimal search! In any case, this is what I have done so far: to take full advantage of the 16-digit precision, I've had to subdivide the interval [0,1] into 10 pieces.
Hopefully I've made my problem clear. So, my question is: how do I make MATLAB count in native binary? I want to work with 64-bit binary floats!
I am having a problem with a simple division of two integers. I need it to be as accurate as possible, but for some reason the double type is behaving strangely.
For example, if I execute the following code:
double res = (29970.0/1000.0);
The result is 29.969999999999999, when it should be 29.970.
Any idea why this is happening?
Thanks
Any idea why this is happening?
Because the double representation is finite. For example, the IEEE754 double-precision format has 52 bits for the fraction, so not all real numbers are covered, and some values cannot be perfectly precise. In your case the result is about 10^-15 away from the ideal.
I need it to be as accurate as possible
You shouldn't use doubles, then. In Java, for example, you would use BigDecimal instead (most languages provide a similar facility). double operations are intrinsically inaccurate to some degree. This is due to the internal representation of floating point numbers.
Floating point numbers of type float and double are stored in binary format, therefore they can't represent every decimal value precisely; values are instead quantized. If you hypothetically had a number type with only 2 fraction bits, you would be able to represent only multiples of the quantum 2^-2: 0.00, 0.25, 0.50, 0.75, and nothing in between.
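A quick C sketch (ours, not from the answer) shows the quantized value the questioner's division actually produces:

#include <stdio.h>

int main(void) {
    double res = 29970.0 / 1000.0;
    /* 17 significant digits expose the nearest representable double */
    printf("%.17g\n", res);  /* prints something like 29.969999999999999 */
    return 0;
}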
I need it to be as accurate as possible
There is no silver bullet, but if you want only basic arithmetic operations (which map ℚ to ℚ), and you REALLY want exact results, then your best bet is a rational type composed of two unlimited integers (a.k.a. BigInteger, BigInt, etc.). But even then, memory is not infinite, and you must think about it.
For the rest of the question, please read about fixed-size floating-point numbers; there are plenty of good sources.
I have the following code:
float totalSpent;
int intBudget;
float moneyLeft;
totalSpent += Amount;
moneyLeft = intBudget - totalSpent;
And this is how it looks in the debugger: http://www.braginski.com/math.tiff
Why is moneyLeft, as calculated by the code above, off by .02 compared to the same expression evaluated in the debugger?
The expression window is correct, yet the code above produces a result that is wrong by .02. It only happens for very large numbers (though still well below the int limit).
thanks
A single-precision float has 23 bits of precision, so every calculation is rounded to 23 binary digits. This means that if a computation, say, adds a very small number to a very large number, rounding can produce strange results.
Imagine that you are doing math in scientific notation decimal by hand, under the rule that you may only have four significant figures. Let's say I ask you to write twelve in scientific notation, with four significant figures. Remembering junior high school, you write:
1.200 × 10^1
Now I say compute the square of 12, and then add 0.5. That is easy enough:
1.440 × 10^2 + 0.005 × 10^2 = 1.445 × 10^2
How about twelve cubed plus 0.75:
1.728 × 10^3 + 0.00075 × 10^3 = 1.72875 × 10^3
But remember, I only gave you room for four significant digits, so you must round; then we get:
1.728 × 10^3 + 7.5 × 10^-1 = 1.729 × 10^3
See? The lack of precision can make the computation come out with unexpected results.
In your example, you've got 999999 in a calculation where you're trying to be precise to 0.01. log2(999999) = 19.93 and log2(0.01) = -6.64. The difference is more than 23; therefore you would need more than 23 binary digits to perform this calculation accurately.
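Here is a minimal C illustration of that loss (our sketch; the variable names are invented):

#include <stdio.h>

int main(void) {
    float big = 999999.0f;
    /* near 999999 the spacing between adjacent floats is 1/16, so 0.01 vanishes */
    float sum = big + 0.01f;
    printf("%.2f\n", sum);  /* prints 999999.00, not 999999.01 */
    return 0;
}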
Because floating point mathematics rounds off precision by its very nature, it is usually a bad choice for currency computation, where you must be accurate to the last cent. But are you really concerned with fractions of a cent in your application? If not, then why not do away with the decimal point altogether and simply store cents (instead of dollars) in a 64-bit integer? 2^64¢ is more than the GDP of the entire planet.
Floating point will always produce strange results with money-type calculations.
The golden rule is that floating point is good for things you measure, like litres, yards, light-years, or bushels, but not for things you count, like sheep, beans, or buttons.
Most money calculations are to do with counting pennies, so use integer math and you won't get the strange results. Either use a fixed-decimal arithmetic library (which would probably be overkill on an iPhone) or store your amounts as whole numbers of cents and only convert to dollars and cents on display.
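A minimal sketch of the whole-cents approach in C (the type and function names here are ours, just for illustration):

#include <stdio.h>
#include <stdint.h>

typedef int64_t cents_t;  /* store money as whole cents */

static void print_money(cents_t amount) {
    /* assumes a non-negative amount, for brevity */
    printf("$%lld.%02lld\n", (long long)(amount / 100), (long long)(amount % 100));
}

int main(void) {
    cents_t budget = 100000000LL * 100;      /* $100,000,000.00 */
    cents_t spent  = 99999999LL * 100 + 67;  /* $99,999,999.67 */
    print_money(budget - spent);             /* prints $0.33, exactly */
    return 0;
}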
I write financial applications where I constantly battle the decision to use a double vs using a decimal.
All of my math works on numbers with no more than 5 decimal places that are no larger than ~100,000. I have a feeling that all of these can be represented as doubles without rounding error, but I have never been sure.
I would go ahead and make the switch from decimals to doubles for the obvious speed advantage, except that at the end of the day, I still use the ToString method to transmit prices to exchanges, and need to make sure it always outputs the number I expect. (89.99 instead of 89.99000000001)
Questions:
Is the speed advantage really as large as naive tests suggest? (~100 times)
Is there a way to guarantee the output from ToString to be what I want? Is this assured by the fact that my number is always representable?
UPDATE: I have to process ~10 billion price updates before my app can run, and I have implemented this with decimal right now for the obvious protective reasons, but it takes ~3 hours just to start up; doubles would dramatically reduce my start-up time. Is there a safe way to do it with doubles?
Floating point arithmetic will almost always be significantly faster because it is supported directly by the hardware. So far almost no widely used hardware supports decimal arithmetic, although this is changing.
Financial applications should always use decimal numbers, the number of horror stories stemming from using floating point in financial applications is endless, you should be able to find many such examples with a Google search.
While decimal arithmetic may be significantly slower than floating point arithmetic, unless you are spending a significant amount of time processing decimal data the impact on your program is likely to be negligible. As always, do the appropriate profiling before you start worrying about the difference.
There are two separable issues here. One is whether the double has enough precision to hold all the bits you need, and the other is whether it can represent your numbers exactly.
As for the exact representation, you are right to be cautious, because an exact decimal fraction like 1/10 has no exact binary counterpart. However, if you know that you only need 5 decimal digits of precision, you can use scaled arithmetic in which you operate on numbers multiplied by 10^5. So for example if you want to represent 23.7205 exactly you represent it as 2372050.
Let's see if there is enough precision: double precision gives you 53 bits of precision.
This is equivalent to 15+ decimal digits of precision. So this would allow you five digits after the decimal point and 10 digits before the decimal point, which seems ample for your application.
I would put this C code in a .h file:
#include <stdio.h>  /* FILE, fprintf */
#include <math.h>   /* floor */

typedef double scaled_int;
#define SCALE_FACTOR 1.0e5 /* 10^5: five digits after the decimal point */

static inline scaled_int adds(scaled_int x, scaled_int y) { return x + y; }
/* the product of two scaled values is doubly scaled, so divide once */
static inline scaled_int muls(scaled_int x, scaled_int y) { return x * y / SCALE_FACTOR; }
static inline scaled_int scaled_of_int(int x) { return (scaled_int) x * SCALE_FACTOR; }
static inline int intpart_of_scaled(scaled_int x) { return floor(x / SCALE_FACTOR); }
static inline int fraction_of_scaled(scaled_int x) { return x - SCALE_FACTOR * intpart_of_scaled(x); }

static void fprint_scaled(FILE *out, scaled_int x) {
    fprintf(out, "%d.%05d", intpart_of_scaled(x), fraction_of_scaled(x));
}
There are probably a few rough spots but that should be enough to get you started.
Addition has no overhead; the cost of a multiply or divide doubles.
If you have access to C99, you can also try scaled integer arithmetic using the int64_t 64-bit integer type. Which is faster will depend on your hardware platform.
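For illustration, a hypothetical use of those helpers might look like this (main and the header name scaled.h are ours, not the answerer's):

#include <stdio.h>
#include "scaled.h"  /* hypothetical header holding the code above */

int main(void) {
    scaled_int a = scaled_of_int(23);   /* 23.00000 */
    scaled_int b = adds(a, 72050);      /* 23.72050 (72050 = 0.7205 * 1e5) */
    fprint_scaled(stdout, muls(b, b));  /* prints 562.66212, i.e. 23.7205 squared */
    printf("\n");
    return 0;
}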
Always use Decimal for any financial calculations, or you will be forever chasing 1-cent rounding errors.
Yes; software arithmetic really is 100 times slower than hardware. Or, at least, it is a lot slower, and a factor of 100, give or take an order of magnitude, is about right. Back in the bad old days when you could not assume that every 80386 had an 80387 floating-point co-processor, then you had software simulation of binary floating point too, and that was slow.
No; you are living in a fantasy land if you think that pure binary floating point can ever exactly represent all decimal numbers. Binary numbers can combine halves, quarters, eighths, and so on, but an exact decimal like 0.01 requires two factors of one fifth and one factor of one quarter (1/100 = (1/4)*(1/5)*(1/5)), and one fifth has no exact representation in binary. So you cannot exactly represent all decimal values with binary values: 0.01 is a counter-example, and it is representative of a huge class of decimal numbers with no exact representation.
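The counter-example is easy to demonstrate in C (a sketch of ours):

#include <stdio.h>

int main(void) {
    /* the double nearest to 0.01 is not exactly 0.01 */
    printf("%.20f\n", 0.01);  /* prints something like 0.01000000000000000021 */
    return 0;
}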
So, you have to decide whether you can deal with the rounding before you call ToString() or whether you need to find some other mechanism that will deal with rounding your results as they are converted to a string. Or you can continue to use decimal arithmetic since it will remain accurate, and it will get faster once machines are released that support the new IEEE 754 decimal arithmetic in hardware.
Obligatory cross-reference: What Every Computer Scientist Should Know About Floating-Point Arithmetic. That's one of many possible URLs.
Information on decimal arithmetic and the new IEEE 754:2008 standard is available at this Speleotrove site.
Just use a long and multiply by a power of 10. After you're done, divide by the same power of 10.
Decimals should always be used for financial calculations. The size of the numbers isn't important.
The easiest way for me to explain is via some C# code.
double one = 3.05;
double two = 0.05;
System.Console.WriteLine((one + two) == 3.1);
That bit of code will print out False even though 3.1 is equal to 3.1...
Same thing...but using decimal:
decimal one = 3.05m;
decimal two = 0.05m;
System.Console.WriteLine((one + two) == 3.1m);
This will now print out True!
If you want to avoid this sort of issue, I recommend you stick with decimals.
I refer you to my answer given to this question.
Use a long, store the smallest amount you need to track, and display the values accordingly.