How do I round numbers with NSNumberFormatter - iPhone

I've got a calculation, for example 57 / 30, so the result will be 1.766666667...
How do I get the 1.766666667 in the first place (I only get 1 or 1.00), and then how do I round that result up (so it becomes 2)?
thanks a lot!

57/30 performs integer division. To obtain a float (or double) result you should make one of the operands a floating-point value:
result = 57.0/30;
To round the result, have a look at the standard floor and ceil functions.
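For what it's worth, here is a minimal sketch of the same idea in Swift (using just the numbers from the question, not your exact code):
import Foundation

let result = 57.0 / 30.0        // floating-point division, because both operands are Doubles
let roundedUp = ceil(result)    // rounds up to the next whole number: 2.0
let roundedDown = floor(result) // rounds down to the whole-number part: 1.0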

Related

How can I get the other part after a division with a modulo operator

When I divide 13 by 3 and use integer numbers, the result will be 4.
With mod(13,3) I receive the remainder 1. But how can I get the 4 in Matlab? I think it is not possible to switch to integer numbers for this calculation, is it?
You can use the floor function:
result = floor(13/3)
This function always rounds down (towards negative infinity).
You can explicitly use integers:
result = uint32(13)/uint32(3);
You can also use hex numbers:
result = 0xDu32 / 0x3u32;
Note that result will be of type uint32.
Use idivide (at least one of its arguments must be an integer type):
result = idivide(int32(13), int32(3));
You can specify the rounding method with a third argument, with the default being 'fix', or rounding towards zero. For example, this would round towards negative infinity:
result = idivide(int32(13), int32(3), 'floor');
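For comparison, here is a loose sketch of the same three ideas written in Swift rather than Matlab (shown only to illustrate the rounding behaviour):
let q1 = 13 / 3                                 // integer division truncates toward zero: 4
let q2 = Int((13.0 / 3.0).rounded(.down))       // floor of the floating-point quotient: 4
let q3 = Int((13.0 / 3.0).rounded(.towardZero)) // roughly what idivide's default 'fix' does: 4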

How to round numbers to the third decimal digit?

I haven't found a solution for my code to round the resulting numbers to the 3rd decimal digit. For example: 1.235
(round (/ (* (cpu-clock cpu) (cpu-cores cpu)) (cpu-price cpu)))
I found this in a tutorial, but I expect a similar solution that gives a decimal number. How can I do it?
(real->decimal-string n [decimal-digits]) → string?
A simple language-agnostic solution is to multiply by 10^digits before rounding, and then divide again after. So if you want to keep 3 digits, multiply by 1000:
round(number * 1000) / 1000
If the number is 1.23534, it gets multiplied to 1235.34, rounded to 1235, and then divided back down to the answer you hope for: 1.235
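A quick sketch of that recipe, written here in Swift (the same shape works in Racket with * and round):
let n = 1.23534
let roundedTo3 = (n * 1000).rounded() / 1000   // ~1235.34 -> 1235 -> 1.235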

Can't convert division result into float or decimal type

I have a calculation in my T-SQL code that I expect to show a decimal result (with at least 2 digits after the decimal point).
The fields I am using are integer type, but the calculation's result should be decimal.
I tried using CAST as float, but it won't work:
(COUNT(ct.[ClientFK]) / ehrprg.AnnualGoalClientsServed) AS [AnnualGoal]
I tried:
CAST((COUNT(ct.[ClientFK]) / ehrprg.AnnualGoalClientsServed) as float)
AS [AnnualGoal]
I expect to see at least two digits after the decimal point -
2/50 should be 0.04, while now I am getting 0.
Any advice / help would be much appreciated
Try explicitly casting the denominator to float before the quotient is taken:
COUNT(ct.[ClientFK]) / CAST(ehrprg.AnnualGoalClientsServed AS float) AS [AnnualGoal]
In the above approach, because one of the two terms in the quotient is floating point, the other term (in this case, the count) is promoted to float as well, so the division is carried out in floating point before anything is truncated.
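The same pitfall exists outside SQL. Here is a hedged Swift sketch, purely to illustrate why the order of the conversion matters (the names clients and goal are made up):
let clients = 2
let goal = 50
let wrong = Double(clients / goal)           // integer division runs first, so this is 0.0
let right = Double(clients) / Double(goal)   // convert before dividing: 0.04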

Accountant rounding in Swift

I'm not sure how to round numbers in the following manner in Swift:
6.51, 6.52, 6.53, 6.54 should be rounded down to 6.50
6.56, 6.57, 6.58, 6.59 should be rounded down to 6.55
I have already tried
func roundDown(number: Double, toNearest: Double) -> Double {
    return floor(number / toNearest) * toNearest
}
with no success. Any thoughts?
Here's your problem (and it has nothing to do with Swift whatsoever): floating point arithmetic is not exact. Let's say you try to divide 6.55 by 0.05 and expect a result of 131.0. In reality, 6.55 is "some number close to 6.55" and 0.05 is "some number close to 0.05", so the result that you get is "some number close to 131.0". That result is likely just a tiny little bit smaller than 131.0, maybe 130.999999999999, and floor() returns 130.0.
What you do: you decide what the smallest number is that you still want to round up. For example, you'd want 130.999999999999 to give a result of 131.0, and you'd probably want 130.9999 to give a result of 131.0 as well. So change your code to
floor(number * 20.0 + 0.0001) / 20.0;
This will round 6.549998 to 6.55, so check whether you are OK with that. Also, floor() works in an unexpected way for negative input, so -6.57 would be rounded down to -6.60, which is likely not what you want.
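Putting that together, one possible sketch of the fixed function (the 0.0001 fudge factor is the one suggested above; tune it to whatever tolerance suits your data):
import Foundation

func roundDown(number: Double, toNearest: Double) -> Double {
    // The small fudge factor keeps quotients like 130.999999999999 from flooring to 130
    return floor(number / toNearest + 0.0001) * toNearest
}

print(String(format: "%.2f", roundDown(number: 6.53, toNearest: 0.05)))   // 6.50
print(String(format: "%.2f", roundDown(number: 6.59, toNearest: 0.05)))   // 6.55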

53 * .01 = .531250

I'm converting a string date/time to a numerical time value. In my case I'm only using it to determine if something is newer or older than something else, so this little decimal problem is not a real problem. It doesn't need to be precise to the second. But still it has me scratching my head and I'd like to know why.
My date comes in a string format of @"2010-09-08T17:33:53+0000". So I wrote this little method to return a time value. Before anyone jumps on how many seconds there are in months with 28 or 31 days: I don't care. In my math it's fine to assume all months have 31 days and years have 31*12 days, because I don't need the difference between two points in time, only to know if one point in time is later than another.
- (float)uniqueTimeFromCreatedTime:(NSString *)created_time {
    float time;
    if ([created_time length] > 19) {
        // years since 2010, scaled by the minutes in a 12 x 31-day year
        time = ([[created_time substringWithRange:NSMakeRange(2, 2)] floatValue] - 10) * 535680; // max for 12 months is 535680.. uh oh y2100 bug!
        time = time + [[created_time substringWithRange:NSMakeRange(5, 2)] floatValue] * 44640;  // months; to make it easy and since it doesn't matter we assume 31 days
        time = time + [[created_time substringWithRange:NSMakeRange(8, 2)] floatValue] * 1440;   // days, in minutes
        time = time + [[created_time substringWithRange:NSMakeRange(11, 2)] floatValue] * 60;    // hours, in minutes
        time = time + [[created_time substringWithRange:NSMakeRange(14, 2)] floatValue];         // minutes
        time = time + [[created_time substringWithRange:NSMakeRange(17, 2)] floatValue] * .01;   // seconds, pushed below the minutes
        return time;
    }
    else {
        //NSLog(@"error - time string not long enough");
        return 0.0;
    }
}
When passed that very string listed above the result should be 414333.53, but instead it is returning 414333.531250.
When I toss an NSLog in between each time= to track where it goes off I get this result:
time 0.000000
time 401760.000000
time 413280.000000
time 414300.000000
time 414333.000000
floatvalue 53.000000
time 414333.531250
Created Time: 2010-09-08T17:33:53+0000 414333.531250
So that last floatValue returned 53.0000 but when I multiply it by .01 it turns into .53125. I also tried intValue and it did the same thing.
Welcome to floating point rounding errors. If you want accuracy to a fixed number of decimal places, multiply by 100 (for 2 decimal places), then round() it and divide it by 100. So long as the number isn't obscenely large (needs more than the 53 significant bits of a double), you should be fine and not have any rounding problems on the division back down.
EDIT: Regarding the bit count: I was assuming double; floats have far less precision. Do as another reader suggests and switch to double if possible.
IEEE floats only have 24 effective bits of mantissa (roughly between 7 and 8 decimal digits). 0.00125 is the 24th-bit rounding error between 414333.53 and the nearest float representation, since the exact number 414333.53 requires 8 decimal digits. 53 * 0.01 by itself will come out a lot more accurately before you add it to the bigger number and lose precision in the resulting sum. (This shows why addition and subtraction between numbers of very different sizes is not a good thing from a numerical point of view when calculating with floating point arithmetic.)
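To see what the mantissa width costs, here is a small Swift sketch (Float stands in for the float in the question, Double for the suggested fix; the formatting mirrors NSLog's %f):
import Foundation

let asFloat: Float = 414333.0 + Float(53) * 0.01
let asDouble: Double = 414333.0 + 53.0 * 0.01
print(String(format: "%f", asFloat))   // 414333.531250 -- a Float keeps only ~7 significant decimal digits
print(String(format: "%f", asDouble))  // 414333.530000 -- a Double keeps ~15-16, so the .53 survives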
This is a classic floating point error resulting from how the number is represented in bits. First, use double instead of float, as it is quite fast on modern machines. When the result really matters, use the decimal type (NSDecimalNumber), which is roughly 20x slower but exact for decimal values.
You can create NSDate instances from those NSString dates using the +dateWithString: method. It takes strings formatted as YYYY-MM-DD HH:MM:SS ±HHMM, which is essentially what you're dealing with. Once you have two NSDates, you can use the -compare: method to see which one is later in time.
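A rough Swift sketch of the same approach, using DateFormatter instead of the +dateWithString: convenience mentioned above (the second timestamp is made up just for the comparison):
import Foundation

let formatter = DateFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")   // avoid surprises from the user's locale
formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ssZ"        // matches 2010-09-08T17:33:53+0000
let earlier = formatter.date(from: "2010-09-08T17:33:53+0000")!
let later = formatter.date(from: "2010-09-08T17:34:10+0000")!
let earlierIsOlder = earlier < later                   // Date is Comparable, so no hand-rolled time value is needed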
You could try multiplying all your constants by 100 so you don't have to multiply by .01. That multiplication is what's causing the problem, because .01 (one divided by 100) produces a repeating pattern in binary.