Decimal Value Set as 0 in Swift

I am trying to get a random decimal from 0.75 to 1.25
let incomeCalc = Decimal((arc4random_uniform(50)+75)/100)
print("incomeCalc")
print(incomeCalc)
Why does this print 0?

arc4random_uniform returns an integer type (UInt32), so you are doing integer math. You need to be doing floating point math.
let incomeCalc = Decimal(Double((arc4random_uniform(50)+75))/100)
By casting the value before you do the division, you get a Double result which is passed to your Decimal initializer.
Or you can do:
let incomeCalc = Decimal((arc4random_uniform(50)+75))/100
which creates the Decimal before the division is done.

You can also use the code below, which gets a random number between 75 and 124 and then divides it by 100:
let incomeCalc = Decimal((arc4random_uniform(50)+75)) / 100
print("incomeCalc")
print(incomeCalc)
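On Swift 4.2 or later, a minimal sketch that avoids arc4random_uniform entirely, assuming the upper bound 1.25 should be inclusive:

let incomeCalc = Decimal(Int.random(in: 75...125)) / 100 // random Decimal in 0.75...1.25, in steps of 0.01
print(incomeCalc)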

Swift, reducing numeric error using Decimal type

I found that doing the arithmetic in Decimal can reduce numeric error in Swift:
0.1 + 0.2 = 0.30000000000000004
Decimal(0.1) + Decimal(0.2) = 0.3
So I tried to write a function that evaluates a calculation String in Decimal, like this:
import Foundation

func calculate(expression: String) -> Decimal {
    let expression = NSExpression(format: expression)
    let value = expression.expressionValue(with: nil, context: nil) as? Decimal
    return value ?? 0.0
}
But value keeps getting a nil value and the function always returns 0.0. Can I get any help on this?
Thanks
The Decimal type does not reduce numeric error. It just computes values in decimal. That can increase or decrease error, depending on the calculation. 1/10 happens to be bad in binary for the same reason that 1/3 is bad in decimal. Your code doesn't actually compute anything in Decimal. It's just trying to convert the final Double value to Decimal at the end, which introduces binary-to-decimal rounding (making it less accurate).
That said, expressionValue(with:context:) returns an NSNumber. You can't convert that to Decimal with as?. You need to use .decimalValue:
let number = expression.expressionValue(with: nil, context: nil) as? NSNumber
let value = number?.decimalValue
This will compute the value in Double and then round it to a Decimal.
But if you want to do calculations in Decimal, I don't believe that NSExpression can do that.
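Putting that together, a sketch of the corrected function (the arithmetic still happens in Double; only the final result is converted):

import Foundation

func calculate(expression: String) -> Decimal {
    let expression = NSExpression(format: expression)
    // NSExpression evaluates in Double and hands back an NSNumber,
    // so convert at the end with .decimalValue rather than as? Decimal.
    let number = expression.expressionValue(with: nil, context: nil) as? NSNumber
    return number?.decimalValue ?? 0
}

print(calculate(expression: "0.1 + 0.2")) // the Double result, rounded to Decimal at the end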

Why does Int(Float(Int.max)) give me an error?

I observed something really strange. If you run this code in Swift:
Int(Float(Int.max))
It crashes with the error message:
fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
This is really counter-intuitive, so I expanded the expression into 3 lines and tried to see what happens in each step in a playground:
let a = Int.max
let b = Float(a)
let c = Int(b)
It crashes with the same message. This time, I see that a is 9223372036854775807 and b is 9.223372e+18. It is obvious that a is greater than b by 36854775807. I also understand that floating points are inaccurate, so I expected something less than Int.max, with the last few digits being 0.
I also tried this with Double, it crashes too.
Then I thought, maybe this is just how floating point numbers behave, so I tested the same thing in Java:
long a = Long.MAX_VALUE;
float b = (float)a;
long c = (long)b;
System.out.println(c);
It prints the expected 9223372036854775807!
What is wrong with Swift?
There aren't enough bits in the mantissa of a Double or Float to accurately represent 19 significant digits, so you are getting a rounded result.
If you print the Float using String(format:) you can see a more accurate representation of the value of the Float:
let a = Int.max
print(a) // 9223372036854775807
let b = Float(a)
print(String(format: "%.1f", b)) // 9223372036854775808.0
So the value represented by the Float is 1 larger than Int.max.
Many values will be converted to the same Float value. The question becomes, how much would you have to reduce Int.max before it results in a different Double or Float value.
Starting with Double:
var y = Int.max
while Double(y) == Double(Int.max) {
    y -= 1
}
print(Int.max - y) // 512
So with Double, the last 512 Ints all convert to the same Double (near 2^63, consecutive Doubles are 1024 apart, so the 512 integers in the upper half of that gap all round up to 2^63).
Float has fewer bits to represent the value, so more values map to the same Float. Switching to decrements of 1000 so that it runs in reasonable time:
var y = Int.max
while Float(y) == Float(Int.max) {
    y -= 1000
}
print(Int.max - y) // 274877907000
So, your expectation that a Float could accurately represent a specific Int was misplaced.
Follow up question from the comments:
If Float does not have enough bits to represent Int.max, how is it able to represent a number one larger than that?
Floating point numbers are represented as two parts: a mantissa and an exponent. The mantissa represents the significant digits (in binary) and the exponent represents the power of 2. As a result, a floating point number can exactly express a power of 2 by having a mantissa of 1 with an exponent that represents the power.
Numbers that are not exact powers of 2 may have a binary pattern that contains more digits than can be represented in the mantissa. This is the case for Int.max (which is 2^63 - 1) because in binary that is 111111111111111111111111111111111111111111111111111111111111111 (63 1's). A 32-bit Float only has 24 bits of mantissa, so it cannot store a 63-bit value exactly; the value has to be rounded or truncated. In the case of Int.max, rounding up by 1 results in the value 1000000000000000000000000000000000000000000000000000000000000000 (a 1 followed by 63 0's). Starting from the left, there is only 1 significant bit to be represented by the mantissa (the trailing 0's come for free), so this number is a mantissa of 1 and an exponent of 63.
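You can confirm this with Float's introspection properties; a quick sketch:

let b = Float(Int.max)
print(b.exponent)              // 63: the value is 1.0 x 2^63
print(b.significandBitPattern) // 0: all stored mantissa bits are zero
print(b == Float(sign: .plus, exponent: 63, significand: 1)) // true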
See @MartinR's answer below for an explanation of what Java is doing.
Swift and Java behave differently when converting a "too large" floating point number to an integer. Java clamps any floating point value larger than Long.MAX_VALUE = 2^63-1 to Long.MAX_VALUE:
long c = (long)(1.0E+30f);
System.out.println(c);
// 9223372036854775807
Swift expects that the value is in the range of Int, and aborts
with a runtime exception otherwise:
/// Creates a new instance by rounding the given floating-point value toward
/// zero.
///
/// - Parameter other: A floating-point value. When `other` is rounded toward
/// zero, the result must be within the range `Int.min...Int.max`.
public init(_ value: Float)
Example:
let c = Int(Float(1.0E30))
print(c)
// fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
The same happens with your value Float(Int.max), which is the
floating point representable value closest to Int.max and happens
to be larger than Int.max.
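If you need a conversion that fails gracefully instead of trapping, a sketch using the failable Int(exactly:) initializer:

let b = Float(Int.max)
if let c = Int(exactly: b) {
    print(c)
} else {
    print("not exactly representable as Int") // this branch runs: b is 2^63, one past Int.max
}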

Swift: Print decimal precision of division

I'm trying to print the result of a division, for example:
let division = (4/6)
print(division)
In this case the printout is 0.
How can I print the numeric value of the division without losing precision? I mean, without casting the output to a string.
You are performing integer division. You need to perform floating point division.
In your code, division is an Int value. 4 / 6 is zero in integer division.
You need:
let division = 4.0/6.0
print(division)
Or, if you are starting from two Int values no1 and no2:
func divide(_ no1: Int, _ no2: Int) -> Double {
    let ans = Double(no1) / Double(no2)
    return ans
}
If you want the value to be correct, then try:
let division = Float(v1) / Float(v2)
print(division)
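If you also want to control how many decimal places are printed, a small sketch using String(format:) from Foundation:

import Foundation

let division = Double(4) / Double(6)
print(division)                         // 0.6666666666666666
print(String(format: "%.2f", division)) // 0.67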

Round currency closest to five

I'd like to round my values to the closest 5 cents, for example:
5.31 -> 5.30
5.35 -> 5.35
5.33 -> 5.35
5.38 -> 5.40
Currently I'm doing it by getting the decimal values using:
let numbers = 5.33
let decimal = (numbers - rint(numbers)) * 100
let rounded = rint(numbers) + (5 * round(decimal / 5)) / 100
// This results in 5.35
I was wondering if there's a better method with fewer steps because sometimes numbers - rint(numbers) is giving me a weird result like:
let numbers = 12.12
let decimal = (numbers - rint(numbers)) * 100
// This results in 11.9999999999999
Turns out, it's really simple:
let x: Float = 1.03 //or whatever value, you can also use the Double type
let y = round(x * 20) / 20
It's really better to stay away from floating-point for this kind of thing, but you can probably improve the accuracy a little with this:
import Foundation

func roundToFive(n: Double) -> Double {
    let f = floor(n)
    return f + round((n - f) * 20) / 20
}

roundToFive(12.12) // 12.1
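If you want to avoid binary floating-point altogether, a sketch of the same scale-by-20 idea using Decimal and NSDecimalRound (the function name here is my own):

import Foundation

func roundToNearestFiveCents(_ amount: Decimal) -> Decimal {
    var scaled = amount * 20      // 0.05 steps become whole numbers
    var rounded = Decimal()
    NSDecimalRound(&rounded, &scaled, 0, .plain) // round to 0 decimal places
    return rounded / 20
}

roundToNearestFiveCents(Decimal(string: "5.33")!)  // 5.35
roundToNearestFiveCents(Decimal(string: "12.12")!) // 12.1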
I would also use the round function and NSNumberFormatter, but with a slightly different algorithm.
I was thinking about using % but I changed it to /
let formatter = NSNumberFormatter()
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2
//5.30
formatter.stringFromNumber(round(5.31/0.05)*0.05)
//5.35
formatter.stringFromNumber(round(5.35/0.05)*0.05)
//5.35
formatter.stringFromNumber(round(5.33/0.05)*0.05)
//5.40
formatter.stringFromNumber(round(5.38/0.05)*0.05)
//12.15
formatter.stringFromNumber(round(12.13/0.05)*0.05)
Depending on how you are storing your currency data, I would recommend using a dictionary or an array to look up the original cents value, and return a pre-computed result. There's no reason to do the calculations at all, since you know that 0 <= cents < 100.
If your currency is a string input, just chop off the last couple of digits and do a dictionary lookup.
round_cents = [ ... "12":"10", "13":"15", ... ]
If your currency is a floating point value, well, you have already discovered the joys of trying to do that. You should change it.
If your currency is a data type, or a fixed point integer, just get the cents part out and do an array lookup.
...
round_cents[12] = 10
round_cents[13] = 15
...
In either case, you would then do:
new_cents = round_cents[old_cents]
and be done with it.
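A sketch of that precomputed-table idea in Swift, assuming the amount is already held as an integer number of cents (the table name is my own):

// Build the table once: each cents value 0...99 maps to the nearest multiple of 5.
// Note that 98 and 99 map to 100, i.e. they carry into the next whole unit.
let roundCents: [Int] = (0..<100).map { cents in ((cents + 2) / 5) * 5 }

let oldCents = 33
let newCents = roundCents[oldCents] // 35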