Receiving NaN as an output when using the pow() function to generate a Decimal - Swift

I've been at this for hours so forgive me if I'm missing something obvious.
I'm using the pow(_ x: Decimal, _ y: Int) -> Decimal function to help generate a monthly payment amount using a basic formula. I have this function linked to the infix operator ***, but I've also tried calling the function directly and have the same problem.
Xcode was yelling at me yesterday for having too long a formula, so I broke it up into a couple of constants and incorporated those into the overall formula I need.
Code:
precedencegroup PowerPrecedence { higherThan: MultiplicationPrecedence }
infix operator *** : PowerPrecedence
func *** (radix: Decimal, power: Int) -> Decimal {
    return pow(radix, power)
}
func calculateMonthlyPayment() {
    let rateAndMonths: Decimal = ((0.0199 / 12.0) + (0.0199 / 12.0))
    let rateTwo: Decimal = ((1.0 + (0.0199 / 12.0)))
    loan12YearsPayment[0] = ((rateAndMonths / rateTwo) *** 144 - 1.0) * ((values.installedSystemCost + loanFees12YearsCombined[0]) * 0.7)
}
When I print to the console or run this in the simulator, the output is NaN. I know the pow function itself works properly, because I've tried it with random integers.

Here is my take on Apple's implementation of this function. Note the following examples:
pow(1 as Decimal, -2)    // 1; (1 ^ any number) = 1
pow(10 as Decimal, -2)   // NaN
pow(0.1 as Decimal, -2)  // 100
pow(0.01 as Decimal, -2) // 10000
pow(1.5 as Decimal, -2)  // NaN
pow(0.5 as Decimal, -2)  // NaN
It seems that pow with a Decimal cannot cope with arbitrary fractional bases when the exponent is negative; it only handles bases of the form 1/10^n. So it deals with:
0.1 ^ -2 == (1/10) ^ -2 == 10 ^ 2 // calculated correctly, because the base is a reciprocal power of ten: 10, 100, 1000, ...
1.5 ^ -2 == (3/2) ^ -2 // 3/2 is a fraction whose denominator is not a power of ten, so it is not handled exactly; it returns NaN.
0.5 ^ -2 == (1/2) ^ -2 // 2 is not a power of ten either, so (1/2) gets the same treatment; it also returns NaN.
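A possible workaround (a sketch of my own, not from the original answer; safePow is a hypothetical helper name): keep the exact Decimal overload for non-negative exponents and fall back to Double arithmetic for negative ones, at the cost of Double precision.

import Foundation

// Hypothetical helper: route negative exponents through Double,
// since pow(Decimal, Int) returns NaN for most bases in that case.
func safePow(_ radix: Decimal, _ power: Int) -> Decimal {
    if power >= 0 {
        return pow(radix, power) // exact Decimal arithmetic
    }
    let approx = pow((radix as NSDecimalNumber).doubleValue, Double(power))
    return Decimal(approx)
}

print(safePow(10, -2))  // ≈0.01 instead of NaN
print(safePow(1.5, -2)) // ≈0.4444 instead of NaN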

Related

Swift unreliable round() method to 2 decimal places

I'm trying to calculate the time between two doubles (distance, speed) to 2 decimal places using Swift's round() method, but there are instances where it's unreliable and I end up with something like 58.000000001. I tried to hack-fix this by rounding again, but even that doesn't work on larger numbers, e.g. 734.00000001. Is there a way to fix this, or another way to calculate time between two doubles?
var totalTime: Double = 0
for _ in 0...100 {
    let calcTime = 35.3 / 70
    let roundedTime = round(calcTime * 100) / 100.0
    print("TIME === \(roundedTime)")
    totalTime += round(totalTime * 100) / 100 + roundedTime // round again to clamp the extra zeros
    print("Total time \(totalTime)")
}
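The usual remedy (a sketch of my own, reusing the question's 35.3 / 70 figures): binary Double values cannot represent most decimal fractions exactly, so rounding repeatedly just reintroduces representation error. Accumulate at full precision and round only when formatting for display:

import Foundation

var total: Double = 0
for _ in 0...100 {
    total += 35.3 / 70 // accumulate at full precision
}
// Round only at display time; prints "50.93"
print(String(format: "%.2f", total))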

Swift NOT operator (~) prints different values than when used

Swift bitwise NOT operator (~) inverts all bits in a number.
The docs provide the example:
let initialBits: UInt8 = 0b00001111
let invertedBits = ~initialBits // equals 11110000
And I can confirm this by printing a String:
print(String(invertedBits, radix: 2)) // equals 11110000
Given this logic, I would expect ~0 to equal 1 and ~1 to equal 0. However, printing these as I did above prints something unexpected:
print(String(~0b1, radix: 2)) // equals -10
print(String(~0b0, radix: 2)) // equals -1
When in use I see something different:
print(String(0b100 & ~0b111, radix: 2)) // equals 0 just as I would expect 100 & 000 to equal 000
but
print(String(~0b111, radix: 2)) // equals -1000
~0b111 seems to act as if it were 0b000 but it prints as -1000
What's going on here?
In the example provided in the documentation, initialBits has an explicit type (UInt8). What is happening here can be shown with the following code:
let number = 0b1 // we don't specify a type, therefore it's inferred
print(type(of: number)) // Int
print(String(~number, radix: 2)) // -10 as you know
print(number.bitWidth) // 64, because Int is 64 bits wide on a 64-bit platform
// now if we create a 64 bit number, we can see what happened
// prints -10 because this number is equivalent to the initial number
print(String(~0b0000000000000000000000000000000000000000000000000000000000000001, radix: 2))
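Note that String(_:radix:) renders negative signed integers as a minus sign followed by the magnitude, so ~1 (which is -2 on Int) prints as -10 rather than as its two's-complement bit pattern. To see the raw bits, use an unsigned type (a small sketch of my own):

let bits: UInt8 = 0b1
print(String(~bits, radix: 2)) // 11111110, the actual inverted bit pattern
print(String(UInt8(bitPattern: ~Int8(1)), radix: 2)) // 11111110 as well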

Round Half Down in Swift

Is there a rounding mode in Swift that behaves the same as ROUND_HALF_DOWN in Java?
Rounding mode to round towards "nearest neighbor" unless both neighbors are equidistant, in which case round down. Behaves as for RoundingMode.UP if the discarded fraction is > 0.5; otherwise, behaves as for RoundingMode.DOWN.
Example:
2.5 rounds down to 2.0
2.6 rounds up to 3.0
2.4 rounds down to 2.0
For a negative number:
-2.5 rounds up to -2.0
-2.6 rounds down to -3.0
-2.4 rounds up to -2.0
There is – as far as I can tell – no FloatingPointRoundingRule with the same behavior as Java's ROUND_HALF_DOWN, but you can get the result with a combination of rounded() and nextDown or nextUp:
func roundHalfDown(_ x: Double) -> Double {
    if x >= 0 {
        return x.nextDown.rounded()
    } else {
        return x.nextUp.rounded()
    }
}
Examples:
print(roundHalfDown(2.4)) // 2.0
print(roundHalfDown(2.5)) // 2.0
print(roundHalfDown(2.6)) // 3.0
print(roundHalfDown(-2.4)) // -2.0
print(roundHalfDown(-2.5)) // -2.0
print(roundHalfDown(-2.6)) // -3.0
Or as a generic extension method, so that it can be used with all floating point types (Float, Double, CGFloat):
extension FloatingPoint {
    func roundedHalfDown() -> Self {
        return self >= 0 ? nextDown.rounded() : nextUp.rounded()
    }
}
Examples:
print((2.4).roundedHalfDown()) // 2.0
print((2.5).roundedHalfDown()) // 2.0
print((2.6).roundedHalfDown()) // 3.0
print((-2.4).roundedHalfDown()) // -2.0
print((-2.5).roundedHalfDown()) // -2.0
print((-2.6).roundedHalfDown()) // -3.0
Swift implements round() with a set of FloatingPointRoundingRule cases. According to Apple:
FloatingPointRoundingRule
case awayFromZero
Round to the closest allowed value whose magnitude is greater than or equal to that of the source.
case down
Round to the closest allowed value that is less than or equal to the source.
case toNearestOrAwayFromZero
Round to the closest allowed value; if two values are equally close, the one with greater magnitude is chosen.
case toNearestOrEven
Round to the closest allowed value; if two values are equally close, the even one is chosen.
case towardZero
Round to the closest allowed value whose magnitude is less than or equal to that of the source.
case up
Round to the closest allowed value that is greater than or equal to the source.
Yes, you can do similar things using NSNumberFormatter and its RoundingMode; read about them here:
NSNumberFormatter
RoundingMode
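A minimal sketch of that route (my own addition; NumberFormatter.RoundingMode includes a .halfDown case, which matches the behavior asked about):

import Foundation

let formatter = NumberFormatter()
formatter.roundingMode = .halfDown // ties are rounded down
formatter.maximumFractionDigits = 0
print(formatter.string(from: 2.5)!) // 2
print(formatter.string(from: 2.6)!) // 3

For comparison, the built-in round(_:) rules in action: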
var a = 6.54
a.round(.toNearestOrAwayFromZero)
// a == 7.0
var b = 6.54
b.round(.towardZero)
// b == 6.0
var c = 6.54
c.round(.up)
// c == 7.0
var d = 6.54
d.round(.down)
// d == 6.0
You can do it like this as well, but you need to take the digits after the decimal point into account as well.
As @MohmmadS said, those are the built-in methods for rounding.
You can implement custom rounding like this:
func round(_ value: Double, toNearest: Double) -> Double {
    return round(value / toNearest) * toNearest
}

func roundDown(_ value: Double, toNearest: Double) -> Double {
    return floor(value / toNearest) * toNearest
}

func roundUp(_ value: Double, toNearest: Double) -> Double {
    return ceil(value / toNearest) * toNearest
}
Example:
round(52.376, toNearest: 0.01) // 52.38
round(52.376, toNearest: 0.1) // 52.4
round(52.376, toNearest: 0.25) // 52.5
round(52.376, toNearest: 0.5) // 52.5
round(52.376, toNearest: 1) // 52

Why does swift conversion work for floating point division?

Like in many languages, Swift's division operator defaults to integer division, so:
let n = 1 / 2
print(n) // 0
If you want floating point division, you have to do
let n1 = 1.0 / 2
let n2 = 1 / 2.0
let n3 = Double(1) / 2
let n4 = 1 / Double(2)
print(n1) // 0.5
print(n2) // 0.5
print(n3) // 0.5
print(n4) // 0.5
Again, like most other languages, you can't cast the whole operation:
let n5 = Double(1 / 2)
print(n5) // 0.0
This happens because Swift performs the integer division of 1 and 2 (1 / 2), gets 0, and then converts that result to a Double, effectively giving you 0.0.
I am curious as to why the following works:
let n6 = (1 / 2) as Double
print(n6) // 0.5
I feel like this should produce the same results as Double(1 / 2). Why doesn't it?
1 and 2 are literals. They have no type unless you give them a type from context.
let n6 = (1 / 2) as Double
is essentially the same as
let n6: Double = 1 / 2
That is, you tell the compiler that the result is a Double. The compiler therefore searches for an overload of / that returns a Double, finds the one that takes two Double operands, and so infers both literals to be of type Double.
On the other hand,
let n5 = Double(1 / 2)
is a cast (or, better said, an initialization of a Double). That means the expression 1 / 2 is evaluated first, as integer division, and only then converted to a Double.
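A compact sketch of the difference (my own addition):

let n5 = Double(1 / 2)     // Int division first, then conversion: 0.0
let n6 = (1 / 2) as Double // both literals inferred as Double: 0.5
let n7: Double = 1 / 2     // equivalent to the line above: 0.5
print(n5, n6, n7)          // 0.0 0.5 0.5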

What is the 0xFFFFFFFF doing in this example?

I understand that arc4random returns an unsigned integer up to (2^32)-1. In this scenario it always gives a number between 0 and 1.
var x:UInt32 = (arc4random() / 0xFFFFFFFF)
How does the division by 0xFFFFFFFF cause the number to be between 0 - 1?
As you've stated,
arc4random returns an unsigned integer up to (2^32)-1
0xFFFFFFFF is equal to (2^32)-1, which is the largest possible value of arc4random(). So the expression (arc4random() / 0xFFFFFFFF) is a ratio between 0 and 1; and since this is integer division, the result is 0 in every case except when arc4random() returns exactly 0xFFFFFFFF, in which case it is 1.
To receive a value strictly between 0 and 1, the division must be performed in floating point:
import Foundation

(1..<10).forEach { _ in
    let x: Double = Double(arc4random()) / 0xFFFFFFFF
    print(x)
}
/*
0.909680047749933
0.539794033984606
0.049406117305487
0.644912529188421
0.00758233550181201
0.0036165844657497
0.504160538898818
0.879743074271768
0.980051155663107
*/
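As a side note (my own addition, not part of the original answer): since Swift 4.2 the standard library can produce such a value directly, without arc4random:

let y = Double.random(in: 0..<1) // uniform value in [0, 1)
print(y)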