Why does comparing two Double values with == always return false in Swift, even when the values appear equal?

Hi. Here I'm comparing two Double values to see whether they are equal or not, but it always returns false even when both Double values appear to be equal.
let latestlogoValue = log(Double(125))/log(5.0)
let latestlogIntValue:Int = Int(latestlogoValue)
print(latestlogoValue)
print(Double(latestlogIntValue))
print(Double(latestlogIntValue) == Double(latestlogoValue)) //Always returning false

Comparing Double or Float values with == will not always give you the answer you expect. You may think that the two values are equal to each other, but they are slightly different because of how they are stored in memory. You can test it by printing them as a String, like below:
print(String(format: "%.20f", Double(latestlogoValue))) //3.00000000000000044409
print(String(format: "%.20f", Double(latestlogIntValue))) //3.00000000000000000000
So you can update your comparison to use a function like this:
func isDoubleEqual(_ first: Double, _ second: Double) -> Bool {
return fabs(first - second) < Double.ulpOfOne
}
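Note that Double.ulpOfOne is a very tight tolerance: for the values in the question the gap is about twice Double.ulpOfOne (one ulp of 3.0), so the check above can still return false. A sketch of a variant that scales the tolerance with the magnitude of the operands (the scaling choice here is just one reasonable option):
func isDoubleNearlyEqual(_ first: Double, _ second: Double) -> Bool {
    // Scale the tolerance to the size of the operands instead of using a fixed epsilon.
    return abs(first - second) <= Double.ulpOfOne * max(abs(first), abs(second), 1.0)
}
print(isDoubleNearlyEqual(Double(latestlogIntValue), latestlogoValue)) // true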

That happens because of the Double type's limited precision.
If we print your values
print(String(format: "a float number: %.55f", latestlogoValue))
print(String(format: "a float number: %.55f", Double(latestlogIntValue)))
we'll see the difference in the values:
a float number: 3.00000000000000044408920985006261616945266723632812500
a float number: 3.00000000000000000000000000000000000000000000000000000
So the values are different. If you need to compare Float or Double values, compare them with some precision (a tolerance) instead of exact equality, as shown below.
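A minimal sketch of such a comparison, using the values from the question and an arbitrarily chosen tolerance of 1e-9:
let tolerance = 1e-9 // precision chosen for this illustration
print(abs(latestlogoValue - Double(latestlogIntValue)) < tolerance) // true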

Look at the precision: the values are different, they are not identical.
Just print them out.
print(Double(latestlogoValue).debugDescription) // 3.0000000000000004
print(Double(latestlogIntValue).debugDescription) // 3.0
And you are comparing exactly those two values, which always results in false.
print(Double(latestlogIntValue) == Double(latestlogoValue))
// 3.0000000000000004 == 3.0 is obviously false

let latestlogoValue = log(Double(125))/log(5.0)
let latestlogIntValue:Int = Int(latestlogoValue)
print(latestlogoValue)
print(Double(latestlogIntValue))
print(Double(latestlogIntValue) == Double(latestlogoValue))
Here Double(latestlogIntValue) and Double(latestlogoValue) end up with different values. Compare the values as Int and you will get true:
print(Int(latestlogIntValue) == Int(latestlogoValue)) // true

Why is x / 100 == 0 when x != 0 in Swift?

I have created a for loop in which I calculate a few values.
for i in 1...100{
let xValue = i/100
print(xValue) // returns 0 every time except when i == 100
}
This is a recreation of a part of that for loop. Why is it that I do not get the right value for 'xValue'?
For info I have also tried the following:
let xValue: Float = Float(i/100)
And that doesn't work either, despite me being very specific. I must have forgotten something basic about these arithmetic operators in Swift.
When you divide an Int by an Int, the result will be rounded down. Use a floating point type, like Double or Float for more precision.
for i in 1...100 {
let xValue = Float(i)/100
print(xValue)
}
To address your attempted solution - when you do:
let xValue: Float = Float(i/100)
The Int result of i/100 is computed first (and rounded down to 0), and then you are converting it to a Float.
Therefore, we cast i to a Float before the division so the result is computed as a Float.
Since i and 100 are both integer values, / will do integer division and the result will be truncated to 0.
Even when you do let xValue: Float = Float(i/100), the result of division inside the parentheses is already truncated to 0 before the value can be converted to a Float.
Convert i to a floating-point value before dividing to prevent the result from being truncated.
for i in 1...100{
let xValue = Float(i)/100
print(xValue)
}

How to multiply a float by a bool value of 0 or 1?

My code below gives a breakpoint error:
func doScore(num: Float, binaural: Bool, noise: Bool) -> Float {
if 50 ... 100 ~= num{
let numDoubled = num + (Float(noise.intValue()!) * weighting)// <--- this is where I get my error
return numDoubled.rounded()
}
All I want to do is multiply the number I am passing into the function by the value of binaural or noise, which are Boolean values. To do this I am getting the Int value, but I need it as a Float of 0 or 1, since the number I am passing in is a Float. Why would this cause a crash? Thanks.
By doing a simple workaround, you can fix it easily
let numDoubled = num + (noise ? weighting : 0.0)
Converting a Bool into an Int is possible, but in my solution there's no need to do the double job (convert to Int and then convert again into Float).
Updated as per vacawama comment
Updated answer from your comment:
let numDoubled = num + ((noise && binaural) ? weighting : 0.0)
Unlike in (Objective-)C, where nil, 0, and false are treated as the same value depending on the context, Int and Bool are unrelated types in Swift, and a Swift Bool doesn't have an intValue.
Practically you want to add weighting to num if noise is true.
let numDoubled = noise ? num + weighting : num
If you really need to convert false to 0 and true to 1 write
let boolAsInt = aBool ? 1 : 0
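For reference, a sketch of the question's doScore rewritten with the ternary approach (weighting is assumed to be a Float defined elsewhere, as in the question; the final return is an assumed fallback, since the question omits that branch):
func doScore(num: Float, binaural: Bool, noise: Bool) -> Float {
    if 50...100 ~= num {
        // Add weighting only when noise is true; no Bool-to-number conversion needed.
        let numDoubled = noise ? num + weighting : num
        return numDoubled.rounded()
    }
    return num // assumed fallback for values outside 50...100
}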
Even though implicit conversion between bool and numeric types has been removed, you can explicitly convert a bool to numeric types by first converting it to an NSNumber.
let numDoubled = num + Float(truncating: NSNumber(booleanLiteral: noise)) * weighting
You can also write Float(truncating: noise) as a shorthand, but that call implicitly converts the Bool to an NSNumber, so for clarity it's better to write out the conversion explicitly.

Swift - rounding numbers

I am trying to round a number in swift, and I found a solution using this:
func roundTo(number: Double, precision: Int) -> Double {
var power: Double = 1
for _ in 1...precision {
power *= 10
}
let rounded = Double(round(power * number)/power)
return rounded
}
I have a model class, let's call it MyObject.
class MyObject {
var preciseNumber: Double?
}
I am fetching a number for example:
var myNumber = 10.0123456789
I use my roundTo function to round it so I have 10.0123456 (7 digits after the decimal point).
When I print a statement:
print("myNumber rounded: \(roundTo(number: myNumber, precision: 7))") // 10.0123456 as a result. Great!
Then I want to assign the rounded myNumber to my class variable preciseNumber, so:
let roundedNumber = roundTo(number: myNumber, precision: 7)
print("Rounded number is: \(roundedNumber)") // 10.01234567 as a result
self.preciseNumber = roundedNumber
print("Precise number is now: \(self.preciseNumber)") // 10.01234599999997 as a result
What might be causing this? I want to be as precise as possible.
So it sounds like your issue is comparing floating-point numbers. The best way to handle this is to decide what degree of precision you need. Rather than just checking numOne == numTwo, use something like abs(one - two) <= 0.000001.
You can create a Swift operator to handle this for you pretty easily:
// `===` is just used as an example
func === (one: Double, two: Double) -> Bool {
return abs(one - two) <= 0.000001
}
Then you can just check numOne === numTwo and it will use a better floating point equality check.
There is also a power function, pow, that will help simplify your rounding function:
let power = pow(10.0, Double(precision))
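Putting that together, a minimal sketch of the simplified rounding function (using .rounded(), which is equivalent to the round() call in the question):
import Foundation

func roundTo(number: Double, precision: Int) -> Double {
    // pow takes Double arguments, so the Int precision is converted first.
    let power = pow(10.0, Double(precision))
    return (number * power).rounded() / power
}
print(roundTo(number: 10.0123456789, precision: 7)) // 10.0123457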

binary operator * cannot be applied to operands of type Int and Double

I'm trying to build a simple Swift app to calculate VAT (Value Added taxes = 20%).
func taxesFree(number: Int) -> Double {
var textfield = self.inputTextField.text.toInt()!
let VAT = 0.2
var result = textfield * VAT
return result
}
For some reason I keep getting
Binary operator * cannot be applied to operands of type Int and Double
on the line
var result = textfield * VAT
You should convert one type to the other so that both operands have the same type:
var result: Double = Double(textfield) * VAT
It's because you're trying to multiply an Int (textfield) with a Double (VAT). Because such an operation could lose the precision of the Double, Swift doesn't allow converting one to the other implicitly, so you need to explicitly convert the Int to a Double ...
var result = Double(textfield) * VAT
The problem here is that the error message is literally true, because Swift is strongly typed and doesn't coerce implicitly. I just had a similar case myself with "binary operator '-' cannot be applied to operands of type 'Date' and 'Int'".
If you write:
var result = 10 * 0.2
...that's fine, but if you write:
var number = 10
var result = number * 0.2
...that's not fine. This is because untyped explicit values have an appropriate type selected by the compiler, so in fact the first line is taken as being var result = Double(10) * Double(0.2). After all, as a human being you might mean 10 to be floating-point or an integer - you normally wouldn't say which and would expect that to be clear from context. It might be a bit of a pain, but the idea of strong types is that after the code is parsed it can only have one valid compiled expression.
In general you would build a new value using the constructor, so var result = Double(textfield) * VAT in your case. This is different from casting (textfield as Double) because Int is not a subclass of Double; what you are doing instead is asking for a completely new Double value to be built at runtime, losing some accuracy if the value is very high or low. This is what loosely typed languages do implicitly with pretty much all immediate values, at a small but significant time cost.
In your specific case, it wasn't valuable to have an Int in the first place (even if no fraction part is possible) so what you needed was:
func taxesFree(number: Int) -> Double {
var textfield = Double(self.inputTextField.text!)!
let VAT = 0.2
var result = textfield * VAT
return result
}
In my case it was just casting to CGFloat:
self.cnsMainFaqsViewHight.constant = CGFloat(self.mainFaqs.count) * 44.0
You can convert it like this:
var result: Double = Double(textfield)
I was misunderstanding the Closed Range Operator in Swift.
You should not wrap the range in an array: [0...10]
for i in [0...10] {
// error: binary operator '+' cannot be applied to operands of type 'CountableClosedRange<Int>' and 'Int'
let i = i + 1
}
for i in 0...10 {
// ok!
let i = i + 1
}
The range is a collection that can itself be iterated. No need to wrap it in an array, as perhaps you would have in Objective-C.
0...3 -> [0, 1, 2, 3]
[0...3] -> [[0, 1, 2, 3]]
Once you realize your object is a nested collection, rather than an array of Ints, it's easy to see why you cannot use numeric operators on the object.
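A quick way to see this in a playground (type names as reported by recent Swift versions):
for element in [0...3] {
    print(type(of: element)) // ClosedRange<Int>: the single element is the range itself
}
for element in 0...3 {
    print(type(of: element)) // Int, printed four times
}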
This worked for me when I got the same error message in Playground:
func getMilk(howManyCartons: Int){
print("Buy \(howManyCartons) cartons of milk")
let priceToPay: Float = Float(howManyCartons) * 2.35
print("Pay $\(priceToPay)")
}
getMilk(howManyCartons: 2)

Swift: Double conversion inconsistency. How to correctly compare Doubles?

I have a very simple function to convert temperature from ˚K to ˚C.
func convertKelvinToCelsius(temp:Double) ->Double {
return temp - 273.15
}
And I have a unit test to drive this function. This is where the problem is:
func testKelvinToCelsius(){
var check1 = conv.convertKelvinToCelsius(200.00) // -73.149999999999977
var check2 = 200.00 - 273.15 // -73.149999999999977
var check3 = Double(-73.15) // -73.150000000000006
//Passes
XCTAssert(conv.convertKelvinToCelsius(200.00).description == Double(-73.15).description, "Should convert from kelvin to celsius")
//Fails
XCTAssert(conv.convertKelvinToCelsius(200.00) == Double(-73.15), "Should convert from kelvin to celsius")
}
When you add a breakpoint and check the values of check1, check2 and check3, they are very interesting:
check1 Double -73.149999999999977
check2 Double -73.149999999999977
check3 Double -73.150000000000006
Questions:
Why does Swift return different values for check1/check2 and check3
How can I get the second test to pass? Writing it like I did in the first test smells. Why should I have to convert Doubles to Strings to be able to compare them?
Finally, when I println check1, check2 and check3, they all print as '-73.15'. Why? Why not print accurately, and not confuse the programmer!?
To Reproduce:
Just type 200 - 273.15 == -73.15 in your playground and watch it evaluate to false!!
This is expected behavior for floating point values. Most of them cannot be represented 100% accurately.
You can use the XCTAssertEqualWithAccuracy function to assert floating point values are within a given range of each other.
The reason println prints the same value for all is because it internally rounds them to two decimals (I assume).
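A sketch of the passing assertion, with conv taken from the question and an arbitrarily chosen accuracy (in newer Xcode versions the equivalent is XCTAssertEqual(_:_:accuracy:)):
XCTAssertEqualWithAccuracy(conv.convertKelvinToCelsius(200.00), -73.15, accuracy: 0.0000001, "Should convert from kelvin to celsius")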
This is not a Swift-specific issue; it comes from how decimal numbers are represented in computers and what precision they have. You will need to work with DBL_EPSILON.
Swift, like most languages, uses binary floating point numbers.
With binary floating point numbers, some numbers can be represented exactly, but most can't. What can be represented exactly are integers unless they are very large (for example, 100000000000000.0 is fine), and such integers multiplied or divided by powers of two (7.375 is fine, it is 59.0 / 8, but 7.3 isn't).
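For example, in a playground:
print(7.375 == 59.0 / 8.0) // true: both sides are exactly representable in binary
print(0.1 + 0.2 == 0.3)    // false: none of these decimal fractions is exact in binary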
Every floating point operation gives you the exact result, rounded to the nearest floating-point number. So you get
200.0 -> Exactly 200
273.15 -> A number very close to 273.15
200 - 273.15 -> A number very close to -73.15
-73.15 -> A number very close to -73.15
If you compare two numbers that are both very very close to -73.15 they are not necessarily equal. That's not a problem of the == operator; that one will determine correctly whether they are equal or not. The problem is that the two numbers can actually be different.
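A short playground sketch that makes this visible (the printed digits below are the ones observed in the question; they may vary slightly by platform):
import Foundation

let computed = 200.0 - 273.15
let literal = -73.15
print(computed == literal)               // false
print(String(format: "%.15f", computed)) // -73.149999999999977
print(String(format: "%.15f", literal))  // -73.150000000000006
print(abs(computed - literal) < 1e-9)    // true: equal within a tolerance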