Could not find an overload for 'init' that accepts the supplied arguments - swift

These two lines of code are giving me the
Could not find an overload for 'init' that accepts the supplied arguments
error:
var w = Int(self.bounds.size.width / Float(worldSize.width))
var h = Int(self.bounds.size.height / Float(worldSize.height))

The error message is misleading. This should work:
var w = Int(self.bounds.size.width / CGFloat(worldSize.width))
var h = Int(self.bounds.size.height / CGFloat(worldSize.height))
The width and height members of CGSize are declared as CGFloat.
On 64-bit platforms, CGFloat is the same size as Double (64 bits),
whereas Float is only 32 bits.
So the problem is the division operator, which requires two operands of the same type. In contrast to (Objective-)C, Swift never implicitly converts values to a different type.
If worldSize is also a CGSize then you do not need a cast at all:
var w = Int(self.bounds.size.width / worldSize.width)
var h = Int(self.bounds.size.height / worldSize.height)

Related

Why Int and Float literals are allowed to be added, but Int and Float variables are not allowed to do the same in Swift?

I tried adding an Int and a Float literal in Swift and it compiled without any error:
var sum = 4 + 5.0 // sum is assigned with value 9.0 and type Double
But, when I tried to do the same with Int and Float variables, I got a compile-time error and I had to type-cast any one operand to the other one's type for it to work:
var i: Int = 4
var f: Float = 5.0
var sum = i + f // Binary operator '+' cannot be applied to operands of type 'Int' and 'Float'
Why is this happening? Is it related to type safety in any way?
If you want a Double result:
let i: Int = 4
let f: Float = 5.0
let sum = Double(i) + Double(f)
print("This is the sum:", sum)
If you want an Int result:
let i: Int = 4
let f: Float = 5.0
let sum = i + Int(f)
print("This is the sum:", sum)
In the case of var sum = 4 + 5.0 the compiler automatically treats the literal 4 as a floating-point value (a Double), because that is what is required to perform the operation.
The same happens if you write var x: Float = 4: the literal 4 is automatically treated as a Float.
In the second case, since you have explicitly defined the type of each variable, the compiler does not have the freedom to change it as required.
For a solution, look at Fabio's answer.
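To make the distinction concrete, here is a small sketch (the variable names are illustrative, not from the question):
// A literal adapts its type to the context:
let x: Float = 4            // the literal 4 is inferred as Float
let sum1 = 4 + 5.0          // both literals are inferred as Double, so sum1 is 9.0
// A typed variable does not adapt:
let i = 4                   // i is an Int
// let bad = i + 5.0        // error: binary operator '+' cannot be applied to Int and Double
let sum2 = Double(i) + 5.0  // explicit conversion makes it compile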
The documentation on Swift.org says:
Type inference is particularly useful when you declare a constant or variable with an initial value. This is often done by assigning a literal value (or literal) to the constant or variable at the point that you declare it. (A literal value is a value that appears directly in your source code, such as 42 and 3.14159 in the examples below.)
For example, if you assign a literal value of 42 to a new constant
without saying what type it is, Swift infers that you want the
constant to be an Int, because you have initialized it with a number
that looks like an integer:
let meaningOfLife = 42 // meaningOfLife is inferred to be of type Int
Likewise, if you don’t specify a type for a floating-point literal,
Swift infers that you want to create a Double:
let pi = 3.14159 // pi is inferred to be of type Double
Swift always chooses Double (rather than Float) when inferring the type of floating-point numbers.
If you combine integer and floating-point literals in an expression, a
type of Double will be inferred from the context:
let anotherPi = 3 + 0.14159 // anotherPi is also inferred to be of type Double
The literal value of 3 has no explicit type in and of itself, and so an appropriate output type of Double is inferred from the presence of a floating-point literal as part of the addition.

CGFloat * 2 compiles, CGFloat * Variable where variable is an integer fails

I have to wrap my ints in CGFloat() to compile in Swift 2.3
If I just did * 2, then it would compile. Why does this happen?
Is this fixed in Swift 3?
var multiplier = CGFloat(3)
let y = collectionView.frame.origin.y + (cellSize() * multiplier)
Swift does not directly support mixed-type arithmetic. Check what the type of 2 is in Swift; it's probably not what you assume. Your use of CGFloat() converts the value so that the operands of * have the same type.
HTH
Type inference. When you write * 2 without explicitly declaring the 2 to be an Int, Swift infers it to be a CGFloat. However, once you've already declared a variable to be an Int, you can't multiply a CGFloat by it.
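A minimal sketch of that difference (cellHeight and count are placeholder names, not taken from the question):
import CoreGraphics

let cellHeight: CGFloat = 44.0
let count = 3                          // inferred as Int
let a = cellHeight * 2                 // compiles: the literal 2 is inferred as CGFloat
// let b = cellHeight * count         // error: '*' cannot be applied to CGFloat and Int
let b = cellHeight * CGFloat(count)    // compiles: both operands are CGFloat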

Elementary Q about Swift (1.2) variables

import Foundation
var x = 17.0
var y = 1.0
var z = 0.5
var isSq : Bool = true
y = ((sqrt(x)) - Int(sqrt(x)))
I am working in Xcode 6.4
The last line produces the error: 'could not find an overload for '-' that accepts the supplied arguments'.
It would be nice to understand what is happening here. Also, is there a function which returns just the decimal part of a double variable, the complement of Int()?
Many thanks
sqrt(x) has the type Double, and Int(sqrt(x)) has the type
Int. There is no minus operator in Swift that takes a Double as
left operand and an Int as right operand,
and Swift does not implicitly convert between types.
Therefore you have to convert the Int to Double again:
let y = sqrt(x) - Double(Int(sqrt(x)))
You can also extract the fractional part with the fmod() function:
let y = fmod(sqrt(x), 1.0)
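Putting both approaches together with the question's value of 17.0, a self-contained sketch:
import Foundation

let x = 17.0
let root = sqrt(x)                    // 4.1231...
let frac1 = root - Double(Int(root))  // fractional part via explicit conversion
let frac2 = fmod(root, 1.0)           // same result with fmod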

binary operator * cannot be applied to operands of type Int and Double

I'm trying to build a simple Swift app to calculate VAT (Value Added taxes = 20%).
func taxesFree(number: Int) -> Double {
    var textfield = self.inputTextField.text.toInt()!
    let VAT = 0.2
    var result = textfield * VAT
    return result
}
For some reason I keep getting
Binary operator * cannot be applied to operands of type Int and Double
on the line
var result = textfield * VAT
You should convert one type to the other so that both operands have the same type:
var result: Double = Double(textfield) * VAT
It's because you're trying to multiply an Int (textfield) with a Double (VAT). Because such an operation could lose the precision of the double, Swift doesn't convert one to the other implicitly, so you need to explicitly convert the Int to a Double ...
var result = Double(textfield) * VAT
The problem here is that the error message is literally true: Swift is strongly typed and doesn't coerce implicitly. I just had a similar case myself with "binary operator '-' cannot be applied to operands of type 'Date' and 'Int'".
If you write:
var result = 10 * 0.2
...that's fine, but if you write:
var number = 10
var result = number * 0.2
...that's not fine. This is because literal values without an explicit type have an appropriate type selected by the compiler, so in fact the first line is taken as being var result = Double(10) * Double(0.2). After all, as a human being you might mean 10 to be floating-point or an integer - you normally wouldn't say which and would expect that to be clear from context. It might be a bit of a pain, but the idea of strong typing is that after the code is parsed it can only have one valid compiled interpretation.
In general you would build a new value using the constructor, so var result = Double(textfield) * VAT in your case. This is different from casting (textfield as Double) because Int is not a subclass of Double; what you are doing instead is asking for a completely new Double value to be built at runtime, losing some accuracy if the value is very high or low. This is what loosely typed languages do implicitly with pretty much all immediate values, at a small but significant time cost.
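A minimal sketch of the difference between constructing a new value and casting (the values are illustrative):
let amount = 100                       // Int
let VAT = 0.2                          // Double
let result = Double(amount) * VAT      // 20.0: a new Double value is built from the Int
// let bad = (amount as Double) * VAT  // error: 'as' cannot convert, because Int is not a subclass of Double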
In your specific case, it wasn't valuable to have an Int in the first place (even if no fractional part is possible), so what you needed was:
func taxesFree(number: Int) -> Double {
    let textfield = Double(self.inputTextField.text!)!   // read the field's text as a Double
    let VAT = 0.2
    let result = textfield * VAT
    return result
}
In my case it was just casting to CGFloat:
self.cnsMainFaqsViewHight.constant = CGFloat(self.mainFaqs.count) * 44.0
You can convert it like this:
var result: Double = Double(textfield)
I was misunderstanding the Closed Range Operator in Swift.
You should not wrap the range in an array: [0...10]
for i in [0...10] {
    // error: binary operator '+' cannot be applied to operands of type 'CountableClosedRange<Int>' and 'Int'
    let i = i + 1
}

for i in 0...10 {
    // ok!
    let i = i + 1
}
The range is a collection that can itself be iterated. No need to wrap it in an array, as perhaps you would have in Objective-C.
0...3 -> [0, 1, 2, 3]
[0...3] -> [[0, 1, 2, 3]]
Once you realize your object is a nested collection, rather than an array of Ints, it's easy to see why you cannot use numeric operators on the object.
This worked for me when I got the same error message in Playground:
func getMilk(howManyCartons: Int) {
    print("Buy \(howManyCartons) cartons of milk")
    let priceToPay: Float = Float(howManyCartons) * 2.35
    print("Pay $\(priceToPay)")
}
getMilk(howManyCartons: 2)

Should conditional compilation be used to cope with difference in CGFloat on different architectures?

In answering this earlier question about getting a use of ceil() on a CGFloat to compile for all architectures, I suggested a solution along these lines:
var x = CGFloat(0.5)
var result: CGFloat
#if arch(x86_64) || arch(arm64)
result = ceil(x)
#else
result = ceilf(x)
#endif
// use result
(Background info for those already confused: CGFloat is a "float" type for 32-bit architecture, "double" for 64-bit architecture (i.e. the compilation target), which is why just using either of ceil() or ceilf() on it won't always compile, depending on the target architecture. And note that you don't seem to be able to use CGFLOAT_IS_DOUBLE for conditional compilation, only the architecture flags...)
Now, that's attracted some debate in the comments about fixing things at compile time versus run time, and so forth. My answer was accepted too fast to attract what might be some good debate about this, I think.
So, my new question: is the above a safe, and sensible thing to do, if you want your iOS and OS X code to run on 32- and 64-bit devices? And if it is sane and sensible, is there still a better (at least as efficient, not as "icky") solution?
Matt,
Building on your solution: if you use it in several places, a little extension might make it more palatable:
import Darwin
import CoreGraphics

extension CGFloat {
    var ceil: CGFloat {
        #if arch(x86_64) || arch(arm64)
        return CGFloat(Darwin.ceil(Double(self)))   // CGFloat is Double-sized here
        #else
        return CGFloat(ceilf(Float(self)))          // CGFloat is Float-sized here
        #endif
    }
}
The rest of the code will be cleaner:
var x = CGFloat(0.5)
x.ceil
var f : CGFloat = 0.5
var result : CGFloat
result = CGFloat(ceil(Double(f)))
Tell me what I'm missing, but that seems pretty simple to me.
Note that with the current version of Swift, the solution below is already implemented in the standard library: all mathematical functions are properly overloaded for Double, Float and CGFloat.
Ceil is an arithmetic operation, and just as for any other arithmetic operation, there should be an overloaded version for both Double and Float.
var f1: Float = 1.0
var f2: Float = 2.0
var d1: Double = 1.0
var d2: Double = 2.0
var f = f1 + f2
var d = d1 + d2
This works because + is overloaded and works for both types.
Unfortunately, by pulling the math functions from the C library, which doesn't support function overloading, we are left with two functions instead of one: ceil and ceilf.
I think the best solution is to overload ceil for Float types:
func ceil(f: Float) -> Float {
    return ceilf(f)
}
Allowing us to do:
var f: Float = 0.5
var d: Double = 0.5
f = ceil(f)
d = ceil(d)
Once we have the same operations defined for both Float and Double, even CGFloat handling will be much simpler.
To answer the comment:
Depending on the target processor architecture, CGFloat can be defined either as a Float or as a Double. That means we should use ceil or ceilf depending on the target architecture.
var cgFloat: CGFloat = 1.5
// on 64-bit it's a Double:
var rounded: CGFloat = ceil(cgFloat)
// on 32-bit it's a Float:
var rounded: CGFloat = ceilf(cgFloat)
However, we would have to use the ugly #if.
Another option is to use clever casts
var cgFloat: CGFloat = 1.5
var rounded: CGFloat = CGFloat(ceil(Double(cgFloat)))
(casting first to Double, then casting the result to CGFloat)
However, when we are working with numbers, we want math functions to be transparent.
var cgFloat1: CGFloat = 1.5
var cgFloat2: CGFloat = 2.5
// this works on both 32 and 64bit architectures!
var sum: CGFloat = cgFloat1 + cgFloat2
If we overload ceil for Float as shown above, we are able to do
var cgFloat: CGFloat = 1.5
// this works on both 32 and 64bit architectures!
var rounded: CGFloat = ceil(cgFloat)
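As noted at the top of this answer, current Swift already ships these overloads (including one for CGFloat), so on a recent toolchain the straightforward call compiles for both architectures without any #if:
import CoreGraphics

let value: CGFloat = 1.5
let rounded = ceil(value)    // 2.0 on both 32-bit and 64-bit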