Swift Float to UInt8 conversion unnecessary?

I have been getting a lot of errors about conversions in Swift as of beta 4. I saw it was a pretty common problem and I was able to fix most of the errors, except for this particular one:
let xDistance = testCircle.position.x - circle.position.x
let yDistance = testCircle.position.y - circle.position.y
let distance = (sqrt(pow(Float(xDistance), 2) + pow(Float(yDistance), 2)))
let minDist = testCircle.size.width * 1.5 + circle.size.width/2
if (distance >= minDist) {
    return true
}
The if (distance >= minDist) comparison is returning the following error:
'Float' is not convertible to 'UInt8'
I'm not sure why I need to use UInt8 here, but I have tried fixing it and have just caused more conversion errors. Does anyone see the problem here?

The error message doesn't make any sense, since the problem is that you're comparing a Float and a CGFloat. In previous Swift betas, CGFloat was (at least sometimes) aliased to Float, so you could get away with that, but in beta 4 it's its own type. You can just cast your float value to CGFloat in the comparison:
if (CGFloat(distance) >= minDist) {
    return true
}
It's safer to go Float -> CGFloat since on some architectures CGFloat is a Double under the hood.
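If you'd rather avoid the cast altogether, you can keep the whole computation in CGFloat. This is just a sketch for current Swift (the helper name and the stand-in parameters are mine, assuming the circles expose CGFloat-valued position and size properties, as SKSpriteNode does); squareRoot() is available on any FloatingPoint value:
import CoreGraphics
// Sketch: stay in CGFloat throughout so no Float/CGFloat conversion is needed.
func circlesAreFarApart(testPosition: CGPoint, testWidth: CGFloat,
                        circlePosition: CGPoint, circleWidth: CGFloat) -> Bool {
    let xDistance = testPosition.x - circlePosition.x        // CGFloat
    let yDistance = testPosition.y - circlePosition.y        // CGFloat
    let distance = (xDistance * xDistance + yDistance * yDistance).squareRoot()
    let minDist = testWidth * 1.5 + circleWidth / 2
    return distance >= minDist                               // CGFloat >= CGFloat, no error
}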

Related

CGFloat * 2 compiles, CGFloat * Variable where variable is an integer fails

I have to wrap my ints in CGFloat() to compile in Swift 2.3
If I just did * 2, then it would compile. Why does this happen?
Is this fixed in Swift 3?
var multiplier = CGFloat(3)
let y = collectionView.frame.origin.y + (cellSize() * multiplier)
Swift does not directly support mixed-type arithmetic. Check what the type of 2 is in Swift; it's probably not what you assume. Your use of CGFloat() is converting the value so the operands of * have the same type.
HTH
Type inference. When you write * 2 without explicitly declaring the value as an Int, Swift infers it to be a CGFloat. However, once you've declared it to be an Int, you can't multiply a CGFloat by an Int.
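To illustrate the inference point, here is a small sketch (cellSize() is a hypothetical stand-in returning a CGFloat):
import CoreGraphics
func cellSize() -> CGFloat { return 44 }      // hypothetical stand-in
let a = cellSize() * 2                        // OK: the literal 2 is inferred as CGFloat
let multiplier = 3                            // inferred as Int
// let b = cellSize() * multiplier            // error: CGFloat * Int is not defined
let b = cellSize() * CGFloat(multiplier)      // OK: convert the Int explicitly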

Swift-Binary operator cannot be applied to operands, when converting degrees to radians

I'm aware of some relatively similar questions on this site, but if they do apply to my problem (which I'm not certain they do) then I certainly don't understand them. Here's my problem:
var degrees = UInt32()
var radians = Double()
let degrees:UInt32 = arc4random_uniform(360)
let radians = degrees * (M_PI / 180)
This returns an error, focused on the multiplication sign, reading: "Binary operator '*' cannot be applied to operands of type 'UInt32' and 'Double'".
I'm fairly sure I need to have the degrees variable be of type UInt32 to randomise it, and also that the pi constant cannot be made to be of UInt32, or at least I don't know how, as I'm relatively new to Xcode and Swift in general.
I'd be very grateful if anyone had a solution to my problem.
Thanks in advance.
let degree = arc4random_uniform(360)
let radian = Double(degree) * .pi/180
You need to convert the degrees value to Double before the multiplication.
From the Apple Swift book:
Integer and Floating-Point Conversion
Conversions between integer and floating-point numeric types must be made explicit:
let three = 3
let pointOneFourOneFiveNine = 0.14159
let pi = Double(three) + pointOneFourOneFiveNine
// pi equals 3.14159, and is inferred to be of type Double
Here, the value of the constant three is used to create a new value of type Double, so that both sides of the addition are of the same type. Without this conversion in place, the addition would not be allowed.
Floating-point to integer conversion must also be made explicit. An integer type can be initialized with a Double or Float value:
let integerPi = Int(pi)
// integerPi equals 3, and is inferred to be of type Int
Floating-point values are always truncated when used to initialize a new integer value in this way.
This means that 4.75 becomes 4, and -3.9 becomes -3.
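A quick playground check of that truncation behavior:
let integerPi = Int(3.14159)   // 3
let a = Int(4.75)              // 4 (truncated, not rounded)
let b = Int(-3.9)              // -3 (truncated toward zero)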

How can I make this binary operator work with CGFloats?

I'm a beginner programmer, and I'm making a game at the moment. I haven't run into many errors like this, but I know it's really easy to fix.
Here's the code:
func randInRange(range: Range<Int>) -> Int {
    return Int(arc4random_uniform(UInt32(range.endIndex - range.startIndex))) + range.startIndex
}
Here is the constant I'm trying to work with:
let random = randInRange(self.frame.size.width * 0.3...self.frame.size.width * 0.6)
The error comes out as this: Binary operator '...' cannot be applied to two CGFloat operands.
Your method randInRange expects a range of Ints, so you need to convert the results of your expressions from CGFloat to Int.
let random = randInRange(Int(self.frame.size.width * 0.3)...Int(self.frame.size.width * 0.6))
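If you'd rather keep the fractional bounds, a CGFloat-based variant is possible too. This is only a sketch in current Swift syntax (the ClosedRange signature and the random-fraction approach are my own; the original question predates this syntax), and in Swift 4.2 and later CGFloat.random(in:) does the same job directly:
import Foundation
import CoreGraphics
// Sketch: a CGFloat variant of the helper above.
func randInRange(range: ClosedRange<CGFloat>) -> CGFloat {
    let fraction = CGFloat(arc4random()) / CGFloat(UInt32.max)
    return range.lowerBound + fraction * (range.upperBound - range.lowerBound)
}
let width: CGFloat = 320    // stand-in for self.frame.size.width
let random = randInRange(range: width * 0.3...width * 0.6)
// In Swift 4.2 and later the standard library can do this directly:
let random2 = CGFloat.random(in: width * 0.3...width * 0.6)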

Elementary Q about Swift (1.2) variables

import Foundation
var x = 17.0
var y = 1.0
var z = 0.5
var isSq : Bool = true
y = ((sqrt(x)) - Int(sqrt(x)))
I am working in Xcode 6.4
The last line produces the error: 'could not find an overload for '-' that accepts the supplied arguments'.
It would be nice to understand what is happening here. Also, is there a function which returns just the decimal part of a double variable, the complement of Int()?
Many thanks
sqrt(x) has the type Double, and Int(sqrt(x)) has the type Int. There is no minus operator in Swift that takes a Double as left operand and an Int as right operand, and Swift does not implicitly convert between types. Therefore you have to convert the Int to Double again:
let y = sqrt(x) - Double(Int(sqrt(x)))
You can extract the fractional part also with the fmod() function:
let y = fmod(sqrt(x), 1.0)
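For a concrete value (approximate results shown in the comments):
import Foundation
let x = 17.0
let viaSubtraction = sqrt(x) - Double(Int(sqrt(x)))   // ~0.1231
let viaFmod = fmod(sqrt(x), 1.0)                      // ~0.1231, same result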

Should conditional compilation be used to cope with difference in CGFloat on different architectures?

In answering this earlier question about getting a use of ceil() on a CGFloat to compile for all architectures, I suggested a solution along these lines:
var x = CGFloat(0.5)
var result: CGFloat
#if arch(x86_64) || arch(arm64)
result = ceil(x)
#else
result = ceilf(x)
#endif
// use result
(Background info for those already confused: CGFloat is a "float" type for 32-bit architecture, "double" for 64-bit architecture (i.e. the compilation target), which is why just using either of ceil() or ceilf() on it won't always compile, depending on the target architecture. And note that you don't seem to be able to use CGFLOAT_IS_DOUBLE for conditional compilation, only the architecture flags...)
Now, that's attracted some debate in the comments about fixing things at compile time versus run time, and so forth. My answer was accepted too fast to attract what might be some good debate about this, I think.
So, my new question: is the above a safe, and sensible thing to do, if you want your iOS and OS X code to run on 32- and 64-bit devices? And if it is sane and sensible, is there still a better (at least as efficient, not as "icky") solution?
Matt,
Building on your solution: if you use it in several places, a little extension might make it more palatable:
extension CGFloat {
    var ceil: CGFloat {
        #if arch(x86_64) || arch(arm64)
            return ceil(self)
        #else
            return ceilf(self)
        #endif
    }
}
The rest of the code will be cleaner:
var x = CGFloat(0.5)
x.ceil
var f : CGFloat = 0.5
var result : CGFloat
result = CGFloat(ceil(Double(f)))
Tell me what I'm missing, but that seems pretty simple to me.
Note that with the current version of Swift, the solution below is already implemented in the standard library, and all mathematical functions are properly overloaded for Double, Float and CGFloat.
Ceil is an arithmetic operation and, just as with any other arithmetic operation, there should be an overloaded version for both Double and Float.
var f1: Float = 1.0
var f2: Float = 2.0
var d1: Double = 1.0
var d2: Double = 2.0
var f = f1 + f2
var d = d1 + d2
This works because + is overloaded and works for both types.
Unfortunately, by pulling the math functions from the C library, which doesn't support function overloading, we are left with two functions instead of one: ceil and ceilf.
I think the best solution is to overload ceil for Float types:
func ceil(f: CFloat) -> CFloat {
    return ceilf(f)
}
Allowing us to do:
var f: Float = 0.5
var d: Double = 0.5
f = ceil(f)
d = ceil(d)
Once we have the same operations defined for both Float and Double, even CGFloat handling will be much simpler.
To answer the comment:
Depending on the target processor architecture, CGFloat can be defined either as a Float or as a Double. That means we should use ceil or ceilf depending on the target architecture.
var cgFloat: CGFloat = 1.5
// on 64-bit it's a Double
var rounded: CGFloat = ceil(cgFloat)
// on 32-bit it's a Float
var rounded: CGFloat = ceilf(cgFloat)
However, we would have to use the ugly #if.
Another option is to use clever casts:
var cgFloat: CGFloat = 1.5
var rounded: CGFloat = CGFloat(ceil(Double(cgFloat)))
(casting first to Double, then casting the result to CGFloat)
However, when we are working with numbers, we want math functions to be transparent.
var cgFloat1: CGFloat = 1.5
var cgFloat2: CGFloat = 2.5
// this works on both 32 and 64bit architectures!
var sum: CGFloat = cgFloat1 + cgFloat2
If we overload ceil for Float as shown above, we are able to do
var cgFloat: CGFloat = 1.5
// this works on both 32 and 64bit architectures!
var rounded: CGFloat = ceil(cgFloat)
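For completeness: in current Swift the FloatingPoint protocol also provides rounded(_:) directly on the value, so the same spelling works for Float, Double and CGFloat without any overloads or casts (a small sketch):
import CoreGraphics
let f: Float = 0.5
let d: Double = 0.5
let c: CGFloat = 1.5
let rf = f.rounded(.up)   // 1.0
let rd = d.rounded(.up)   // 1.0
let rc = c.rounded(.up)   // 2.0, works on both 32- and 64-bit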