Binary operator '..<' cannot be applied to two 'Int' operands [duplicate] - swift

I am having the same problem as earlier with a different line of code; but this time, I wasn't able to fix it with the same approach as last time:
var Y : Int = 0
var X : Int = 0
@IBOutlet var ball : UIImageView!
ball.center = CGPointMake(ball.center.x + X, ball.center.y + Y)
This is the error I am getting:
Binary operator '+' cannot be applied to operands of type 'CGFloat' and 'Int'

Declare them, instead, as the following:
let X : CGFloat = 0.0
let Y : CGFloat = 0.0
Replying to your comment:
The error has nothing to do with them being declared as var or let.
You could declare them as var, and if you insist on declaring them as Int, you would still need to do the following:
var X : Int = 0
var Y : Int = 0
ball.center = CGPointMake(ball.center.x + CGFloat(X), ball.center.y + CGFloat(Y))

The problem is that the operand types don't match; you can't mix types in an expression any more than you could add a Bool and a String.
Change X and Y to CGFloat instead of Int:
let X : CGFloat = 0.0
let Y : CGFloat = 0.0
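With X and Y declared as CGFloat, the original assignment compiles as written; a minimal sketch, assuming the ball outlet from the question:
// Both operands of '+' are CGFloat now, so no conversion is needed.
ball.center = CGPoint(x: ball.center.x + X, y: ball.center.y + Y)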

ball.center.x is a CGFloat and X is an Int. That's where the compiler is complaining.
Swift requires explicit conversions between numeric types (as if there were no hierarchy of numeric domains), but you can avoid the casts by declaring X and Y as CGFloat instead of Int.
You could also get rid of the issue for good by defining the operator (that Swift should already have imho):
func + (left: CGFloat, right: Int) -> CGFloat {
    return left + CGFloat(right)
}
func + (left: Int, right: CGFloat) -> CGFloat {
    return CGFloat(left) + right
}
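With those two overloads in scope, mixed CGFloat/Int arithmetic resolves without explicit casts; a small sketch with example values:
let offset: Int = 10
let position: CGFloat = 100.5
let shifted = position + offset      // CGFloat 110.5, uses the first overload
let alsoShifted = offset + position  // works in the other order too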

Related

(-1)^k with random k of either 0 or 1

I want something to happen when (-1)^k is 1 and something else when it is -1, where k is generated randomly.
I tried this, but it doesn't work:
let test: Int32 = -1
let leftOrRight: CGFloat = pow(CGFloat(test), random(min: 0, max: 1))
func random(min: CGFloat, max: CGFloat) -> CGFloat {
    return random() * (max - min) + min
}
leftOrRight is always NaN.
You are trying to generate a CGFloat value of -1.0 or 1.0 with equal probability. You can do that by generating a value of 0 or 1 (using arc4random_uniform(2)) and testing it before assigning the CGFloat value:
let leftOrRight: CGFloat = arc4random_uniform(2) == 0 ? 1 : -1
In Swift 4.2 (Xcode 10), you could use Bool.random() to simplify it:
let leftOrRight: CGFloat = Bool.random() ? 1 : -1
If you are using Swift 4/4.1 then this should do the trick:
let leftOrRight = 2 * (Double(arc4random_uniform(2)) - 0.5)
If you are using Swift 4.2, you could use:
let array: [CGFloat] = [-1, 1]
let leftOrRight: CGFloat = array.randomElement()!
If you want leftOrRight to be a random Boolean:
let leftOrRight: Bool = Bool.random()
For more on what's coming in Swift 4.2 have a look here.
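As for why the original code produced NaN: pow returns NaN when the base is negative and the exponent is not a whole number, and random(min: 0, max: 1) almost always returns a fractional value. If you want to keep the (-1)^k form, make sure k is an integer; a minimal sketch:
import Foundation

let k = arc4random_uniform(2)           // 0 or 1
let leftOrRight = pow(-1.0, Double(k))  // 1.0 or -1.0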

How to generate a float between 1 and 100

I am trying to generate random floats between 1 and 100, but I keep getting errors every time. Currently I am trying:
func returnDbl() -> Double {
    var randNum = Double(Float(arc4random(101) % 5))
    return randNum
}
print(returnDbl())
but to no avail. Would someone point me in the right direction?
arc4random_uniform(n) is zero-based and returns values between 0 and n-1, so pass 100 as the upper bound and add 1.
arc4random_uniform is easier to use than arc4random; it returns a UInt32, which has to be converted to Float.
func randomFloat() -> Float {
    return Float(arc4random_uniform(100) + 1)
}
or Double
func randomDouble() -> Double {
    return Double(arc4random_uniform(100) + 1)
}
or generic
func returnFloatingPoint<T: FloatingPointType>() -> T {
    return T(arc4random_uniform(100) + 1)
}
let float: Float = returnFloatingPoint()
let double: Double = returnFloatingPoint()
Edit
To return a non-integral Double between 1.000000 and 99.99999 with arc4random_uniform() use
func returnDouble() -> Double {
    return Double(arc4random_uniform(UInt32.max)) / 0x100000000 * 99.0 + 1.0
}
0x100000000 is UInt32.max + 1
let a = 1 + drand48() * 99
drand48 is a C function that returns a double in the range [0, 1). You can call it directly from Swift. Multiplying by 99 gives you a double in the range [0, 99). Add one to get into the range [1, 100).
As drand48 returns a double, the Swift type will be Double.
As per the comment, drand48 will by default return the same sequence of numbers upon every launch. You can avoid that by seeding it. E.g.
srand48(Int(arc4random()))
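If you are on Swift 4.2 or later, the standard library can do all of this for you (not part of the original answer, just a newer alternative):
// Uniform Double in [1, 100); seeded automatically.
let value = Double.random(in: 1..<100)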
func returnDbl() -> Double {
    let randNum = Double(Float(arc4random() % 101))
    return randNum
}
OK, thank you everybody for all of your help. The setups you showed me helped me figure out how it should look; my end result is:
func returnDbl() -> Double {
    let randNum = Double(arc4random_uniform(99) + 1) + Double(arc4random()) / Double(UINT32_MAX)
    return randNum
}
print(returnDbl())
it returns floats between 1 and 100.

How to calculate the 21! (21 factorial) in swift?

I am making a function that calculates factorials in Swift, like this:
func factorial(factorialNumber: UInt64) -> UInt64 {
    if factorialNumber == 0 {
        return 1
    } else {
        return factorialNumber * factorial(factorialNumber - 1)
    }
}
let x = factorial(20)
This function can only calculate up to factorial(20).
I think the value of factorial(21) is bigger than UINT64_MAX.
So how can I calculate 21! (21 factorial) in Swift?
func factorial(_ n: Int) -> Double {
    return (1...n).map(Double.init).reduce(1.0, *)
}
(1...n): We create a range of all the numbers that are involved in the operation (i.e. 1, 2, 3, ...).
map(Double.init): We convert from Int to Double because we can represent far bigger numbers with Double than with Int (https://en.wikipedia.org/wiki/Double-precision_floating-point_format). So we now have all the numbers involved in the operation as Doubles (i.e. [1.0, 2.0, 3.0, ...]).
reduce(1.0, *): We start by multiplying 1.0 with the first element of the array (1.0*1.0 = 1.0), then the result of that with the next one (1.0*2.0 = 2.0), then the result of that with the next one (2.0*3.0 = 6.0), and so on.
Step 2 is to avoid the overflow issue.
Step 3 is to save us from explicitly defining a variable for keeping track of the partial results.
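Putting it together, 21! no longer overflows because the result is a Double (and 21! happens to still be exactly representable as a Double):
let result = factorial(21)
print(result)   // 5.109094217170944e+19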
An unsigned 64-bit integer has a maximum value of 18,446,744,073,709,551,615, while 21! = 51,090,942,171,709,440,000. For this kind of case, you need a Big Integer type. I found a question about Big Integer in Swift; there's a library for Big Integer in that link.
BigInteger equivalent in Swift?
Did you think about using a Double perhaps? Or NSDecimalNumber?
Also, calling the same function recursively is really bad performance-wise.
How about using a loop:
let number = NSDecimalNumber(value: 21)   // the value to take the factorial of
let value = number.intValue - 1
var product = NSDecimalNumber(value: number.intValue)
for i in (1...value).reversed() {
    product = product.multiplying(by: NSDecimalNumber(value: i))
}
Here's a function that accepts any type conforming to the Numeric protocol, which all of the built-in number types do.
func factorial<N: Numeric>(_ x: N) -> N {
    x == 0 ? 1 : x * factorial(x - 1)
}
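A usage note (not from the original answer): with an Int argument this version still traps on overflow at 21!, but the same generic can be called with Double, which can hold the result:
let fits: Int = factorial(20)      // 2432902008176640000, the largest factorial an Int64 can hold
let big: Double = factorial(21)    // roughly 5.109e+19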
First we declare a temp variable of type Double so it can hold the size of the number.
Then we create a function that takes a parameter of type Double.
We check whether the number equals 0; if so we do nothing, which is the condition that breaks the recursion. Finally we return temp, which holds the factorial of the given number.
var temp: Double = 1.0
func factorial(x: Double) -> Double {
    if x == 0 {
        // do nothing
    } else {
        factorial(x: x - 1)
        temp *= x
    }
    return temp
}
factorial(x: 21.0)
I made a function to calculate factorials like this:
func factorialNumber(number: Int) -> Int {
    var x = 1
    for i in 1...number {
        x *= i
    }
    return x
}
print(factorialNumber(number: 5))
If you are willing to give up precision you can use a Double to roughly calculate factorials up to 170:
func factorial(_ n: Int) -> Double {
    if n == 0 {
        return 1
    }
    var a: Double = 1
    for i in 1...n {
        a *= Double(i)
    }
    return a
}
If not, use a big integer library.
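For example, the loop-based Double version above stays finite up to 170! and overflows to infinity at 171!:
print(factorial(170))   // about 7.26e+306, still finite
print(factorial(171))   // inf -- the result no longer fits in a Double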
func factorial(_ num: Int) -> Int {
    if num == 0 || num == 1 {
        return 1
    } else {
        return num * factorial(num - 1)
    }
}
Using recursion to solve this problem:
func factorial(_ n: UInt) -> UInt {
    return n < 2 ? 1 : n * factorial(n - 1)
}
func factorial(a: Int) -> Int {
    return a == 1 ? a : a * factorial(a: a - 1)
}
print(factorial(a: 5))
print(factorial(a: 9))

Swift: "float is not convertible to int"

How can I convert this to Int? I've tried using the initializer Int() and the round method; all of them generate an error. In this case the one I get is: "float is not convertible to int"
let CirclePoints = 84
let PI = 3.14159
let radius: Double!
let xBase: Int!
let yBase: Int!
var xPos = Int()
var yPos = Int()
xPos = round(xBase + radius * cos((PI / 10) * circlePoint))
I'd recommend converting all of your values to Double since that is what the round function takes. The round function also returns a Double, so you'll have to convert the result of round() into an Int to store it in xPos.
xPos = Int(round(Double(xBase) + Double(radius) * cos((PI / 10) * Double(circlePoint))))
Note that the conversion process here is actually creating new double values from the variables, not changing those variables.
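A small self-contained sketch with made-up sample values (the identifiers mirror the question, but the numbers are just for illustration):
import Foundation

let xBase = 100
let radius = 50.0
let circlePoint = 3
let PI = 3.14159

// Do the math in Double, then convert the rounded result to Int.
let xPos = Int(round(Double(xBase) + radius * cos((PI / 10) * Double(circlePoint))))
print(xPos)   // 129 for these sample values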

NSNumber is not a subtype of UInt8

I am trying to do some fairly simple SKShapeNode math to set the position of a shape node. It looks like this:
for var i = 0; i < 6; i++ {
    var pip: Pip = pips[i]
    // set position
    let x: CGFloat = tray2.frame.size.width - (i + 1) * pip.frame.size.width - (i + 1) * 2
    let y: CGFloat = tray2.frame.size.height - pip.frame.size.height - 1
    pip.position = CGPointMake(x, y)
    tray2.addChild(pip)
}
However, in the 'let x...' line, an error is produced: "NSNumber is not a subtype of UInt8". What am I doing wrong here?
The problem is that you cannot implicitly convert integers to floats. For example:
(i + 1) * pip.frame.size.width
this is invalid because i is an Int, i + 1 is an Int but pip.frame.size.width is a CGFloat. This would work in languages like Obj-C where i would be implicitly cast to CGFloat but Swift has no implicit casts, so we have to cast explicitly.
The simplest fix is to convert i into a CGFloat first:
let floatI = CGFloat(i)
and then
let x = tray2.frame.size.width - (floatI + 1) * pip.frame.size.width - (floatI + 1) * 2
Wrap the integer subexpressions in a CGFloat initializer to address the compiler error:
for var i = 0; i < 6; i++ {
    var pip: Pip = pips[i]
    // set position
    let x: CGFloat = tray2.frame.size.width - CGFloat(i + 1) * pip.frame.size.width - CGFloat(i + 1) * 2
    let y: CGFloat = tray2.frame.size.height - pip.frame.size.height - 1
    pip.position = CGPointMake(x, y)
    tray2.addChild(pip)
}