Swift syntax issue: var a:Int64 = -7

I'm playing with Apple's new Swift language. The snippet below successfully produces a result:
var a:Int64 = -7
println(a)
However, if I change the code to this:
var a:Int64 =-7
println(a)
I'll get the error:
Error:(12, 12) consecutive statements on a line must be separated by ';'
It seems that Swift thinks =- is an operator, which does not exist in Swift. If so, why generate that error?

Swift is very strict in order to avoid ambiguity in operators. In var a:Int64 =-7, the sequence =- is lexed as a single operator and treated as a unary prefix operator, which is undefined in this case. It cannot be split in two because there is no separator between the characters, and an =- operator could be defined at any time. To avoid any ambiguity, use spaces: var a:Int64 = -7 has a clear separation between the assignment and the unary prefix operator.
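A minimal sketch of the spacing rule (using print, which replaced println in later Swift versions):

var a: Int64 = -7   // space on both sides of '=': assignment followed by prefix minus
print(a)            // prints -7

// var b: Int64 =-7 // rejected: '=-' is lexed as a single, undefined operator token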

Related

Swift type inference and basic addition

Pretty new to Swift and learning about data types.
let partNumber = 3.2
let wholeNumber = 2
partNumber + wholeNumber //Binary operator '+' cannot be applied to operands of type 'Double' and 'Int'
3.2 + 2 // outputs 5.2
I understand that partNumber is a Double and wholeNumber is an Int. What I don't understand is why the playground errors out when I attempt to add the two constants together. To add to the confusion, the addition works when not assigned as a constant.
The + operator does not support adding a Double and an Int together in this way.
If you change your code so that wholeNumber is a Double, then it'll work:
let partNumber = 3.2
let wholeNumber: Double = 2
let result = partNumber + wholeNumber
This is all covered in the Swift book under Numeric Type Conversion.
Some relevant quotes from the subsection titled "Integer and Floating-Point Conversion":
Conversions between integer and floating-point numeric types must be made explicit
This is followed by an example similar to your code. Your code needs an explicit conversion:
let partNumber = 3.2
let wholeNumber = 2
partNumber + Double(wholeNumber)
and:
The rules for combining numeric constants and variables are different from the rules for numeric literals. The literal value 3 can be added directly to the literal value 0.14159, because number literals don’t have an explicit type in and of themselves. Their type is inferred only at the point that they’re evaluated by the compiler.
Which covers the second part of your question.
To add to the confusion, the addition works when not assigned as a constant.
That doesn't "add to the confusion" at all. It's the answer. There is implicit coercion between numeric types for literals (what you call a "constant") but not for variables. It's as simple as that.
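A minimal sketch of that distinction, runnable in a playground:

let partNumber = 3.2                              // inferred as Double
let literalSum = partNumber + 2                   // OK: the literal 2 is inferred as Double here

let wholeNumber = 2                               // inferred as Int
// let badSum = partNumber + wholeNumber          // error: '+' cannot take a Double and an Int
let goodSum = partNumber + Double(wholeNumber)    // explicit conversion, as above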

Proper Swift 2.2 syntax with for loop [duplicate]

This question already has an answer here:
Replace c style for-loop in Swift 2.2.1
I wrote the following code in Swift (2.2):
for var i = 2; sqrt(Double(num)) >= Double(i); i += 1 {.....}
However, I keep getting a warning message which reads "C-style for statement is deprecated and will be removed in a future version of Swift".
So, what is the proper way of writing the same loop in "Swift style"? Casting the value of num as a Double gives me an error with the "Swift style". Any suggestions?
Thank you.
You can refactor your deprecated C-style for loop using the for-in construct
for i in 2...Int(sqrt(Double(num))) { }
However, if you really want to go further, try defining this operator to find the square root of an Int, rounded down to the nearest Int:
import Foundation // for sqrt

prefix operator √ {}
prefix func √ (number: Int) -> Int {
    return Int(sqrt(Double(number)))
}
Now you can write your for loop this way
for i in 2...(√num) { }
The for loop in Swift 2.2 is redesigned for iterating over a sequence, such as a range of numbers, the items in an array, or the characters in a string. The condition in your for loop is not easily converted into a sequence or range, so it would be best to rewrite it as a while loop, as sketched below.
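A minimal sketch of that rewrite, assuming num is an Int (the value here is just a placeholder):

import Foundation   // for sqrt

let num = 100       // placeholder for illustration

var i = 2
while sqrt(Double(num)) >= Double(i) {
    // ... original loop body ...
    i += 1
}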

Swift float multiplication error

This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types and/or when the expected type differs from the result of the expression.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the 2 literal, although usually considered an integer, is automatically inferred to be a Float by the compiler, so in that case an explicit conversion is not required.
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.
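Putting the two fixes together, a sketch with hypothetical stand-ins for columnCount and getElement (the range needs integer bounds, and getElement is assumed to return a Float as stated in the question):

import Foundation   // for powf

let columnCount = 3                                    // hypothetical stand-in
func getElement(row: Int, column: Int) -> Float {      // hypothetical stand-in
    return Float(row + column)
}

for j in 0...columnCount {
    let element: Float = getElement(row: 1, column: j)
    let multiplier = powf(-1, Float(j) + 2) * element  // Float(j) is the explicit conversion
    print(multiplier)
}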

Brackets needed when using infix approach?

Really simple Scala question.
How come the infix approach to 1 + 2 does not need brackets?
scala> 1 + 2
res7: Int = 3
But the dot approach does?
scala> 1 .+(2)
res8: Int = 3
scala> 1 .+2
<console>:1: error: ';' expected but integer literal found.
1 .+2
^
Everything in Scala is an object, so 1 .+(2) means calling the method + on the object 1 with the parameter 2. And of course, if you call a method like this, you need to enclose the parameters in brackets, just like a regular obj.somemethod(someparam, foo, bar).
Infix notation (1 + 2) actually means the same thing; it is syntactic sugar for calling a method with one parameter (see the sketch below).
And the space before the dot is needed so that the dot is interpreted as a method invocation and not as a decimal separator. Otherwise 1.+(2) or 1.+2 would be interpreted as 1.0 + 2.
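A minimal sketch of that sugar on a hypothetical user-defined class:

// Vec is hypothetical, just to show that infix notation is sugar for a one-parameter method call
case class Vec(x: Int, y: Int) {
  def +(other: Vec): Vec = Vec(x + other.x, y + other.y)
}

val a = Vec(1, 2)
val b = Vec(3, 4)
a + b     // infix notation
a.+(b)    // the same call written explicitly: method + on object a with parameter b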
I think it has to do with the language definition:
A left-associative binary operation e1 op e2 is interpreted as e1.op(e2).
http://www.scala-lang.org/docu/files/ScalaReference.pdf
The form 1 .+ 2 is not specified anywhere, so my guess is that the compiler is looking for either 1 + 2 or 1.+(2). In fact, the compiler normally converts 1 + 2 into 1.+(2). When using the . it expects a function call and not the infix syntax.
Bottom line: you can use either, but not something halfway in between.
PS: someone commented that to call a method you need to use it like this: obj.somemethod(someparam, foo, bar), but in that case you can also write obj somemethod (someparam, foo, bar), and you have to leave the spaces for it to work.

Why does Scala's semicolon inference fail here?

On compiling the following code with Scala 2.7.3,
package spoj
object Prime1 {
  def main(args: Array[String]) {
    def isPrime(n: Int) = (n != 1) && (2 to n/2 forall (n % _ != 0))
    val read = new java.util.Scanner(System.in)
    var nTests = read nextInt // [*]
    while(nTests > 0) {
      val (start, end) = (read nextInt, read nextInt)
      start to end filter(isPrime(_)) foreach println
      println
      nTests -= 1
    }
  }
}
I get the following compile-time errors:
PRIME1.scala:8: error: illegal start of simple expression
while(nTests > 0) {
^
PRIME1.scala:14: error: block must end in result expression, not in definition
}
^
two errors found
When I add a semicolon at the end of the line commented as [*], the program compiles fine. Can anyone please explain why Scala's semicolon inference fails to work on that particular line?
It's because Scala assumes that you are using the syntax a foo b (equivalent to a.foo(b)) in your call to nextInt. That is, it assumes that the while loop is the argument to nextInt (recall that every expression has a type) and hence the last statement is a declaration:
var nTests = read nextInt x
where x is your while block.
I must say that, as a point of preference, I've now returned to using the usual a.foo(b) syntax over a foo b unless specifically working with a DSL which was designed with that use in mind (like actors' a ! b). It makes things much clearer in general and you don't get bitten by weird stuff like this!
Additional comment to the answer by oxbow_lakes...
var nTests = read nextInt()
should fix things for you, as an alternative to the semicolon.
To add a little more about the semicolon inference, Scala actually does this in two stages. First it infers a special token called nl, as described by the language spec. The parser allows nl to be used as a statement separator, as well as semicolons. However, nl is also permitted in a few other places by the grammar. In particular, a single nl is allowed after infix operators when the first token on the next line can start an expression, and while can start an expression, which is why it interprets it that way. Unfortunately, although while can start an expression, a while statement cannot be used in an infix expression, hence the error. Personally, it seems a rather quirky way for the parser to work, but there's quite plausibly a sane rationale behind it for all I know!
As yet another option besides those already suggested, putting a blank line between your [*] line and the while line will also fix the problem, because only a single nl is permitted after infix operators, so multiple nls force a different interpretation by the parser.
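For reference, minimal sketches of the non-semicolon fixes described above, reusing the Scanner from the question:

val read = new java.util.Scanner(System.in)

// Fix 1: dot notation, so no infix operator is left waiting for a right operand
var nTests = read.nextInt()

// Fix 2: keep the infix call but leave a blank line before the while loop;
// only a single inferred nl is permitted after an infix operator
var nTests2 = read nextInt

while (nTests2 > 0) { nTests2 -= 1 }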