I think this question is somewhat related to Kotlin function declaration: equals sign before curly braces
In Scala, every statement is an expression (possibly of type Unit). If we surround multiple expressions with braces, the value of the last expression becomes the value of the whole braced block. Therefore,
// Scala
val a = {
  val b = 1
  val c = b + b
  c + c
}
println(a)
The type of a is Int and the code prints the value 4.
However, in Kotlin, this is somewhat different.
If we do the same thing in Kotlin,
// Kotlin
val a = {
  val b = 1
  val c = b + b
  c + c
}
println(a)
The type of a is () -> Int and the code prints something like Function0<java.lang.Integer>, i.e. a 0-ary function object with result type Int.
So if we want to print the value 4, we need to do println(a()).
In fact, the expression {} in Kotlin is a function of type () -> Unit.
I cannot find an explanation of this in the official Kotlin reference pages. Do curly braces make a function even without a parameter list or ->?
When I use Scala, I write a lot of code like the first snippet. In Kotlin, however, this creates a function object and we have to call it, which may be an overhead in a loop or recursion.
Is there any documentation about this: do bare curly braces make a function object?
Is there any way to work around the overhead in the second snippet if it is executed many times?
EDIT
Here is the actual code that iterates many times:
in Java
while ((count = input.read(data, 0, BYTE_BLOCK_SIZE)) != -1) {
....
}
in Scala
while ({
  count = input.read(data, 0, BYTE_BLOCK_SIZE)
  count != -1
}) {
  ....
}
in Kotlin
while ({
  count = input.read(data, 0, BYTE_BLOCK_SIZE)
  count != -1
}()) {
  ...
}
As you can see, only Kotlin creates a lot of function objects and calls them.
In Kotlin, {} is always either a lambda expression or part of a syntax construct like while(true) {}. Probably different from Scala, but easy to grasp nonetheless.
What you probably want is:
val a = run {
  val b = 1
  val c = b + b
  c + c
}
println(a)
Kotlin has no built-in concept of "code blocks anywhere". Instead, we use standard helper functions like run in the example above.
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/run.html
Since {} defines a function, you can call the function right where you declare it, which evaluates it immediately:
val a = {
  val b = 1
  val c = b + b
  c + c
}()
println(a)
prints 4. (Note the () at the end of the block)
In Scala, every statement is an expression (possibly with Unit type)
That's not exactly true. Check: Is everything a function or expression or object in scala?
As Zoltan said, {} defines a function, so instead of:
// Kotlin
val a = {
  val b = 1
  val c = b + b
  c + c
}
println(a)
it should be:
// Kotlin
val a = {
  val b = 1
  val c = b + b
  c + c
}
println(a()) // you defined a function, which returns `4`, not an expression
or
// Kotlin
val a = {
  val b = 1
  val c = b + b
  c + c
}() // note the parentheses
println(a)
If you would like to print a value, not a function, you can declare the variable and then use one of Kotlin's functions to change the value, as in the example below:
var sum = 0
ints.filter { it > 0 }.forEach {
  sum += it
}
print(sum)
Read: Higher-Order Functions and Lambdas
Hope it helps.
Related
I am learning how to program in Scala and was told that the semicolon is optional in Scala. With that in mind, I tried the following nested block of code, which has no semicolons. However, it throws an error in the Scala REPL:
scala> { val a = 1
| {val b = a * 2
| {val c = b + 4
| c}
| }
| }
<console>:17: error: Int(1) does not take parameters
{val b = a * 2
And the sample with semicolons worked perfectly fine:
scala> { val a = 1;
| { val b = a*2;
| { val c = b+4; c}
| }
| }
res22: Int = 6
Therefore it seems to me that the semicolon is not really optional and is mandatory in some situations. In what situations is the semicolon mandatory?
I'll try to extract the essence from your example.
Consider the following code snippet:
{ val x = 1 { val y = 2 } }
To the compiler, it looks like syntactic sugar for
{ val x = 1.apply({ val y = 2 }) }
But the object 1 does not have an apply method that takes blocks, therefore the compiler produces an error:
error: Int(1) does not take parameters
{ val x = 1 { val y = 2 } }
^
Contrast this to
object I { def apply(a: => Any): Unit = () }
{ val x = I { val y = 2 } }
This works, because I now does have an apply method.
To make the differentiation between these two cases a little bit easier, the compiler requires a semicolon in the first case.
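For illustration, the same snippet with an explicit semicolon parses as two separate statements and compiles (the whole block evaluates to Unit, since the inner block ends in a definition):
{ val x = 1; { val y = 2 } }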
Now one might wonder why a line break between val x = 1 and the { is not converted into an inferred semicolon. I think the relevant quote from the spec would be this (1.2 Newline Characters) (most parts of enumerations omitted ([...]), emphasis mine):
The Scala grammar [...] contains productions where
optional nl tokens, but not semicolons, are accepted. This has the
effect that a newline in one of these positions does not terminate an
expression or statement. These positions can be summarized as follows:
[...]
in front of an opening brace ‘{’, if that brace is a legal continuation of the current statement or expression,
[...]
Note that this quote covers only the case with a single optional line break. It does not hold for two or more consecutive line breaks, e.g.
scala> {
| val x = 1
|
| { val y = 2 }
| }
is valid, and { val y = 2 } is parsed as a separate expression.
I guess the motivation was to allow embedded DSLs with syntactic sugar like this:
MY_WHILE(x >= 0)
{
  println(x)
  x -= 1
}
It would be really strange if one had to enclose each such MY_WHILE-statement into an additional pair of round parentheses, wouldn't it?
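For illustration, here is one way such a MY_WHILE could be defined (my own sketch, not from the question or the spec): the condition and the body are by-name parameters, and the body gets its own parameter list so it can be passed as a trailing brace block.

// Hypothetical DSL helper: neither argument is evaluated before the call,
// so the condition is re-checked and the body re-run on each iteration.
def MY_WHILE(cond: => Boolean)(body: => Unit): Unit =
  while (cond) body

var x = 3
MY_WHILE(x >= 0)
{
  println(x)  // prints 3, 2, 1, 0
  x -= 1
}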
Adding to Andrey's answer: it's rare that you write code like this in idiomatic Scala, but when you do, you should use locally:
{
  val a = 1
  locally {
    val b = a * 2
    locally {
      val c = b + 4
      c
    }
  }
}
This case is exactly why locally exists.
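For reference, locally lives in scala.Predef and is essentially an inlined identity function; its definition is roughly:

// scala.Predef (roughly): returns its argument unchanged. The call costs
// nothing; its only purpose is to force the braces that follow to be parsed
// as an argument rather than as a continuation of the previous statement.
@inline def locally[T](x: T): T = x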
Scala only sometimes desugars
a += b
to
a = a + b
but not always. For example, some mutable collections define a += method, where instead it becomes
a.+=(b)
Is this behaviour
entirely determined by whether there is a suitable += method on a? (incl. are there any other examples of this behaviour?)
independent of whether the objects are val or var?
Relevant example
Adapted from Programming in Scala, in
var s = Set("a", "b")
s += "c"
In this case, the second line of code s += "c" is essentially shorthand for:
s = s + "c"
When does a += b become a = a + b in Scala?
When there is no applicable += method, there is an applicable + method and a is assignable (that is, it's a var or there's an a_= method).
Or as the spec puts it:
The re-interpretation occurs if the following two conditions are fulfilled.
The left-hand-side l does not have a member named +=, and also cannot be converted by an implicit conversion to a value with a member named +=.
The assignment l = l + r is type-correct. In particular this implies that l refers to a variable or object that can be assigned to, and that is convertible to a value with a member named +.
Is this behaviour
entirely determined by whether there is a suitable += method on a?
independent of whether the objects are val or var?
Not quite. If there is a suitable += method, it will be called regardless of any other factors (such as a being assignable). But when there isn't, the other factors determine whether it's desugared or you get an error message.
Note that the error message you get is different than the one you'd get from the desugared version: When the criteria for the desugaring don't apply, you get an error message that tells you "+= is not a member of ...", plus an explanation why the desugaring couldn't be applied (such as "receiver is not assignable" or the type error you'd get from a + b if a + b would produce a type error).
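A small illustration of the two cases (String only has +, while ListBuffer defines an actual += method):

import scala.collection.mutable.ListBuffer

var s = "ab"                // String has a + method but no +=
s += "c"                    // desugared to s = s + "c"; s must therefore be a var

val buf = ListBuffer(1, 2)  // ListBuffer defines += itself
buf += 3                    // a plain method call buf.+=(3); buf can stay a val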
In my first answer I was a little hasty, my apologies. After researching a little and reading sepp2k's answer and comment more carefully, I came to the conclusion that some classes in Scala implement the += method, and others just the + method. I played a little with some Scala code, for instance:
// Set, Int, Double, String implement the "+" method,
// so "+=" is syntactic sugar for a = a + b
var set = Set("a", "b")
set += "c"

var num = 3
num += 2

var str = "43"
str += 5

// List has no "+" method, but any assignment operator ending in "=" is
// expanded the same way; for example ":+=" becomes l = l :+ "someString"
var l = List("a")
l :+= "someString"

// As you mention, MutableList implements the "+=" method, so
// mutL += 4 is the same as calling the method mutL.+=(4)
import scala.collection.mutable
var mutL = new mutable.MutableList[Int]
mutL += 4
I'm reading Programming in Scala by M. Odersky and now I'm trying to understand the meaning of operators. As far as I can see, any operator in Scala is just a method. Consider the following example:
class OperatorTest(var a : Int) {
  def +(ot: OperatorTest): OperatorTest = {
    val retVal = OperatorTest(0);
    retVal.a = a + ot.a;
    println("=")
    return retVal;
  }
}

object OperatorTest {
  def apply(a: Int) = new OperatorTest(a);
}
In this case we have only the + operator defined in this class. And if we type something like this:
var ot = OperatorTest(10);
var ot2 = OperatorTest(20);
ot += ot2;
println(ot.a);
then
=
30
will be the output. So I'd assume that for each class (or type?) in Scala we have a += operator defined for it, as a += b iff a = a + b. But since every operator is just a method, where is the += operator defined? Maybe there is some class (like Object in Java) containing all the definitions for such operators, and so forth.
I looked at AnyRef hoping to find it there, but couldn't.
+= and similar operators are desugared by the compiler when there is a + defined and no +=. (This works similarly for other operators too.) Check the Scala Language Specification (6.12.4):
Assignment operators are treated specially in that they can be
expanded to assignments if no other interpretation is valid.
Let's consider an assignment operator such as += in an infix operation
l += r, where l, r are expressions. This operation can be
re-interpreted as an operation which corresponds to the assignment
l = l + r except that the operation's left-hand-side l is evaluated
only once.
The re-interpretation occurs if the following two conditions are
fulfilled.
The left-hand-side l does not have a member named +=, and also cannot be converted by an implicit conversion to a value with a member named +=.
The assignment l = l + r is type-correct. In particular this implies that l refers to a variable or object that can be assigned to, and that is convertible to a value with a member named +.
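Applied to the OperatorTest class from the question: it has no += member, ot is a var, and ot + ot2 type-checks, so the compiler re-interprets the call:

var ot  = OperatorTest(10)
val ot2 = OperatorTest(20)

ot += ot2      // re-interpreted as ot = ot + ot2; the + method prints "="
println(ot.a)  // 30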
As I read from the book, Scala operator associativity goes from left to right, except for operators that end with the ":" char.
Given val a = b = c, it becomes val a = (b = c), which leaves a initialized to Unit.
But why doesn't it become (val a = b) = c, causing a compile error because a Unit (returned from a = b) would be receiving c?
And when I actually type (val a = b) = c, the compiler complains "illegal start of simple expression" and points to val. Why are these two assignment operators not grouped from left to right?
The important thing to see here is that val a = b = c is a declaration and not an expression, but precedence and associativity are notions applying to expressions.
Looking at the Scala 2.11 Language Specification, 4.1 Value Declarations and Definitions, you can see that
val a = b = c
is a basic declaration falling into the PatVarDef case, so it can be read as (simplifying the Pattern2 part to the specific case of varid here):
'val' varid '=' Expr
Thus, here we have
the variable identifier varid is a,
the expression Expr is b = c,
a is assigned the value b = c evaluates to, which happens to be Unit.
For the latter, see What is the motivation for Scala assignment evaluating to Unit rather than the value assigned?
Note that the above assumes that b is a var, e.g.,
val c = 17
var b = 3
val a = b = c
as b is reassigned in the third line (otherwise you would get error: reassignment to val). If you want to assign the same value to two vals for some reason, you can use
val c = 17
val a, b = c
Recently I had an interview for a Scala Developer position. I was asked the following question:
// matrix 100x100 (content unimportant)
val matrix = Seq.tabulate(100, 100) { case (x, y) => x + y }
// A
for {
  row <- matrix
  elem <- row
} print(elem)
// B
val func = print _
for {
  row <- matrix
  elem <- row
} func(elem)
and the question was: Which implementation, A or B, is more efficient?
We all know that for comprehensions can be translated to
// A
matrix.foreach(row => row.foreach(elem => print(elem)))
// B
matrix.foreach(row => row.foreach(func))
B can be written as matrix.foreach(row => row.foreach(print _))
The supposedly correct answer is B, because A will create the function for print 100 times more often.
I have checked the Language Specification but still fail to understand the answer. Can somebody explain this to me?
In short:
Example A is faster in theory; in practice, though, you shouldn't be able to measure any difference.
Long answer:
As you already found out
for {xs <- xxs; x <- xs} f(x)
is translated to
xxs.foreach(xs => xs.foreach(x => f(x)))
This is explained in §6.19 SLS:
A for loop
for ( p <- e; p' <- e' ... ) e''
where ... is a (possibly empty) sequence of generators, definitions, or guards, is translated to
e .foreach { case p => for ( p' <- e' ... ) e'' }
Now, when one writes a function literal, one gets a new instance every time the function literal is evaluated (§6.23 SLS). This means that
xs.foreach(x => f(x))
is equivalent to
xs.foreach(new scala.Function1 { def apply(x: T) = f(x)})
When you introduce a local function value
val g = f _; xxs.foreach(xs => xs.foreach(x => g(x)))
you are not introducing an optimization because you still pass a function literal to foreach. In fact the code is slower because the inner foreach is translated to
xs.foreach(new scala.Function1 { def apply(x: T) = g.apply(x) })
where an additional call to the apply method of g happens. However, you can optimize when you write
val g = f _; xxs.foreach(xs => xs.foreach(g))
because the inner foreach no longer wraps g in a new function literal:
xs.foreach(g)
which means that the function g itself is passed to foreach.
This would mean that B is faster in theory, because no anonymous function needs to be created each time the body of the for comprehension is executed. However, the optimization mentioned above (passing the function directly to foreach) is not applied to for comprehensions, because, as the spec says, the translation includes the creation of function literals; therefore unnecessary function objects are always created (the compiler could optimize that as well, but it doesn't, because optimization of for comprehensions is difficult and still does not happen in 2.11). All in all, this means that A is more efficient, but B would be more efficient if it were written without a for comprehension (so that no function literal is created for the innermost function).
Nevertheless, all of these rules apply only in theory, because in practice there are the scalac backend and the JVM itself, which can both do optimizations, not to mention optimizations done by the CPU. Furthermore, your example contains a syscall that is executed on every iteration; it is probably the most expensive operation here and outweighs everything else.
I'd agree with sschaef and say that A is the more efficient option.
Looking at the generated class files we get the following anonymous functions and their apply methods:
MethodA:
anonfun$2 -- row => row.foreach(new anonfun$2$$anonfun$1)
anonfun$2$$anonfun$1 -- elem => print(elem)
i.e. matrix.foreach(row => row.foreach(elem => print(elem)))
MethodB:
anonfun$3 -- x => print(x)
anonfun$4 -- row => row.foreach(new anonfun$4$$anonfun$2)
anonfun$4$$anonfun$2 -- elem => func(elem)
i.e. matrix.foreach(row => row.foreach(elem => func(elem)))
where func is just another indirection before calling print. In addition, func needs to be looked up, i.e. through a method call on an instance (this.func()), for each row.
So for Method B, one extra object is created (func) and there is one additional function call per element.
The most efficient option would be
matrix.foreach(row => row.foreach(func))
as this has the least number of objects created and does exactly as you would expect.
Benchmark
Summary
Method A is nearly 30% faster than method B.
Link to code: https://gist.github.com/ziggystar/490f693bc39d1396ef8d
Implementation Details
I added method C (two while loops) and D (fold, sum). I also increased the size of the matrix and used an IndexedSeq instead. Also I replaced the print with something less heavy (sum all entries).
Strangely the while construct is not the fastest. But if one uses Array instead of IndexedSeq it becomes the fastest by a large margin (factor 5, no boxing anymore). Using explicitly boxed integers, methods A, B, C are all equally fast. In particular they are faster by 50% compared to the implicitly boxed versions of A, B.
Results
A
4.907797735
4.369745787
4.375195012000001
4.7421321800000005
4.35150636
B
5.955951859000001
5.925475619
5.939570085000001
5.955592247
5.939672226000001
C
5.991946029
5.960122757000001
5.970733164
6.025532582
6.04999499
D
9.278486201
9.265983922
9.228320372
9.255641645
9.22281905
verify results
999000000
999000000
999000000
999000000
>$ scala -version
Scala code runner version 2.11.0 -- Copyright 2002-2013, LAMP/EPFL
Code excerpt
val matrix = IndexedSeq.tabulate(1000, 1000) { case (x, y) => x + y }

def variantA(): Int = {
  var r = 0
  for {
    row <- matrix
    elem <- row
  } {
    r += elem
  }
  r
}

def variantB(): Int = {
  var r = 0
  val f = (x: Int) => r += x
  for {
    row <- matrix
    elem <- row
  } f(elem)
  r
}

def variantC(): Int = {
  var r = 0
  var i1 = 0
  while (i1 < matrix.size) {
    var i2 = 0
    val row = matrix(i1)
    while (i2 < row.size) {
      r += row(i2)
      i2 += 1
    }
    i1 += 1
  }
  r
}

def variantD(): Int = matrix.foldLeft(0)(_ + _.sum)
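For reference, here is a sketch of the Array-based while variant mentioned above (my own reconstruction, not part of the linked gist; variantE is just a name I picked). With primitive Int arrays there is no boxing:

val arrayMatrix: Array[Array[Int]] = Array.tabulate(1000, 1000)((x, y) => x + y)

def variantE(): Int = {
  var r = 0
  var i1 = 0
  while (i1 < arrayMatrix.length) {
    val row = arrayMatrix(i1)  // a primitive int[] row, no boxing on access
    var i2 = 0
    while (i2 < row.length) {
      r += row(i2)
      i2 += 1
    }
    i1 += 1
  }
  r
}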