I'm looking at this Named Arguments example in Scala in Depth:
scala> class Parent {
| def foo(bar: Int = 1, baz: Int = 2): Int = bar + baz
| }
defined class Parent
scala> class Child extends Parent {
| override def foo(baz: Int = 3, bar: Int = 4): Int = super.foo(baz, bar)
| }
defined class Child
scala> val p = new Parent
p: Parent = Parent@6100756c
scala> p.foo()
res1: Int = 3
scala> val x = new Child
x: Child = Child@70605759
Calling x.foo() evaluates to 7 since Child#foo has default arguments of 3 and 4.
scala> x.foo()
res3: Int = 7
Instantiate a new Child at run time, but with a compile-time (static) type of Parent. This may or may not be what you intend.
scala> val y: Parent = new Child
y: Parent = Child@540b6fd1
Calling y.foo() evaluates to 7 since Child#foo has default arguments of 3 and 4.
scala> y.foo()
res5: Int = 7
Calling x.foo(bar = 1) evaluates to 4 since Child#foo has a default baz argument of 3.
scala> x.foo(bar = 1)
res6: Int = 4
However, I don't understand why y.foo(bar = 1) returns 5. I would have expected Child#foo to be evaluated, since y is a Child at run time. Passing bar = 1 to foo means baz keeps its default of 3, so it should produce 4. But my understanding is of course incorrect.
scala> y.foo(bar = 1)
res7: Int = 5
There are two reasons:
Default parameters implementation
The Scala compiler creates helper methods for default parameters:
val p = new Parent()
val c = new Child()
p.`foo$default$1`
// Int = 1
p.`foo$default$2`
// Int = 2
c.`foo$default$1`
// Int = 3
c.`foo$default$2`
// Int = 4
This is also why you can use not only constants but also fields and methods as default parameter values:
def test(i: Int = util.Random.nextInt) = i
test()
// Int = -1102682999
test()
// Int = -1994652923
Named parameters implementation
There are no named parameters after compilation - all parameters are positional.
So, since bar is the second parameter of Child#foo, this code:
c.foo(bar = 1)
// Int = 4
is translated by compiler to this:
c.foo(c.`foo$default$1`, /*bar = */1)
// Int = 4
But since bar is the first parameter of Parent#foo, this code:
val tmp: Parent = c
tmp.foo(bar = 1)
// Int = 5
is translated to this:
tmp.foo(/*bar = */1, tmp.`foo$default$2`)
// Int = 5
As we already know, c.`foo$default$2` returns 4, so c.foo(1, 4) returns 5.
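To see both rules in one place, here is a short sketch reusing Parent and Child from the question; the comments show the rewrite the compiler performs:
val c2 = new Child
val p2: Parent = c2
// Parameter positions come from the static type, default values from the runtime type:
c2.foo(bar = 1)  // bar is 2nd in Child#foo  -> c2.foo(c2.`foo$default$1`, 1) = 3 + 1 = 4
p2.foo(bar = 1)  // bar is 1st in Parent#foo -> p2.foo(1, p2.`foo$default$2`) = 1 + 4 = 5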
Related
I know that one can define an operator in Scala like this :
class value(var valu:Int) {
def +(i:Int) = { this.valu + i }
def ==>(i:Int) = { this.valu = i }
}
But I cannot seem to overload the = operator like this :
class value(var valu:Int) {
def =(i:Int) = { this.valu = i }
}
Do you know if there is any way to do this?
The syntax for making mutable objects isn't obvious and isn't encountered often because mutability is generally undesirable.
class Value(private var valu:Int) {
def update(i:Int) :Unit = valu = i
}
val v = new Value(19)
v() = 52
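A slightly extended sketch of the same idea (the class and its members are made up for illustration): obj() = x is rewritten by the compiler to obj.update(x), and adding an apply method gives read access with the same syntax.
class Cell(private var valu: Int) {
  def apply(): Int = valu              // read:  c()
  def update(i: Int): Unit = valu = i  // write: c() = i becomes c.update(i)
}
val c = new Cell(19)
c() = 52       // rewritten to c.update(52)
println(c())   // prints 52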
= is a reserved word like yield, so to use it as an identifier, you put it in backticks, though I suspect no one does that:
scala> class C(var i: Int) { def `=`(n: Int) = i = n }
defined class C
scala> val c = new C(42)
c: C = C@9efcd90
scala> c.`=`(27)
scala> c.i
res1: Int = 27
scala> c `=` 5
scala> c.i
res3: Int = 5
Compare:
scala> val yield = 2
^
error: illegal start of simple pattern
scala> val `yield` = 2
yield: Int = 2
In Scala, def is used to define a method, and val and var are used to define variables.
Consider the following code:
scala> def i = 3
i: Int
scala> i.getClass()
res0: Class[Int] = int
scala> val v = 2
v: Int = 2
scala> v.getClass()
res1: Class[Int] = int
scala> println(v)
2
scala> println(i)
3
scala> i+v
res4: Int = 5
scala> def o = () => 2+3
o: () => Int
scala> o.getClass()
res5: Class[_ <: () => Int] = class $$Lambda$1139/1753607449
Why does variable definition work using def? If it is defining a function that returns an Int then why does getClass show Int instead of a function object?
Unlike a val or var declaration, def i = 3 is not a variable declaration. You are defining a method that takes no parameters and returns the constant 3.
Declarations using val and var are evaluated immediately, but with lazy val and def, evaluation happens only when they are explicitly called.
i is a zero-argument method. To avoid confusion, you could also declare it with an empty parameter list:
def i() = 3
The difference between lazy val and def is:
lazy val is evaluated lazily and the result is cached, so further accesses return the cached value.
def is evaluated every time you call the method.
Example using Scala REPL
scala> lazy val a = { println("a evaluated"); 1}
a: Int = <lazy>
scala> def i = { println("i function evaluated"); 2}
i: Int
scala> a
a evaluated
res0: Int = 1
scala> a
res1: Int = 1
scala> a
res2: Int = 1
scala> i
i function evaluated
res3: Int = 2
scala> i
i function evaluated
res4: Int = 2
scala> i
i function evaluated
res5: Int = 2
Notice that a is evaluated only once, and further invocations of a return the cached result; i.e., a lazy val is evaluated once, when it is first accessed, and the result is kept from then on. That is why you see the println output only once.
Notice that i is evaluated every time it is invoked, which is why you see the println output on every call.
General Convention
There's a convention of using an empty parameter list when the method has side effects and leaving it off when the method is pure.
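For example, a small sketch of that convention (the class and members are made up):
import scala.collection.mutable.ListBuffer
class Registry {
  private val items = ListBuffer.empty[String]
  def size: Int = items.size         // pure accessor: declared and called without ()
  def clear(): Unit = items.clear()  // side-effecting: declared and called with ()
}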
Edit:
scala> def i = 1
i: Int
scala> :type i
Int
scala> :type i _
() => Int
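In other words (a small sketch; as the :type output above shows, i _ eta-expands the parameterless method into a function value):
def i = 1
val f: () => Int = i _   // the method wrapped as a function value
f()                      // 1; the method body is evaluated when the function is applied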
EDIT: My answer addresses the question of revision #3.
It is quite useful to look at the code in the middle of the compilation process, where you can see what your code is actually translated to. The following simple program:
object TestApp {
def definedVal = 3
val valVal = 3
lazy val lazyValVal = 3
def main(args: Array[String]) {
println(definedVal)
println(valVal)
println(lazyValVal)
}
}
is translated to the following (using the -Xprint:mixin compiler option):
[[syntax trees at end of mixin]] // test.scala
package <empty> {
object TestApp extends Object {
#volatile private[this] var bitmap$0: Boolean = false;
private def lazyValVal$lzycompute(): Int = {
{
TestApp.this.synchronized({
if (TestApp.this.bitmap$0.unary_!())
{
TestApp.this.lazyValVal = 3;
TestApp.this.bitmap$0 = true;
()
};
scala.runtime.BoxedUnit.UNIT
});
()
};
TestApp.this.lazyValVal
};
def definedVal(): Int = 3;
private[this] val valVal: Int = _;
<stable> <accessor> def valVal(): Int = TestApp.this.valVal;
lazy private[this] var lazyValVal: Int = _;
<stable> <accessor> lazy def lazyValVal(): Int = if (TestApp.this.bitmap$0.unary_!())
TestApp.this.lazyValVal$lzycompute()
else
TestApp.this.lazyValVal;
def main(args: Array[String]): Unit = {
scala.this.Predef.println(scala.Int.box(TestApp.this.definedVal()));
scala.this.Predef.println(scala.Int.box(TestApp.this.valVal()));
scala.this.Predef.println(scala.Int.box(TestApp.this.lazyValVal()))
};
def <init>(): TestApp.type = {
TestApp.super.<init>();
TestApp.this.valVal = 3;
()
}
}
}
From the output above it is possible to conclude the following:
definedVal is actually a method.
valVal is a field which is initialized in the constructor and has an automatically generated accessor.
For the lazy field lazyValVal, the compiler generates a compute method which is called only once, when the field is accessed for the first time.
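A hand-written approximation of the lazy val pattern visible in that output (a sketch only, not the exact generated code):
object LazyApprox {
  @volatile private var bitmap0 = false
  private var lazyValValField = 0
  def lazyValVal: Int = {
    if (!bitmap0) synchronized {
      if (!bitmap0) { lazyValValField = 3; bitmap0 = true }
    }
    lazyValValField
  }
}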
There are a few different concepts: call by name, call by value, and call by need. A def is essentially call by name. What do you mean by variable definition using def?
Looks like a duplicate to me:
Call by name vs call by value in Scala, clarification needed
More details on Wikipedia: https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_name
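For reference, a minimal sketch of the two strategies in Scala terms (the names are made up):
var count = 0
def tick(): Int = { count += 1; count }
def byValue(x: Int): Int = x + x    // argument evaluated once, before the call
def byName(x: => Int): Int = x + x  // argument re-evaluated at each use
byValue(tick())  // tick() runs once:  count is 1, result 2
byName(tick())   // tick() runs twice: count is 3, result 5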
The scope of a name introduced by a declaration or definition is the whole statement sequence containing the binding. However, there is a restriction on forward references in blocks: in a statement sequence s[1]...s[n] making up a block, if a simple name in s[i] refers to an entity defined by s[j] where j >= i, then for all s[k] between and including s[i] and s[j]:
s[k] cannot be a variable definition.
If s[k] is a value definition, it must be lazy.
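A minimal illustration of that rule: the commented-out block is rejected by the compiler, while making the intervening definitions lazy satisfies it.
// Does not compile, with an error like: forward reference extends over definition of value a
// locally { val a = b; val b = 1 }
// Compiles, because every definition between the reference and its target is lazy:
locally {
  lazy val a = b
  lazy val b = 1
  println(a)  // prints 1
}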
Edit: I am not sure Mikaël Mayer's answer actually explained everything. Consider:
object Test {
def main(args: Array[String]) {
println(x)
lazy val x: Int = 6
}
}
Here, the lazy value x definitely has to be read/evaluated before it is actually defined in the code! Which would contradict Mikaël's claim that lazy evaluation does away with the need to evaluate things before they are defined.
Normally you cannot have this:
val e: Int = 2
val a: Int = b+c
val b: Int = c
val c: Int = 1
val d: Int = 0
because value c is not yet defined at the time a is defined. Because a references c, all the values between a and c have to be lazy so that the forward dependency is avoided:
val e: Int = 2
lazy val a: Int = b+c
lazy val b: Int = c
lazy val c: Int = 1
val d: Int = 0
This in fact translates a, b and c into objects whose values are initialized when they are first read, which happens after their declaration, i.e. it is roughly equivalent to:
val e: Int = 2
var a: LazyEval[Int] = null
var b: LazyEval[Int] = null
var c: LazyEval[Int] = null
a = new LazyEval[Int] {
def evalInternal() = b.eval() + c.eval()
}
b = new LazyEval[Int] {
def evalInternal() = c.eval()
}
c = new LazyEval[Int] {
def evalInternal() = 1
}
val d = 0
where LazyEval would be something like the following (implemented by the compiler itself):
abstract class LazyEval[T] {
var value: T = _
var computed: Boolean = false
def evalInternal(): T // Abstract method to be overridden
def eval(): T = {
if(computed) value else {
value = evalInternal()
computed = true
value
}
}
}
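A quick usage sketch of that approximation (not the compiler's real encoding):
val one = new LazyEval[Int] {
  def evalInternal() = { println("computing"); 1 }
}
one.eval()  // prints "computing" and returns 1
one.eval()  // returns the cached 1 without recomputing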
Edit:
vals don't really exist in Java; they are either local variables or do not appear in the computation at all. Therefore, the declaration of a lazy val exists before anything is done. And remember that closures are implemented in Scala.
Your block would be rewritten like this:
object Test {
def main(args: Array[String]) {
// Declare all variables, val, vars.
var x: LazyEval[Int] = null
// No more variables to declare. Lazy/or not variable definitions
x = new LazyEval[Int] {
def evalInternal() = 6
}
// Now the code starts
println(x.eval())
}
}
You're trying to avoid references to entities which are provably uninitialized (or which are maybe uninitialized).
In a block, assignments occur in source order, but in a class template, members can be overridden and initialized early.
For instance,
{ val a = b ; val b = 1 } // if allowed, value of a is undefined
but in a template
class X { val a = b ; val b = 1 } // warning only
val x = new { override val b = 2 } with X
x.a // this is 2
class Y(override val b: Int) extends X // similarly
You also want to avoid this:
locally {
def a = c
val b = 2 // everything in-between must be lazy, too
def c = b + 1
}
Local objects are explicitly the same as lazy vals:
{ object p { val x = o.y } ; object o { val y = 1 } }
Other kinds of forward reference:
{ val x: X = 3 ; type X = Int }
The spec talks about forward references to "entities" -- a "name refers to an entity" -- which elsewhere means both terms and types, but obviously it really means only terms here.
It will let you harm yourself:
{ def a: Int = b ; def b: Int = a; a }
Maybe your mode of self-destruction must be well-defined. Then it's OK.
Why, in this example, is no error thrown, and why does b end up holding the default value?
scala> val b = a; val a = 5
b: Int = 0
a: Int = 5
When you do this in the REPL, you are effectively doing:
class Foobar { val b = a; val a = 5 }
b and a are assigned to in order, so at the time when you're assigning b, there is a field a, but it hasn't yet been assigned to, so it has the default value of 0. In Java, you can't do this because you can't reference a field before it is defined. I believe you can do this in Scala to allow lazy initialisation.
You can see this more clearly if you use the following code:
scala> class Foobar {
println("a=" + a)
val b = a
println("a=" + a)
val a = 5
println("a=" + a)
}
defined class Foobar
scala> new Foobar().b
a=0
a=0
a=5
res6: Int = 0
You can have the correct value assigned if you make a a method:
scala> class Foobar { val b = a; def a = 5 }
defined class Foobar
scala> new Foobar().b
res2: Int = 5
or you can make b a lazy val:
scala> class Foobar { lazy val b = a; val a = 5 }
defined class Foobar
scala> new Foobar().b
res5: Int = 5
When this code is executed:
var a = 24
var b = Array (1, 2, 3)
a = 42
b = Array (3, 4, 5)
b (1) = 42
I see three (five?) assignments here. What is the name of the method call that is called in such circumstances?
Is it operator overloading?
Update:
Can I create a class and overload assignment? ( x = y not x(1) = y )
Having this file:
//assignmethod.scala
object Main {
def main(args: Array[String]) {
var a = 24
var b = Array (1, 2, 3)
a = 42
b = Array (3, 4, 5)
b (1) = 42
}
}
running scalac -print assignmethod.scala gives us:
[[syntax trees at end of cleanup]]// Scala source: assignmethod.scala
package <empty> {
final class Main extends java.lang.Object with ScalaObject {
def main(args: Array[java.lang.String]): Unit = {
var a: Int = 24;
var b: Array[Int] = scala.Array.apply(1, scala.this.Predef.wrapIntArray(Array[Int]{2, 3}));
a = 42;
b = scala.Array.apply(3, scala.this.Predef.wrapIntArray(Array[Int]{4, 5}));
b.update(1, 42)
};
def this(): object Main = {
Main.super.this();
()
}
}
}
As you can see, the compiler changes only the last one (b(1) = 42) into a method call:
b.update(1, 42)
Complementing Michael's answer: plain assignment can't be overloaded in Scala, though you can create an assignment-like operator, like :=, for example.
The "assignments" that can be given custom behavior are:
// method update
a(x) = y
// method x_=, assuming method x exists and is also visible
a.x = y
// method +=, though it will be converted to a = a + y if method += doesn't exist
a += y
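A small sketch of those three rewrites in one class (the class and members are made up for illustration):
class Box {
  private var value = 0
  def update(i: Int, v: Int): Unit = value = v  // enables: box(i) = v
  def x: Int = value
  def x_=(n: Int): Unit = value = n             // enables: box.x = n
  def +=(n: Int): Unit = value += n             // enables: box += n
}
val box = new Box
box(0) = 42     // rewritten to box.update(0, 42)
box.x = 7       // rewritten to box.x_=(7)
box += 3        // calls box.+=(3); for a var without +=, this would become box = box + 3
println(box.x)  // prints 10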