class Point(val x: Int, val y: Int) {
  def this(xArg: Int) = {
    this(xArg, 0)
    println("constructor")
  }
}

class TalkPoint(x: Int, y: Int) extends Point(x, y) {
  def talk() = {
    println("my position is (" + x + "," + y + ")")
  }
}

object Test {
  def main(args: Array[String]): Unit = {
    val p = new TalkPoint(0, 0)
    p.talk()
  }
}
The above is an example program in Scala.
I am confused by the missing output "constructor".
Where is the constructor of the parent class Point?
The constructor is actually part of the definition of the class:
class Point(val x: Int, val y: Int). This line not only defines Point's primary constructor, which takes two Ints; by using val, it also makes x and y read-only fields of the Point class. def this(xArg: Int) is what is called an auxiliary constructor, and notice how it calls the primary constructor (as all auxiliary constructors in Scala must, directly or indirectly): this(xArg, 0). In Java, Point's definition is roughly equivalent to
class Point {
    public final int x, y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public Point(int x) {
        this(x, 0);
        System.out.println("constructor");
    }
}
The line class TalkPoint(x: Int, y: Int) extends Point(x, y) not only defines the class TalkPoint, it also defines its primary constructor, declares that it is a subclass of Point, and even calls Point's primary constructor. In Java, it would look something like
class TalkPoint extends Point {
    public TalkPoint(int x, int y) {
        super(x, y);
    }
    // ...
}
In your main method, you call TalkPoint's primary constructor, which in turn calls Point's primary constructor; at no point is Point's auxiliary constructor invoked, and thus nothing is printed. Try creating a Point with its auxiliary constructor, however: val p = new Point(42)
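For instance, a main method like the following (a minimal sketch; the object name AuxDemo is made up, and it reuses the Point class from above) goes through the auxiliary constructor and therefore prints the message:

object AuxDemo {
  def main(args: Array[String]): Unit = {
    val p = new Point(42)     // auxiliary constructor: delegates to this(42, 0),
                              // then prints "constructor"
    println(p.x + "," + p.y)  // 42,0
  }
}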
I have the following piece of code
class A(var x: Int, var y: Int) {
}

class B(x: Int, y: Int) extends A(x, y) {
  def setX(xx: Int): this.type = {
    this.x = xx
    this
  }
}
but it gives the following error:
error: reassignment to val
this.x = xx
^
I don't know what's happening, since x and y should be variables. What's the correct way of doing this?
There is a collision between the names of the inherited member variables and the names of B's constructor arguments.
The obvious workaround compiles just fine:
class A(var x: Int, var y: Int)

class B(cx: Int, cy: Int) extends A(cx, cy) {
  def setX(xx: Int): this.type = {
    this.x = xx
    this
  }
}
The issue doesn't seem to be new; here is a link to a forum entry from 2009. It has a posting with literally the same error message in the same situation.
The root cause is that constructor arguments can be automatically converted into private vals, because they can be referenced from the methods of the object:
class B(cx: Int, cy: Int) {
  def foo: Int = cx
}
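To make the shadowing concrete, here is a small illustrative sketch (the class names Base and Derived are made up): inside Derived, the bare name x resolves to the constructor parameter, while the inherited var of Base is a separate field that can still be reached through the supertype.

class Base(var x: Int)

class Derived(x: Int) extends Base(x + 100) {
  // `x` below is Derived's constructor parameter (kept as a private val
  // because a method refers to it), not the inherited var Base.x.
  def param: Int = x
  def inherited: Int = (this: Base).x  // selects Base's getter explicitly
}

object ShadowDemo extends App {
  val d = new Derived(1)
  println(d.param)      // 1
  println(d.inherited)  // 101
}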
package Controls

object TestBreak extends App {
  def values = {
    x1 = x
    y1 = y
  }
  val (x, y) = (1, 2)
  values
  var (x1, y1) = (2, 3)
  println((x, y))
  println((x1, y1))
}
As I can see here, the program executes successfully without any error,
even though the method values is called before the initialization of the variables x1 and y1.
How does Scala handle this case?
How does the compilation of this code take place?
App extends DelayedInit
App extends DelayedInit, and TestBreak extends App, so the code in the TestBreak constructor executes as part of delayed initialization. Before jumping into how delayed init works, let's see the difference between declaring a value in the constructor (the class body) and declaring it inside a method.
scala> class A {
     |   def a(): Unit = { println(b) }
     |   val b = 1
     | }
defined class A
In the above example, b is declared after the method a() that refers to it. Because both are members of the class, i.e. part of its constructor body, the compiler accepts this forward reference.
scala> def x: Int = {
     |   def y: Int = z
     |   val z = 1
     |   y
     | }
<console>:12: error: forward reference extends over definition of value z
       def y: Int = z
                    ^
In this case the compiler complains that it is a forward reference: inside a method body (a block), a local definition may not be referenced before the point where it is defined.
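To connect this back to the question: members of a class or object already exist as fields, holding their default value (0 for Int), before their initializer statement has run, so they can be read or assigned early. A minimal sketch (the names Demo and DemoRun are made up):

class Demo {
  def show(): Unit = println(n)  // refers to the member n defined below
  show()                         // prints 0: `val n = 42` has not run yet
  val n = 42
  show()                         // prints 42
}

object DemoRun {
  def main(args: Array[String]): Unit = { new Demo() }
}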
Note that DelayedInit is deprecated.
Classes and objects (but note, not traits) inheriting the DelayedInit marker trait will have their initialization code rewritten as follows: code becomes delayedInit(code).
Initialization code comprises all statements and all value definitions that are executed during initialization.
Example:
trait Helper extends DelayedInit {
  def delayedInit(body: => Unit) = {
    println("dummy text, printed before initialization of C")
    body // evaluates the initialization code of C
  }
}

class C extends Helper {
  println("this is the initialization code of C")
}

object Test extends App {
  val c = new C
}
Should result in the following being printed:
dummy text, printed before initialization of C
this is the initialization code of C
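Applying the same idea to the program in the question (a hedged sketch; the comments show the values one would expect, assuming the standard App behaviour): the whole body of TestBreak is handed to delayedInit and only runs when main() is invoked, statement by statement, in source order.

object TestBreakSketch extends App {
  def values = {
    x1 = x   // x1 and y1 already exist as fields of the object,
    y1 = y   // holding 0 until their initializer statement runs
  }
  val (x, y) = (1, 2)
  values                 // sets x1 = 1, y1 = 2
  var (x1, y1) = (2, 3)  // the initializer then overwrites them: 2, 3
  println((x, y))        // (1,2)
  println((x1, y1))      // (2,3)
}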
I'm trying to implement a trait as follows: (1)
trait FooLike {
  def foo(x: Int): Int
}

class Foo extends FooLike {
  def foo(x: Int, y: Int = 0): Int = x + y
}
But the compiler complains that the method foo(x: Int): Int is not implemented.
I can do: (2)
class Foo extends FooLike {
  def foo(x: Int): Int = foo(x, 0)
  def foo(x: Int, y: Int = 0): Int = x + y
}
But it feels like Java, and I don't like it! Is there a way to avoid this boilerplate?
I thought that def foo(x: Int, y: Int = 0) would define two methods in the background but apparently it's not the case. What's actually happening ?
--- EDIT : more weirdness ---
Also the following is perfectly legit: (3)
class Foo extends FooLike {
  def foo(x: Int): Int = x - 1
  def foo(x: Int, y: Int = 0): Int = x + y
}
while it doesn't seem reasonable (foo(4) returns 3 while foo(4, 0) returns 4).
I think that authorizing (1) and forbidding (3) would have been the most reasonable choice, but instead they made the opposite choice. So why did Scala make those choices ?
It is not possible to override a method of a different type signature in Scala by using default arguments. This is because of how default arguments are implemented. Default arguments are inserted when and where the method is applied, so there is only one version of the method being defined.
According to SID-1: Named and Default Arguments, when a method foo() with default arguments is compiled, only one method foo() is defined, which takes all the arguments. A call with default arguments like f.foo(xValue) is transformed at compile time into code equivalent to the following:
{
  val x = xValue
  val y = f.foo$default$2
  f.foo(x, y)
}
The foo$default$2 method is a hidden method which takes no arguments and returns the default value of argument #2 to the method foo().
So while you can write the exact same function application, foo(xValue), for a method foo(x: Int) or a method foo(x: Int, y: Int = 0), the methods being called "behind the scenes" do not have the same type signature.
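Concretely, a definition like def foo(x: Int, y: Int = 0): Int = x + y compiles to roughly the following (a hedged sketch using the SID-1 naming convention; the exact generated code may differ). Since the only real method is foo(Int, Int), it cannot implement the trait's abstract foo(x: Int):

class FooCompiled {
  def foo(x: Int, y: Int): Int = x + y   // the single real method
  def foo$default$2: Int = 0             // hidden method returning the default for argument #2
}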
You are extending a trait which has an unimplemented method, so you must either implement it in the extending class or implement it in the trait. If you don't need the single-argument foo, you can do:
trait FooLike {
  def foo(x: Int, y: Int): Int
}

class Foo extends FooLike {
  def foo(x: Int, y: Int = 0): Int = x + y
}
But I suppose you do need it, so you have to give the compiler an implementation for it. To compare this case with Java: it's like extending an abstract class without implementing one of its methods; the compiler will complain that you must either implement the method or declare the class abstract.
One other approach which came to mind and is fairly reasonable is to implement the method in the trait so:
trait FooLike {
  def foo(x: Int): Int = x
}

class Foo extends FooLike {
  def foo(x: Int, y: Int = 0): Int = x + y
}
Then if you want to add a new class with the trait mixed in but with a different method implementation just override the method:
class AnotherFoo extends FooLike {
  override def foo(x: Int): Int = x + 1
}
I'm having difficulty transitioning from the world of C++/Templates to scala. I'm used to being able to use any operation on a template parameter T that I want, as long as anything I use to instantiate T with supports those operations (compile-time Duck typing, basically). I cannot find the corresponding idiom in Scala that will allow me to define an abstract class with a single type parameter, and which expects a certain interface for type T.
What I have almost works, but I cannot figure out how to tell the abstract class (Texture[T <: Summable[T]]) that T supports conversion/construction from an Int. How can I add the implicit conversion to the trait Summable so that Texture knows T supports the conversion?
trait Summable[T] {
  def += (v: T): Unit
  def -= (v: T): Unit
}

object Int4 { implicit def int2Int4(i: Int) = new Int4(i, i, i, i) }

class Int4(var x: Int, var y: Int, var z: Int, var w: Int) extends Summable[Int4] {
  def this(v: Int) = this(v, v, v, v)
  def += (v: Int4): Unit = { x += v.x; y += v.y; z += v.z; w += v.w }
  def -= (v: Int4): Unit = { x -= v.x; y -= v.y; z -= v.z; w -= v.w }
}

abstract class Texture[Texel <: Summable[Texel]] {
  var counter: Texel
  def accumulate(v: Texel): Unit = { counter += v }
  def decrement(): Unit = { counter -= 1 } //< COMPILE ERROR HERE, fails to find implicit
}

class Int4Target extends Texture[Int4] {
  var counter: Int4 = new Int4(0, 1, 2, 3)
}
You can define an implicit constructor parameter like this
abstract class Texture[Texel <: Summable[Texel]](implicit int2Texel: Int => Texel) {
  // ...
}
This essentially tells the compiler that in order to construct an instance of Texture, there must be an implicit conversion function available from Int to Texel. Assuming you have such a function defined somewhere in scope (which you do), you should no longer get a compile error.
Edit: You'll actually need 2 conversion functions, one from Texel => Int and another from Int => Texel, in order to properly reassign the var.
Edit2: Ok, I originally misread your code; you actually only need one implicit parameter, from Int => Texel. Your code compiles for me with the above modification.
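Putting it together, the fix would look roughly like this (a sketch assuming the Summable and Int4 definitions from the question are in scope):

abstract class Texture[Texel <: Summable[Texel]](implicit int2Texel: Int => Texel) {
  var counter: Texel
  def accumulate(v: Texel): Unit = { counter += v }
  def decrement(): Unit = { counter -= 1 } // 1 is converted to Texel via int2Texel
}

class Int4Target extends Texture[Int4] { // Int4.int2Int4 is found in Int4's companion object
  var counter: Int4 = new Int4(0, 1, 2, 3)
}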
A fundamental difference between C++ templates and anything in Scala is that C++ templates are compiled for each use -- that is, if you use a template with int and with double, then two different classes are compiled, and they are only compiled when some code actually makes use of it.
Scala, on the other hand, has separate compilation. Not as good as Java's, given JVM limitations, but still following the basic principle. So, if something has a type parameter, it's still compiled at declaration, and only one such class ever exists. That compiled code has to support all possible parameters than it can be called with, which makes for rather different restrictions than templates.
On the matter of traits and implicit conversions, traits do not support parameters, and implicit conversions (view bounds) are parameters. Instead, use a class.
It is not possible in scala to require an implicit conversion to exist for a type
parameter of a trait. There is a good reason for this. Suppose we defined a
trait like:
trait ATrait[T <% Int] {
  def method(v: T) { println(v: Int) }
}
And then made instances of it in two places:
package place1 {
  implicit def strToInt(s: String) = 5
  val inst = new ATrait[String]
}

package place2 {
  implicit def strToInt(s: String) = 6
  val inst = new ATrait[String]
}
And then used these instances like:
val a = if (someTest) place1 else place2
a.method("Hello")
Should this print 5 or 6? That is, which implicit conversion should it use?
Implicits have to be found at compile time, but you don't know which implicit
conversion was present for the creation of the object.
In other words, implicits are provided by the scope in which they are used, not
by the objects they are used on; the latter would be impossible.
So, about your problem. Instead of using an implicit, you could use an ordinary
member:
trait Summable[T] {
  def -= (v: T): Unit
  def -= (v: Int) { this -= (encode(v)) }
  def encode(i: Int): T
}

class Int4(var x: Int, var y: Int, var z: Int, var w: Int) extends Summable[Int4] {
  def -= (v: Int4): Unit = { x -= v.x; y -= v.y; z -= v.z; w -= v.w }
  def encode(i: Int) = Int4.int2Int4(i)
}
Now the decrement method compiles correctly.
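For completeness, here is a hedged sketch of how the Texture class from the question would look with this version of Summable (the class name Int4Texture is made up); the literal 1 now resolves to the -=(Int) overload, so no implicit conversion is needed:

abstract class Texture[Texel <: Summable[Texel]] {
  var counter: Texel
  def decrement(): Unit = { counter -= 1 } // calls Summable's -=(Int) overload
}

class Int4Texture extends Texture[Int4] {
  var counter: Int4 = new Int4(0, 1, 2, 3)
}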
Another way of saying this is, don't think of implicits as properties belonging
to a type (ie, "can be implicitly converted from an Int" isn't a property of
Int4). They are values, which can be identified using types.
Hope this helps.
Scala's handling of superclass constructor parameters is confusing me...
with this code:
class ArrayElement(val contents: Array[String]) {
  ...
}

class LineElement(s: String) extends ArrayElement(Array(s)) {
  ...
}
LineElement is declared to extend ArrayElement; it seems strange to me that the Array(s) argument in ArrayElement(Array(s)) creates an Array instance at run-time. Is this Scala syntactic sugar, or is there something else going on here?
Yes, the Array(s) expression is evaluated at run-time.
class Foo (val x: Int)
class Bar (x: Int, y: Int) extends Foo(x + y)
Scala allows expressions in the calls to a superclass' constructor (similar to what Java does with its use of super(...)). These expressions are evaluated at run-time.
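A small runnable sketch (the Parent and Child classes are made up) makes the run-time evaluation order visible: the argument expression is evaluated first, then the superclass constructor body runs, then the subclass body.

class Parent(msg: String) {
  println("Parent constructor: " + msg)
}

class Child(name: String)
    extends Parent({ println("evaluating the argument"); name.toUpperCase }) {
  println("Child constructor")
}

object EvalOrderDemo extends App {
  new Child("scala")
  // prints:
  //   evaluating the argument
  //   Parent constructor: SCALA
  //   Child constructor
}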
Actually, the Array(s) is evaluated at run-time because the structure you've used is effectively a call to the primary constructor of your superclass.
To recall, a class can only have one primary constructor, which takes its arguments in the class definition itself, as in class A(param: AnyRef); other constructors are called this and are required to call the primary constructor (or to chain constructors up to it).
The same constraint exists for the super call, that is, a subclass's primary constructor calls the superclass's primary constructor.
Here is how to read such a Scala structure:
class Foo (val x: Int)
class Bar (x: Int, y: Int) extends Foo(x + y)
and here is the Java counterpart:
public class Foo {
    private final int x;

    public Foo(int x) {
        this.x = x;
    }

    public int getX() {
        return x;
    }
}

public class Bar extends Foo {
    private final int y;

    public Bar(int x, int y) {
        /* here is where your array would be created: the argument
           expression is evaluated before the super constructor runs */
        super(x + y);
        this.y = y;
    }

    public int getY() {
        return y;
    }
}