Scala compile error: member of method parameter not visible to class method

I am getting a compile error of:
value txn is not a member of Charge
new Charge(this.txn + that.txn)
^
with the following Scala class definition:
class Charge(txn: Double = 0) {
  def combine(that: Charge): Charge =
    new Charge(this.txn + that.txn)
}
Explicitly declaring txn as a val allows it to work:
class Charge(val txn: Double = 0) {
  def combine(that: Charge): Charge =
    new Charge(this.txn + that.txn)
}
I thought val was assumed? Can somebody explain this? Is it a problem with my understanding of the default constructor or the scope of the method?

In Scala, you can define classes in two forms. For example, if you define
class Charge(txn: Double)
the Scala compiler compiles it to Java roughly like this:
public class Charge {
  ....
  public Charge combine(Charge);
  ....
  public Charge(double);
  ....
}
As you can see in the compiled Java code, there is no public accessor for txn.
Let's look at another variation of the Charge class.
If you define it as class Charge(val txn: Double), it gets compiled to this:
public class Charge {
  ...
  public double txn();
  ...
  public Charge combine(Charge);
  ...
  public Charge(double);
  ...
}
As you can see, in this case the compiler generates a public accessor for txn, which is why you can access that.txn once you mark it as a val.
There is also case class Charge(txn: Double): a data class for which Scala generates accessors, equals, hashCode and toString for you.
You can check this yourself by compiling the class
scalac Charge.scala
javap -c Charge.class
and seeing what it generates.
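For the case class form, the output looks roughly like the following (abridged; the exact set of members varies by Scala version):

public class Charge implements scala.Product, scala.Serializable {
  public double txn();
  public Charge copy(double);
  public int hashCode();
  public boolean equals(java.lang.Object);
  public java.lang.String toString();
  public Charge(double);
}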

What you pass to the constructor are essentially constructor parameters, whose scope is limited to the constructor. If you want to make them visible from the outside, you have to declare them as vals or assign them to other vals in the constructor body.
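A minimal sketch of the second option, assigning the parameter to an explicitly declared field (amount is an illustrative name):

class Charge(txn: Double = 0) {
  val amount: Double = txn // promote the constructor parameter to a public field
  def combine(that: Charge): Charge =
    new Charge(this.amount + that.amount)
}

In practice, class Charge(val txn: Double = 0) is the idiomatic shorthand for exactly this.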


What happens during template initialization?

The official Scala docs say that:
If this is the template of a trait then its mixin-evaluation consists
of an evaluation of the statement sequence stats.
If this is not a template of a trait, then its evaluation consists of
the following steps.
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's
superclass denoted by sc are mixin-evaluated. Mixin-evaluation
happens in reverse order of occurrence in the linearization.
Finally, the statement sequence stats is evaluated.
I am wondering what the "mixin-evaluation" and "superclass constructor evaluation" mean here? Why is superclass constructor sc treated differently from traits mt1, mt2, mt3, etc.?
Well, this is one of those complicated things for which I don't think there is a good short answer unless you already know the answer. The short answer is that this is a consequence of the fact that Scala is compiled to JVM bytecode and thus has to fit the restrictions of that target platform. Unfortunately I don't think this short answer is clear on its own, so my real answer is going to be long.
Disclaimer (a shameful self-promotion for future readers): if you find this quite long answer useful, you might also take a look at my other long answer to another question by Lifu Huang on a similar topic.
Disclaimer: Java code in the translation examples is provided solely for illustrative purposes. It is inspired by what the Scala compiler actually does, but doesn't match the "real thing" in many details, and the examples are not guaranteed to work or compile. Unless explicitly mentioned otherwise, I will use (simpler) code examples that are simplified versions of the Scala 2.12/Java 8 translation rather than the older (and more complicated) Scala/Java translations.
Some theory on mix-ins
A mixin is an idea in object-oriented design: a more or less encapsulated piece of logic that doesn't make sense on its own and so is added to other classes. This is in a sense similar to multiple inheritance, and multiple inheritance is in fact the way this feature is designed in Scala.
If you want some real world examples of mix-ins in Scala, here are some:
The Scala collection library implementation is based on mix-ins. If you look at the definition of something like scala.collection.immutable.List, you'll see a lot of mix-ins:
sealed abstract class List[+A] extends AbstractSeq[A]
                                  with LinearSeq[A]
                                  with Product // this one is not a mix-in!
                                  with GenericTraversableTemplate[A, List]
                                  with LinearSeqOptimized[A, List[A]] {
In this example mix-ins are used to share implementations of advanced methods, built on top of a few core methods, across the deep and wide Scala collections hierarchy.
The cake pattern used for dependency injection is based on mix-ins, but there the mixed-in logic typically doesn't make sense on its own at all.
What is important here is that in Scala you can mix-in both logic (methods) and data (fields).
Some theory on Java/JVM and multiple inheritance
"Naive" multiple inheritance as done in languages like C++ has an infamous Diamond problem. To fix it original design of Java didn't support multiple-inheritance of any logic or fields. You may "extend" exactly one base class (fully inheriting its behavior) and additionally you can "implement" many interfaces which means that the class claims to have all methods from the interface(s) but you can't have any real logic inherited from your base interface. The same restrictions existed in the JVM. 20 years later in Java 8 Default Methods were added. So now you can inherit some methods but still can't inherit any fields. This simplified implementation of mix-ins in Scala 2.12 at the price of requiring Java 8 as its target platform. Still interfaces can't have (non-static) fields and thus can't have constructors. This is one of the major reasons why the superclass constructor sc is treated differently from the traits mt1, mt2, mt3, etc.
It is also important to note that Java was designed as a pretty safe language. In particular it fights against the "undefined behavior" that might occur if you re-use values that are just leftover garbage in memory. So Java ensures that you can't access any fields of the base class until its constructor has run. This makes the super call pretty much the mandatory first line of any child constructor.
Scala and mix-ins (simple example)
So now imagine you are a designer of the Scala language: you want it to have mix-ins, but your target platform (the JVM) doesn't support them. What should you do? Obviously your compiler has to convert mix-ins into something the JVM supports. Here is a rough approximation of how that is done for a simple (and nonsensical) example:
import java.util.concurrent.atomic.AtomicInteger

class Base(val baseValue: Int) {
}

trait TASimple {
  val aValueNI: AtomicInteger
  val aValueI: AtomicInteger = new AtomicInteger(0)

  def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
  def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}

class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
  override val aValueNI = new AtomicInteger(5)
}
So in words you have:
a base class Base with a field
a mix-in trait TASimple which contains two fields (one initialized, one not) and two methods
a child class SimpleChild
Since TASimple is more than just a declaration of methods, it can't be compiled into a plain Java interface. It is actually compiled into something like this (in Java code):
public abstract interface TASimple {
  abstract void TASimple_setter_aValueI(AtomicInteger param);
  abstract AtomicInteger aValueNI();
  abstract AtomicInteger aValueI();

  default int aIncrementAndGetNI() { return aValueNI().incrementAndGet(); }
  default int aIncrementAndGetI() { return aValueI().incrementAndGet(); }

  public static void init(TASimple $this) {
    $this.TASimple_setter_aValueI(new AtomicInteger(0));
  }
}
public class SimpleChild extends Base implements TASimple {
  private final int childValue;
  private final AtomicInteger aValueNI;
  private AtomicInteger aValueI; // not final: it is assigned via the simulated setter below

  public AtomicInteger aValueI() { return this.aValueI; }
  public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
  public int childValue() { return this.childValue; }
  public AtomicInteger aValueNI() { return this.aValueNI; }

  public SimpleChild(int childValue, int baseValue) {
    super(baseValue);
    TASimple.init(this);
    this.aValueNI = new AtomicInteger(5);
  }
}
So what TASimple contains and how it is translated (to Java 8):
aValueNI and aValueI as part of the val declarations. Those must be implemented by SimpleChild, backing them with fields (no tricks whatsoever).
aIncrementAndGetNI and aIncrementAndGetI methods with some logic. Those methods can be inherited by SimpleChild and work on top of the aValueNI and aValueI methods.
A piece of logic that initializes aValueI. If TASimple were a class, it would have a constructor and this logic could live there. However TASimple is translated to an interface, so that "constructor" piece of logic is moved to a static void init(TASimple $this) method, and that init is called from the SimpleChild constructor. Note that the Java spec enforces that the super call (i.e. the constructor of the base class) must come before it.
The logic in item 3 is what stands behind
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated
Again this is logic enforced by the JVM itself: you first have to call the base constructor, and only then can (and should) you call all the other simulated "constructors" of the mix-ins.
Side note (Scala pre-2.12/Java pre-8)
Before Java 8 and default methods, the translation was even more complicated. TASimple would be translated into an interface plus a class, such as
public abstract interface TASimple {
  public abstract void TASimple_setter_aValueI(AtomicInteger param);
  public abstract AtomicInteger aValueNI();
  public abstract AtomicInteger aValueI();
  public abstract int aIncrementAndGetNI();
  public abstract int aIncrementAndGetI();
}

public abstract class TASimpleImpl {
  public static int aIncrementAndGetNI(TASimple $this) { return $this.aValueNI().incrementAndGet(); }
  public static int aIncrementAndGetI(TASimple $this) { return $this.aValueI().incrementAndGet(); }

  public static void init(TASimple $this) {
    $this.TASimple_setter_aValueI(new AtomicInteger(0));
  }
}
public class SimpleChild extends Base implements TASimple {
  private final int childValue;
  private final AtomicInteger aValueNI;
  private AtomicInteger aValueI; // not final: assigned via the simulated setter

  public AtomicInteger aValueI() { return this.aValueI; }
  public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
  public int aIncrementAndGetNI() { return TASimpleImpl.aIncrementAndGetNI(this); }
  public int aIncrementAndGetI() { return TASimpleImpl.aIncrementAndGetI(this); }
  public int childValue() { return this.childValue; }
  public AtomicInteger aValueNI() { return this.aValueNI; }

  public SimpleChild(int childValue, int baseValue) {
    super(baseValue);
    TASimpleImpl.init(this);
    this.aValueNI = new AtomicInteger(5);
  }
}
Note how the implementations of aIncrementAndGetNI and aIncrementAndGetI are now moved to static methods that take an explicit $this as a parameter.
Scala and mix-ins #2 (complicated example)
The example in the previous section illustrated some of the ideas, but not all of them. A more detailed illustration requires a more complicated example.
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This part is relevant when you have several mix-ins, and especially in the case of the diamond problem. Consider the following example:
import java.util.concurrent.atomic.AtomicInteger

trait TA {
  val aValueNI0: AtomicInteger
  val aValueNI1: AtomicInteger
  val aValueNI2: AtomicInteger
  val aValueNI12: AtomicInteger
  val aValueI: AtomicInteger = new AtomicInteger(0)

  def aIncrementAndGetNI0(): Int = aValueNI0.incrementAndGet()
  def aIncrementAndGetNI1(): Int = aValueNI1.incrementAndGet()
  def aIncrementAndGetNI2(): Int = aValueNI2.incrementAndGet()
  def aIncrementAndGetNI12(): Int = aValueNI12.incrementAndGet()
  def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}

trait TB1 extends TA {
  val b1ValueNI: AtomicInteger
  val b1ValueI: AtomicInteger = new AtomicInteger(1)
  override val aValueNI1: AtomicInteger = new AtomicInteger(11)
  override val aValueNI12: AtomicInteger = new AtomicInteger(111)

  def b1IncrementAndGetNI(): Int = b1ValueNI.incrementAndGet()
  def b1IncrementAndGetI(): Int = b1ValueI.incrementAndGet()
}

trait TB2 extends TA {
  val b2ValueNI: AtomicInteger
  val b2ValueI: AtomicInteger = new AtomicInteger(2)
  override val aValueNI2: AtomicInteger = new AtomicInteger(22)
  override val aValueNI12: AtomicInteger = new AtomicInteger(222)

  def b2IncrementAndGetNI(): Int = b2ValueNI.incrementAndGet()
  def b2IncrementAndGetI(): Int = b2ValueI.incrementAndGet()
}

class Base(val baseValue: Int) {
}

class ComplicatedChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TB1 with TB2 {
  override val aValueNI0 = new AtomicInteger(5)
  override val b1ValueNI = new AtomicInteger(6)
  override val b2ValueNI = new AtomicInteger(7)
}
What is interesting here is that ComplicatedChild inherits from TA in two ways: via TB1 and via TB2. Moreover both TB1 and TB2 initialize aValueNI12, but with different values. First of all it should be mentioned that ComplicatedChild will have only one copy of the field for each val defined in TA. But then what happens if you try this:
val cc = new inheritance.ComplicatedChild(42, 12345)
println(cc.aIncrementAndGetNI12())
Which value (TB1's or TB2's) wins? And is the behavior deterministic at all? The answer to the last question is yes: the behavior is deterministic both between runs and between compilations. This is achieved via so-called "trait linearization", which is an entirely different topic. In short, the Scala compiler sorts all the (directly and indirectly) inherited traits into a fixed, well-defined order with some good properties (for example, a parent trait always comes after its child traits in the list). So going back to the quote:
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This trait linearization order ensures:
that all "base" fields are already initialized by the corresponding parent (simulated) constructors by the time the simulated constructor of a given trait is called;
that the order of the simulated constructor calls is fixed, so the behavior is deterministic.
In this particular case the linearization order is ComplicatedChild > TB2 > TB1 > TA > Base. It means that the ComplicatedChild constructor is actually translated into something like:
public ComplicatedChild(int childValue, int baseValue) {
  super(baseValue);
  TA.init(this);
  TB1.init(this);
  TB2.init(this);
  this.aValueNI0 = new AtomicInteger(5);
  this.b1ValueNI = new AtomicInteger(6);
  this.b2ValueNI = new AtomicInteger(7);
}
and so aValueNI12 will be initialized by TB2 (which will overwrite the value set by the TB1 "constructor").
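Assuming the translation above, the snippet from the question therefore behaves like this:

val cc = new inheritance.ComplicatedChild(42, 12345)
println(cc.aIncrementAndGetNI12()) // prints 223: TB1.init set 111, then TB2.init overwrote it with 222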
Hope this clarifies a bit what's going on and why. Let me know if something is not clear.
Update (answer to comment)
The spec says
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated. Mixin-evaluation happens in reverse order of occurrence in the linearization.
what does the “up to” precisely mean here?
Let's extend the "simple" example adding one more base trait as following:
import java.util.concurrent.atomic.AtomicInteger

trait TX0 {
  val xValueI: AtomicInteger = new AtomicInteger(-1)
}

class Base(val baseValue: Int) extends TX0 {
}

trait TASimple extends TX0 {
  val aValueNI: AtomicInteger
  val aValueI: AtomicInteger = new AtomicInteger(0)

  def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
  def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}

class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
  override val aValueNI = new AtomicInteger(5)
}
Note how TX0 is inherited by both Base and TASimple. In this case I expect linearization to produce the order SimpleChild > TASimple > Base > TX0 > Any. I interpret that sentence as follows: the constructor of SimpleChild will not call the "simulated" constructor of TX0, because TX0 comes after Base (= sc) in the linearization. I think the logic behind this behavior is clear: from the point of view of the SimpleChild constructor, the "simulated" constructor of TX0 should already have been called by the Base constructor. Moreover, Base might have updated the results of that call, so calling the "simulated" constructor of TX0 a second time might actually break Base.
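If you want to verify the expected linearization rather than derive it by hand, Scala 2 runtime reflection can print it. A sketch (requires the scala-reflect module on the classpath):

import scala.reflect.runtime.universe._

object LinearizationDemo extends App {
  // baseClasses returns the type's linearization, starting with the type itself.
  println(typeOf[SimpleChild].baseClasses.map(_.name).mkString(" > "))
  // expected: SimpleChild > TASimple > Base > TX0 > Object > Any
}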

How do Kotlin's extension functions work?

Let's say I want an integer that supplies a square method.
Kotlin:
fun Int.square() = this * this
usage:
println("${20.square()}")
doc:
Extensions do not actually modify classes they extend. By defining an extension, you do not insert new members into a class, but merely make new functions callable with the dot-notation on variables of this type.
We would like to emphasize that extension functions are dispatched statically
My expectation would have been that they simply add it to the member functions of the extended class during compilation, but that is exactly what the docs deny, so my next thought was that it could be "sort of" like Scala implicits.
Scala:
object IntExtensions {
  implicit class SquareableInt(i: Int) {
    def square: Int = i * i
  }
}
usage:
import IntExtensions._
and then
println(f"${20.square}")
doc:
An implicit class is desugared into a class and implicit method pairing, where the implicit method mimics the constructor of the class.
The generated implicit method will have the same name as the implicit class.
But Scala implicits create a new wrapper class, which would rule out implementing square in terms of this the way Kotlin does.
So ... how IS it that Kotlin extends classes? "Make callable" isn't telling me much.
In your case, Kotlin just creates a simple utility class named "filename"Kt with a static method int square(int x) (Java pseudo-code).
From Java it looks something like this:
// file: intUtils.kt
final class IntUtilsKt {
  public static int square(int x) {
    return x * x;
  }
}
After this, every call such as
val result = 20.square()
is transformed (at the bytecode level) into
val result = IntUtilsKt.square(20)
P.S.
You can see this yourself using the IntelliJ IDEA action "Show Kotlin Bytecode".
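For comparison, Scala can get close to Kotlin's allocation-free static dispatch by making the implicit wrapper a value class. A sketch (SquareableInt is an illustrative name):

object IntExtensions {
  // Extending AnyVal makes this a value class: at most call sites no wrapper
  // object is allocated, and square compiles down to a static method call,
  // much like Kotlin's generated IntUtilsKt.square.
  implicit class SquareableInt(val i: Int) extends AnyVal {
    def square: Int = i * i
  }
}

object Demo extends App {
  import IntExtensions._
  println(20.square) // prints 400
}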

Scala - what is case class private

I am analyzing my existing project and found something like this (conceptually):
case class AA private(id: String) {}
case class BB(id: String) {}
After creating those two classes to observe the difference, I analyzed their Java output using a Java decompiler. I did not find any difference.
What is the need for private there? What is its significance?
A case class is a class which gets a companion object automatically defined with a few helper functions. One of these is an apply method, which essentially lets you skip the 'new' keyword when constructing an instance. The private keyword in your example makes construction of a new AA via the 'new' keyword private. E.g.:
case class A private (id: Int)
case class B(id: Int)

A(1)     // using the public apply method
B(1)     // using the public apply method
new A(1) // using the PRIVATE constructor
new B(1) // using the public constructor
You can understand this better using Scala REPL
scala> case class A private(a: String)
defined class A
scala> new A("")
<console>:14: error: constructor A in class A cannot be accessed in object $iw
new A("")
^
scala> A("")
res3: A = A()
Notice that instantiation of A cannot be done using the new keyword: private restricts instantiating A via new (it makes the constructor private).
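A common reason for this pattern is to force construction to go through a validating factory on the companion object. A minimal sketch (Account and fromId are illustrative names):

case class Account private (id: String)

object Account {
  // The companion object can access the private constructor.
  def fromId(id: String): Option[Account] =
    if (id.nonEmpty) Some(new Account(id))
    else None
}

Note, though, that as the REPL transcript above shows, in Scala 2 the compiler-generated apply stays public even when the constructor is private, so A("") still compiles; private only blocks the new path.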

Merging of custom and compiler generated companion objects for a case class. What are the merging rules?

I just tried out this code below and it worked as expected. It prints 1.
Now, my problem is that I don't understand what is going on under the hood.
How can a case class have two companion objects (one generated by the compiler and one written by me)? Probably it cannot, so they must be merged somehow under the hood. I just don't understand how they are merged. Are there any special merging rules I should be aware of?
Is it so that, if the sets of definitions in the two companion objects are disjoint, the set of definitions in the resulting companion object is simply the union of the two? I would think this is how they are merged, but I am not sure. Can someone please confirm whether this is the merging rule implemented in the Scala compiler? Or is there something more to it?
More specifically, what are the rules by which the compiler-generated companion object and my companion object are merged? Are these rules specified somewhere?
I have not really seen this topic discussed in the few Scala books I have (perhaps too superficially) read.
object A {
  implicit def A2Int(a: A) = a.i1
}

case class A(i1: Int, i2: Int)

object Run extends App {
  val a = A(1, 2)
  val i: Int = a
  println(i)
}
I'm not aware of anywhere that the algorithm for merging automatic and explicit companion objects is described or documented (other than the compiler source), but by compiling your code and then examining the generated companion object (using javap), we can see what the differences are (this is with Scala 2.10.4).
Here's the companion object generated for the case class (without your additional companion object):
Compiled from "zip.sc"
public final class A$ extends scala.runtime.AbstractFunction2<Object, Object, A> implements scala.Serializable {
  public static final A$ MODULE$;
  public static {};
  public A apply(int, int);
  public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
  public java.lang.Object apply(java.lang.Object, java.lang.Object);
  public final java.lang.String toString();
}
After adding your companion object, here's what is generated:
Compiled from "zip.sc"
public final class A$ implements scala.Serializable {
  public static final A$ MODULE$;
  public static {};
  public A apply(int, int);
  public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
  public int A2Int(A);
}
The differences in the generated companion object caused by the explicit companion object definition appear to be:
it no longer extends AbstractFunction2
it no longer has the factory method (apply) related to bullet 1
it no longer overrides the toString method (I suppose you are expected to supply one, if needed)
your A2Int method is added
If the case class is changed to an ordinary class (along with minimal changes required to get it to compile), the result is the following:
Compiled from "zip.sc"
public final class A$ {
  public static final A$ MODULE$;
  public static {};
  public A apply(int, int);
  public int A2Int(A);
}
So it seems that if you declare your own companion object, at least in this simple example, the effect is that your new method is added to the companion object, while some of its auto-generated implementation and functionality is lost. It would be interesting to see what happens if we try to override some of the remaining auto-generated members, but there's not much left, so in general this is unlikely to cause conflicts.
Some of the benefits of case classes are unrelated to the generated companion object, such as making the constructor parameters public without having to add the 'val' keyword explicitly. Here's the modified source code for all 3 decompiled examples above.
version 1 (no explicit companion object):
case class A(i1:Int,i2:Int)
version 2 is your original version.
version 3 (no case-class):
object A {
  implicit def A2Int(a: A) = a.i1
  def apply(a: Int, b: Int): A = new A(a, b)
}

class A(val i1: Int, val i2: Int)

object Run extends App {
  import A._
  val a = A(1, 2)
  val i: Int = a
}
In version 3, we need to add val to class A's parameters (otherwise they're not accessible from outside), and we have to either add the factory method to our companion object or use the 'new' keyword when creating an instance of A(1,2).
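As a quick sanity check of the "union of disjoint definitions" intuition: in the sketch below (fromPair is an illustrative name), the merged companion exposes both the hand-written helper and the compiler-generated apply and unapply:

case class A(i1: Int, i2: Int)

object A {
  def fromPair(p: (Int, Int)): A = A(p._1, p._2) // uses the generated apply
}

object Demo extends App {
  val a = A.fromPair((1, 2)) // our definition
  val A(x, y) = a            // the generated unapply still works
  println(x + y)             // prints 3
}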

Is there any advantage to defining a val over a def in a trait?

In Scala, a val can override a def, but a def cannot override a val.
So, is there an advantage to declaring a trait e.g. like this:
trait Resource {
val id: String
}
rather than this?
trait Resource {
def id: String
}
The follow-up question is: how does the compiler treat calling vals and defs differently in practice, and what kind of optimizations does it actually do with vals? The compiler insists that vals are stable; what does that mean in practice for the compiler? Suppose the subclass actually implements id with a val. Is there a penalty for having it specified as a def in the trait?
If my code itself does not require the stability of the id member, can it be considered good practice to always use defs in these cases, and to switch to vals only when a performance bottleneck has been identified here, however unlikely that may be?
Short answer:
As far as I can tell, the values are always accessed through an accessor method. Using def defines a simple method which returns the value. Using val defines a private [*] final field with an accessor method. So in terms of access there is very little difference between the two. The difference is conceptual: a def gets re-evaluated each time, while a val is evaluated only once. This can obviously have an impact on performance.
[*] Java private
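A quick way to see the once-versus-every-time difference at runtime (a sketch; the printlns are just probes):

trait Resource {
  val idVal: String = { println("evaluating val"); "r-1" } // runs once, during initialization
  def idDef: String = { println("evaluating def"); "r-1" } // runs on every call
}

object Demo extends App {
  val r = new Resource {} // prints "evaluating val" once, here
  r.idDef                 // prints "evaluating def"
  r.idDef                 // prints "evaluating def" again
  r.idVal                 // nothing printed: the field is simply read
}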
Long answer:
Let's take the following example:
trait ResourceDef {
  def id: String = "5"
}

trait ResourceVal {
  val id: String = "5"
}
ResourceDef and ResourceVal produce the same interface code, ignoring initializers:
public interface ResourceVal extends ScalaObject {
  volatile void foo$ResourceVal$_setter_$id_$eq(String s);
  String id();
}

public interface ResourceDef extends ScalaObject {
  String id();
}
For the subsidiary classes produced (which contain the implementations of the methods), what ResourceDef produces is as you would expect, noting that the method is static:
public abstract class ResourceDef$class {
  public static String id(ResourceDef $this) {
    return "5";
  }

  public static void $init$(ResourceDef resourcedef) {}
}
and for the val, we simply call the initialiser in the containing class
public abstract class ResourceVal$class {
  public static void $init$(ResourceVal $this) {
    $this.foo$ResourceVal$_setter_$id_$eq("5");
  }
}
When we start extending:
class ResourceDefClass extends ResourceDef {
  override def id: String = "6"
}

class ResourceValClass extends ResourceVal {
  override val id: String = "6"
  def foobar() = id
}

class ResourceNoneClass extends ResourceDef
Where we override, we get a method in the class which does just what you would expect. The def is a simple method:
public class ResourceDefClass implements ResourceDef, ScalaObject {
  public String id() {
    return "6";
  }
}
and the val defines a private field and accessor method:
public class ResourceValClass implements ResourceVal, ScalaObject {
  private final String id = "6";

  public String id() {
    return id;
  }

  public String foobar() {
    return id();
  }
}
Note that even foobar() doesn't use the field id, but uses the accessor method.
And finally, if we don't override, then we get a method which calls the static method in the trait auxiliary class:
public class ResourceNoneClass implements ResourceDef, ScalaObject {
  public volatile String id() {
    return ResourceDef$class.id(this);
  }
}
I've cut out the constructors in these examples.
So, the accessor method is always used. I assume this is to avoid complications when extending multiple traits which could implement the same methods. It gets complicated really quickly.
Even longer answer:
Josh Suereth did a very interesting talk on Binary Resilience at Scala Days 2012, which covers the background to this question. The abstract for this is:
This talk focuses on binary compatibility on the JVM and what it means
to be binary compatible. An outline of the machinations of binary
incompatibility in Scala are described in depth, followed by a set of rules and guidelines that will help developers ensure their own
library releases are both binary compatible and binary resilient.
In particular, this talk looks at:
Traits and binary compatibility
Java Serialization and anonymous classes
The hidden creations of lazy vals
Developing code that is binary resilient
The difference is mainly that you can implement/override a def with a val, but not the other way around. Moreover, vals are evaluated only once, while defs are evaluated every time they are used. Using def in the abstract definition gives the code that mixes in the trait more freedom in how to handle and/or optimize the implementation. So my point is: use defs whenever there isn't a clear, good reason to force a val.
A val expression is evaluated once, at definition; it is strict and immutable.
A def is re-evaluated each time you call it.
def is evaluated by name and val by value. This means, more or less, that a val must always hold an actual value, while a def is more like a promise that you can get a value when you evaluate it. For example, if you have a function
def trace(s: => String): Unit = { if (level == "trace") println(s) } // note the => in the parameter definition: s is a by-name parameter
that logs an event only if the log level is set to trace, and you want to log an object's toString. If you have overridden toString with a val, then the value has already been computed and you simply pass it to the trace function. If toString is a def, however, it will only be evaluated once it is certain that the log level is trace, which could save you some overhead.
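To make the by-name point concrete, a sketch (level and expensive are illustrative names):

object TraceDemo extends App {
  var level = "info"

  def trace(s: => String): Unit = if (level == "trace") println(s)

  def expensive: String = { println("computing"); "details" }

  trace(expensive) // nothing printed: level != "trace", so expensive is never evaluated

  level = "trace"
  trace(expensive) // now prints "computing" and then "details"
}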
def gives you more flexibility, while val is potentially faster
Compiler-wise, traits are compiled to Java interfaces, so when defining a member on a trait it makes no difference whether it's a val or a def. The difference in performance would depend on how you choose to implement it.