If I have a linked node in some collection structure, I don't really want its next link to be an AtomicReference (I need atomic CAS updates, but don't want a wrapper object per node), so I declare it as:
@volatile var node: Node[A] = _
and then in the companion declare:
val updater = AtomicReferenceFieldUpdater.newUpdater(classOf[Link[_]], classOf[Node[_]], "node")
def cas[A](target: Link[A], old: Node[A], newNode: Node[A]) = updater.compareAndSet(target, old, newNode)
At runtime I get the following error:
java.lang.RuntimeException: java.lang.IllegalAccessException:
Class concurrent.Link$ can not access a member of class concurrent.Link
with modifiers "private volatile"
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl.<init>(AtomicReferenceFieldUpdater.java:189)
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater.newUpdater(AtomicReferenceFieldUpdater.java:65)
at concurrent.Link$.<init>(Link.scala:106)
...
So, at runtime the companion object is concurrent.Link$, not concurrent.Link, and one class cannot access a private member of another.
BUT, if I run javap -p concurrent.Link, I get:
Compiled from "Link.scala"
public final class concurrent.Link implements concurrent.Node,scala.ScalaObject,scala.Product,java.io.Serializable{
private final java.lang.Object value;
private volatile concurrent.Node node;
public static final boolean cas(concurrent.Link, concurrent.Node, concurrent.Node);
public static final java.util.concurrent.atomic.AtomicReferenceFieldUpdater updater();
So, I have everything except the static instance of the AtomicReferenceFieldUpdater declared on my Link class.
The question is, how do I get an instance of AtomicReferenceFieldUpdater in Scala that points to a volatile var?
The only way I've found so far is to go back to Java (implement an AbstractLink with the next Node field and a static AtomicReferenceFieldUpdater) and inherit from that, which is ugly.
Hard. Since Scala makes fields private and exposes them only through accessor methods, this might not be possible.
When I needed this, I eventually decided to do it by creating a Java base class with the volatile field and updating it through there.
Java file:
public class Base {
volatile Object field = null;
}
Scala file:
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater

class Cls extends Base {
  // ideally created once (e.g. in a companion object) rather than per instance
  private val updater =
    AtomicReferenceFieldUpdater.newUpdater(classOf[Base], classOf[Object], "field")
  def cas(old: Object, n: Object): Boolean = updater.compareAndSet(this, old, n)
}
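A quick usage sketch (my own example, not from the original post), assuming Base and Cls are compiled together in the same package so that the reflective access check in newUpdater passes:
object CasDemo extends App {
  val c = new Cls
  println(c.cas(null, "first")) // true: field was null and is now "first"
  println(c.cas(null, "again")) // false: field is no longer null
}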
I haven't come up with a different approach.
I've been told there is no way to do this in pure Scala; you need to define a class in Java to do it. Fortunately cross-compilation is pretty simple, but still, annoying!
In Scala official docs, it says that:
If this is the template of a trait then its mixin-evaluation consists
of an evaluation of the statement sequence stats.
If this is not a template of a trait, then its evaluation consists of
the following steps.
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's
superclass denoted by sc are mixin-evaluated. Mixin-evaluation
happens in reverse order of occurrence in the linearization.
Finally, the statement sequence stats is evaluated.
I am wondering what the "mixin-evaluation" and "superclass constructor evaluation" mean here? Why is superclass constructor sc treated differently from traits mt1, mt2, mt3, etc.?
Well, this is one of those complicated things for which I don't think there is a good short answer unless you already know what the answer is. I think the short answer is that this is a result of the fact that Scala is compiled to the JVM bytecode and thus has to match restrictions of that target platform. Unfortunately I don't think this answer is clear so my real answer is going to be long.
Disclaimer (a shameful self-promotion for future readers): if you find this quite long answer useful, you might also take a look at another long answer of mine to another question by Lifu Huang on a similar topic.
Disclaimer: the Java code in the translation examples is provided solely for illustrative purposes. It is inspired by what the Scala compiler actually does but doesn't match the "real thing" in many details. Moreover, those examples are not guaranteed to work or compile. Unless explicitly mentioned otherwise, I will use (simpler) code examples that are simplified versions of the Scala 2.12/Java 8 translation rather than the older (and more complicated) Scala/Java translations.
Some theory on mix-ins
A mixin is an idea in object-oriented design: a more or less encapsulated piece of logic that doesn't make sense on its own and so is added to other classes. This is in a sense similar to multiple inheritance, and multiple inheritance is actually the way this feature is designed in Scala.
If you want some real world examples of mix-ins in Scala, here are some:
The Scala collection library implementation is based on mix-ins. If you look at the definition of something like scala.collection.immutable.List, you'll see a lot of mix-ins:
sealed abstract class List[+A] extends AbstractSeq[A]
with LinearSeq[A]
with Product // this one is not a mix-in!
with GenericTraversableTemplate[A, List]
with LinearSeqOptimized[A, List[A]] {
  // ...
}
In this example mix-ins are used to share implementations of advanced methods, built on top of a few core methods, across the deep and wide Scala collections hierarchy.
The cake pattern used for dependency injection is also based on mix-ins, but there the mixed-in logic typically doesn't make sense on its own at all.
What is important here is that in Scala you can mix-in both logic (methods) and data (fields).
Some theory on Java/JVM and multiple inheritance
"Naive" multiple inheritance as done in languages like C++ has an infamous Diamond problem. To fix it original design of Java didn't support multiple-inheritance of any logic or fields. You may "extend" exactly one base class (fully inheriting its behavior) and additionally you can "implement" many interfaces which means that the class claims to have all methods from the interface(s) but you can't have any real logic inherited from your base interface. The same restrictions existed in the JVM. 20 years later in Java 8 Default Methods were added. So now you can inherit some methods but still can't inherit any fields. This simplified implementation of mix-ins in Scala 2.12 at the price of requiring Java 8 as its target platform. Still interfaces can't have (non-static) fields and thus can't have constructors. This is one of the major reasons why the superclass constructor sc is treated differently from the traits mt1, mt2, mt3, etc.
It is also important to note that Java was designed as a pretty safe language. In particular, it fights the "undefined behavior" that can happen when you re-use values that are just leftovers (garbage) in memory. So Java ensures that you can't access any fields of a base class until its constructor has run, which makes the super call pretty much the mandatory first line of any child constructor.
Scala and mix-ins (simple example)
So now imagine you are a designer of the Scala language: you want it to have mix-ins, but your target platform (the JVM) doesn't support them. What should you do? Obviously your compiler has to convert mix-ins into something the JVM supports. Here is a rough approximation of how that is done for a simple (and nonsensical) example:
import java.util.concurrent.atomic.AtomicInteger

class Base(val baseValue: Int)
trait TASimple {
val aValueNI: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
override val aValueNI = new AtomicInteger(5)
}
So in words you have:
a base class Base with some field
a mix-in trait TASimple, which contains two fields (one initialized, one not) and two methods
a child class SimpleChild
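Before looking at the translation, a quick sanity check of who initializes what (a sketch assuming the definitions above):
object SimpleDemo extends App {
  val sc = new SimpleChild(childValue = 1, baseValue = 2)
  println(sc.aIncrementAndGetNI()) // 6: aValueNI was set to 5 in SimpleChild's constructor
  println(sc.aIncrementAndGetI())  // 1: aValueI was set to 0 by TASimple's simulated constructor
}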
Since TASimple is more than just a declaration of methods, it can't be compiled to a plain Java interface. It is actually compiled into something like this (in Java):
public abstract interface TASimple
{
abstract void TASimple_setter_aValueI(AtomicInteger param);
abstract AtomicInteger aValueNI();
abstract AtomicInteger aValueI();
default int aIncrementAndGetNI() { return aValueNI().incrementAndGet(); }
default int aIncrementAndGetI() { return aValueI().incrementAndGet(); }
public static void init(TASimple $this)
{
$this.TASimple_setter_aValueI(new AtomicInteger(0));
}
}
public class SimpleChild extends Base implements TASimple
{
private final int childValue;
private final AtomicInteger aValueNI;
private AtomicInteger aValueI; // not final: assigned via the simulated setter
public AtomicInteger aValueI() { return this.aValueI; }
public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
public int childValue() { return this.childValue; }
public AtomicInteger aValueNI() { return this.aValueNI; }
public SimpleChild(int childValue, int baseValue)
{
super(baseValue);
TASimple.init(this);
this.aValueNI = new AtomicInteger(5);
}
}
So what TASimple contains and how it is translated (to Java 8):
1. aValueNI and aValueI as val declarations. Those must be implemented by SimpleChild, backed by ordinary fields (no tricks whatsoever).
2. aIncrementAndGetNI and aIncrementAndGetI methods with some logic. Those methods can be inherited by SimpleChild and work on top of the aValueNI and aValueI accessor methods.
3. A piece of logic that initializes aValueI. If TASimple were a class, it would have a constructor and this logic could live there. However, TASimple is translated to an interface, so that "constructor" logic is moved to a static void init(TASimple $this) method, and that init is called from the SimpleChild constructor. Note that the Java spec enforces that the super call (i.e. the constructor of the base class) happens before it.
The logic in item 3 is what stands behind:
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated
Again, this is the logic enforced by the JVM itself: you first have to call the base constructor, and only then can you (and should you) call all the other simulated "constructors" of the mix-ins.
Side note (Scala pre-2.12/Java pre-8)
Before Java 8 and default methods, the translation was even more complicated. TASimple would be translated into an interface and a class, such as:
public abstract interface TASimple
{
public abstract void TASimple_setter_aValueI(AtomicInteger param);
public abstract AtomicInteger aValueNI();
public abstract AtomicInteger aValueI();
public abstract int aIncrementAndGetNI();
public abstract int aIncrementAndGetI();
}
public abstract class TASimpleImpl
{
public static int aIncrementAndGetNI(TASimple $this) { return $this.aValueNI().incrementAndGet(); }
public static int aIncrementAndGetI(TASimple $this) { return $this.aValueI().incrementAndGet(); }
public static void init(TASimple $this)
{
$this.TASimple_setter_aValueI(new AtomicInteger(0));
}
}
public class SimpleChild extends Base implements TASimple
{
private final int childValue;
private final AtomicInteger aValueNI;
private final AtomicInteger aValueI;
public AtomicInteger aValueI() { return this.aValueI; }
public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
public int aIncrementAndGetNI() { return TASimpleImpl.aIncrementAndGetNI(this); }
public int aIncrementAndGetI() { return TASimpleImpl.aIncrementAndGetI(this); }
public int childValue() { return this.childValue; }
public AtomicInteger aValueNI() { return this.aValueNI; }
public SimpleChild(int childValue, int baseValue)
{
super(baseValue);
TASimpleImpl.init(this);
this.aValueNI = new AtomicInteger(5);
}
}
Note how the implementations of aIncrementAndGetNI and aIncrementAndGetI are now moved to static methods that take an explicit $this as a parameter.
Scala and mix-ins #2 (complicated example)
The example in the previous section illustrated some of the ideas, but not all of them. A more detailed illustration requires a more complicated example.
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This part is relevant when you have several mix-ins, and especially in the case of the diamond problem. Consider the following example:
import java.util.concurrent.atomic.AtomicInteger

trait TA {
val aValueNI0: AtomicInteger
val aValueNI1: AtomicInteger
val aValueNI2: AtomicInteger
val aValueNI12: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI0(): Int = aValueNI0.incrementAndGet()
def aIncrementAndGetNI1(): Int = aValueNI1.incrementAndGet()
def aIncrementAndGetNI2(): Int = aValueNI2.incrementAndGet()
def aIncrementAndGetNI12(): Int = aValueNI12.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
trait TB1 extends TA {
val b1ValueNI: AtomicInteger
val b1ValueI: AtomicInteger = new AtomicInteger(1)
override val aValueNI1: AtomicInteger = new AtomicInteger(11)
override val aValueNI12: AtomicInteger = new AtomicInteger(111)
def b1IncrementAndGetNI(): Int = b1ValueNI.incrementAndGet()
def b1IncrementAndGetI(): Int = b1ValueI.incrementAndGet()
}
trait TB2 extends TA {
val b2ValueNI: AtomicInteger
val b2ValueI: AtomicInteger = new AtomicInteger(2)
override val aValueNI2: AtomicInteger = new AtomicInteger(22)
override val aValueNI12: AtomicInteger = new AtomicInteger(222)
def b2IncrementAndGetNI(): Int = b2ValueNI.incrementAndGet()
def b2IncrementAndGetI(): Int = b2ValueI.incrementAndGet()
}
class Base(val baseValue: Int)
class ComplicatedChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TB1 with TB2 {
override val aValueNI0 = new AtomicInteger(5)
override val b1ValueNI = new AtomicInteger(6)
override val b2ValueNI = new AtomicInteger(7)
}
What is interesting here is that ComplicatedChild inherits from TA in two ways: via TB1 and via TB2. Moreover, both TB1 and TB2 initialize aValueNI12, but with different values. First of all, it should be mentioned that ComplicatedChild will have only one copy of the field for each val defined in TA. But then what happens if you try this:
val cc = new inheritance.ComplicatedChild(42, 12345)
println(cc.aIncrementAndGetNI12())
Which value (TB1's or TB2's) wins? And is the behavior deterministic at all? The answer to the last question is yes: the behavior is deterministic both between runs and between compilations. This is achieved via so-called trait linearization, which is an entirely different topic. In short, the Scala compiler sorts all the (directly and indirectly) inherited traits into a fixed, well-defined order with some good properties (such as a parent trait always appearing after its child trait in the list). So, going back to the quote:
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This trait linearization order ensures:
that all "base" fields are already initialized by the corresponding parent (simulated) constructors by the time the simulated constructor of a given trait is called;
that the order of the simulated constructor calls is fixed, so the behavior is deterministic.
In this particular case the linearization order is ComplicatedChild > TB2 > TB1 > TA > Base. It means that the ComplicatedChild constructor is actually translated into something like:
public ComplicatedChild(int childValue, int baseValue)
{
super(baseValue);
TA.init(this);
TB1.init(this);
TB2.init(this);
this.aValueNI0 = new AtomicInteger(5);
this.b1ValueNI = new AtomicInteger(6);
this.b2ValueNI = new AtomicInteger(7);
}
and so aValueNI12 will be initialized by TB2 (which will overwrite the value set by the TB1 "constructor").
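To see this in action, here is a quick check (a sketch; the baseClasses call needs the scala-reflect library on the classpath):
import scala.reflect.runtime.universe._

object LinearizationDemo extends App {
  // the linearization, most derived first:
  println(typeOf[ComplicatedChild].baseClasses.map(_.name.toString))
  // roughly: List(ComplicatedChild, TB2, TB1, TA, Base, Object, Any)

  val cc = new ComplicatedChild(42, 12345)
  println(cc.aIncrementAndGetNI12()) // 223: TB2's simulated constructor wrote 222 last
}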
Hope this clarifies a bit what's going on and why. Let me know if something is not clear.
Update (answer to comment)
The spec says
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated. Mixin-evaluation happens in reverse order of occurrence in the linearization.
what does the “up to” precisely mean here?
Let's extend the "simple" example adding one more base trait as following:
import java.util.concurrent.atomic.AtomicInteger

trait TX0 {
val xValueI: AtomicInteger = new AtomicInteger(-1)
}
class Base(val baseValue: Int) extends TX0
trait TASimple extends TX0 {
val aValueNI: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
override val aValueNI = new AtomicInteger(5)
}
Note how TX0 is inherited by both Base and TASimple. In this case I expect linearization to produce the order SimpleChild > TASimple > Base > TX0 > Any. I interpret the quoted sentence as follows: the constructor of SimpleChild will not call the "simulated" constructor of TX0, because TX0 comes after Base (= sc) in the linearization. I think the logic behind this behavior is clear: from the point of view of the SimpleChild constructor, the "simulated" constructor of TX0 has already been called by the Base constructor. Moreover, Base might have updated the results of that call, so calling the "simulated" constructor of TX0 a second time might actually break Base.
In Chapter 19 of "Programming in Scala", 2nd edition, how should the sentences in bold below be explained?
object private members can be accessed only from within the object in
which they are defined. It turns out that accesses to variables from
the same object in which they are defined do not cause problems with
variance. The intuitive explanation is that, in order to construct a
case where variance would lead to type errors, you need to have a
reference to a containing object that has a statically weaker type
than the type the object was defined with. For accesses to object
private values, however, this is impossible.
I think the most intuitive way to explain what Martin is trying to say is to look at arrays in Java. Arrays in Java are covariant but aren't type-checked according to sound variance rules, which means they explode at runtime instead of compile time:
abstract class Animal {}
class Giraffe extends Animal {}
class Lion extends Animal {}

public class Foo {
    public static void main(String[] args) {
        Animal[] animals = new Giraffe[10];
        animals[0] = new Lion(); // compiles, but throws ArrayStoreException at runtime
    }
}
The fact that I can do this is because:
Java doesn't restrict this at compile time (a design decision)
I have a reference to the underlying array, which allows me to manipulate its internal values
This doesn't hold when talking about private fields of a class from the outside.
For example, assume the following class:
class Holder[+T](initialValue: Option[T]) {
private[this] var value: Option[T] = initialValue
}
When I create an instance of Holder, its internal fields are not visible to me, so I cannot manipulate them directly the way I did with the Java array. The compiler thereby makes sure they are protected, and every manipulation of the field has to go through a method, where the type checker is strict and doesn't allow funky business.
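To see where the type checker does step in, here is a minimal sketch (the method names are mine): once all manipulation goes through public methods, the variance rules apply, and "mutation" of a covariant member is only expressible by widening to a supertype.
class SafeHolder[+T](initialValue: Option[T]) {
  private[this] var value: Option[T] = initialValue // allowed: object-private escapes variance checks
  def get: Option[T] = value                        // covariant position: fine
  // a public `def set(v: Option[T]): Unit` would be rejected with
  // "covariant type T occurs in contravariant position", so we widen instead:
  def replaced[U >: T](newValue: Option[U]): SafeHolder[U] = new SafeHolder[U](newValue)
}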
I just tried out this code below and it worked as expected. It prints 1.
Now, my problem is that I don't understand what is going on under the hood.
How can a case class have two companion objects (one generated by the compiler and one written by me)? Probably it cannot, so they must be merged somehow under the hood. I just don't understand how they are merged. Are there any special merging rules I should be aware of?
Is it so that, if the sets of definitions in the two companion objects are disjoint, the resulting companion object is simply their union? I would think this is how they are merged, but I am not sure. Can someone confirm whether this is the rule the Scala compiler implements, or is there something extra to it?
More specifically, what are the rules by which the compiler-generated companion object and my companion object are merged? Are these rules specified anywhere?
I have not really seen this topic discussed in the few Scala books I have read (perhaps too superficially).
object A {
  implicit def A2Int(a: A): Int = a.i1
}

case class A(i1: Int, i2: Int)

object Run extends App {
  val a = A(1, 2)
  val i: Int = a
  println(i)
}
I'm not aware of anywhere that the algorithm for merging automatic and explicit companion objects is described or documented (other than the compiler source), but by compiling your code and then examining the generated companion object (using javap), we can see what the differences are (this is with Scala 2.10.4).
Here's the companion object generated for the case class (without your additional companion object):
Compiled from "zip.sc"
public final class A$ extends scala.runtime.AbstractFunction2<Object, Object, A> implements scala.Serializable {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
public java.lang.Object apply(java.lang.Object, java.lang.Object);
public final java.lang.String toString();
}
After adding your companion object, here's what is generated:
Compiled from "zip.sc"
public final class A$ implements scala.Serializable {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
public int A2Int(A);
}
The differences in the generated companion object caused by the explicit companion object definition appear to be:
it no longer extends AbstractFunction2
it no longer has the generic apply(Object, Object) bridge method (a consequence of the first point; note the typed apply(int, int) factory method is still generated)
it no longer overrides the toString method (I suppose you are expected to supply one, if needed)
your A2Int method is added
If the case class is changed to an ordinary class (along with the minimal changes required to get it to compile), the result is the following:
Compiled from "zip.sc"
public final class A$ {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public int A2Int(A);
}
So it seems that if you declare your own companion object, at least in this simple example, the effect is that your new method is added to the companion object, while some of its auto-generated implementation and functionality is lost. It would be interesting to see what would happen if we tried to override some of the remaining auto-generated members, but there's not much left, so in general that is unlikely to cause conflicts.
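For what it's worth, one such experiment (my own sketch, same setup assumed): since the explicit companion loses the auto-generated toString, you are free to supply your own without a conflict:
object A {
  implicit def A2Int(a: A): Int = a.i1
  override def toString: String = "companion of A" // no clash: the generated toString is gone
}

case class A(i1: Int, i2: Int)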
Some of the benefits of case classes are unrelated to the generated companion object, such as the constructor parameters becoming public values without an explicit val keyword. Here's the modified source code for all three decompiled examples above.
version 1 (no explicit companion object):
case class A(i1:Int,i2:Int)
version 2 is your original version.
version 3 (no case-class):
object A {
  implicit def A2Int(a: A): Int = a.i1
  def apply(a: Int, b: Int): A = new A(a, b)
}

class A(val i1: Int, val i2: Int)

object Run extends App {
  import A._
  val a = A(1, 2)
  val i: Int = a
}
In version 3, we need to add val to class A's parameters (otherwise they're private), and we have to either add the factory method (apply) to our companion object or use the new keyword when creating an instance of A.
In Scala, a val can override a def, but a def cannot override a val.
So, is there an advantage to declaring a trait e.g. like this:
trait Resource {
val id: String
}
rather than this?
trait Resource {
def id: String
}
The follow-up question is: how does the compiler treat calls to vals and defs differently in practice, and what kind of optimizations does it actually do with vals? The compiler insists that vals are stable: what does that mean in practice for the compiler? Suppose the subclass actually implements id with a val. Is there a penalty for having it specified as a def in the trait?
If my code itself does not require the stability of the id member, can it be considered good practice to always use def in such cases and switch to val only when a performance bottleneck has been identified there, however unlikely that may be?
Short answer:
As far as I can tell, the values are always accessed through an accessor method. Using def defines a simple method which returns the value. Using val defines a private [*] final field with an accessor method. So in terms of access there is very little difference between the two. The difference is conceptual: a def gets reevaluated each time, while a val is evaluated only once. This can obviously have an impact on performance.
[*] Java private
Long answer:
Let's take the following example:
trait ResourceDef {
def id: String = "5"
}
trait ResourceVal {
val id: String = "5"
}
The ResourceDef & ResourceVal produce the same code, ignoring initializers:
public interface ResourceVal extends ScalaObject {
volatile void foo$ResourceVal$_setter_$id_$eq(String s);
String id();
}
public interface ResourceDef extends ScalaObject {
String id();
}
For the auxiliary classes produced (which contain the implementations of the methods), what ResourceDef produces is as you would expect, noting that the method is static:
public abstract class ResourceDef$class {
public static String id(ResourceDef $this) {
return "5";
}
public static void $init$(ResourceDef resourcedef) {}
}
and for the val, we simply call the initialiser in the containing class
public abstract class ResourceVal$class {
public static void $init$(ResourceVal $this) {
$this.foo$ResourceVal$_setter_$id_$eq("5");
}
}
When we start extending:
class ResourceDefClass extends ResourceDef {
override def id: String = "6"
}
class ResourceValClass extends ResourceVal {
override val id: String = "6"
def foobar() = id
}
class ResourceNoneClass extends ResourceDef
Where we override, we get a method in the class which does just what you would expect. The def is a simple method:
public class ResourceDefClass implements ResourceDef, ScalaObject {
public String id() {
return "6";
}
}
and the val defines a private field and accessor method:
public class ResourceValClass implements ResourceVal, ScalaObject {
public String id() {
return id;
}
private final String id = "6";
public String foobar() {
return id();
}
}
Note that even foobar() doesn't use the field id directly; it goes through the accessor method.
And finally, if we don't override, we get a method which calls the static method in the trait's auxiliary class:
public class ResourceNoneClass implements ResourceDef, ScalaObject {
public volatile String id() {
return ResourceDef$class.id(this);
}
}
I've cut out the constructors in these examples.
So, the accessor method is always used. I assume this is to avoid complications when extending multiple traits which could implement the same methods. It gets complicated really quickly.
Even longer answer:
Josh Suereth gave a very interesting talk on Binary Resilience at Scala Days 2012, which covers the background to this question. The abstract is:
This talk focuses on binary compatibility on the JVM and what it means
to be binary compatible. An outline of the machinations of binary
incompatibility in Scala are described in depth, followed by a set of rules and guidelines that will help developers ensure their own
library releases are both binary compatible and binary resilient.
In particular, this talk looks at:
Traits and binary compatibility
Java Serialization and anonymous classes
The hidden creations of lazy vals
Developing code that is binary resilient
The difference is mainly that you can implement/override a def with a val but not the other way around. Moreover, a val is evaluated only once, while a def is evaluated every time it is used. Using a def in the abstract definition gives the code that mixes in the trait more freedom about how to handle and/or optimize the implementation. So my point is: use def whenever there isn't a clear, good reason to force a val.
A val expression is evaluated once, at declaration; it is strict and immutable.
A def is re-evaluated each time you call it.
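A tiny illustration of that difference (a sketch; the names are made up):
class Eval {
  val once: Long = { println("val evaluated"); 1L } // runs once, during construction
  def each: Long = { println("def evaluated"); 1L } // runs on every call
}

object EvalDemo extends App {
  val e = new Eval // prints "val evaluated"
  e.once; e.once   // nothing printed: the stored value is returned
  e.each; e.each   // prints "def evaluated" twice
}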
def is evaluated by name and val by value. This means, more or less, that a val must always hold an actual value, while a def is more like a promise that you will get a value when you evaluate it. For example, suppose you have a function
def trace(s: => String): Unit = if (level == "trace") println(s) // note the => in the parameter definition
that logs an event only if the log level is set to trace, and you want to log an object's toString. If you have overridden toString with a val, you need to pass that already-computed value to the trace function. If toString is a def, however, it will only be evaluated once it's certain that the log level is trace, which could save you some overhead.
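For instance (a self-contained sketch; the names are made up):
object LogDemo extends App {
  val level = "info"
  def trace(s: => String): Unit = if (level == "trace") println(s)

  def expensiveReport: String = { println("building report..."); "report" }
  trace(expensiveReport) // prints nothing: the by-name argument is never evaluated
}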
def gives you more flexibility, while val is potentially faster
Compiler-wise, traits are compiled to Java interfaces, so when defining a member on a trait it makes no difference whether it's a val or a def; the difference in performance depends on how you choose to implement it.
I think I'd like to be able to do something like what the following (clearly nonsensical) code illustrates:
// Clearly nonsensical
case class Example(a: String) {
def a: Array[Byte] = a.getBytes // won't compile: "a" is already defined
}
The gist of it is that I want to write an accessor method for a case class that is named identically to one of its constructor arguments.
I'm using a JSON serialization library called Jerkson that, as I understand it, will behave the way I want if I define a class in this manner. I'm basing that assumption on this code. Currently, I'm stumped.
If this isn't possible, could anyone offer some insight on what the Jerkson library code is attempting to do?
Scala automatically creates a method with the same name as any val declared in a class (including the fields of case classes) to support the uniform access principle. This is also why you can override a def with a val. If you're still skeptical, you can test it yourself like this:
First, create a Scala file with a single case class.
// MyCase.scala
case class MyCase(myField1: Int, myField2: String)
Now, compile the file with scalac. This should result in two class files. For the example above I get MyCase.class (representing the actual case class type) and MyCase$.class (representing the auto-generated companion object of the case class).
$ scalac MyCase.scala
$ ls
MyCase$.class MyCase.class MyCase.scala
Now you can examine the resulting .class file corresponding to the case class you declared using javap. (javap is the standard tool for examining Java bytecode; it's distributed along with javac in the JDK.)
$ javap -private MyCase
Compiled from "MyCase.scala"
public class MyCase extends java.lang.Object implements scala.Product,scala.Serializable{
private final int myField1;
private final java.lang.String myField2;
public static final scala.Function1 tupled();
public static final scala.Function1 curry();
public static final scala.Function1 curried();
public scala.collection.Iterator productIterator();
public scala.collection.Iterator productElements();
public int myField1();
public java.lang.String myField2();
public MyCase copy(int, java.lang.String);
public java.lang.String copy$default$2();
public int copy$default$1();
public int hashCode();
public java.lang.String toString();
public boolean equals(java.lang.Object);
public java.lang.String productPrefix();
public int productArity();
public java.lang.Object productElement(int);
public boolean canEqual(java.lang.Object);
private final boolean gd1$1(int, java.lang.String);
public MyCase(int, java.lang.String);
}
Notice how the resulting class has both a private final int myField1 and a public int myField1() corresponding to the case class's myField1 field. The same goes for myField2.
On the JVM, method return types are not part of the method signature. This means that if two methods have the same name and the same argument types, they are considered conflicting declarations. You therefore can't declare def a: Array[Byte] in your example, because the accessor for val a: String already exists as a method named a taking no arguments.
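The usual way out is to give the derived accessor a different name (a sketch; aBytes is a name I made up, not anything Jerkson requires):
case class Example(a: String) {
  def aBytes: Array[Byte] = a.getBytes("UTF-8") // different name, so no signature clash
}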
Update:
I just looked at the library code, and according to the examples the case classes should just work. There is a note in the README saying that parsing case classes does not work in the REPL. Could that be your problem? If not, you should really post the error you're getting. Edit: never mind, I see the error you're talking about in the link to your other post. If I think of a response to that problem I'll post it over there.
No, it's not possible. The reason is that the constructor arguments of case classes are automatically public values, as if you had declared them with val. To quote A Tour of Scala: Case Classes:
The constructor parameters of case classes are treated as public values and can be accessed directly.
Therefore, for each constructor argument, Scala creates a corresponding accessor method with the same name. You cannot define a method with that name yourself; it's already there.
This is actually what case classes are about. The idea is that they can be used for pattern matching, so the values retrieved from them should be the same as the values used to construct them.
(Is it a requirement that you use case classes? Using regular classes seems to solve the problem.)
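A sketch of that regular-class route (the parameter name raw is mine): a plain constructor parameter gets no public accessor, so the name a stays free for your method:
class Example(raw: String) {
  def a: Array[Byte] = raw.getBytes("UTF-8") // no generated "a" accessor to clash with
}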