Difference between case object and object - scala

Is there any difference between case object and object in scala?

Here's one difference - case objects extend the Serializable trait, so they can be serialized. Regular objects cannot by default:
scala> object A
defined module A
scala> case object B
defined module B
scala> import java.io._
import java.io._
scala> val bos = new ByteArrayOutputStream
bos: java.io.ByteArrayOutputStream =
scala> val oos = new ObjectOutputStream(bos)
oos: java.io.ObjectOutputStream = java.io.ObjectOutputStream@e7da60
scala> oos.writeObject(B)
scala> oos.writeObject(A)
java.io.NotSerializableException: A$

Case classes differ from regular classes in that they get:
pattern matching support
default implementations of equals and hashCode
default implementations of serialization
a prettier default implementation of toString, and
the small amount of functionality that they get from automatically inheriting from scala.Product.
Pattern matching, equals and hashCode don't matter much for singletons (unless you do something really degenerate), so you're pretty much just getting serialization, a nice toString, and some methods you probably won't ever use.
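For illustration, here is a minimal sketch (the Command/Start/Stop names are made up) of the typical use of case objects as enumeration constants; matching on a singleton is effectively a reference check, so this part would work for plain objects too:
sealed trait Command
case object Start extends Command
case object Stop extends Command

def describe(c: Command): String = c match {
  case Start => "starting" // matches by reference identity of the singleton
  case Stop  => "stopping"
}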

scala> object foo
defined object foo
scala> case object foocase
defined object foocase
Serialization difference:
scala> foo.asInstanceOf[Serializable]
java.lang.ClassCastException: foo$ cannot be cast to scala.Serializable
... 43 elided
scala> foocase.asInstanceOf[Serializable]
res1: Serializable = foocase
toString difference:
scala> foo
res2: foo.type = foo$@7bf0bac8
scala> foocase
res3: foocase.type = foocase

A huge necro, but it is the highest result for this question in Google outside the official tutorial, which, as always, is pretty vague about the details. Here are some bare-bones objects:
object StandardObject
object SerializableObject extends Serializable
case object CaseObject
Now, let's use IntelliJ's very useful 'decompile Scala to Java' feature on the compiled .class files:
//decompiled from StandardObject$.class
public final class StandardObject$ {
public static final StandardObject$ MODULE$ = new StandardObject$();
private StandardObject$() {
}
}
//decompiled from StandardObject.class
import scala.reflect.ScalaSignature;
@ScalaSignature(<byte array string elided>)
public final class StandardObject {
}
As you can see, this is a pretty straightforward singleton pattern, except that, for reasons outside the scope of this question, two classes are generated: the static StandardObject (which would contain static forwarder methods, should the object define any) and the actual singleton instance StandardObject$, where all methods defined in the code end up as instance methods. Things get more interesting when you implement Serializable:
//decompiled from SerializableObject.class
import scala.reflect.ScalaSignature;
@ScalaSignature(<byte array string elided>)
public final class SerializableObject {
}
//decompiled from SerializableObject$.class
import java.io.Serializable;
import scala.runtime.ModuleSerializationProxy;
public final class SerializableObject$ implements Serializable {
public static final SerializableObject$ MODULE$ = new SerializableObject$();
private Object writeReplace() {
return new ModuleSerializationProxy(SerializableObject$.class);
}
private SerializableObject$() {
}
}
The compiler doesn't limit itself to simply making the 'instance' (non-static) class Serializable; it also adds a writeReplace method. writeReplace is an alternative to writeObject/readObject: whenever the Serializable class that declares it is being serialized, a different object is serialized in its place. On deserialization, that proxy object's readResolve method is invoked once it has been deserialized. Here, a ModuleSerializationProxy instance is serialized, carrying a field with the Class of SerializableObject$, so it knows which object needs to be resolved. The readResolve method of that proxy simply returns the SerializableObject singleton - as it is a singleton with a parameterless constructor, a Scala object is always structurally equal to itself between different VM instances and different runs, and in this way the property that only a single instance of that class is created per VM instance is preserved. A thing of note is that there is a security hole here: no readObject method is added to SerializableObject$, meaning an attacker can maliciously prepare a binary file which matches the standard Java serialization format for SerializableObject$, and a separate instance of the 'singleton' will be created.
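To see the proxy at work, here is a minimal round-trip sketch (the names are made up); thanks to writeReplace/readResolve, deserialization hands back the very same singleton instance:
import java.io._

case object Marker

object RoundTrip extends App {
  val buffer = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buffer)
  out.writeObject(Marker) // in recent Scala versions this actually writes the ModuleSerializationProxy
  out.close()

  val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
  val revived = in.readObject()
  println(revived eq Marker) // true: readResolve returns the existing singleton
}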
Now, let's move on to the case object:
//decompiled from CaseObject.class
import scala.collection.Iterator;
import scala.reflect.ScalaSignature;
@ScalaSignature(<byte array string elided>)
public final class CaseObject {
public static String toString() {
return CaseObject$.MODULE$.toString();
}
public static int hashCode() {
return CaseObject$.MODULE$.hashCode();
}
public static boolean canEqual(final Object x$1) {
return CaseObject$.MODULE$.canEqual(x$1);
}
public static Iterator productIterator() {
return CaseObject$.MODULE$.productIterator();
}
public static Object productElement(final int x$1) {
return CaseObject$.MODULE$.productElement(x$1);
}
public static int productArity() {
return CaseObject$.MODULE$.productArity();
}
public static String productPrefix() {
return CaseObject$.MODULE$.productPrefix();
}
public static Iterator productElementNames() {
return CaseObject$.MODULE$.productElementNames();
}
public static String productElementName(final int n) {
return CaseObject$.MODULE$.productElementName(n);
}
}
//decompiled from CaseObject$.class
import java.io.Serializable;
import scala.Product;
import scala.collection.Iterator;
import scala.runtime.ModuleSerializationProxy;
import scala.runtime.Statics;
import scala.runtime.ScalaRunTime$;
public final class CaseObject$ implements Product, Serializable {
public static final CaseObject$ MODULE$ = new CaseObject$();
static {
Product.$init$(MODULE$);
}
public String productElementName(final int n) {
return Product.productElementName$(this, n);
}
public Iterator productElementNames() {
return Product.productElementNames$(this);
}
public String productPrefix() {
return "CaseObject";
}
public int productArity() {
return 0;
}
public Object productElement(final int x$1) {
Object var2 = Statics.ioobe(x$1);
return var2;
}
public Iterator productIterator() {
return ScalaRunTime$.MODULE$.typedProductIterator(this);
}
public boolean canEqual(final Object x$1) {
return x$1 instanceof CaseObject$;
}
public int hashCode() {
return 847823535;
}
public String toString() {
return "CaseObject";
}
private Object writeReplace() {
return new ModuleSerializationProxy(CaseObject$.class);
}
private CaseObject$() {
}
}
A lot more is going on, as CaseObject$ now also implements Product (here a product of arity 0), with its iterator and accessor methods. I am unaware of a use case for this feature; it is probably done for consistency with case classes, which are always products of their fields. The main practical difference here is that we get canEqual, hashCode and toString methods for free. canEqual is relevant only if you decide to compare it with a Product instance which is not a singleton object; toString saves us from implementing a single simple method, which is useful when case objects are used as enumeration constants without any behaviour implemented. Finally, as one might suspect, hashCode returns a constant, so it is the same for all VM instances. This matters if one serializes some flawed hash map implementation, but both standard Java and Scala hash maps wisely rehash all contents on deserialization, so it shouldn't matter. Note that equals is not overridden, so it is still reference equality, and that the security hole is still there. A huge caveat here: if a case object inherits equals/toString from some supertype other than Object, the corresponding methods are not generated, and the inherited definitions are used instead.
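A quick sketch of that last caveat (Named is a made-up trait): a concrete toString inherited from a supertype other than Object suppresses the generated one:
trait Named { override def toString: String = "named-thing" }

case object Plain
case object Inherited extends Named

object ToStringDemo extends App {
  println(Plain)     // "Plain": the compiler-generated toString
  println(Inherited) // "named-thing": the inherited definition is used instead
}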
TL;DR: the only difference that matters in practice is the toString returning the unqualified name of the object.
I must make a disclaimer here, though: I cannot guarantee that the compiler doesn't treat case objects specially in addition to what is actually in the bytecode. It certainly does so when pattern matching case classes, aside from them implementing unapply.

It's similar to the difference between a case class and a class: we just use a case object instead of a case class when there aren't any fields representing additional state information.

case objects implicitly come with implementations of methods toString, equals, and hashCode, but simple objects don't.
case objects can be serialized while simple objects cannot, which makes case objects very useful as messages with Akka-Remote.
Adding the case keyword before the object keyword makes the object serializable.

We already know objects and case classes. A case object is a mix of both, i.e. it is a singleton, similar to an object, but with the boilerplate of a case class. The only difference is that the boilerplate is generated for an object instead of a class.
Case objects do not come with the following:
apply and unapply methods;
copy methods, since this is a singleton;
a method for structural equality comparison;
a constructor, either.

Related

What happens during template initialization?

In Scala official docs, it says that:
If this is the template of a trait then its mixin-evaluation consists
of an evaluation of the statement sequence stats.
If this is not a template of a trait, then its evaluation consists of
the following steps.
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's
superclass denoted by sc are mixin-evaluated. Mixin-evaluation
happens in reverse order of occurrence in the linearization.
Finally, the statement sequence stats is evaluated.
I am wondering what the "mixin-evaluation" and "superclass constructor evaluation" mean here? Why is superclass constructor sc treated differently from traits mt1, mt2, mt3, etc.?
Well, this is one of those complicated things for which I don't think there is a good short answer unless you already know what the answer is. I think the short answer is that this is a result of the fact that Scala is compiled to JVM bytecode and thus has to match the restrictions of that target platform. Unfortunately I don't think this answer is clear on its own, so my real answer is going to be long.
Disclaimer (a shameless self-promotion for future readers): if you find this quite long answer useful, you might also take a look at my other long answer to another question by Lifu Huang on a similar topic.
Disclaimer: the Java code in the translation examples is provided solely for illustrative purposes. It is inspired by what the Scala compiler actually does but doesn't match the "real thing" in many details. Moreover, those examples are not guaranteed to work or compile. Unless explicitly mentioned otherwise, I will use (simpler) code examples that are simplified versions of the Scala 2.12/Java 8 translation rather than the older (and more complicated) Scala/Java translations.
Some theory on mix-ins
A mixin is an idea in object-oriented design: a more or less encapsulated piece of logic that doesn't make sense on its own and so is added to some other classes. This is in a sense similar to multiple inheritance, and multiple inheritance is actually the way this feature is designed in Scala.
If you want some real world examples of mix-ins in Scala, here are some:
The Scala collection library implementation is based on mix-ins. If you look at the definition of something like scala.collection.immutable.List, you'll see a lot of mix-ins:
sealed abstract class List[+A] extends AbstractSeq[A]
with LinearSeq[A]
with Product // this one is not a mix-in!
with GenericTraversableTemplate[A, List]
with LinearSeqOptimized[A, List[A]] {
In this example mix-ins are used to share implementations of advanced methods, built on top of a few core methods, across the deep and wide Scala collections hierarchy.
The Cake pattern used for dependency injection is based on mix-ins, but this time the mixed-in logic typically doesn't make sense on its own at all.
What is important here is that in Scala you can mix-in both logic (methods) and data (fields).
Some theory on Java/JVM and multiple inheritance
"Naive" multiple inheritance as done in languages like C++ has the infamous diamond problem. To fix it, the original design of Java didn't support multiple inheritance of any logic or fields. You may "extend" exactly one base class (fully inheriting its behavior), and additionally you can "implement" many interfaces, which means that the class claims to have all the methods from the interface(s), but you can't inherit any real logic from your base interfaces. The same restrictions existed in the JVM. 20 years later, in Java 8, default methods were added. So now you can inherit some methods but still can't inherit any fields. This simplified the implementation of mix-ins in Scala 2.12 at the price of requiring Java 8 as its target platform. Still, interfaces can't have (non-static) fields and thus can't have constructors. This is one of the major reasons why the superclass constructor sc is treated differently from the traits mt1, mt2, mt3, etc.
Also, it is important to note that Java was designed as a pretty safe language. In particular, it fights against "undefined behavior" that might happen if you re-used values that are just leftovers (garbage) in memory. So Java ensures that you can't access any fields of the base class until its constructor has been called. This makes the super call pretty much a mandatory first line in any child constructor.
Scala and mix-ins (simple example)
So now imagine you are a designer of the Scala language and you want it to have mix-ins, but your target platform (the JVM) doesn't support them. What should you do? Obviously your compiler should be able to convert mix-ins into something that the JVM supports. Here is a rough approximation of how it is done for a simple (and nonsensical) example:
import java.util.concurrent.atomic.AtomicInteger

class Base(val baseValue: Int) {
}
trait TASimple {
val aValueNI: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
override val aValueNI = new AtomicInteger(5)
}
So in words you have:
a base class Base with some field
a mix-in trait TASimple which contains two fields (one initialized and one not initialized) and two methods
a child class SimpleChild
Since TASimple is more than just a declaration of methods, it can't be compiled to just a simple Java interface. It is actually compiled into something like this (in Java code):
public abstract interface TASimple
{
abstract void TASimple_setter_aValueI(AtomicInteger param);
abstract AtomicInteger aValueNI();
abstract AtomicInteger aValueI();
default int aIncrementAndGetNI() { return aValueNI().incrementAndGet(); }
default int aIncrementAndGetI() { return aValueI().incrementAndGet(); }
public static void init(TASimple $this)
{
$this.TASimple_setter_aValueI(new AtomicInteger(0));
}
}
public class SimpleChild extends Base implements TASimple
{
private final int childValue;
private final AtomicInteger aValueNI;
private final AtomicInteger aValueI;
public AtomicInteger aValueI() { return this.aValueI; }
public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
public int childValue() { return this.childValue; }
public AtomicInteger aValueNI() { return this.aValueNI; }
public SimpleChild(int childValue, int baseValue)
{
super(baseValue);
TASimple.init(this);
this.aValueNI = new AtomicInteger(5);
}
}
So what TASimple contains and how it is translated (to Java 8):
aValueNI and aValueI as a part of val declarations. Those must be implemented by SimpleChild backing them with some fields (no tricks whatsoever).
aIncrementAndGetNI and aIncrementAndGetI methods with some logic. Those methods can be inherited by SimpleChild and will work based on the aValueNI and aValueI methods.
A piece of logic that initializes aValueI. If TASimple were a class, it would have a constructor and this logic might have lived there. However, TASimple is translated to an interface. Thus that "constructor" piece of logic is moved to a static void init(TASimple $this) method, and that init is called from the SimpleChild constructor. Note that the Java spec enforces that the super call (i.e. the constructor of the base class) must come before it.
The logic in item #3 is what stands behind
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated
Again, this is the logic enforced by the JVM itself: you first have to call the base constructor, and only then you can (and should) call all the other simulated "constructors" of all the mix-ins.
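Putting it together, here is a small usage sketch (assuming the definitions above); the printed values follow from the initialization order just described:
val child = new SimpleChild(childValue = 1, baseValue = 2)
// Base's constructor ran first, then TASimple's simulated "constructor" (init), then SimpleChild's own statements.
println(child.aIncrementAndGetI())  // 1: aValueI was set to new AtomicInteger(0) by the trait's init
println(child.aIncrementAndGetNI()) // 6: aValueNI was set to new AtomicInteger(5) by SimpleChild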
Side note (Scala pre-2.12/Java pre-8)
Before Java 8 and default methods, the translation would be even more complicated. TASimple would be translated into an interface and a class such as:
public abstract interface TASimple
{
public abstract void TASimple_setter_aValueI(AtomicInteger param);
public abstract AtomicInteger aValueNI();
public abstract AtomicInteger aValueI();
public abstract int aIncrementAndGetNI();
public abstract int aIncrementAndGetI();
}
public abstract class TASimpleImpl
{
public static int aIncrementAndGetNI(TASimple $this) { return $this.aValueNI().incrementAndGet(); }
public static int aIncrementAndGetI(TASimple $this) { return $this.aValueI().incrementAndGet(); }
public static void init(TASimple $this)
{
$this.TASimple_setter_aValueI(new AtomicInteger(0));
}
}
public class SimpleChild extends Base implements TASimple
{
private final int childValue;
private final AtomicInteger aValueNI;
private final AtomicInteger aValueI;
public AtomicInteger aValueI() { return this.aValueI; }
public void TASimple_setter_aValueI(AtomicInteger param) { this.aValueI = param; }
public int aIncrementAndGetNI() { return TASimpleImpl.aIncrementAndGetNI(this); }
public int aIncrementAndGetI() { return TASimpleImpl.aIncrementAndGetI(this); }
public int childValue() { return this.childValue; }
public AtomicInteger aValueNI() { return this.aValueNI; }
public SimpleChild(int childValue, int baseValue)
{
super(baseValue);
TASimpleImpl.init(this);
this.aValueNI = new AtomicInteger(5);
}
}
Note how the implementations of aIncrementAndGetNI and aIncrementAndGetI are now moved to static methods that take an explicit $this parameter.
Scala and mix-ins #2 (complicated example)
The example in the previous section illustrated some of the ideas but not all of them. For a more detailed illustration, a more complicated example is required.
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This part is relevant when you have several mix-ins, and especially in the case of the diamond problem. Consider the following example:
import java.util.concurrent.atomic.AtomicInteger

trait TA {
val aValueNI0: AtomicInteger
val aValueNI1: AtomicInteger
val aValueNI2: AtomicInteger
val aValueNI12: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI0(): Int = aValueNI0.incrementAndGet()
def aIncrementAndGetNI1(): Int = aValueNI1.incrementAndGet()
def aIncrementAndGetNI2(): Int = aValueNI2.incrementAndGet()
def aIncrementAndGetNI12(): Int = aValueNI12.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
trait TB1 extends TA {
val b1ValueNI: AtomicInteger
val b1ValueI: AtomicInteger = new AtomicInteger(1)
override val aValueNI1: AtomicInteger = new AtomicInteger(11)
override val aValueNI12: AtomicInteger = new AtomicInteger(111)
def b1IncrementAndGetNI(): Int = b1ValueNI.incrementAndGet()
def b1IncrementAndGetI(): Int = b1ValueI.incrementAndGet()
}
trait TB2 extends TA {
val b2ValueNI: AtomicInteger
val b2ValueI: AtomicInteger = new AtomicInteger(2)
override val aValueNI2: AtomicInteger = new AtomicInteger(22)
override val aValueNI12: AtomicInteger = new AtomicInteger(222)
def b2IncrementAndGetNI(): Int = b2ValueNI.incrementAndGet()
def b2IncrementAndGetI(): Int = b2ValueI.incrementAndGet()
}
class Base(val baseValue: Int) {
}
class ComplicatedChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TB1 with TB2 {
override val aValueNI0 = new AtomicInteger(5)
override val b1ValueNI = new AtomicInteger(6)
override val b2ValueNI = new AtomicInteger(7)
}
What is interesting here is that ComplicatedChild inherits from TA in two ways: via TB1 and via TB2. Moreover, both TB1 and TB2 initialize aValueNI12, but with different values. First of all, it should be mentioned that ComplicatedChild will have only one copy of the field for each val defined in TA. But then what would happen if you try this:
val cc = new inheritance.ComplicatedChild(42, 12345)
println(cc.aIncrementAndGetNI12())
Which value (TB1 or TB2) would win? And would the behavior be deterministic at all? The answer to the last question is yes: the behavior is deterministic both between runs and between compilations. This is achieved via so-called "trait linearization", which is an entirely different topic. In short, the Scala compiler sorts all the (directly and indirectly) inherited traits into a fixed, well-defined order that exhibits some good properties (such as a parent trait always coming after its child trait in the list). So, going back to the quote:
Mixin-evaluation happens in reverse order of occurrence in the linearization.
This trait linearization order ensures:
that all "base" fields are already initialized by the corresponding parent (simulated) constructors by the time the simulated constructor for some trait is called;
that the order of the simulated constructor calls is fixed, so the behavior is deterministic.
In this particular case the linearization order will be ComplicatedChild > TB2 > TB1 > TA > Base. It means that the ComplicatedChild constructor is actually translated into something like:
public ComplicatedChild(int childValue, int baseValue)
{
super(baseValue);
TA.init(this);
TB1.init(this);
TB2.init(this);
this.aValueNI0 = new AtomicInteger(5);
this.b1ValueNI = new AtomicInteger(6);
this.b2ValueNI = new AtomicInteger(7);
}
and so aValueNI12 will be initialized by TB2 (which will overwrite the value set by the TB1 "constructor").
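So, to answer the original question with a small sketch (assuming the classes above): TB2's simulated constructor runs after TB1's, so its value wins:
val cc = new ComplicatedChild(42, 12345)
println(cc.aIncrementAndGetNI12()) // 223: TB2's init set aValueNI12 to 222, overwriting TB1's 111
println(cc.aIncrementAndGetNI1())  // 12: only TB1 initializes aValueNI1 (to 11)
println(cc.aIncrementAndGetNI2())  // 23: only TB2 initializes aValueNI2 (to 22)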
Hope this clarifies a bit what's going on and why. Let me know if something is not clear.
Update (answer to comment)
The spec says
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated. Mixin-evaluation happens in reverse order of occurrence in the linearization.
what does the “up to” precisely mean here?
Let's extend the "simple" example by adding one more base trait, as follows:
import java.util.concurrent.atomic.AtomicInteger

trait TX0 {
val xValueI: AtomicInteger = new AtomicInteger(-1)
}
class Base(val baseValue: Int) extends TX0 {
}
trait TASimple extends TX0 {
val aValueNI: AtomicInteger
val aValueI: AtomicInteger = new AtomicInteger(0)
def aIncrementAndGetNI(): Int = aValueNI.incrementAndGet()
def aIncrementAndGetI(): Int = aValueI.incrementAndGet()
}
class SimpleChild(val childValue: Int, baseValue: Int) extends Base(baseValue) with TASimple {
override val aValueNI = new AtomicInteger(5)
}
Note how here TX0 is inherited by both Base and TASimple. In this case I expect linearization to produce the following order: SimpleChild > TASimple > Base > TX0 > Any. I interpret that sentence as follows: in this case the constructor of SimpleChild will not call the "simulated" constructor of TX0, because TX0 comes after Base (= sc) in the linearization. I think the logic for this behavior is obvious: from the point of view of the SimpleChild constructor, the "simulated" constructor of TX0 should have already been called by the Base constructor; moreover, Base might have updated the results of that call, so calling the "simulated" constructor of TX0 a second time might actually break Base.

Is it possible to specify a static function in a Kotlin interface?

I want to do something like this:
interface Serializable<FromType, ToType> {
fun serialize(): ToType
companion object {
abstract fun deserialize(serialized: ToType): FromType
}
}
or even this would work for me:
interface Serializable<ToType> {
fun serialize(): ToType
constructor(serialized: ToType)
}
but neither compiles. Is there a syntax for this, or will I be forced to make this an interface for a factory? Or is there another answer? 😮 That'd be neat!
Basically, nothing in a companion object can be abstract or open (and thus be overridden), and there's no way to require the implementations' companion objects to have a method or to define/require a constructor in an interface.
A possible solution for you is to separate these two functions into two interfaces:
interface Serializable<ToType> {
fun serialize(): ToType
}
interface Deserializer<FromType, ToType> {
fun deserialize(serialized: ToType): FromType
}
This way, you will be able to implement the first interface in a class and make its companion object implement the other one:
class C: Serializable<String> {
override fun serialize(): String = "..."
companion object : Deserializer<C, String> {
override fun deserialize(serialized: String): C = C()
}
}
Also, there's a severe limitation: only a single generic specialization of a type can be used as a supertype, so this model of serializing through the interface implementation may turn out not to be scalable enough, since it doesn't allow multiple implementations with different ToTypes.
For future uses, it's also possible to give the child class to a function as a receiver parameter:
val encodableClass = EncodableClass("Some Value")
//The encode function is accessed like a member function on an instance
val stringRepresentation = encodableClass.encode()
//The decode function is accessed statically
val decodedClass = EncodableClass.decode(stringRepresentation)
interface Encodable<T> {
fun T.encode(): String
fun decode(stringRepresentation: String): T
}
class EncodableClass(private val someValue: String) {
// This is the remaining awkwardness,
// you have to give the containing class as a Type Parameter
// to its own Companion Object
companion object : Encodable<EncodableClass> {
override fun EncodableClass.encode(): String {
//You can access (private) fields here
return "This is a string representation of the class with value: $someValue"
}
override fun decode(stringRepresentation: String): EncodableClass {
return EncodableClass(stringRepresentation)
}
}
}
//You also have to import the encode function separately:
// import codingProtocol.EncodableClass.Companion.encode
This is the more optimal use case for me. Instead of one function on the instance and the other in the companion object, as in your example, we move both functions to the companion object and extend the instance.

Merging of custom and compiler generated companion objects for a case class. What are the merging rules?

I just tried out this code below and it worked as expected. It prints 1.
Now, my problem is that I don't understand what is going on under the hood.
How can a case class have two companion objects (one generated by the compiler and one written by me)? Probably it cannot. So they must be merged somehow under the hood. I just don't understand how they are merged. Are there any special merging rules I should be aware of?
Is it the case that, if the sets of definitions in the two companion objects are disjoint, then the resulting companion object simply contains the union of the two sets? I would think this is how they are merged, but I am not sure. Can someone please confirm whether this merging rule is the one implemented in the Scala compiler? Or is there something extra to it?
More specifically, what are the rules by which the compiler-generated companion object and my companion object are merged? Are these rules specified somewhere?
I have not really seen this topic discussed in the few Scala books I have - perhaps too superficially - read.
object A{
implicit def A2Int(a:A)=a.i1
}
case class A(i1:Int,i2:Int)
object Run extends App{
val a=A(1,2)
val i:Int=a
println(i)
}
I'm not aware of where the algorithm for merging automatic and explicit companion objects is described or documented (other than the compiler source) but by compiling your code and then examining the generated companion object (using javap), we can see what the differences are (this is with scala 2.10.4).
Here's the companion object generated for the case class (without your additional companion object):
Compiled from "zip.sc"
public final class A$ extends scala.runtime.AbstractFunction2<Object, Object, A> implements scala.Serializable {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
public java.lang.Object apply(java.lang.Object, java.lang.Object);
public final java.lang.String toString();
}
After adding your companion object, here's what is generated:
Compiled from "zip.sc"
public final class A$ implements scala.Serializable {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public scala.Option<scala.Tuple2<java.lang.Object, java.lang.Object>> unapply(A);
public int A2Int(A);
}
The differences in the generated companion object caused by the explicit companion object definition appear to be:
it no longer extends AbstractFunction2
it no longer has the generic apply(Object, Object) method related to bullet 1 (the typed apply(int, int) factory method is still there)
it no longer overrides the toString method (I suppose you are expected to supply one, if needed)
your A2Int method is added
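In other words, the surviving generated members and the hand-written one simply coexist on the merged companion; a short sketch based on the javap output above:
val a = A(1, 2)  // the compiler-generated apply(Int, Int) is still there
val A(x, y) = a  // the compiler-generated unapply is still there
val i: Int = a   // the hand-written implicit conversion A2Int is applied
println(i)       // 1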
If the case class is changed to an ordinary class (along with minimal changes required to get it to compile), the result is the following:
Compiled from "zip.sc"
public final class A$ {
public static final A$ MODULE$;
public static {};
public A apply(int, int);
public int A2Int(A);
}
So it seems that if you declare your own companion object, at least in this simple example, the effect is that your new method is added to the companion object, while some of its auto-generated implementation and functionality is lost as well. It would be interesting to see what would happen if we tried to override some of the remaining auto-generated stuff, but there's not much left, so in general that is unlikely to cause conflict.
Some of the benefits of case classes are unrelated to the generated code, such as making the class variables public without having to explicitly add the 'val' keyword. Here's the modified source code for all 3 decompiled examples above.
version 1 (no explicit companion object):
case class A(i1:Int,i2:Int)
version 2 is your original version.
version 3 (no case-class):
object A {
implicit def A2Int(a:A)=a.i1
def apply(a:Int,b:Int):A = new A(a,b)
}
class A(val i1:Int,val i2:Int)
object Run extends App{
import A._
val a=A(1,2)
val i:Int=a
}
In version 3, we need to add val to class A parameters (otherwise they're private), and we have to either add the factory method to our companion object, or use the 'new' keyword when creating an instance of A(1,2).

Is there any advantage to definining a val over a def in a trait?

In Scala, a val can override a def, but a def cannot override a val.
So, is there an advantage to declaring a trait e.g. like this:
trait Resource {
val id: String
}
rather than this?
trait Resource {
def id: String
}
The follow-up question is: how does the compiler treat calling vals and defs differently in practice, and what kind of optimizations does it actually do with vals? The compiler insists on the fact that vals are stable; what does that mean in practice for the compiler? Suppose the subclass is actually implementing id with a val. Is there a penalty for having it specified as a def in the trait?
If my code itself does not require stability of the id member, can it be considered good practice to always use defs in these cases and to switch to vals only when a performance bottleneck has been identified here — however unlikely this may be?
Short answer:
As far as I can tell, the values are always accessed through the accessor method. Using def defines a simple method, which returns the value. Using val defines a private [*] final field, with an accessor method. So in terms of access, there is very little difference between the two. The difference is conceptual: a def gets re-evaluated each time, while a val is evaluated only once. This can obviously have an impact on performance.
[*] Java private
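A tiny sketch of that conceptual difference (Counter/EagerId/LazyId are made-up names):
trait Counter {
  private var n = 0
  protected def next(): Int = { n += 1; n }
}

class EagerId extends Counter { val id: Int = next() } // evaluated once, at construction time
class LazyId extends Counter { def id: Int = next() }  // re-evaluated on every access

object Demo extends App {
  val e = new EagerId
  println(e.id); println(e.id) // 1, 1
  val l = new LazyId
  println(l.id); println(l.id) // 1, 2
}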
Long answer:
Let's take the following example:
trait ResourceDef {
def id: String = "5"
}
trait ResourceVal {
val id: String = "5"
}
The ResourceDef & ResourceVal produce the same code, ignoring initializers:
public interface ResourceVal extends ScalaObject {
volatile void foo$ResourceVal$_setter_$id_$eq(String s);
String id();
}
public interface ResourceDef extends ScalaObject {
String id();
}
For the subsidiary classes produced (which contain the implementations of the methods), what ResourceDef produces is as you would expect, noting that the method is static:
public abstract class ResourceDef$class {
public static String id(ResourceDef $this) {
return "5";
}
public static void $init$(ResourceDef resourcedef) {}
}
and for the val, we simply call the initialiser in the containing class
public abstract class ResourceVal$class {
public static void $init$(ResourceVal $this) {
$this.foo$ResourceVal$_setter_$id_$eq("5");
}
}
When we start extending:
class ResourceDefClass extends ResourceDef {
override def id: String = "6"
}
class ResourceValClass extends ResourceVal {
override val id: String = "6"
def foobar() = id
}
class ResourceNoneClass extends ResourceDef
Where we override, we get a method in the class which just does what you expect. The def is a simple method:
public class ResourceDefClass implements ResourceDef, ScalaObject {
public String id() {
return "6";
}
}
and the val defines a private field and accessor method:
public class ResourceValClass implements ResourceVal, ScalaObject {
public String id() {
return id;
}
private final String id = "6";
public String foobar() {
return id();
}
}
Note that even foobar() doesn't use the field id, but uses the accessor method.
And finally, if we don't override, then we get a method which calls the static method in the trait auxiliary class:
public class ResourceNoneClass implements ResourceDef, ScalaObject {
public volatile String id() {
return ResourceDef$class.id(this);
}
}
I've cut out the constructors in these examples.
So, the accessor method is always used. I assume this is to avoid complications when extending multiple traits which could implement the same methods. It gets complicated really quickly.
Even longer answer:
Josh Suereth did a very interesting talk on Binary Resilience at Scala Days 2012, which covers the background to this question. The abstract for this is:
This talk focuses on binary compatibility on the JVM and what it means
to be binary compatible. An outline of the machinations of binary
incompatibility in Scala are described in depth, followed by a set of rules and guidelines that will help developers ensure their own
library releases are both binary compatible and binary resilient.
In particular, this talk looks at:
Traits and binary compatibility
Java Serialization and anonymous classes
The hidden creations of lazy vals
Developing code that is binary resilient
The difference is mainly that you can implement/override a def with a val but not the other way around. Moreover, a val is evaluated only once, while a def is evaluated every time it is used; using a def in the abstract definition gives the code that mixes in the trait more freedom about how to handle and/or optimize the implementation. So my point is: use defs whenever there isn't a clear, good reason to force a val.
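A minimal sketch of that override direction (the class names are made up): a val can implement a def declared in a trait, but implementing a val with a def is rejected:
trait DefResource { def id: String }
class FileResource extends DefResource { val id: String = "file-1" } // ok: a val implements the def

trait ValResource { val id: String }
// class BadResource extends ValResource { def id: String = "nope" }
// does not compile: "method id needs to be a stable, immutable value"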
A val expression is evaluated once, at variable declaration; it is strict and immutable.
A def is re-evaluated each time you call it
def is evaluated by name and val by value. This means, more or less, that a val must always hold an actual value, while a def is more like a promise that you will get a value when evaluating it. For example, if you have a function
def trace(s: => String) { if (level == "trace") println(s) } // note the => in the parameter definition
that logs an event only if the log level is set to trace, and you want to log an object's toString. If you have overridden toString with a value, then you need to pass that value to the trace function. If toString is a def, however, it will only be evaluated once it's sure that the log level is trace, which could save you some overhead.
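A self-contained sketch of why the by-name parameter matters (the names are made up); the argument expression is only evaluated when the level check passes:
object TraceDemo extends App {
  var level = "info"
  def trace(s: => String): Unit = { if (level == "trace") println(s) }

  def expensiveMessage: String = { println("building message..."); "details" }

  trace(expensiveMessage) // prints nothing: the by-name argument is never evaluated
  level = "trace"
  trace(expensiveMessage) // prints "building message..." and then "details"
}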
def gives you more flexibility, while val is potentially faster
Compiler-wise, traits are compiled to Java interfaces, so when defining a member on a trait it makes no difference whether it's a val or a def. The difference in performance would depend on how you choose to implement it.

In Scala, is it possible to have a case class containing methods with identical names to fields?

I think I'd like to be able to do something like what the following (clearly nonsensical) code illustrates:
// Clearly nonsensical
case class Example(a: String) {
def a: Array[Byte] = a.getBytes
}
The gist of it is that I want to write an accessor method for a case class that is named identically to one of its constructor arguments.
I'm using a JSON serialization library called Jerkson that, according to my understanding, will behave in the way I want it to if I define a class in this manner. I'm basing that assumption on this code. Currently, I'm stumped.
If this isn't possible, could anyone offer some insight on what the Jerkson library code is attempting to do?
Scala automatically creates a method with the same name as any val declared in a class (including the fields of case classes) to support the uniform access principle. This is also why you can override a def with a val. If you're still skeptical, you can test it yourself like this:
First, create a Scala file with a single case class.
// MyCase.scala
case class MyCase(myField1: Int, myField2: String)
Now, compile the file with scalac. This should result in two classes. For the example above I get MyCase.class (representing the actual case class type) and MyCase$.class (representing the auto-generated companion object for the case class).
$ scalac MyCase.scala
$ ls
MyCase$.class MyCase.class MyCase.scala
Now you can examine the resulting .class file corresponding to the case class you declared using javap. (javap is a standard tool for examining Java bytecode; it's distributed along with javac in the JDK.)
$ javap -private MyCase
Compiled from "MyCase.scala"
public class MyCase extends java.lang.Object implements scala.Product,scala.Serializable{
private final int myField1;
private final java.lang.String myField2;
public static final scala.Function1 tupled();
public static final scala.Function1 curry();
public static final scala.Function1 curried();
public scala.collection.Iterator productIterator();
public scala.collection.Iterator productElements();
public int myField1();
public java.lang.String myField2();
public MyCase copy(int, java.lang.String);
public java.lang.String copy$default$2();
public int copy$default$1();
public int hashCode();
public java.lang.String toString();
public boolean equals(java.lang.Object);
public java.lang.String productPrefix();
public int productArity();
public java.lang.Object productElement(int);
public boolean canEqual(java.lang.Object);
private final boolean gd1$1(int, java.lang.String);
public MyCase(int, java.lang.String);
}
Notice how the resulting class has both a private final int myField1 and a public int myField1() corresponding to the case class's myField1 field. The same for myField2.
Method return types are not part of a method's signature for the purposes of overloading. This means that if two methods have the same name and the same argument types, then they're considered to be conflicting method declarations. This means you can't declare def a: Array[Byte] in your example, because val a: String already generates a no-argument method a().
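To make the conflict concrete, a small sketch (aBytes is a made-up name) showing the collision and the obvious way around it:
case class Example(a: String) {
  // def a: Array[Byte] = a.getBytes // does not compile: a() is already defined by the constructor val
  def aBytes: Array[Byte] = a.getBytes // fine: just pick a different name
}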
Update:
I just looked at the library code and according to the examples the case classes should just work. There is a note in the README saying that parsing case classes does not work in the REPL. Could that be your problem? If not, you should really post the error you're getting. Edit: Never mind, I see the error you're talking about in your link to your other post. If I think of a response to that problem I'll post it over there.
No, it's not possible. The reason is that constructor arguments of case classes are automatically public values, as if you had declared them with val. To quote A Tour of Scala: Case Classes:
The constructor parameters of case classes are treated as public values and can be accessed directly.
Therefore, for each constructor argument Scala creates a corresponding accessor method with the same name. You cannot create a method with the same name; it's already there.
This is actually what case classes are about. The idea is that they can be used for pattern matching, so the values retrieved from them should be the same as the values used to construct them.
(Is it a requirement that you use case classes? Using regular classes seems to solve the problem.)