Create proper late initialization of an abstract val in trait - scala

Consider the following case:
trait A {
  protected val mydata = ???
  def f(args: Any) = ??? // uses mydata
}
class B
class C
class D(arg1: String) extends B with A {
  override val mydata = ??? // some calculation based on arg1
}
class E(arg1: String) extends C with A {
  override val mydata = ??? // some calculation based on arg1
}
A must be a trait as it is used by different unrelated classes. The problem is how to implement the definition of mydata.
The standard way (suggested in many places) would be to define mydata as a def and override it in the children. However, if f assumes mydata never changes, this can cause issues when some child overrides it with a function whose result changes between calls instead of with a val.
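A minimal sketch of what that looks like (the class names and Seq[String] are made up for illustration, not taken from the original code):
trait A {
  protected def mydata: Seq[String]       // abstract def instead of val
  def f(n: Int): Int = mydata.length + n  // silently assumes mydata is stable
}

class Stable(arg1: String) extends A {
  // fine: the def is overridden with a val, computed once from arg1
  override protected val mydata: Seq[String] = arg1.split(",").toSeq
}

class Unstable extends A {
  // also compiles, but yields a different value on every call, breaking f's assumption
  override protected def mydata: Seq[String] =
    Seq(scala.util.Random.nextInt().toString)
}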
Another way would be to do:
trait A {
  protected val mydata = g()
  protected def g(): MyDataType // abstract; MyDataType stands for the actual data type
}
The problem with this (beyond adding another function) is that if g depends on construction variables in the child then these must become members of the child (which can be a problem for example if the data is large and given in the construction):
class D(arg1: Seq[String]) extends A {
  def g() = ??? // some operation on arg1
}
If I leave the val in the trait as abstract, I can run into issues such as those found here.
What I am looking for is a way to define the value of the val in the children, ensuring it stays a val and without having to save data for late calculations; something similar to how in Java I can declare a final field and fill it in the constructor.

The standard way (suggested in many places) would be to define mydata as a def and override it in the children... If I leave the val in the trait as abstract I can run into issues such as those found here.
This is a common misunderstanding, also shown in the accepted answer to the linked question. The issue lies in implementing the member as a val, which you require anyway. Having a concrete val which is overridden only makes it worse: an abstract one can at least be implemented by a lazy val. The only way to avoid the issue is to ensure mydata is not accessed, directly or indirectly, in a constructor of A or its subtypes until it is initialized. Using it in f is safe, provided f is not itself called from a constructor (which would again be an indirect access to mydata).
If you can ensure this requirement, then
trait A {
  protected val mydata: MyDataType // abstract val, no initializer; MyDataType stands for the actual data type
  def f(args: Any) = ??? // uses mydata
}
class D(arg1: String) extends B with A {
  override val mydata = ??? // some calculation based on arg1
}
class E(arg1: String) extends C with A {
  override val mydata = ??? // some calculation based on arg1
}
is exactly the correct definition.
If you can't ensure it, then you have to live with your last solution despite its drawback, but mydata needs to be lazy to avoid similar initialization order issues; a lazy val would already carry that same drawback on its own anyway.
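As a rough sketch of that lazy variant (Seq[String] stands in for the real data type; this only illustrates the shape, it is not the original code):
trait A {
  // lazy: g() runs on the first access to mydata, by which time the subclass
  // constructor has completed (unless something reads mydata even earlier)
  protected lazy val mydata: Seq[String] = g()
  protected def g(): Seq[String]
  def f(n: Int): Int = mydata.length + n
}

class D(arg1: Seq[String]) extends A {
  // arg1 still has to be retained as a field for the late computation,
  // which is exactly the drawback mentioned above
  protected def g(): Seq[String] = arg1.map(_.trim)
}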

Related

How to do early initializations of variables in Scala?

I have an architectural problem, more precisely, a suboptimal situation.
For an adaptable test environment, there is a context that is updated by a range of definition methods, which each define different entities, i.e. alter the context. For simplicity, the definitions here will just be integers, and the context a growing Seq[Int].
trait Abstract_Test_Environment {
  def definition(d: Int): Unit
  /* Make definitions: */
  definition(1)
  definition(2)
  definition(3)
}
This idea is now implemented by a consecutively altered “var” holding the current context:
trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
  /* Implement the definition framework: */
  var context: Context = initial_context
  val initial_context: Context
  override def definition(d: Int) = context = context :+ d
}
Since the context must be set before “definition” is applied, it cannot be set by variable assignment in the concluding class:
class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
  context = Seq.empty
}
An intermediate “initial_context” is required but a plain overriding does not do the job either:
class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
  override val initial_context = Seq.empty
}
The only viable solution seems to be an early initialization, which most likely is the purpose this feature has been created for:
class Concrete_Test_Environment extends {
  override val initial_context = Seq.empty
} with Less_Abstract_Test_Environment
HOWEVER, our setting still fails because when “definition” is applied in “Abstract_Test_Environment”, the VAR “context” in “Less_Abstract_Test_Environment” is still not bound, i.e. null. Whereas the def “definition” is “initialized on demand” in “Less_Abstract_Test_Environment” (from “Abstract_Test_Environment”), the var “context” is not.
The “solution” I came up with is merging “Abstract_Test_Environment” and “Less_Abstract_Test_Environment”. This is not what I wanted since it destroys the natural separation of interface/goal and implementation, which has been realized by the two traits.
Do you see any better solution? I am sure Scala can do better.
Simple solution: do not initialize your object during its creation, unless you are in the bottom-level class. Instead, add an init method which contains all of the initialization code, and then call it either in the bottommost class (which is safe, since all parent classes have already been constructed) or wherever the object is created.
A side effect of the whole thing is that you can even override the initialization code in a subclass.
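A rough sketch of that pattern applied to the example above (with Seq[Int] used directly in place of the Context alias):
trait Abstract_Test_Environment {
  def definition(d: Int): Unit

  // the definitions no longer run from the trait constructor
  def init(): Unit = {
    definition(1)
    definition(2)
    definition(3)
  }
}

trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
  var context: Seq[Int] = Seq.empty
  override def definition(d: Int): Unit = context = context :+ d
}

class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
  context = Seq(0) // the bottom-level class may still pick a starting context
  init()           // safe: all parent constructors have already run
}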
One possibility is to make your intermediate trait a class:
abstract class Less_Abstract_Test_Environment(var context: Context = Seq.empty) extends Abstract_Test_Environment {
  override def definition(d: Int) = context = context :+ d
}
You can now subclass it and pass different initial contexts in as constructor parameters.
You can do this at the "concrete" level too, if you'd rather have the intermediate as a trait:
trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
  var context: Context
  override def definition(d: Int) = context = context :+ d
}
class Concrete_Test_Environment(override var context: Context = Seq.empty) extends Less_Abstract_Test_Environment
What would be even better, though, is using a functional approach: context should be a val, and definition should take the previous value and return the new one:
trait Abstract {
  type Context
  def initialContext: Context
  val context: Context = Range(1, 4)
    .foldLeft(initialContext) { case (c, n) => definition(c, n) }
  def definition(context: Context, n: Int): Context
}
trait LessAbstract extends Abstract {
  override type Context = Seq[Int]
  override def definition(context: Context, n: Int) = context :+ n
}
class Concrete extends LessAbstract {
  override def initialContext = Seq(0)
}
You can employ the idea of a whiteboard, which contains only data and is shared by a number of traits that contain only logic (not data!). See below for some untested code off the cuff:
trait WhiteBoard {
  var counter: Int = 0
}
trait Display {
  var counter: Int
  def show: Unit = println(counter)
}
trait Increment {
  var counter: Int
  def inc: Unit = { counter = counter + 1 }
}
Then you write unit tests like this:
val o = new Object with WhiteBoard with Display with Increment
o.show
o.inc
o.show
Doing this way, you separate definition of the data from places where the data is required, which basically means that you can potentially mix in traits in any order. The only requirement is that the whiteboard (which defines data) is the first trait mixed in.

Force inheriting class to implement methods as protected

I've a got a trait:
trait A {
  def some: Int
}
and an object mixing it in:
object B extends A {
  def some = 1
}
The question is, is there a way to declare some in A in a way that all inheriting objects have to declare the some method as protected for example? Something that would make the compiler yell at the above implementation of some in B?
UPDATE:
Just a clarification on the purpose of my question: Within an organization, there are some software development standards that are agreed upon. These standards, for example 'The some method is to always be declared as private when inheriting from trait A', are in general communicated via specs or documents listing all the standards or via tools such as Jenkins, etc... I am wondering if we could go even further and have these standards right in the code, which would save a lot of time correcting issues raised by Jenkins for example.
UPDATE 2:
A solution I could think of is as follows:
abstract class A(
  protected val some: Int
) {
  protected def none: String
}
Use an abstract class instead of a trait and have the functions or values that I need to be protected by default passed in the constructor:
object B extends A(some = 1) {
  def none: String = "none"
}
Note that in this case, some is by default protected unless the developer decides to expose it through another method. However, there will be no guarantee that, by default, none will be protected as well.
This works for the use case I described above. The problem with this implementation is that if we have a hierarchy of abstract classes, we would have to add all the constructor parameters of the parent to every inheriting child in the hierarchy. For example:
abstract class A(
  protected val some: Int
)
abstract class B(
  someImp: Int,
  protected val none: String
) extends A(some = someImp)
object C extends B(
  someImp = 1,
  none = "none"
)
In contrast, using traits, we would have been able to simply write:
trait A {
  protected val some: Int
}
trait B extends A {
  protected val none: String
}
object C extends B {
  val some = 1
  val none = "none"
}
I don't see any straightforward way to prevent subclasses from choosing a wider visibility for inherited members.
It depends on why you want to hide the field some, but if the purpose is just to forbid end-users from accessing the field, you can use a slightly modified form of the cake pattern:
trait A {
  trait A0 {
    protected def some: Int
  }
  def instance: A0
}
object B extends A {
  def instance = new A0 {
    def some = 5
  }
}
Yeah, it looks nasty but the compiler will yell when someone tries to do:
B.instance.some
Another version of this solution is to do things as in your example (adding protected to the member "some" in A), but to never directly expose a reference of type B: always return references of type A instead.
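A rough sketch of that second variant (the Impl class and the apply factory are made up for illustration):
trait A {
  protected def some: Int
}

object B {
  // the concrete class never escapes this object; callers only ever see A,
  // where some is protected and therefore inaccessible from the outside
  private class Impl extends A {
    protected def some: Int = 1
  }

  def apply(): A = new Impl
}

// B().some  // does not compile: some is protected in trait A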

scala: how to view subclass methods with a generic instantiation

I have the following where I set information and extractors for different schemes of data:
trait DataScheme {
  type Type <: List[Any]
  class ExtractorMethods(ticker: String, dataList: List[Type]) {
    def getDatetime(datum: Type): Date = new Date(datum(columnIndex(Names.datetime)).toString)
    def upperDatum(date: Date): Type = dataList.minBy(datum => getDatetime(datum) >= date)
    def lowerDatum(date: Date): Type = dataList.maxBy(datum => getDatetime(datum) <= date)
  }
}
trait IndexScheme extends DataScheme {
  type Type = (Date, Double, Double, Double, Double, Long)
  class ExtractorMethods(ticker: String, dataList: List[Type]) extends super.ExtractorMethods(ticker, dataList) {
    def testing12(int: Int): Int = 12
    val test123 = 123
  }
}
I want anything extending DataScheme to use its ExtractorMethods methods (e.g. lowerDatum) but also have its own methods (e.g. testing12).
There is a class definition for lists of data elements:
class Data[+T <: DataScheme](val ticker: String, val dataList: List[T#Type], val isSorted: Boolean)
    (implicit m: Manifest[T], mm: Manifest[T#Type]) extends Symbols {
  def this(ticker: String, dataList: List[T#Type])(implicit m: Manifest[T], mm: Manifest[T#Type]) =
    this(ticker, dataList, false)(m, mm)
  val dataScheme: T
  val extractorMethods = new dataScheme.ExtractorMethods(ticker, dataList.asInstanceOf[List[dataScheme.Type]])
}
A Data class should make accessible the methods in ExtractorMethods of the scheme so they can be used in the main program through the instance of Data that has been defined. For example if sortedData is an instance of Data[IndexScheme], the following works:
val lowerDatum = sortedData.extractorMethods.lowerDatum(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse("2010-03-31 00:00:00"))
but this does not:
val testing = sortedData.extractorMethods.testing12(123)
because 'testing12 is not a member of sortedData.dataScheme.extractorMethods'. So my question is: how can the subclasses of ExtractorMethods in the subtraits of DataScheme, like IndexScheme, be made accessible? How is it possible using Manifests and TypeTags? Thanks.
So you want the generic class Data[DataScheme] or Data[IndexScheme] to have access to the methods of whichever type Data has been parameterised with. You've tried to do this several different ways, from the evidence in your code.
To answer your last question - manifests can't help in this particular case and TypeTags are only part of the answer. If you really want to do this, you do it with mirrors.
However, you will have to make some changes to your code. Scala only has instance methods; there are no such things as static methods in Scala. This means that you can only use reflection to invoke a method on an instance of a class, trait or object. Your traits are abstract and can't be instantiated.
I can't really tell you how to clean up your code, because what you have pasted up here is a bit of a mess and is full of different things you have tried. What I can show you is how to do it with a simpler set of classes:
import scala.reflect.runtime.universe._

class t1 {
  class Methods {
    def a = "a"
    def b = "b"
  }
  def methods = new Methods
}

class t2 extends t1 {
  class Methods extends super.Methods {
    def one = 1
    def two = 2
  }
  override def methods = new Methods
}

class c[+T <: t1](implicit tag: TypeTag[T]) {
  def generateT = {
    val mirror = runtimeMirror(getClass.getClassLoader)
    val cMirror = mirror.reflectClass(typeOf[T].typeSymbol.asClass)
    cMirror.reflectConstructor(typeOf[T].declaration(nme.CONSTRUCTOR).asMethod)
  }
  val t = generateT().asInstanceOf[T]
}

val v1 = new c[t1]
val v2 = new c[t2]
val v1 = new c[t1]
val v2 = new c[t2]
If you run that, you'll find that v1.t.methods gives you a class with only methods a and b, but v2.t.methods gives a class with methods one and two as well.
This really is not how to do this - reaching for reflection for this kind of job shows a very broken model. But I guess that's your business.
I stick by what I said below, though. You should be using implicit conversions (and possibly implicit parameters) with companion objects. Use Scala's type system the way it's designed - you are fighting it all the way.
ORIGINAL ANSWER
Well, I'm going to start by saying that I would never do things the way you are doing this; it seems horribly over-complicated. But you can do what you want to do, roughly the way you are doing it, by using mixins and moving the extractorMethods creation code into the traits.
Here's a greatly simplified example:
trait t1 {
  class Methods {
    def a = "a"
    def b = "b"
  }
  def methods = new Methods
}
trait t2 extends t1 {
  class Methods extends super.Methods {
    def one = 1
    def two = 2
  }
  override def methods = new Methods
}
class c1 extends t1
val v1 = new c1
// v1.methods.a will return "a", but v1.methods.one does not exist
class c2 extends c1 with t2
val v2 = new c2
// v2.methods.a returns "a" and v2.methods.one returns 1
I could replicate your modus operandi more closely by defining c1 like this:
class c1 extends t1 {
  val myMethods = methods
}
in which case v1.myMethods would only have methods a and b but v2.myMethods would have a, b, one and two.
You should be able to see how you can adapt this to your own class and trait structure. I know my example doesn't have any of your complex type logic in it, but you know better than I what you are trying to achieve there. I'm just trying to demonstrate a simple mechanism.
But dude, way to make your life difficult...
EDIT
There are so many things I could say about what is wrong with your approach here, both on the small and large scale. I'm going to restrict myself to saying two things:
You can't do what you are trying to do in the Data class because it is abstract. You cannot force Scala to magically replace an uninitialised, abstract method of a non-specific type with the specific type, just by littering everything with Type annotations. You can only solve this with a concrete class which provides the specific type.
You should be doing this with implicit conversions. Implicits would help you do it the wrong way you seem fixated on, but would also help you do it the right way. Oh, and use a companion object, either for the implicits or to hold a factory (or both).
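Purely as an illustration of that suggestion (with heavily simplified types and made-up method names, not the original code), scheme-specific extractors can live in an implicit class inside the scheme's companion object, so they are resolved statically rather than through reflection:
trait DataScheme { type Type }

trait IndexScheme extends DataScheme {
  type Type = (String, Double) // stand-in for the real tuple type
}

class Data[T <: DataScheme](val ticker: String, val dataList: List[T#Type])

object IndexScheme {
  // picked up automatically when the static type is Data[IndexScheme],
  // because companions of type arguments are part of the implicit scope
  implicit class IndexExtractors(val data: Data[IndexScheme]) extends AnyVal {
    def testing12(i: Int): Int = 12
    def dates: List[String] = data.dataList.map(_._1)
  }
}

object Demo extends App {
  val sorted = new Data[IndexScheme]("SPX", List(("2010-03-31", 1173.3)))
  println(sorted.testing12(123)) // compiles: the enrichment is scheme-specific
  println(sorted.dates)
}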

Implementing '.clone' in Scala

I'm trying to figure out how to .clone my own objects, in Scala.
This is for a simulation so mutable state is a must, and from that arises the whole need for cloning. I'll clone a whole state structure before moving the simulation time ahead.
This is my current try:
abstract trait Cloneable[A] {
  // Seems we cannot declare the prototype of a copy constructor
  //protected def this(o: A) // to be defined by the class itself
  def myClone = new A(this)
}

class S(var x: String) extends Cloneable[S] {
  def this(o: S) = this(o.x) // for 'Cloneable'
  def toString = x
}

object TestX {
  val s1 = new S("say, aaa")
  println(s1.myClone)
}
a. Why does the above not compile? It gives:
error: class type required but A found
def myClone= new A(this)
^
b. Is there a way to declare the copy constructor (def this(o: A)) in the trait, so that classes using the trait are shown to need to provide one?
c. Is there any benefit from saying abstract trait?
Finally, is there a better, standard solution for all this?
I've looked into Java cloning. Does not seem to be for this. Also Scala copy is not - it's only for case classes and they shouldn't have mutable state.
Thanks for help and any opinions.
Traits can't define constructors (and I don't think abstract has any effect on a trait).
Is there any reason it needs to use a copy constructor rather than just implementing a clone method? It might be possible to get out of having to declare the [A] type on the class, but I've at least declared a self type so the compiler will make sure that the type matches the class.
trait DeepCloneable[A] { self: A =>
  def deepClone: A
}

class Egg(size: Int) extends DeepCloneable[Egg] {
  def deepClone = new Egg(size)
}

object Main extends App {
  val e = new Egg(3)
  println(e)
  println(e.deepClone)
}
http://ideone.com/CS9HTW
I would suggest a typeclass-based approach. With this it is also possible to make existing classes cloneable:
class Foo(var x: Int)

trait Copyable[A] {
  def copy(a: A): A
}

implicit object FooCloneable extends Copyable[Foo] {
  def copy(foo: Foo) = new Foo(foo.x)
}

implicit def any2Copyable[A: Copyable](a: A) = new {
  def copy = implicitly[Copyable[A]].copy(a)
}
scala> val x = new Foo(2)
x: Foo = Foo@8d86328

scala> val y = x.copy
y: Foo = Foo@245e7588

scala> x eq y
res2: Boolean = false
a. When you define a type parameter like A, it gets erased after the compilation phase.
This means that the compiler uses type parameters to check that you use the correct types, but the resulting bytecode retains no information about A.
This also implies that you cannot use A as a real class in code, but only as a "type reference", because at runtime this information is lost.
b & c. Traits cannot define constructor parameters or auxiliary constructors by definition; they're also abstract by definition.
What you can do is define a trait body, which gets executed upon instantiation of the concrete implementation.
One alternative solution is to define a Cloneable typeclass. For more on this you can find lots of blogs on the subject, but I have no suggestion for a specific one.
scalaz has a huge part built using this pattern, maybe you can find inspiration there: you can look at Order, Equal or Show to get the gist of it.

scala inheritance issue: val vs. def

Writing a simple example from Odersky's book resulted in the following problem:
// AbstractElement.scala
abstract class AbstractElement {
  val contents: Array[String]
  val height: Int = contents.length // line 3
}

class UnifiedElement(ch: Char, _width: Int, _height: Int) extends AbstractElement { // line 6
  val contents = Array.fill(_height)(ch.toString() * _width)
}

object AbstractElement {
  def create(ch: Char): AbstractElement = {
    new UnifiedElement(ch, 1, 1) // line 12
  }
}
and:
// ElementApp.scala
import AbstractElement.create

object ElementApp {
  def main(args: Array[String]): Unit = {
    val e1 = create(' ') // line 6
    println(e1.height)
  }
}
Running this throws the following trace:
Exception in thread "main" java.lang.NullPointerException
at AbstractElement.<init>(AbstractElement.scala:3)
at UnifiedElement.<init>(AbstractElement.scala:6)
at AbstractElement$.create(AbstractElement.scala:12)
at ElementApp$.main(ElementApp.scala:6)
at ElementApp.main(ElementApp.scala)
So at runtime contents is still null, but I defined it in UnifiedElement!
Things get even weirder when I replace the val with a def and everything works perfectly!
Could you please explain this behaviour?
Here is a great article by Paul P that explains the initialization order intricacies in Scala. As a rule of thumb, you should never use abstract vals. Always use abstract defs and lazy vals.
In the definition of AbstractElement, you're in practice defining a constructor which initializes contents to null and computes contents.length. The constructor of UnifiedElement calls AbstractElement's constructor and only then initializes contents.
EDIT: in other words, we have a new instance of a problem already existing in Java (and any OOP language): the constructor of a superclass calls a method implemented in a subclass, but the latter cannot be safely called because the subclass is not yet constructed. Abstract vals are only one of the ways to trigger it.
The simplest solution here is to just make height a def, which is better anyway, and be aware of the initialization rules linked in the other answer.
abstract class AbstractElement {
  val contents: Array[String]
  def height: Int = contents.length // make this a def
}
The slightly more complex solution, instead, is to force contents to be initialized before height, which you can do with this syntax:
class UnifiedElement(ch: Char, _width: Int, _height: Int) extends {
  val contents = Array.fill(_height)(ch.toString() * _width)
} with AbstractElement {
  // ...
}
Note that mixin composition, that is with, is not symmetrical - it works left-to-right. And note that {} at the end can be omitted, if you define no other members.
Lazy vals are also a solution, but they incur quite some run-time overhead - whenever you read the variable, the generated code will read a volatile bitmap to check that the field was already initialized.
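For the example in this question, the lazy variant would look something like the following sketch, with the run-time cost just described:
abstract class AbstractElement {
  val contents: Array[String]
  // lazy: the length is computed on first access, by which time the
  // subclass constructor has already filled in contents
  lazy val height: Int = contents.length
}

class UnifiedElement(ch: Char, _width: Int, _height: Int) extends AbstractElement {
  val contents = Array.fill(_height)(ch.toString * _width)
}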
Making contents a def here seems a bad idea, because it will be recomputed too often.
Finally, avoiding abstract vals is IMHO an extreme measure. Sometimes they are just the right thing - you should just be careful with concrete vals referring to abstract vals.
EDIT: It seems that instead of an abstract val, one could use an abstract definition and override it with a concrete val. That is indeed possible, but it does not help if there are concrete vals referring to the abstract definition. Consider this variant of the above code, and pay attention to how members are defined:
abstract class AbstractElement {
  def contents: Array[String]
  val height: Int = contents.length // line 3
}

class UnifiedElement(ch: Char, _width: Int, _height: Int) extends AbstractElement {
  val contents = Array.fill(_height)(ch.toString() * _width)
}
This code has the same runtime behavior as the code given by the OP, even if AbstractElement.contents is now a def: the body of the accessor reads a field which is initialized only by the subclass constructor. The only difference between an abstract value and an abstract definition seems to be that an abstract value can only be overridden by a concrete value, so it can be useful to constrain the behavior of subclasses if that is what you want.