I have an architectural problem, more precisely, a suboptimal situation.
For an adaptable test environment, there is a context that is updated by a range of definition methods, each of which defines different entities, i.e. alters the context. For simplicity, the definitions here will just be integers, and the context a growing Seq[Int].
trait Abstract_Test_Environment {
  type Context = Seq[Int] // the context is a growing Seq[Int]
  def definition(d: Int): Unit
/* Make definitions: */
definition(1)
definition(2)
definition(3)
}
This idea is now implemented by a consecutively altered “var” holding the current context:
trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
/* Implement the definition framework: */
var context: Context = initial_context
val initial_context: Context
override def definition(d: Int) = context = context :+ d
}
Since the context must be set before “definition” is applied, it cannot be set by a plain assignment in the concrete subclass:
class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
context = Seq.empty
}
An intermediate “initial_context” is required, but a plain override does not do the job either:
class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
override val initial_context = Seq.empty
}
The only viable solution seems to be an early initializer, which is most likely the purpose for which this feature was created:
class Concrete_Test_Environment extends {
override val initial_context = Seq.empty
} with Less_Abstract_Test_Environment
HOWEVER, our setting still fails: when “definition” is applied in the body of “Abstract_Test_Environment”, the var “context” in “Less_Abstract_Test_Environment” is still not bound, i.e. it is null. Whereas the def “definition” is resolved on demand when “Abstract_Test_Environment” calls it, the var “context” is only assigned when the initializer of “Less_Abstract_Test_Environment” runs, which happens after the trait body of “Abstract_Test_Environment”.
The “solution” I came up with is merging “Abstract_Test_Environment” and “Less_Abstract_Test_Environment”. This is not what I wanted, since it destroys the natural separation of interface/goal and implementation that the two traits provided.
Do you see any better solution? I am sure Scala can do better.
Simple solution: do not initialize your object during its construction unless you are in the bottom-level class. Instead, add an init method that contains all of the initialization code, and call it either in the bottom-level class (which is safe, since all parent classes have already been initialized) or wherever the object is created.
A side effect of the whole thing is that you can even override the initialization code in a subclass.
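A minimal sketch of that approach, applied to the environment traits from the question (moving the definitions into an init method is the only change; the names are kept from the question):
trait Abstract_Test_Environment {
  def definition(d: Int): Unit
  /* Make definitions, but only when asked to: */
  def init(): Unit = {
    definition(1)
    definition(2)
    definition(3)
  }
}

trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
  var context: Seq[Int] = _
  override def definition(d: Int): Unit = context = context :+ d
}

class Concrete_Test_Environment extends Less_Abstract_Test_Environment {
  context = Seq.empty
  init() // safe: all parent traits are fully constructed at this point
}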
One possibility is to make your intermediate trait a class:
abstract class Less_Abstract_Test_Environment(var context: Seq[Int] = Seq.empty) extends Abstract_Test_Environment {
  // Seq[Int] is spelled out here: the inherited Context alias is not in scope
  // in the constructor parameter list
  override def definition(d: Int) = context = context :+ d
}
You can now subclass it and pass different initial contexts in as constructor parameters.
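This works because constructor parameters are initialized before the parent trait bodies run, so context is already bound when the definitions in Abstract_Test_Environment execute. For example (the second class name is illustrative):
class Concrete_Test_Environment extends Less_Abstract_Test_Environment(Seq.empty)
class Preloaded_Test_Environment extends Less_Abstract_Test_Environment(Seq(0)) // starts with a sentinel entry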
You can do this at the "concrete" level too, if you'd rather have the intermediate as a trait:
trait Less_Abstract_Test_Environment extends Abstract_Test_Environment {
var context: Context
override def definition(d: Int) = context = context :+ d
}
class Concrete_Test_Environment(override var context: Seq[Int] = Seq.empty) extends Less_Abstract_Test_Environment
What would be even better, though, is a functional approach: context should be a val, and definition should take the previous value and return the new one:
trait Abstract {
type Context
def initialContext: Context
val context: Context = Range(1, 4)
.foldLeft(initialContext) { case (c, n) => definition(c, n) }
def definition(context: Context, n: Int): Context
}
trait LessAbstract extends Abstract {
override type Context = Seq[Int]
override def definition(context: Context, n: Int) = context :+ n
}
class Concrete extends LessAbstract {
override def initialContext = Seq(0)
}
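A quick check, assuming the three snippets above are compiled together (the Main object is illustrative):
object Main extends App {
  // initialContext is a def, so the override in Concrete is safe to call
  // even while the trait body of Abstract is still initializing.
  println(new Concrete().context) // List(0, 1, 2, 3): definitions 1 to 3 folded onto Seq(0)
}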
You can employ the idea of a whiteboard, which contains only data and is shared by a number of traits which contain only logic (not data!). See below some untested code off the cuff:
trait WhiteBoard {
var counter: Int = 0
}
trait Display {
var counter: Int
def show: Unit = println(counter)
}
trait Increment {
var counter: Int
def inc: Unit = { counter = counter + 1 }
}
Then you write unit tests like this:
val o = new Object with WhiteBoard with Display with Increment
o.show
o.inc
o.show
Done this way, you separate the definition of the data from the places where the data is required, which basically means that you can potentially mix in the traits in any order. The only requirement is that the whiteboard (which defines the data) is the first trait mixed in.
The thing is: is there any way (that does not involve reflective black magic) to implicitly override a method when two known traits are mixed in together?
Say we have a class SysImpl that implements two mixins, System and Container:
// Where system acts as the interface for another consumer
trait System {
def isEmpty = false
}
import scala.collection.mutable

trait Container[A] {
  val elements = mutable.Set[A]()
}
// Then, we typically implement:
class SysImpl extends System with Container[Int] {
// We override the default system implementation here
override def isEmpty = elements.isEmpty
}
Just as an example.
Is there any way, by implementing either a third trait or making some change to the originals, to have the implementation implicitly override the isEmpty method in case both System and Container[A] are present?
The first thing that comes to my mind is creating an extension method, but that would be shadowing at its best (wouldn't it?). I need the method to be overridden properly, as the call is dispatched by a consumer who only sees Systems.
(Example, omitting details)
class AImpl extends System with Container[Int]
object Consumer {
def consume(sys: System) = if (sys.isEmpty) { /* Do things */ }
}
// Somewhere...
object Main {
def main() = {
Consumer.consume(new AImpl)
}
}
Just:
trait Mix[A] extends Container[A] with System {
override def isEmpty = elements.isEmpty
}
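A sketch of how it plugs in (the Main object is illustrative):
class SysImpl extends Mix[Int]

object Main {
  def main(args: Array[String]): Unit = {
    // The call goes through System's interface but dispatches to Mix.isEmpty,
    // which consults the (initially empty) elements set.
    Consumer.consume(new SysImpl)
  }
}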
Consider the following case:
trait A {
  protected val mydata = ???
  def f(args: Any) = ??? // uses mydata
}
class B
class C
class D(arg1: String) extends B with A {
  override val mydata = ??? // some calculation based on arg1
}
class E(arg1: String) extends C with A {
  override val mydata = ??? // some calculation based on arg1
}
A must be a trait as it is used by different unrelated classes. The problem is how to implement the definition of mydata.
The standard way (suggested in many places) would be to define mydata as a def and override it in the children. However, if f assumes mydata never changes, this can cause issues when some child overrides it with a def whose result changes between calls instead of with a val.
Another way would be to do:
trait A {
  protected val mydata = g()
  protected def g(): Seq[String] // the result type is illustrative; the original left it out
}
The problem with this (beyond adding another function) is that if g depends on constructor arguments of the child, then these must become members of the child (which can be a problem, for example, if the data is large and passed in at construction):
class D(arg1: Seq[String]) extends A {
  override def g() = ??? // some operation on arg1
}
If I leave the val in the trait as abstract, I can run into issues such as those found here.
What I am looking for is a way to define the value of the val in the children, ensuring it stays a val and without having to save data for late calculation. Something similar to how, in Java, I can define a final field and fill it in the constructor.
The standard way (suggested in many places) would be to define mydata as a def and override it in the children... If I leave the val in the trait as abstract I can run into issues such as those found here.
This is a common misunderstanding, shown in the accepted answer to the linked question as well. The issue is implementing it as a val, which you require anyway. Having a concrete val which is overridden only makes it worse: an abstract one can at least be implemented by a lazy val. The only way to avoid the issue is to ensure mydata is not accessed in a constructor of A or its subtypes, directly or indirectly, until it is initialized. Using it in f is safe (provided f is not called in a constructor, again, since that would be an indirect access to mydata).
If you can ensure this requirement, then
trait A {
  protected val mydata: Seq[String] // abstract val; the explicit type is required (Seq[String] is illustrative)
  def f(args: Any) = ??? // uses mydata
}

class D(arg1: String) extends B with A {
  override val mydata = ??? // some calculation based on arg1
}

class E(arg1: String) extends C with A {
  override val mydata = ??? // some calculation based on arg1
}
is exactly the correct definition.
If you can't, then you have to live with your last solution despite the drawback, but mydata then needs to be lazy to avoid similar initialization-order issues; a lazy val already carries the same drawback on its own.
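A minimal sketch of that fallback, assuming for illustration that mydata holds a Seq[String] (the element type and the body of g are assumptions, not from the question):
trait A {
  protected lazy val mydata: Seq[String] = g() // lazy: computed on first access, after construction
  protected def g(): Seq[String]
}

class D(arg1: Seq[String]) extends A {
  // the drawback: arg1 is retained as a field so g can use it later
  override protected def g(): Seq[String] = arg1.map(_.trim)
}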
I am tinkering with Scala and would like to produce some generic code. I would like to have two classes, one "outer" class and one "inner" class. The outer class should be generic and accept any kind of inner class that follows a few constraints. Here is the kind of architecture I would want, in code that does not compile. Outer is a generic type, and Inner is an example of a type that could be used in Outer, among others.
class Outer[InType](val in: InType) {
def update: Outer[InType] = new Outer[InType](in.update)
def export: String = in.export
}
object Outer {
def empty[InType]: Outer[InType] = new Outer[InType](InType.empty)
}
class Inner(val n: Int) {
def update: Inner = new Inner(n + 1)
def export: String = n.toString
}
object Inner {
def empty: Inner = new Inner(0)
}
object Main {
def main(args: Array[String]): Unit = {
val outerIn: Outer[Inner] = Outer.empty[Inner]
println(outerIn.update.export) // expected to print 1
}
}
The important point is that, whatever InType is, in.update must return an "updated" InType object. I would also like the companion methods to be callable, like InType.empty. This way both Outer[InType] and InType are immutable types, and methods defined in companion objects are callable.
The previous code does not compile, as it is written like a C++ generic type (my background). What is the simplest way to correct this code according to the constraints I mentioned? Am I completely wrong, and should I use another approach?
One approach I could think of would require us to use F-Bounded Polymorphism along with Type Classes.
First, we'd create a trait which requires an update method to be available:
trait AbstractInner[T <: AbstractInner[T]] {
def update: T
def export: String
}
Create a concrete implementation for Inner:
class Inner(val n: Int) extends AbstractInner[Inner] {
def update: Inner = new Inner(n + 1)
def export: String = n.toString
}
Require that Outer only take input types that extend AbstractInner[InType]:
class Outer[InType <: AbstractInner[InType]](val in: InType) {
def update: Outer[InType] = new Outer[InType](in.update)
}
We got the types working for creating an updated version of in; now we need some way to create a new instance with empty. The typeclass pattern is the classic solution for that. We create a trait which builds an Inner type:
trait InnerBuilder[T <: AbstractInner[T]] {
def empty: T
}
We require Outer.empty to only take types which extend AbstractInner[InType] and have an implicit InnerBuilder[InType] in scope:
object Outer {
def empty[InType <: AbstractInner[InType] : InnerBuilder] =
new Outer(implicitly[InnerBuilder[InType]].empty)
}
And provide a concrete implementation for Inner:
object AbstractInnerImplicits {
implicit def innerBuilder: InnerBuilder[Inner] = new InnerBuilder[Inner] {
override def empty = new Inner(0)
}
}
Invoking inside main:
object Experiment {
import AbstractInnerImplicits._
def main(args: Array[String]): Unit = {
val outerIn: Outer[Inner] = Outer.empty[Inner]
println(outerIn.update.in.export)
}
}
Yields:
1
And there we have it. I know this may be a little overwhelming to grasp at first. Feel free to ask more questions as you read this.
I can think of two ways of doing it without resorting to black magic:
with trait:
trait Updatable[T] { self: T =>
def update: T
}
class Outer[InType <: Updatable[InType]](val in: InType) {
def update = new Outer[InType](in.update)
}
class Inner(val n: Int) extends Updatable[Inner] {
def update = new Inner(n + 1)
}
First we use the trait to tell the type system that an update method is available; then we put constraints on the types to make sure Updatable is used correctly (the self-type self: T => makes sure it is used as T extends Updatable[T], i.e. as an F-bounded type), and we also make sure that InType implements it (InType <: Updatable[InType]).
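Usage is then a plain constructor call (a quick sanity check against the definitions above):
val o = new Outer(new Inner(0)) // InType is inferred as Inner, which satisfies the bound
println(o.update.in.n) // 1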
with type class:
trait Updatable[F] {
def update(value: F): F
}
class Outer[InType](val in: InType)(implicit updatable: Updatable[InType]) {
def update: Outer[InType] = new Outer[InType](updatable.update(in))
}
class Inner(val n: Int) {
def update: Inner = new Inner(n + 1)
}
implicit val updatableInner: Updatable[Inner] = new Updatable[Inner] {
def update(value: Inner): Inner = value.update
}
First we define the type class, then we implicitly require an implementation of it for our type, and finally we provide and use one. Putting the theoretical stuff aside, the practical difference is that you are not forcing InType to extend some Updatable[InType]; instead you require the presence of some Updatable[InType] implementation in scope, so you can provide the functionality not by modifying InType but by providing an additional object that fulfills your constraints for InType.
As such, type classes are much more extensible: you just need to provide an implicit instance for each supported type.
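For instance, you can support a type you cannot modify at all, such as Int, purely by supplying an instance (a sketch under the type-class definitions above; the choice of "update" for Int is arbitrary):
implicit val updatableInt: Updatable[Int] = new Updatable[Int] {
  def update(value: Int): Int = value + 1 // "updating" an Int here just means incrementing it
}

println(new Outer(41).update.in) // 42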
Among the other methods available to you is e.g. reflection (however, that might break type safety and your ability to refactor).
Scala keeps a lot of very useful constructs like Option and Try in its standard library.
Why is lazy given special treatment by having its own keyword, when languages such as C#, which lack the aforementioned types, choose to implement it as a library feature?
It is true that you could define a lazy value for example like this:
object Lazy {
def apply[A](init: => A): Lazy[A] = new Lazy[A] {
private var value = null.asInstanceOf[A]
@volatile private var initialized = false
override def toString =
if (initialized) value.toString else "<lazy>#" + hashCode.toHexString
def apply(): A = {
if (!initialized) this.synchronized {
if (!initialized) {
value = init
initialized = true
}
}
value
}
}
implicit def unwrap[A](l: Lazy[A]): A = l()
}
trait Lazy[+A] { def apply(): A }
Usage:
val x = Lazy {
println("aqui")
42
}
def test(i: Int) = i * i
test(x)
On the other hand, having lazy as a language-provided modifier has the advantage of letting it participate in the uniform access principle. I tried to look up a blog entry on it, but there isn't any that goes beyond getters and setters. This principle is actually more fundamental. For values, the following are unified: val, lazy val, def, var, and object:
trait Foo[A] {
def bar: A
}
class FooVal[A](val bar: A) extends Foo[A]
class FooLazyVal[A](init: => A) extends Foo[A] {
lazy val bar: A = init
}
class FooVar[A](var bar: A) extends Foo[A]
class FooProxy[A](peer: Foo[A]) extends Foo[A] {
def bar: A = peer.bar
}
trait Bar {
def baz: Int
}
class FooObject extends Foo[Bar] {
object bar extends Bar {
val baz = 42
}
}
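To see the unification in action, all of these implementations can be consumed through the same interface, with the caller unable to tell which storage strategy sits behind bar (a small illustrative check):
val foos: List[Foo[Int]] = List(
  new FooVal(1),
  new FooLazyVal(2),
  new FooVar(3),
  new FooProxy(new FooVal(4))
)
foos.foreach(f => println(f.bar)) // 1, 2, 3, 4: val, lazy val, var, and def look identical here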
Lazy values were introduced in Scala 2.6. There is a Lambda the Ultimate comment which suggests that the reasoning might have to do with formalising the possibility of having cyclic references:
Cyclic dependencies require binding with lazy values. Lazy values can also be used to enforce that component initialization occurs in dependency order. Component shutdown order, sadly, must be coded by hand
I do not know why cyclic references could not be handled automatically by the compiler; perhaps there were reasons of complexity or performance penalty. A blog post by Iulian Dragos confirms some of these assumptions.
The current lazy implementation uses an int bitmask to track if a field has been initialized, and no other memory overhead. This field is shared between multiple lazy vals (up to 32 lazy vals per field). It would be impossible to implement the feature with a similar memory efficiency as a library feature.
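For comparison, here is a rough sketch of what the compiler emits for a thread-safe lazy val in Scala 2 (the field and method names are illustrative, not the actual generated ones):
class Example {
  @volatile private[this] var bitmap$0: Int = 0 // one bit per lazy val, up to 32 per field
  private[this] var x$value: Int = _
  private def compute(): Int = 42 // stands in for the right-hand side of the lazy val
  def x: Int = {
    // double-checked locking, keyed on the lazy val's bit in the bitmap
    if ((bitmap$0 & 1) == 0) this.synchronized {
      if ((bitmap$0 & 1) == 0) {
        x$value = compute()
        bitmap$0 |= 1
      }
    }
    x$value
  }
}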
Lazy as a library would probably look roughly like this:
class LazyVal[T](f: =>T) {
@volatile private var initialized = false
/*
this does not need to be volatile since there will always be an access to the
volatile field initialized before this is read.
*/
private var value: T = _
def apply() = {
if(!initialized) {
synchronized {
if(!initialized) {
value = f
initialized = true
}
}
}
value
}
}
The overhead of this would be one object for the closure f that generates the value, and another object for the LazyVal itself. So it would be substantial for a feature that is used as often as this one.
On the CLR you have value types, so the overhead is not as bad if you implement your LazyVal as a struct in C#.
However, now that macros are available, it might be a good idea to turn lazy into a library feature, or at least to allow customizing the lazy initialization. Many use cases of lazy val do not require thread synchronization, so it is wasteful to have the @volatile/synchronized overhead every time you use lazy val.
Search results so far have led me to believe this is impossible without either a non-primary constructor
class Foo { // NOT OK: 2 extra lines--doesn't leverage Scala's conciseness
private var _x = 0
def this(x: Int) { this(); _x = x }
def x = _x
}
val f = new Foo(x = 123) // OK: named parameter is 'x'
or sacrificing the name of the parameter in the primary constructor (making calls using named parameters ugly)
class Foo(private var _x: Int) { // OK: concise
def x = _x
}
val f = new Foo(_x = 123) // NOT OK: named parameter should be 'x' not '_x'
Ideally, one could do something like this:
class Foo(private var x: Int) { // OK: concise
// make just the getter public
public x
}
val f = new Foo(x = 123) // OK: named parameter is 'x'
I know named parameters are a new thing in the Java world, so they are probably not that important to most, but coming from a language where named parameters are more popular (Python), this issue immediately pops up.
So my question is: is this possible? (Probably not.) And if not, why is such an (in my opinion) important use case left uncovered by the language design? By that I mean that the code has to sacrifice either clean naming or concise definitions, and conciseness is a hallmark of Scala.
P.S. Consider the case where a public field suddenly needs to be made private while keeping the getter public, in which case the developer has to change 1 line and add 3 lines to achieve the effect while keeping the interface identical:
class Foo(var x: Int) {} // no boilerplate
->
class Foo { // lots of boilerplate
private var _x: Int = 0
def this(x: Int) { this(); _x = x }
def x = _x
}
Whether this is indeed a design flaw is rather debatable. One could argue that complicating the syntax to allow this particular use case is not worthwhile.
Also, Scala is, after all, a predominantly functional language, so vars should not be that frequent in your programs, which again raises the question of whether this particular use case needs to be handled in a special way.
However, it seems that a simple solution to your problem would be to use an apply method in the companion object:
class Foo private(private var _x: Int) {
def x = _x
}
object Foo {
def apply(x: Int): Foo = new Foo(x)
}
Usage:
val f = Foo(x = 3)
println(f.x)
LATER EDIT:
Here is a solution similar to what you originally requested, but that changes the naming a bit:
class Foo(initialX: Int) {
private var _x = initialX
def x = _x
}
Usage:
val f = new Foo(initialX = 3)
The concept you are trying to express, an object whose state is mutable from within the object yet immutable from the perspective of other objects, would probably be expressed as an Akka actor within the context of an actor system. Outside the context of an actor system, it would seem to be a Java conception of what it means to be an object, transplanted to Scala.
import akka.actor.Actor
class Foo(var x: Int) extends Actor {
import Foo._
def receive = {
case WhatIsX => sender ! x
}
}
object Foo {
object WhatIsX
}
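A usage sketch with classic Akka (the ask pattern and the Demo object are illustrative; the actor replies with x only via messages):
import akka.actor.{ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

object Demo extends App {
  implicit val timeout: Timeout = Timeout(1.second)
  val system = ActorSystem("demo")
  val foo = system.actorOf(Props(new Foo(42)))
  // Ask returns a Future[Any]; blocking here just for the demonstration.
  println(Await.result(foo ? Foo.WhatIsX, 1.second)) // 42
  system.terminate()
}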
Not sure about earlier versions, but in Scala 3 it can easily be implemented as follows:
// class with no argument constructor
class Foo {
// private field
private var _x: Int = 0
// public getter
def x: Int = _x
// public setter
def x_=(newValue: Int): Unit =
_x = newValue
// auxiliary constructor
def this(value: Int) =
this()
_x = value
}
Note:
Any definition within the class body (which is part of the primary constructor) is public, unless you prepend it with the private modifier.
Appending _= to a method name with a Unit return type makes it a setter.
A constructor parameter prefixed with neither val nor var is not accessible from outside the class.
Then it follows:
val noArgFoo = Foo() // no argument case
println(noArgFoo.x) // the public getter prints 0
val withArgFoo = Foo(5) // with argument case
println(withArgFoo.x) // the public getter prints 5
noArgFoo.x = 100 // use the public setter to update x value
println(noArgFoo.x) // the public getter prints 100
withArgFoo.x = 1000 // use the public setter to update x value
println(withArgFoo.x) // the public getter prints 1000
This solution is exactly what you asked for, in a principled way and without any ad hoc workaround such as companion objects and the apply method.