I'm trying to create an instance of a class and mix in certain traits based on certain conditions. So, given:
class Foo
trait A
trait B
I can do something like
if (fooType == "A")
  new Foo with A
else if (fooType == "B")
  new Foo with B
That works just fine for a simple case like this. My issue is that multiple traits can be mixed into the same instance based on other conditions, and on top of that the class being instantiated has a fair number of parameters, so this leads to a pretty ugly conditional block.
What I would like to do is determine the traits to be mixed in beforehand, something like this (which I know is not legal Scala code):
val t1 = fooType match {
  case "a" => A
  case "b" => B
}
val t2 = fooScope match {
  case "x" => X
  case "y" => Y
}
new Foo with t1 with t2
Where A, B, X, and Y are all previously defined traits, and fooType and fooScope are inputs to my function. I'm not sure if there is anything I can do which is somewhat similar to the above, but any advice would be appreciated.
Thanks
I believe what you want to do is not possible. In Scala the type of an object has to be known at compile time, so new Foo with A works but new Foo with t1 will not because t1 is resolved only at run time.
A couple of ideas to throw in the mix...
Non-runtime Factory
Take the complicated, conditional-heavy code you speak of and use it to put together a list of the possible combinations. That list can then be used to generate all the classes. A very simple example:
(for(a <- Seq("A", "B"); b <- Seq("X", "Y")) yield
s"class Foo$a$b extends $a with $b").mkString("\n")
which'll produce the following:
class FooAX extends A with X
class FooAY extends A with Y
class FooBX extends B with X
class FooBY extends B with Y
Then simply:
val ax = new FooAX()
which is nice because it's strongly typed and everyone will know exactly what type it is everywhere, without much casting/inference headache.
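If the runtime still has to pick the combination from string inputs, a small factory over those pre-written classes keeps the conditional in exactly one place. A rough sketch, with all names illustrative and a plain Foo/A/B/X/Y hierarchy assumed:

class Foo
trait A; trait B; trait X; trait Y

class FooAX extends Foo with A with X
class FooAY extends Foo with A with Y
class FooBX extends Foo with B with X
class FooBY extends Foo with B with Y

// The ugly conditional now lives in one factory method. Note the factory's
// result type is the common supertype, so the full static type is only kept
// when you instantiate a concrete class directly, as above.
def makeFoo(fooType: String, fooScope: String): Foo = (fooType, fooScope) match {
  case ("a", "x") => new FooAX
  case ("a", "y") => new FooAY
  case ("b", "x") => new FooBX
  case ("b", "y") => new FooBY
}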
Delegation
There's an interesting non-reflection/classloader technique here:
https://github.com/b-studios/MixinComposition
which takes traits as parameters and then simply delegates to the appropriate trait.
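To give a feel for the delegation idea without the library, here is a hand-rolled sketch (this is not MixinComposition's actual API; the trait members are made up for illustration):

// The composed class implements both traits by forwarding every member
// to a wrapped implementation of that trait.
trait A { def a: String }
trait B { def b: Int }

class AWithB(aImpl: A, bImpl: B) extends A with B {
  def a: String = aImpl.a // forward A's members
  def b: Int    = bImpl.b // forward B's members
}

// Which implementations get wrapped can now be decided at runtime.
val ab: A with B = new AWithB(new A { def a = "hello" }, new B { def b = 42 })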
Related
I am new to Scala.
I have a class A that extends a class C. I also have a class B that extends C.
I want function objects of type A -> B to extend C as well (and also other derived types, such as A -> (A -> B)). But I read the following in "Programming in Scala":
A function literal is compiled into a class that when instantiated at runtime
is a function value.
Is there some way of automatically letting A -> B extend C, other than manually having to create a new class that represents the function?
Functions in Scala are modelled via the FunctionN traits. For example, simple one-input-one-output functions are all instances of the following trait:
trait Function1[-T1, +R] extends AnyRef
So what you're asking is "how can I make instances of Function1 also become subclasses of C". This is not doable via standard subtyping/inheritance, because obviously we can't modify the Function1 trait to make it extend your custom class C. Sure, we could make up a new class to represent the function as you suggested, but that will only take us so far and is not easy to implement; not to mention that any function you want to use as a C would have to be converted to your pseudo-function trait first, which makes things horrible.
What we can do, however, is create a typeclass which then contains an implementation for A -> B, among others.
Let's take the following code as example:
trait A
trait B
trait C[T]
object C {
  implicit val fa: C[A] = new C[A] {}
  implicit val fb: C[B] = new C[B] {}
  implicit val fab: C[Function1[A, B]] = new C[Function1[A, B]] {}
}

object Test extends scala.App {
  val f: A => B = (a: A) => new B {}

  def someMethod[Something: C](s: Something) = {
    // uses "s", for example:
    println(s)
  }

  someMethod(f) // prints something like Test$$$Lambda$6/1744347043@dfd3711
}
You haven't specified your motivation for making A -> B extend C, but obviously you want to be able to put A, B and A -> B under the "same umbrella" because you have, say, some method (called someMethod) which takes a C, so with inheritance you could pass it values of type A, B or A -> B.
With a typeclass you achieve the same thing, with some extra advantages, such as e.g. adding D to the family one day without changing existing code (you would just need to implement an implicit value of type C[D] somewhere in scope).
So instead of having someMethod take instances of C, it simply takes something (let's call it s) of some type (let's call it Something), with the constraint that C[Something] must exist. If you pass something for which an instance of C doesn't exist, you will get an error:
trait NotC
someMethod(new NotC {})
// Error: could not find implicit value for evidence parameter of type C[NotC]
You achieve the same thing - you have a family of C whose members are A, B and A => B - but you sidestep the subtyping problems.
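For instance, extending the family later might look like this (D and fd are made-up names, just to illustrate the point above):

// Adding a new member to the C family without touching existing code:
trait D
implicit val fd: C[D] = new C[D] {}

someMethod(new D {}) // compiles now that a C[D] instance is in scope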
I've seen code like this a couple of times in Scala libraries. What does it mean?
trait SecuredSettings {
this: GlobalSettings =>
def someMethod = {}
...
}
This trick is called a "self-type annotation".
It actually does two separate things at once:
Introduces a local alias for the this reference (useful when you introduce nested classes, because then you have several this objects in scope); a short sketch of this follows below.
Enforces that the given trait can only be mixed into subtypes of some type (this way you can assume certain methods are in scope).
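A minimal sketch of the alias form (the first point), assuming a made-up Outer/Inner pair:

class Outer { outer => // "outer" is now an alias for Outer's this
  val name = "outer"
  class Inner {
    // here "this" refers to the Inner instance, but "outer" still names
    // the enclosing Outer instance
    def describe = s"inner of ${outer.name}"
  }
}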
Google for "scala self type annotation" for many discussion about this subject.
scala-lang.org contains a pretty decent explanation of this feature:
http://docs.scala-lang.org/tutorials/tour/explicitly-typed-self-references.html
There are numerous patterns which use this trick in non-obvious ways. For a start, look here:
http://marcus-christie.blogspot.com/2014/03/scala-understanding-self-type.html
trait A {
def foo
}
class B { self: A =>
def bar = foo //compiles
}
val b = new B //fails
val b = new B with A //compiles
It means that any instance of B must also mix in (inherit) A. B is not an A, but its instances are promised to be, so you can write B's code as if it were already an A.
There are two ways of defining a method for two different classes inheriting the same trait in Scala.
sealed trait Z { def minus: String }
case class A() extends Z { def minus = "a" }
case class B() extends Z { def minus = "b" }
The alternative is the following:
sealed trait Z {
  def minus: String = this match {
    case A() => "a"
    case B() => "b"
  }
}
case class A() extends Z
case class B() extends Z
The first method repeats the method name, whereas the second method repeats the class name.
I think the first method is the best to use because the code is kept separated. However, I often find myself using the second one for complicated methods, because adding additional arguments can then be done very easily, for example like this:
sealed trait Z {
  def minus(word: Boolean = false): String = this match {
    case A() => if (word) "ant" else "a"
    case B() => if (word) "boat" else "b"
  }
}
case class A() extends Z
case class B() extends Z
What are other differences between those practices? Are there any bugs that are waiting for me if I choose the second approach?
EDIT:
I was quoted the open/closed principle, but sometimes I need to modify not only the output of the functions depending on new case classes, but also the input, because of code refactoring. Is there a better pattern than the first one? If I want to add the previously mentioned functionality to the first example, it yields this ugly code where the input is repeated:
sealed trait Z { def minus(word: Boolean): String; def minus: String = minus(false) }
case class A() extends Z { def minus(word: Boolean) = if (word) "ant" else "a" }
case class B() extends Z { def minus(word: Boolean) = if (word) "boat" else "b" }
I would choose the first one.
Why? Mainly to respect the Open/Closed Principle.
Indeed, if you want to add another subclass, say case class C, you'll have to modify the supertrait/superclass to insert the new condition... ugly.
Your scenario has a Java analogue: the template/strategy pattern versus a conditional.
UPDATE:
In your last scenario, you can't avoid the "duplication" of the input. Indeed, parameter types in Scala aren't inferable.
It's still better to have cohesive methods than to blend everything into one method taking the union of all the parameters the individual methods would need.
Just imagine ten conditions in your supertrait method. What if you inadvertently change the behavior of one of them? Each change would be risky, and the supertrait's unit tests would have to run every time you modify it...
Moreover, inadvertently changing an input parameter (not a BEHAVIOR) is not "dangerous" at all. Why? Because the compiler would tell you that a parameter or parameter type is no longer relevant.
And if you want to change it and do the same for every subclass... ask your IDE; it loves refactoring things like this in one click.
As this link explains:
Why open-closed principle matters:
No unit testing required.
No need to understand the source code of an important and huge class.
Since the drawing code is moved to the concrete subclasses, there's a reduced risk of affecting old functionality when new functionality is added.
UPDATE 2:
Here's a sample avoiding the input duplication, fitting your expectation:
sealed trait Z {
def minus(word: Boolean): String = if(word) whenWord else whenNotWord
def whenWord: String
def whenNotWord: String
}
case class A() extends Z { def whenWord = "ant"; def whenNotWord = "a"}
Thanks type inference :)
Personally, I'd stay away from the second approach. Each time you add a new subclass of Z, you have to touch the shared minus method, potentially putting at risk the behavior tied to the existing implementations. With the first approach, adding a new subclass has no potential side effects on the existing structures. There might be a little of the Open/Closed Principle at work here, and your second approach might violate it.
The Open/Closed principle can be violated with both approaches; they are orthogonal to each other. The first one lets you easily add a new type and implement the required methods; it breaks the Open/Closed principle if you need to add a new method to the hierarchy, or refactor method signatures to the point that client code breaks. That is, after all, the reason default methods were added to Java 8 interfaces: so that old APIs could be extended without requiring client code to adapt.
This approach is typical of OOP.
The second approach is more typical of FP. In this case it is easy to add methods, but hard to add a new type (that is where it breaks O/C). It is a good approach for closed hierarchies; typical examples are algebraic data types (ADTs). A standardized protocol which is not meant to be extended by clients could be a candidate.
Languages struggle to let you design APIs that have both benefits: easy to add types as well as to add methods. This problem is known as the Expression Problem. Scala offers the typeclass pattern to address it, which lets you add functionality to existing types in an ad-hoc and selective manner.
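As a rough illustration of that typeclass approach applied to this question's A/B example (the Minus typeclass and its members are made-up names):

sealed trait Z
case class A() extends Z
case class B() extends Z

// The operation lives outside the hierarchy, in a typeclass...
trait Minus[T] { def minus(t: T): String }

object Minus {
  implicit val minusA: Minus[A] = new Minus[A] { def minus(t: A) = "a" }
  implicit val minusB: Minus[B] = new Minus[B] { def minus(t: B) = "b" }
}

// ...so new types (new implicit instances) and new operations (new
// typeclasses) can both be added without editing existing code.
def minus[T](t: T)(implicit m: Minus[T]): String = m.minus(t)

minus(A()) // "a"
minus(B()) // "b"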
Which one is better depends on your use case.
Starting in Scala 3, you can use trait parameters (just like classes have parameters), which simplifies things quite a lot in this case:
trait Z(x: String) { def minus: String = x }
case class A() extends Z("a")
case class B() extends Z("b")
A().minus // "a"
B().minus // "b"
Here's some sample scala code.
abstract class A(var x: Any) {
  def copy(): A
}
class B(i: Int) extends A(i) {
  override def copy() = new B(i)
}
class C(s: String) extends A(s) {
  override def copy() = new C(s)
}
//here's the tricky part
trait T1 extends A {
  var printCount = 0
  def print = {
    printCount = printCount + 1
    println(x)
  }
  override def copy() = ???
}
trait T2 extends A {
  var doubleCount = 0
  def doubleIt = {
    doubleCount = doubleCount + 1
    x = x + x // intent: "double" x (illustrative only; Any has no +)
  }
  override def copy() = ???
}
val q1 = new C("c") with T1 with T2
val q2 = new B(1) with T2 with T1
OK, as you've likely guessed, here's the question.
How can I implement the copy methods in T1 and T2 such that, whether they are mixed in with B, C, or each other (T2/T1), I get a copy of the whole ball of wax?
For example, q2.copy should return a new B with T2 with T1, and q1.copy should return a new C with T1 with T2.
Thanks!
Compositionality of object construction
The basic problem here is that, in Scala as well as in all other languages I know of, object construction doesn't compose. Consider two abstract operations op1 and op2, where op1 makes property p1 true and op2 makes property p2 true. These operations are composable with respect to a composition operation ○ if op1 ○ op2 makes both p1 and p2 true. (Simplified; properties also need a composition operation, for example conjunction such as and.)
Let's consider the new operation and the property that new A(): A, that is, an object created by calling new A is of type A. The new operation lacks compositionality, because there is no operation/statement/function f in Scala that allows you to compose new A and new B such that f(new A, new B): A with B. (Simplified, don't think too hard about whether A and B must be classes or traits or interfaces or whatever).
Composing with super-calls
Super-calls can often be used to compose operations. Consider the following example:
abstract class A { def op() {} }
class X extends A {
var x: Int = 0
override def op() { x += 1 }
}
trait T extends A {
var y: String = "y"
override def op() { super.op(); y += "y" }
}
val xt = new X with T
println(s"${xt.x}, ${xt.y}") // 0, y
xt.op()
println(s"${xt.x}, ${xt.y}") // 1, yy
Let X.op's property be "x is increased by one" and let T.op's property be "y's length is increased by one". The composition achieved with the super-call fulfils both properties. Hooooray!
Your problem
Let's assume that you are working with a class A which has a field x, a trait T1 which has a field y and another trait T2 which has a field z. What you want is the following:
val obj: A with T1 with T2
// update obj's fields
val objC: A with T1 with T2 = obj.copy()
assert(obj.x == objC.x && obj.y == objC.y && obj.z == objC.z)
Your problem can be divided into two compositionality-related sub-problems:
Create a new instance of the desired type. This should be achieved by a construct method.
Initialise the newly created object such that all its fields have the same values (for brevity, we'll only work with value-typed fields, not reference-typed ones) as the source object. This should be achieved by an initialise method.
The second problem can be solved via super-calls, the first cannot. We'll consider the easier problem (the second) first.
Object initialisation
Let's assume that the construct method works as desired and yields an object of the right type. Analogous to the composition of the op method in the initial example, we could implement initialise such that each class/trait A, T1 and T2 implements initialise(objC) by setting the fields it knows about to the corresponding values from this (the individual effects), and by calling super.initialise(objC) in order to compose those individual effects.
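A self-contained sketch of that initialise composition (only A's x and T1's y are shown for brevity; the construct part is deliberately left out because it is the part that does not compose):

class A {
  var x: Int = 0
  def initialise(target: A): Unit = { target.x = x }
}

trait T1 extends A {
  var y: String = "y"
  override def initialise(target: A): Unit = {
    super.initialise(target) // copy A's fields first
    target match {           // then copy T1's own field
      case t: T1 => t.y = y
      case _     =>
    }
  }
}

// Construction still has to name the full type somewhere:
class AWithT1 extends A with T1

val src = new AWithT1; src.x = 42; src.y = "hello"
val dst = new AWithT1
src.initialise(dst) // dst.x == 42, dst.y == "hello"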
Object creation
As far as I can see, there is no way to compose object creation. If an instance of A with T1 with T2 is to be created, then the statement new A with T1 with T2 must be executed somewhere. If super-calls could help here, then something like
val a: A = new A // corresponds to the "deepest" super-call
val at1: A with T1 = a with new T1
would be necessary.
Possible solutions
I implemented a solution (see this gist) based on abstract type members and explicit mixin classes (class AWithT1WithT2 extends A with T1 with T2; val a = new AWithT1WithT2 instead of val a = new A with T1 with T2). It works and it is type-safe, but it is neither particularly nice nor concise. The explicit mixin classes are necessary, because the construct method of new A with T1 with T2 must be able to name the type it creates.
Other, less type-safe solutions are probably possible, for example, casting via asInstanceOf or reflection. I haven't tried something along those lines, though.
Scala macros might also be an option, but I haven't used them yet and thus don't know enough about them. A compiler plugin might be another heavy-weight option.
That is one of the reasons why extending case classes is deprecated. You should get a compiler warning for that. How should the copy method that is defined in A know that there may also be a T or whatever mixed in? By extending the case class, you break all the assumptions the compiler made when generating methods like equals, copy and toString.
The easiest and simplest answer is to make your concrete types case classes. Then you get the compiler-supplied copy method, which accepts named parameters for all of the class's constructor parameters, so you can selectively change values in the new instance while keeping the rest from the original.
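For reference, a tiny illustration of that compiler-supplied copy (Person is just an example class, unrelated to the question):

case class Person(name: String, age: Int)

val p     = Person("Ada", 36)
val older = p.copy(age = 37) // name is kept, only age changes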
I have the following data model which I'm going to do pattern matching against later:
abstract class A
case class C(s:String) extends A
abstract class B extends A
case class D(i:Int) extends B
case class E(s:Int, e:Int) extends B
A is the abstract supertype of the hierarchy. C is a concrete subclass of A. The other concrete subclasses of A are subclasses of B, which is in turn a subclass of A.
Now if I write something like this, it works:
def matchType(a: A): Unit = a match {
  case c: C => println("C")
  case b: B => println("B")
}
However, in a for loop I cannot match against B. I assume that I need a constructor pattern, but since B is abstract, there is no constructor pattern for B.
val list: List[A] = List(C("a"), D(1), E(2,5), ...)
for (b: B <- list) println(b) // Compile error
for (b @ B <- list) println(b) // Compile error
Here, I would like to print only B instances. Any workaround for this case?
You can use collect:
list.collect { case b: B => println(b) }
If you want to better understand this, I recommend reading about partial functions. Here, for example.
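Since collect is driven by a partial function, it can also transform while it filters, which is often what you actually want instead of println:

// Keep only the B instances, already typed as List[B]:
val onlyBs: List[B] = list.collect { case b: B => b }
onlyBs.foreach(println)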
Sergey is right; you'll have to give up on for if you want to pattern match and keep only the B instances. If you still want to use a for comprehension for whatever reason, I think one way is just to resort to a guard:
for (b <- list if b.isInstanceOf[B]) println(b)
But it's always best to pick pattern-matching instead of isInstanceOf. So I'd go with the collect suggestion (if it made sense in the context of the rest of my code).
Another suggestion would be to define a companion object for B, with the same name, and give it an unapply method:
abstract class A
case class C(s:String) extends A
abstract class B extends A
object B { def unapply(b: B) = Option(b) } // Added a companion to B
case class D(i:Int) extends B
case class E(s:Int, e:Int) extends B
Then you can do this:
for (B(b) <- list) println(b)
So that's not the 'constructor' of B, but the companion's unapply method.
It works, and that's what friends are for, right?
(See http://www.scala-lang.org/node/112 )
If you ask me, the fact that you can't use pattern matching here is an unfortunate inconsistency in Scala. Indeed, Scala does let you pattern match in for comprehensions, as this example shows:
val list: List[A] = List(C("a"), D(1), E(2,5))
for ((b:B,_) <- list.map(_ -> null)) println(b)
Here I temporarily wrap the elements into pairs (with a dummy and unused second value) and then pattern match for a pair whose first element is of type B. As the output shows, you get the expected behaviour:
D(1)
E(2,5)
So there you go, scala does support filtering based on pattern matching (even when matching by type), it just seems that the grammar does not handle pattern matching a single element by type.
Obviously I am not advising to use this trick, this was just to illustrate. Using collect is certainly better.
Then again, there is another, more general solution if for some reason you really fancy for comprehensions more than anything:
object Value {
def unapply[T]( value: T ) = Some( value )
}
for ( Value(b:B) <- list ) println(b)
We just introduced a dummy extractor in the Value object which does nothing, so that Value(b:B) has the same meaning as just b:B, except that the former compiles. And unlike my earlier trick with pairs, it is relatively readable, and Value only has to be written once; you can then use it at will (in particular, there is no need to write a new extractor for each type you want to pattern match against, as in @Faiz's answer). I'll let you find a better name than Value, though.
Finally, there is another workaround that works out of the box (credit goes to Daniel Sobral), but it is slightly less readable and requires a dummy identifier (here foo):
for ( b @ (foo: B) <- list ) println(b)
// or similarly:
for ( foo @ (b: B) <- list ) println(b)
My 2 cents: you can add a condition to the for comprehension checking the type, but that would NOT be as elegant as using collect, which takes only instances of class B.