What is the preferred way to factor out behavior in Scala?

Let's say I have a couple of classes internally sharing some behavior like
def workspace = Plugin.get.reallyGet.getWorkspace
What is the best way to factor it out? I see two possibilities that can be used equivalently from client code.
trait WorkspaceProvider {
  def workspace = Plugin.get.reallyGet.getWorkspace
}
and mix it in or
object WorkspaceProvider {
  def workspace = Plugin.get.reallyGet.getWorkspace
}
and import it. What would you prefer and why?

The former is preferable. The latter is essentially static, un-mockable and hard to test.
Since you're thinking in terms of coupling (a very good thing), you should familiarize yourself with the Cake Pattern (it is covered extensively on the net, starting with the paper in which the concept was first described).
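To make the testability point concrete, here is a minimal, self-contained sketch (Workspace, Service and the "fake" stub are illustrative stand-ins, since Plugin.get.reallyGet.getWorkspace isn't available here): with the trait form, a test can simply override workspace.
case class Workspace(name: String)

trait WorkspaceProvider {
  // stand-in for Plugin.get.reallyGet.getWorkspace
  def workspace: Workspace = Workspace("real")
}

class Service { this: WorkspaceProvider =>
  def describe: String = workspace.name
}

// In a test, mix in a stub that overrides workspace:
val testService = new Service with WorkspaceProvider {
  override def workspace = Workspace("fake")
}
// testService.describe == "fake"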

You can define both:
trait WorkspaceProvider {
  def workspace = Plugin.get.reallyGet.getWorkspace
}
object WorkspaceProvider extends WorkspaceProvider
The first form is more flexible. For example, it allows the behavior to be mixed in at instantiation:
trait Foo { this: WorkspaceProvider =>
  def bar = workspace.doSomethingRelevantHere
}
val myFoo = new Foo with WorkspaceProvider
But the second form is more convenient if you just want to use the workspace method, for example in tests, prototypes, etc.
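For completeness, a small sketch of how the object form reads at a use site (report is just an illustrative name):
import WorkspaceProvider.workspace

def report() = println(workspace)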
EDIT:
For a little more elaboration about this approach, check out "Selfless Trait Pattern", where Bill Venners shows how it is implemented in ScalaTest.

Related

Pattern Matching Design

I recently wrote some code like the block below, and it left me thinking that the design could be improved if I were more knowledgeable about functional programming abstractions.
sealed trait Foo
case object A extends Foo
case object B extends Foo
case object C extends Foo
.
.
.
object Foo {
  private def someFunctionSemanticallyRelatedToA() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToB() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToC() = { /* do stuff */ }
  .
  .
  .
  def somePublicFunction(x: Foo) = x match {
    case A => someFunctionSemanticallyRelatedToA()
    case B => someFunctionSemanticallyRelatedToB()
    case C => someFunctionSemanticallyRelatedToC()
    .
    .
    .
  }
}
My questions are:
Is somePublicFunction() (or even the whole design) suffering from a code smell? My concern is that the list of value constructors could grow quite large.
Is there a better FP abstraction to handle this kind of design more elegantly, or at least more concisely?
You've just run into the expression problem. In your code sample, the problem is that potentially every time you add or remove a case from your Foo algebraic data type, you'll need to modify every single match (like in somePublicFunction) against values of Foo. In Nimrand's answer, the problem is in the opposite end of the spectrum: you can add or remove cases from Foo easily, but every time you want to add or remove a behaviour (a method), you'll need to modify every subclass of Foo.
There are various proposals to solve the expression problem, but one interesting functional way is Oleg Kiselyov's Typed Tagless Final Interpreters, which replaces each case of the algebraic data type with a function that returns some abstract value that's considered to be equivalent to that case. Using generics (i.e. type parameters), these functions can all have compatible types and work with each other no matter when they were implemented. E.g., I've implemented an example of building and evaluating an arithmetic expression tree using TTFI: https://github.com/yawaramin/scala-ttfi
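To make that concrete, here is a minimal tagless-final sketch (all names are illustrative, not taken from the question): each case of the data type becomes a method on an algebra, so new cases and new interpreters can be added without editing existing matches.
trait FooAlg[T] {
  def a: T
  def b: T
  def c: T
}

// One interpreter: render each case as a String.
object Show extends FooAlg[String] {
  def a = "handled A"
  def b = "handled B"
  def c = "handled C"
}

// A program written against the algebra rather than against concrete cases.
def program[T](alg: FooAlg[T]): T = alg.b

// program(Show) == "handled B"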
Your explanation is a bit too abstract to give a confident answer. However, if the list of subclasses of Foo is likely to grow or change in the future, I would be inclined to make the behaviour an abstract method of Foo and then implement the logic for each case in the subclasses. Then you just call the method on the Foo value and polymorphism handles everything neatly.
This keeps the code specific to each object with the object itself, which keeps things more neatly organized. It also means that you can add new subclasses of Foo without having to jump around to multiple places in the code to augment the existing match statements.
Case classes and pattern matching work best when the set of subclasses is relatively small and fixed. For example, Option[T] has only two subclasses, Some[T] and None. That will NEVER change, because to change it would be to fundamentally change what Option[T] represents. Therefore, it's a good candidate for pattern matching.
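A minimal sketch of the polymorphic alternative described in this answer (doSomething and the println bodies are illustrative): each case carries its own behaviour, so adding a case means adding one object rather than touching every match.
sealed trait Foo {
  def doSomething(): Unit
}
case object A extends Foo {
  def doSomething(): Unit = println("behaviour specific to A")
}
case object B extends Foo {
  def doSomething(): Unit = println("behaviour specific to B")
}

// Callers no longer pattern match; they just invoke the method:
def somePublicFunction(x: Foo): Unit = x.doSomething()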

Drawbacks of using typeclasses in scala

There are some frameworks that fully embrace the typeclass pattern; scalaz and shapeless would be good examples. So there are certainly cases where typeclasses are preferable to ordinary Java-style classes and polymorphism.
I'm in awe of the expressive power of implicit evidence, and I'm curious why the approach sees so few practical applications. What reasons compel Scala programmers to use plain classes? Typeclasses obviously cost something in verbosity and at run time, but is there any other reason?
I came to Scala without prior Java experience and wonder whether I've missed some essential benefits that classic Scala/Java classes provide.
I'm looking for compelling use cases showing areas where typeclasses are insufficient or ineffective.
Typeclasses and inheritance enable reuse in different ways. Inheritance excels at providing correct functionality for changed internals.
class Foo { def foo: String = "foo" }
def fooUser(foo: Foo) { println(foo.foo) }
class Bar extends Foo {
  private var annotation = List.empty[String]
  def annotate(s: String) { annotation = s :: annotation }
  override def foo = ("bar" :: annotation.map("#" + _)).mkString(" ")
}
Now, everyone who uses Foo will be able to get the correct value if you give them a Bar, even if they only know that the type is a Foo. You don't have to have anticipated that you might want pluggable functionality (except by not labeling foo final). You don't need to keep track of the type or keep passing a witness instance forwards; you just use Bar wherever you want in place of Foo and it does the right thing. This is a big deal. If you want a fixed interface with easily-modifiable functionality under the hood, inheritance is your thing.
In contrast, inheritance is not so great when you have a fixed set of data types with an easily-modifiable interface. Sorting is a great example. Suppose you want to sort Foo. If you try
class Foo extends Sortable[Foo] {
  def lt(you: Foo) = foo < you.foo
  def foo = "foo"
}
you could pass this to anything that could sort a Sortable. But what if you want to sort by length of name not with the standard sort? Well,
class Foo extends LexicallySortable[Foo] with LengthSortable[Foo] {
  def lexicalLt(you: Foo) = foo < you.foo
  def lengthLt(you: Foo) = foo.length < you.foo.length
  def foo = "foo"
}
This rapidly becomes hopeless, especially since you have to hunt down all subclasses of Foo and make sure they are updated properly. You are much better off deferring the less-than computation to a typeclass which you can swap out as needed. (Or to a regular class, which you must always reference explicitly.) This kind of automatically-selected functionality is also a big deal.
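Here is a small sketch of that deferral using the standard Ordering typeclass (Foo and the orderings are illustrative): the same data can be sorted in different ways by supplying a different instance, without touching Foo or its subclasses.
case class Foo(name: String)

object FooOrderings {
  val lexical: Ordering[Foo]  = Ordering.by((f: Foo) => f.name)
  val byLength: Ordering[Foo] = Ordering.by((f: Foo) => f.name.length)
}

val foos = List(Foo("bbb"), Foo("a"), Foo("cc"))
foos.sorted(FooOrderings.lexical)  // List(Foo(a), Foo(bbb), Foo(cc))
foos.sorted(FooOrderings.byLength) // List(Foo(a), Foo(cc), Foo(bbb))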
You can't really replace one with the other. When you need to easily incorporate new kinds of data to a fixed interface, use inheritance. When you need a few kinds of underlying data but need to easily supply new functionality, use type classes. When you need both, you will have a lot of work to do whichever way you go about it, so use to taste.

Scala magic to make a private/protected member visible?

I am using an API where a trait is given like this:
package pkg
trait Trait {
  private[pkg] def f = ...
  private[pkg] val content = ...
}
I would like to access the variable content and function f in my code, using the API from a Jar file (so I cannot modify the original code to remove the private definition).
What I was able to come up with as a first solution is to create a new bridge class in the same package, that helps me access the private/protected member functions like this:
package pkg
trait PkgBridge {
  def f(t: Trait) = t.f
  def getContent(t: Trait) = t.content
}
This way I can call the package private members from my code.
I was wondering whether there is a more sophisticated way or a common pattern for this kind of situation (some magic with implicits, perhaps?).
Thanks!
What you are doing works, is probably as good a way to do it as any, but is discouraged.
If something is package private, it is probably an implementation detail whose interface has not been specified well enough to risk exposing anyone to it, yet which could not be made completely private either. So be careful! There may be good reasons not to do this.
Aside from reflection, the only way within Scala to get at package private content is to be in that package, so your method is an appropriate one.
Note that this alternative might be useful as well:
package pkg {
  trait TraitBridge extends Trait {
    def fBridge = f
    def contentBridge = content
  }
}
and then you can
class MyClass extends TraitBridge { ... }
to specifically pick up the extensions that you want to have access to (under alternate names).
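For reference, a self-contained, single-file sketch of this alternative (the member bodies are stand-ins, since the real ones are elided above):
package pkg {
  trait Trait {
    private[pkg] def f: String = "hidden f"
    private[pkg] val content: String = "hidden content"
  }

  trait TraitBridge extends Trait {
    def fBridge = f
    def contentBridge = content
  }
}

package myapp {
  class MyClass extends pkg.TraitBridge {
    def show(): Unit = println(s"$fBridge / $contentBridge")
  }
}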

Why do compile-time generative techniques for structural typing prevent separate compilation?

I was reading (ok, skimming) Dubochet and Odersky's Compiling Structural Types on the JVM and was confused by the following claim:
Generative techniques create Java interfaces to stand in for structural types on the JVM. The complexity of such techniques lies in that all classes that are to be used as structural types anywhere in the program must implement the right interfaces. When this is done at compile time, it prevents separate compilation.
(emphasis added)
Consider the autoclose example from the paper:
type Closeable = Any { def close(): Unit }
def autoclose(t: Closeable)(run: Closeable => Unit): Unit = {
  try { run(t) }
  finally { t.close }
}
Couldn't we generate an interface for the Closeable type as follows:
public interface AnonymousInterface1 {
  public void close();
}
and transform our definition of autoclose to
// UPDATE: using a view bound here, so implicit conversion is applied on-demand
def autoclose[T <% AnonymousInterface1](t: T)(run: T => Unit): Unit = {
  try { run(t) }
  finally { t.close }
}
Then consider a call-site for autoclose:
val fis = new FileInputStream(new File("f.txt"))
autoclose(fis) { ... }
Since fis is a FileInputStream, which does not implement AnonymousInterface1, we need to generate a wrapper:
class FileInputStreamAnonymousInterface1Proxy(val self: FileInputStream)
    extends AnonymousInterface1 {
  def close() = self.close()
}

object FileInputStreamAnonymousInterface1Proxy {
  implicit def fis2proxy(fis: FileInputStream): FileInputStreamAnonymousInterface1Proxy =
    new FileInputStreamAnonymousInterface1Proxy(fis)
}
I must be missing something, but it's unclear to me what it is. Why would this approach prevent separate compilation?
As I recall from a discussion on the scala-internals mailing list, the problem is that object identity, which is preserved by the current approach to compilation, is lost when you wrap values.
Think about it. Consider class A
class A { def a1(i: Int): String = { ... }; def a2(s: String): Boolean = { ... } }
Somewhere in the program, possibly in a separately compiled library, this structural type is used:
{ def a1(i: Int): String }
and elsewhere, this one is used:
{ def a2(s: String): Boolean }
How, apart from global analysis, is class A to be decorated with the interfaces necessary to allow it to be used where those far-flung structural types are specified?
If every possible structural type that a given class could conform to is used to generate an interface capturing that structural type, there's an explosion of such interfaces. Remember that a structural type may mention more than one required member, so for a class with N public members (vals or defs), any subset of those N members could appear as a structural type; that's the power set, whose cardinality is 2^N.
I actually use the implicit approach (using typeclasses) you describe in the Scala ARM library. Remember that this is a hand-coded solution to the problem.
The biggest issue here is implicit resolution. The compiler will not generate wrappers for you on the fly; you must do so ahead of time and make sure they are in implicit scope. This means (for Scala-ARM) that we provide "common" wrappers for whatever resources we can, and fall back to reflection-based types when we can't find the appropriate wrapper. This has the advantage of allowing the user to specify their own wrapper using the normal implicit rules.
See the Resource type-trait and all of its predefined wrappers.
Also, I blogged about this technique describing the implicit resolution magic in more detail: Monkey Patching, Duck Typing and Type Classes.
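For a flavour of the hand-coded approach, here is an illustrative typeclass sketch (not Scala-ARM's actual Resource API): the wrapper is an instance written ahead of time and picked up from implicit scope, rather than an interface generated by the compiler.
trait CanClose[-T] {
  def close(t: T): Unit
}

object CanClose {
  // A "common" wrapper provided ahead of time for a known type.
  implicit val closeInputStream: CanClose[java.io.InputStream] =
    new CanClose[java.io.InputStream] {
      def close(t: java.io.InputStream): Unit = t.close()
    }
}

def autoclose[T](t: T)(run: T => Unit)(implicit cc: CanClose[T]): Unit =
  try run(t) finally cc.close(t)

// Usage: the instance is resolved from implicit scope, with no
// compiler-generated interfaces or proxies involved.
// autoclose(new java.io.FileInputStream("f.txt")) { in => /* read from in */ }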
In any case, you probably don't want to hand-encode a type class every time you use structural types. If you actually wanted the compiler to create the interface and do the magic for you automatically, it could get messy. Every time you define a structural type, the compiler would have to create an interface for it (somewhere in the ether, perhaps?). We would now need namespaces for these things. Also, with every call the compiler would have to generate some kind of wrapper-implementation class (again with the namespace issue). Finally, if we have two different methods with the same structural type that are compiled separately, we've just exploded the number of interfaces we require.
Not that the hurdle couldn't be overcome, but if you want structural typing with "direct" access for particular types, the type-trait pattern seems to be your best bet today.

Scala: convert a return type into a custom trait

I've written a custom trait which extends Iterator[A] and I'd like to be able to use the methods I've written on an Iterator[A] which is returned from another method. Is this possible?
trait Increment[+A] extends Iterator[A] {
  def foo() = "something"
}
class Bar(source: BufferedSource) {
  // this ain't working
  def getContents(): Increment[+A] = source getLines
}
I'm still trying to get my head around the whole implicits thing and am not having much luck writing a method in the Bar object definition. How would I go about wrapping such an item to work in the way I'd like above?
Figured it out. Took me a few tries to understand:
object Increment {
  implicit def convert(input: Iterator[String]): Increment[String] =
    new Increment[String] {
      def next() = input.next()
      def hasNext = input.hasNext
    }
}
and I'm done. So amazingly short.
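For what it's worth, a usage sketch with that conversion in implicit scope (the file name is only an example):
val inc: Increment[String] = scala.io.Source.fromFile("f.txt").getLines()
println(inc.foo()) // "something"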
I don't think this is possible without playing tricks. Mixin inheritance happens at compile time, when it can be type-checked statically, and it is always targeted at another class, trait, etc. Here you are trying to tack a trait onto an existing object "on the fly" at runtime.
There are workarounds like implicit conversions, or maybe proxies. Probably the "cleanest" way would be to make Increment a wrapper class delegating to an underlying Iterator, as sketched below. Depending on your use case there might be other solutions.
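A minimal sketch of that wrapper suggestion, assuming we keep the name Increment and the foo method from the question:
class Increment[A](underlying: Iterator[A]) extends Iterator[A] {
  def hasNext: Boolean = underlying.hasNext
  def next(): A = underlying.next()
  def foo(): String = "something"
}

// Usage: wrap whatever Iterator the other method returns.
// val inc = new Increment(scala.io.Source.fromFile("f.txt").getLines())
// inc.foo()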