Quite often, I want to "replace" a single method of a given object.
foo: Foo
foo.bar(i) // original
foo.baz(s) // replace this implementation
I'll wind up creating a pass-through wrapper class.
class FooWrapper(foo: Foo) extends Foo {
def bar(i: Int) = foo.bar(i)
def baz(s: String) = foo.baz(s)
}
And then
foo: Foo
val foo2 = new FooWrapper(foo) { override def baz(s: String) = ... }
foo2.bar(i)
foo2.baz(s)
This works with traits and classes, and works without modifying the source code of the type.
I use this quite a bit, particularly when adapting libraries or other bits of code.
This can get tedious with lots of methods. The other day, I wanted to replace the shutdown method on an ExecutorService instance, and I had to do this for a dozen methods.
Is this a common idiom, or is this normally done another way? I suspect this could be done nicely with a macro, though I haven't found any existing ones that do this.
You can achieve this with AOP (aspect-oriented programming).
There are many libraries for it; for example, try this one:
https://github.com/adamw/scala-macro-aop
Note that this library is at the proof-of-concept stage for now, so you might want to look for something more mature. I've put it here because I think it shows the concept very clearly.
For your example, you'd write something like:
class FooWrapper(#delegate wrapped: Foo) extends Foo {
def baz(s: String) = ???
}
Related
I've got a class from a library (specifically, com.twitter.finagle.mdns.MDNSResolver). I'd like to extend the class (I want it to return a Future[Set], rather than a Try[Group]).
I know, of course, that I could sub-class it and add my method there. However, I'm trying to learn Scala as I go, and this seems like an opportunity to try something new.
The reason I think this might be possible is the behavior of JavaConverters. The following code:
class Test {
  import collection.mutable.Buffer
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
does not compile, because there is no asScala method on Java's ArrayList. But if I import some new definitions:
class Test {
  import collection.JavaConverters._
  import collection.mutable.Buffer
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
then suddenly there is an asScala method. So that looks like the ArrayList class is being extended transparently.
Am I understanding the behavior of JavaConverters correctly? Can I (and should I) duplicate that methodology?
Scala supports something called implicit conversions. Look at the following:
val x: Int = 1
val y: String = x
The second assignment does not work, because String is expected, but Int is found. However, if you add the following into scope (just into scope, can come from anywhere), it works:
implicit def int2String(x: Int): String = "asdf"
Note that the name of the method does not matter.
So what is usually done is called the pimp-my-library pattern:
class BetterFoo(x: Foo) {
def coolMethod() = { ... }
}
implicit def foo2Better(x: Foo) = new BetterFoo(x)
That allows you to call coolMethod on Foo. This is used so often, that since Scala 2.10, you can write:
implicit class BetterFoo(x: Foo) {
def coolMethod() = { ... }
}
which does the same thing but is obviously shorter and nicer.
So you can do:
implicit class MyMDNSResolver(x: com.twitter.finagle.mdns.MDNSResolver) {
def awesomeMethod = { ... }
}
And you'll be able to call awesomeMethod on any MDNSResolver, if MyMDNSResolver is in scope.
This is achieved using implicit conversions; this feature lets the compiler automatically convert a value of one type to another when a method that doesn't exist on the original type is called.
The pattern you're describing in particular is referred to as "enrich my library", after an article Martin Odersky wrote in 2006. It's still an okay introduction to what you want to do: http://www.artima.com/weblogs/viewpost.jsp?thread=179766
The way to do this is with an implicit conversion. These can be used to define views, and their use to enrich an existing library is called "pimp my library".
I'm not sure whether you need to write a single conversion from Try[Group] to Future[Set], or whether you can write one from Try to Future and another from Group to Set and have them compose. (Note that Scala will not chain two implicit conversions automatically, so the direct conversion may be needed.)
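For the Try-to-Future half, here is a minimal sketch of such a conversion (note the assumptions: this uses the standard library's scala.concurrent.Future, whereas Finagle has its own com.twitter.util.Future, and toFuture is a made-up method name):
import scala.util.{Try, Success, Failure}
import scala.concurrent.Future

object TryConversions {
  // Enrich Try with a (made-up) toFuture method that converts an
  // already-completed Try into an already-completed Future.
  implicit class TryOps[A](val t: Try[A]) extends AnyVal {
    def toFuture: Future[A] = t match {
      case Success(a) => Future.successful(a)
      case Failure(e) => Future.failed(e)
    }
  }
}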
General style question.
As I become better at writing functional code, more of my methods are becoming pure functions. I find that lots of my "classes" (in the loose sense of a container of code) are becoming state free. Therefore I make them objects instead of classes as there is no need to instantiate them.
Now in the Java world, having a class full of "static" methods would seem rather odd, and is generally only used for "helper" classes, like you see with Guava and Commons-* and so on.
So my question is: in the Scala world, is having lots of logic inside "objects" rather than "classes" quite normal, or is there another preferred idiom?
As you mention in your title, objects are singleton classes, not classes with static methods as you say in the text of your question.
And there are a few things that make Scala objects better than both static methods and singletons in the Java world, so it is quite "normal" to use them in Scala.
For one thing, unlike static methods, object methods are polymorphic, so you can easily inject objects as dependencies:
scala> trait Quack {def quack="quack"}
defined trait Quack
scala> class Duck extends Quack
defined class Duck
scala> object Quacker extends Quack {override def quack="QUAACK"}
defined module Quacker
// MakeItQuack expects something implementing Quack
scala> def MakeItQuack(q: Quack) = q.quack
MakeItQuack: (q: Quack)java.lang.String
// ...it can be a class
scala> MakeItQuack(new Duck)
res0: java.lang.String = quack
// ...or it can be an object
scala> MakeItQuack(Quacker)
res1: java.lang.String = QUAACK
This makes them usable without tight coupling and without promoting global state (which are two of the issues generally attributed to both static methods and singletons).
Then there's the fact that they do away with all the boilerplate that makes singletons so ugly and unidiomatic-looking in Java. This is an often overlooked point, in my opinion, and part of what makes singletons so frowned upon in Java even when they are stateless and not used as global state.
Also, the boilerplate you have to repeat in all Java singletons gives the class two responsibilities: ensuring there's only one instance of itself and doing whatever it's supposed to do. The fact that Scala has a declarative way of specifying that something is a singleton relieves the class and the programmer from breaking the single responsibility principle. In Scala you know an object is a singleton and you can just reason about what it does.
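To make the contrast concrete, here is a minimal sketch (Validator is a made-up name; the Java version is shown as a comment for comparison):
// The classic Java singleton needs explicit ceremony:
//
//   public final class Validator {
//       public static final Validator INSTANCE = new Validator();
//       private Validator() {}
//       public boolean validate(String s) { /* ... */ }
//   }
//
// In Scala, the object keyword carries that responsibility by itself:
object Validator {
  def validate(s: String): Boolean = s.nonEmpty
}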
You can also use package objects; e.g., take a look at the scala.math package object here:
https://lampsvn.epfl.ch/trac/scala/browser/scala/tags/R_2_9_1_final/src//library/scala/math/package.scala
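In case you haven't seen one, a package object looks like this (mylib and its members are made-up examples):
// conventionally placed in mylib/package.scala
package object mylib {
  // these members are visible throughout the mylib package
  // and can be imported elsewhere with `import mylib._`
  val DefaultRetries = 3
  def clamp(x: Int, lo: Int, hi: Int): Int = math.max(lo, math.min(x, hi))
}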
Yes, I would say it is normal.
For most of my classes I create a companion object to hold some initialization/validation logic. For example, instead of throwing an exception when validation of constructor parameters fails, the companion object's apply method can return an Option or an Either:
class X(val x: Int) {
require(x >= 0)
}
// ==>
object X {
def apply(x: Int): Option[X] =
if (x < 0) None else Some(new X(x))
}
class X private (val x: Int)
In the companion object one can add a lot of additional logic, such as a cache for immutable objects.
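For example, a toy sketch of such a cache (Point is a made-up example, and the mutable.Map shown here is not thread-safe; real code would use a concurrent map):
class Point private (val x: Int, val y: Int)

object Point {
  // interning cache: at most one instance per (x, y) pair
  private val cache = scala.collection.mutable.Map.empty[(Int, Int), Point]

  def apply(x: Int, y: Int): Point =
    cache.getOrElseUpdate((x, y), new Point(x, y))
}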
objects also work well for sending signals between instances when there is no data to send along with them:
import akka.actor.Actor

object X {
def doSomething(s: String) = ???
}
case class C(s: String)
class A extends Actor {
var calculateLater: String = ""
def receive = {
case X => X.doSomething(calculateLater)
case C(s) => calculateLater = s
}
}
Another use case for objects is to reduce the scope of elements:
// traits with lots of members
trait A
trait B
trait C
trait Trait {
def validate(s: String) = {
import validator._
// use logic of validator
}
private object validator extends A with B with C {
// all members of A, B and C are visible here
}
}
class Class extends Trait {
// no unnecessary members and name conflicts here
}
I want to create a method that generates an implementation of a trait. For example:
trait Foo {
def a
def b(i:Int):String
}
object Processor {
def exec(instance: AnyRef, method: String, params: AnyRef*) = {
//whatever
}
}
class Bar {
def wrap[T] = {
// Here create a new instance of the implementing class, i.e. if T is Foo,
// generate a new FooImpl(this)
}
}
I would like to dynamically generate the FooImpl class like so:
class FooImpl(val wrapped:AnyRef) extends Foo {
def a = Processor.exec(wrapped, "a")
def b(i:Int) = Processor.exec(wrapped, "b", i)
}
Manually implementing each of the traits is not something we would like (lots of boilerplate) so I'd like to be able to generate the Impl classes at compile time. I was thinking of annotating the classes and perhaps writing a compiler plugin, but perhaps there's an easier way? Any pointers will be appreciated.
java.lang.reflect.Proxy can do something quite close to what you want:
import java.lang.reflect.{InvocationHandler, Method, Proxy}
class Bar {
def wrap[T : ClassManifest] : T = {
val theClass = classManifest[T].erasure.asInstanceOf[Class[T]]
theClass.cast(
Proxy.newProxyInstance(
theClass.getClassLoader(),
Array(theClass),
new InvocationHandler {
  def invoke(proxy: AnyRef, method: Method, params: Array[AnyRef]) = {
    // params is null for zero-argument methods
    val args = if (params == null) Array.empty[AnyRef] else params
    // Bar.this, not this: inside this anonymous class, plain `this`
    // would refer to the InvocationHandler rather than the Bar instance
    Processor.exec(Bar.this, method.getName, args: _*)
  }
}))
}
}
With that, you have no need to generate FooImpl.
A limitation is that it will work only for traits in which no methods are implemented. More precisely, if a method is implemented in the trait, calling it will still route to the processor and ignore the implementation.
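For reference, usage would look roughly like this (assuming the Foo trait and the Processor object from the question):
val bar = new Bar
val foo: Foo = bar.wrap[Foo] // a dynamic proxy implementing Foo
foo.b(42)                    // routes to Processor.exec(bar, "b", 42)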
You can write a macro (macros are officially a part of Scala since 2.10.0-M3), something along the lines of Mixing in a trait dynamically. Unfortunately I don't have time right now to compose an example for you, but feel free to ask questions on our mailing list at http://groups.google.com/group/scala-internals.
You can see three different ways to do this in ScalaMock.
ScalaMock 2 (the current release version, which supports Scala 2.8.x and 2.9.x) uses java.lang.reflect.Proxy to support dynamically typed mocks and a compiler plugin to generate statically typed mocks.
ScalaMock 3 (currently available as a preview release for Scala 2.10.x) uses macros to support statically typed mocks.
Assuming that you can use Scala 2.10.x, I would strongly recommend the macro-based approach over a compiler plugin. You can certainly make the compiler plugin work (as ScalaMock demonstrates) but it's not easy and macros are a dramatically superior approach.
I just found the Proxy trait in the API and would like to see one or two examples, along with an explanation of what it is good for.
The Proxy trait provides a useful basis for creating delegates, but note that it only provides implementations of the methods in Any (equals, hashCode, and toString). You will have to implement any additional forwarding methods yourself. Proxy is often used with the pimp-my-library pattern:
class RichFoo(val self: Foo) extends Proxy {
def newMethod = "do something cool"
}
object RichFoo {
def apply(foo: Foo) = new RichFoo(foo)
implicit def foo2richFoo(foo: Foo): RichFoo = RichFoo(foo)
implicit def richFoo2foo(richFoo: RichFoo): Foo = richFoo.self
}
The standard library also contains a set of traits that are useful for creating collection proxies (SeqProxy, SetProxy, MapProxy, etc.).
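For example, a small sketch with SetProxy (Whitelist is a made-up example; note that these proxy traits were deprecated in later Scala versions):
import scala.collection.SetProxy

// Forwards the entire Set API to `self`; only the extra method is new.
class Whitelist(val self: Set[String]) extends SetProxy[String] {
  def permits(user: String): Boolean = self.contains(user)
}

val wl = new Whitelist(Set("alice", "bob"))
wl.permits("alice") // true (the added method)
wl.size             // 2    (forwarded to self)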
Finally, there is a compiler plugin in the scala-incubator (the AutoProxy plugin) that will automatically implement forwarding methods. See also this question.
It looks like you'd use it when you need functionality like Haskell's newtype.
For example, the following Haskell code:
newtype Natural = MakeNatural Integer
deriving (Eq, Show)
may roughly correspond to the following Scala code:
case class Natural(value: Int) extends Proxy {
def self = value
}
I've been reading a book about Scala and there's mention of stackable modifications using traits. What are stackable modifications and for what purposes are they meant to be used?
The fundamental quality that distinguishes stackable modifications (as the terminology is used in Scala, anyway) is that "super" is resolved dynamically, based on how the trait is mixed in, whereas in general super is a statically determined target.
If you write
abstract class Bar { def bar(x: Int): Int }
class Foo extends Bar { def bar(x: Int) = x }
then for Foo "super" will always be Bar.
If you write
trait Foo1 extends Foo { abstract override def bar(x: Int) = x + super.bar(x) }
then for that method, super remains unknown until the trait is mixed into a concrete class. Add a second such trait:
trait Foo2 extends Foo { abstract override def bar(x: Int) = x * super.bar(x) }
scala> (new Foo with Foo2 with Foo1).bar(5)
res0: Int = 30
scala> (new Foo with Foo1 with Foo2).bar(5)
res1: Int = 50
Why is this interesting? An illustrative example might be some data which you want to compress, encrypt, and digitally sign. You might want to compress then encrypt then sign, or you might want to encrypt then sign then compress, etc. If you design your components in this way, you can instantiate a customized object with exactly the bits you want organized the way you want.
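A toy rendering of that idea, with string tags standing in for real compression, encryption, and signing (all names here are made up):
trait Codec { def encode(data: String): String }

class Raw extends Codec { def encode(data: String) = data }

trait Compress extends Codec {
  abstract override def encode(data: String) = super.encode("compressed(" + data + ")")
}
trait Encrypt extends Codec {
  abstract override def encode(data: String) = super.encode("encrypted(" + data + ")")
}
trait Sign extends Codec {
  abstract override def encode(data: String) = super.encode("signed(" + data + ")")
}

// Mixin order selects the pipeline: the rightmost trait runs first.
(new Raw with Sign with Encrypt with Compress).encode("data")
// => "signed(encrypted(compressed(data)))"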
I looked at the Real-World Scala presentation, where the term stackable modifications is also used. Apparently it refers to traits that call the super method when overriding, essentially adding functionality rather than replacing it. So you accumulate functionality with traits, and they can be used where in Java we often use aspects. A trait plays the role of an aspect, overriding the "interesting" methods and adding specific functionality such as logging, then calling super and "passing the ball" to the next trait in the chain. HTH.
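In that aspect spirit, a logging trait might look like this (Service, BasicService, and Logging are made-up names):
trait Service { def handle(request: String): String }

class BasicService extends Service {
  def handle(request: String) = "response to " + request
}

// The "aspect": add the logging behavior, then pass the ball to super.
trait Logging extends Service {
  abstract override def handle(request: String) = {
    println("handling: " + request)
    super.handle(request)
  }
}

val service = new BasicService with Logging
service.handle("ping") // prints "handling: ping", returns "response to ping"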