I'm new to Scala (and functional programming as well) and I'm developing a plugin-based application to learn and study.
I've created a trait to be the interface of a plugin. So when my app starts, it will load all the classes that implement this trait.
trait Plugin {
  def init(config: Properties)
  def execute(parameters: Map[String, Array[String]])
}
In my learning of Scala, I've read that if I want to program in a functional way, I should avoid using var. Here's my problem:
The init method will be called right after the class is loaded, and I will probably want to use the values from the config parameter later, in the execute method.
How do I store this without using a var? Is there a better practice for what I want to do here?
Thanks
There is more to programming in a functional way than just avoiding vars. One key concept is also to prefer immutable objects. In that respect, your Plugin API already breaks functional principles, as both methods are executed only for their side effects. With such an API, using vars inside the implementation does not make a difference.
For an immutable plugin instance you could split plugin creation:
trait PluginFactory {
  def createPlugin(config: Properties): Plugin
}

trait Plugin {
  def execute ...
}
Example:
class MyPluginFactory extends PluginFactory {
  def createPlugin(config: Properties): Plugin = {
    val someValue = ... // extract from config
    new MyPlugin(someValue)
  }
}

class MyPlugin(someValue: String) extends Plugin {
  def execute ... // using someValue
}
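To see how the pieces fit together, here is a minimal sketch of how the host application might use the factory; loadFactories() is hypothetical and stands in for whatever plugin-discovery mechanism you already have:

import java.util.Properties

// Hypothetical startup code: loadFactories() stands in for however the host
// application discovers PluginFactory implementations on the classpath.
def loadFactories(): List[PluginFactory] = ???

val config = new Properties()
val plugins: List[Plugin] = loadFactories().map(_.createPlugin(config))

// Each plugin was constructed with everything it needs, so no vars are required.
plugins.foreach(_.execute(Map.empty))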
You can use a val! It's basically the same thing, but the value of a val field cannot be modified later on. If you were using a class, you could write, for example:
class Plugin(val config: Properties) {
  def init() {
    // do init stuff...
  }
  def execute() = // ...
}
Unfortunately, a trait cannot have class parameters. If you want to have a config field in your trait, you won't be able to set its value immediately, so it will have to be a var.
Related
I tried to use cake pattern in my project and liked it very much, but there is one problem which bothers me.
The cake pattern is easy to use when all your components have the same lifetime. You just define multiple trait components, extend them with trait implementations, and then combine these implementations within one object; via self-types all dependencies are automatically resolved.
But suppose you have a component (with its own dependencies) which can be created as a consequence of a user action. This component cannot be created at application startup because there is no data for it yet, but it should get automatic dependency resolution when it is created. An example of such a component relationship is a main GUI window and its complex subitems (e.g. a tab in a notebook pane) which are created on user request. The main window is created on application startup, and some subpane in it is created when the user performs some action.
This is easily done in DI frameworks like Guice: if I want multiple instances of some class, I just inject a Provider<MyClass>; then I call the get() method on that provider, and all dependencies of MyClass are automatically resolved. If MyClass requires some dynamically calculated data, I can use the assisted inject extension, but the resulting code still boils down to a provider/factory. A related concept, scopes, also helps.
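For comparison, the Guice version is just a matter of asking for a Provider. This is only a sketch; the SubpaneOpener class and the Subpane binding are hypothetical, not part of the code below:

import com.google.inject.{Inject, Provider}

class Subpane  // placeholder for the dynamically created component

// Hypothetical consumer: it asks Guice for a Provider rather than an instance,
// so each user action can obtain a fresh, fully wired Subpane.
class SubpaneOpener @Inject() (subpaneProvider: Provider[Subpane]) {
  def openSubpane(): Subpane = subpaneProvider.get()
}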
But I cannot think of a good way to do this using cake pattern. Currently I'm using something like this:
trait ModelContainerComponent { // Globally scoped dependency
  def model: Model
}

trait SubpaneViewComponent { // A part of a dynamically created cake
  ...
}

trait SubpaneControllerComponent { // Another part of a dynamically created cake
  ...
}

trait DefaultSubpaneViewComponent { // Implementation
  self: SubpaneControllerComponent with ModelContainerComponent =>
  ...
}

trait DefaultSubpaneControllerComponent { // Implementation
  self: SubpaneViewComponent with ModelContainerComponent =>
  ...
}

trait SubpaneProvider { // A component which aids in dynamic subpane creation
  def newSubpane(): SubpaneProvider.Subpane
}

object SubpaneProvider {
  type Subpane = SubpaneControllerComponent with SubpaneViewComponent
}

trait DefaultSubpaneProvider extends SubpaneProvider { // Provider component implementation
  self: ModelContainerComponent =>

  def newSubpane() = new DefaultSubpaneControllerComponent with DefaultSubpaneViewComponent with ModelContainerComponent {
    val model = self.model // Pass the global dependency down to the dynamic cake
  }.asInstanceOf[SubpaneProvider.Subpane]
}
Then I mix DefaultSubpaneProvider into my top-level cake and inject SubpaneProvider into all components which need to create subpanes.
The problem with this approach is that I have to manually pass dependencies (model in ModelContainerComponent) down from the top-level cake to the dynamically created cake. This is only a trivial example, but there can be more dependencies, and there can also be more types of dynamically created cakes. They all require manual passing of dependencies; moreover, a simple change in some component interface can lead to a massive amount of fixes in multiple providers.
Is there a simpler/cleaner way to do this? How is this problem resolved within the cake pattern?
Have you considered the following alternatives?
Using inner classes in Scala, as they automatically have access to their parent class's member variables (a minimal sketch follows below).
Restructuring your application into an actor-based one, because you would immediately benefit from:
Hierarchy / supervision
Listening for creation / death of components
Proper synchronization when it comes to accessing mutable state
It would probably be helpful to have some more code in order to provide a better solution; can you share a compiling subset of your code?
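To illustrate the first alternative: in Scala an inner class closes over its enclosing instance, so a subpane created on demand can use the main window's members without any explicit passing. A minimal sketch, with placeholder members rather than your actual code:

class Model

class MainWindow(val model: Model) {

  // Inner class: it implicitly carries a reference to the enclosing MainWindow,
  // so `model` is available without being passed in.
  class Subpane {
    def describe: String = s"subpane backed by $model"
  }

  // Created later, on user action; no manual dependency wiring needed.
  def newSubpane(): Subpane = new Subpane
}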
Let's say we have a program that has only two components: one contains the business logic of our program, and the other contains its dependency, namely printing functionality.
We have:
trait FooBarInterface {
  def printFoo: Unit
  def printBar: Unit
}

trait PrinterInterface {
  //def color: RGB
  def print(s: String): Unit
}
For injecting the fooBar logic, the cake pattern defines:
trait FooBarComponent {
  // The components being used in this component:
  self: PrinterComponent =>

  // The way other components access this dependency.
  def fooBarComp: FooBarInterface

  // The implementation of FooBarInterface
  class FooBarImpl extends FooBarInterface {
    def printFoo = printComp.print("fOo")
    def printBar = printComp.print("BaR")
  }
}
Note that this implementation (FooBarImpl) does not leave any field unimplemented, and when it comes to mixing all these components together, we would have val fooBarComp = new FooBarImpl. For the cases where we only have one implementation, we don't have to leave fooBarComp unimplemented; we can instead have:
trait FooBarComponent {
  // The components being used in this component:
  self: PrinterComponent =>

  // Other components access this dependency through a concrete member.
  def fooBarComp = new FooBarInterface {
    def printFoo = printComp.print("fOo")
    def printBar = printComp.print("BaR")
  }
}
Not all components are like this. Take the Printer, for example: the dependency used for printing foo or bar needs to be configured, and you want to be able to print text in different colours. So the dependency might need to change dynamically, or be set at some point in the program.
trait PrinterComponent {
  def printComp: PrinterInterface

  class PrinterImpl(val color: RGB) extends PrinterInterface {
    def print(s: String) = ...
  }
}
For a static configuration, when mixing in this component, we could for example have:
val printComp = new PrinterImpl(Blue)
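Putting the two components together, the top-level cake might look like this. This is only a sketch: it assumes the first FooBarComponent variant (where fooBarComp is left abstract) and that the RGB type and Blue value exist elsewhere in the program:

// Top-level cake: mix the components together and fill in the abstract members.
object Application extends FooBarComponent with PrinterComponent {
  val printComp: PrinterInterface = new PrinterImpl(Blue)
  val fooBarComp: FooBarInterface = new FooBarImpl
}

// The self-type of FooBarComponent is satisfied by the mix-in above.
Application.fooBarComp.printFoo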
Now, the fields for accessing the dependencies do not have to be simple values. They can be functions that take some of the constructor parameters of the dependency implementation and return an instance of it. For instance, we could have Baz with the interface:
trait BazInterface {
  def appendString: String
  def printBaz: Unit
}
and a component of the form:
trait BazComponent {
  // The components being used in this component:
  self: PrinterComponent =>

  // The way other components access this dependency.
  def bazComp(appendString: String): BazInterface = new BazImpl(appendString)

  // The implementation of BazInterface
  class BazImpl(val appendString: String) extends BazInterface {
    def printBaz = printComp.print("baZ" + appendString)
  }
}
Now, if we had a FooBarBaz component, we could define:
trait FooBarBazComponent {
  // The components being used in this component:
  self: BazComponent with FooBarComponent =>

  val baz = bazComp("***")
  val fooBar = fooBarComp

  // The implementation of this component
  class FooBarBazImpl {
    def printFooBarBaz = {
      baz.printBaz
      fooBar.printFoo
      fooBar.printBar
    }
  }
}
So far we have seen how a component can be configured:
statically (mostly for the very low-level dependencies)
from inside another component (usually one business layer configuring another business layer; see "DEPENDENCIES THAT NEED USER DATA" here)
What differs in these two cases is simply the place where the configuration takes place: one is for the low-level dependencies at the very top level of the program, the other is for an intermediate component being configured inside another component. The question is: where should the configuration for a service like Printer take place? The two options we have explored so far are out of the question. The way I see it, the only option we have is to add a components configurer that mixes in all the components to be configured and provides the dependency components by mutating the implementations. Here is a simple version:
trait UICustomiserComponent {
  this: PrinterComponent =>

  private var printCompCache: PrinterInterface = ???
  def printComp: PrinterInterface = printCompCache
}
Obviously we can have multiple such configurer components; we do not have to have only one.
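To make the configurer concrete, here is one way the trait above might be fleshed out so the cache can be populated and changed at runtime. The setPrintColor method and the default Blue colour are my own additions for illustration, not part of the original design:

trait UICustomiserComponent {
  this: PrinterComponent =>

  // Start with some default; a UI preferences dialog could change it later.
  private var printCompCache: PrinterInterface = new PrinterImpl(Blue)
  def printComp: PrinterInterface = printCompCache

  // Mutating "configuration" entry point, e.g. called from a settings screen.
  def setPrintColor(color: RGB): Unit = {
    printCompCache = new PrinterImpl(color)
  }
}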
I have a number of use cases for this, all around the idea of interop between existing Java libraries and new Scala code. The use case I've selected is the easiest one, I think.
Use Case:
I am working on providing a JUnit Runner for some Scala tests (so that I can get my lovely red/green bar in Eclipse).
The runner needs to have a constructor with a Java class as a parameter. So in Scala I can do the following:
class MyRunner(val clazz: Class[Any]) extends Runner {
  def getDescription(): Description
  def run(notifier: RunNotifier)
}
When I use either
@RunWith(classOf[MyRunner])
object MyTestObject
or
@RunWith(classOf[MyRunner])
class MyTestClass
then the runner is indeed instantiated correctly and is passed a suitable class object.
Unfortunately, what I want to do now is to "get hold of" the object MyTestObject, or create a MyTestClass, both of which are Scala entities. I would prefer to use Scala reflection, but I also want to use the standard JUnit jar.
What I have done
The following Stack Overflow questions were educational, but not about the same problem. They were the nearest questions I could find:
How to create a TypeTag manually?
Any way to obtain a Java class from a Scala (2.10) type tag or symbol?
Using Scala reflection with Java reflection
The discussion on Environments, Universes and Mirrors in http://docs.scala-lang.org/overviews/reflection/environment-universes-mirrors.html was good, and the similar documents on Scala reflection also helped, though mostly they are about Scala reflection only.
I browsed the Scaladocs, but my knowledge of Scala reflection wasn't enough (yet) to let me get what I wanted out of them.
Edit:
As asked, here is the code of the class that is being created by reflection:
@RunWith(classOf[MyRunner])
object Hello2 extends App {
  println("starting")
  val x = "xxx"
}
So the interesting thing is that the solution proposed below, using the field called MODULE$, doesn't print anything, and the value of x is null.
This solution works fine if you want to use plain old Java reflection. Not sure if you can use Scala reflection, given that all you will have is a Class[_] to work with:
object ReflectTest {

  def main(args: Array[String]) {
    val fooObj = instantiate(MyTestObject.getClass())
    println(fooObj.foo)
    val fooClass = instantiate(classOf[MyTestClass])
    println(fooClass.foo)
  }

  def instantiate(clazz: Class[_]): Foo = {
    val declaredFields = clazz.getDeclaredFields().toList
    // A Scala object exposes a static MODULE$ field holding the singleton instance;
    // a plain class is instantiated via its no-arg constructor instead.
    val obj = declaredFields.find(field => field.getName() == "MODULE$") match {
      case Some(modField) => modField.get(clazz)
      case None => clazz.newInstance()
    }
    obj.asInstanceOf[Foo]
  }
}
trait Foo {
  def foo: String
}

object MyTestObject extends Foo {
  def foo = "bar"
}

class MyTestClass extends Foo {
  def foo = "baz"
}
I want to create a method that generates an implementation of a trait. For example:
trait Foo {
  def a
  def b(i: Int): String
}

object Processor {
  def exec(instance: AnyRef, method: String, params: AnyRef*) = {
    //whatever
  }
}

class Bar {
  def wrap[T] = {
    // Here create a new instance of the implementing class, i.e. if T is Foo,
    // generate a new FooImpl(this)
  }
}
I would like to dynamically generate the FooImpl class like so:
class FooImpl(val wrapped: AnyRef) extends Foo {
  def a = Processor.exec(wrapped, "a")
  def b(i: Int) = Processor.exec(wrapped, "b", i)
}
Manually implementing each of the traits is not something we would like to do (lots of boilerplate), so I'd like to be able to generate the Impl classes at compile time. I was thinking of annotating the classes and perhaps writing a compiler plugin, but maybe there's an easier way? Any pointers will be appreciated.
java.lang.reflect.Proxy could do something quite close to what you want:
import java.lang.reflect.{InvocationHandler, Method, Proxy}

class Bar {
  def wrap[T: ClassManifest]: T = {
    val theClass = classManifest[T].erasure.asInstanceOf[Class[T]]
    theClass.cast(
      Proxy.newProxyInstance(
        theClass.getClassLoader(),
        Array(theClass),
        new InvocationHandler {
          def invoke(target: AnyRef, method: Method, params: Array[AnyRef]) =
            Processor.exec(Bar.this, method.getName, params: _*)
        }))
  }
}
With that, you have no need to generate FooImpl.
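Usage would then look something like this; a sketch, reusing Foo and Processor from the question, and only exercising the Unit-returning method since Processor.exec as written returns Unit:

val bar = new Bar
val foo: Foo = bar.wrap[Foo]

// The call is intercepted by the InvocationHandler and forwarded to Processor.exec.
foo.a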
A limitation is that it will only work for traits where no methods are implemented. More precisely, if a method is implemented in the trait, calling it will still route to the processor and ignore the implementation.
You can write a macro (macros have officially been a part of Scala since 2.10.0-M3), something along the lines of Mixing in a trait dynamically. Unfortunately I don't have time right now to compose an example for you, but feel free to ask questions on our mailing list at http://groups.google.com/group/scala-internals.
You can see three different ways to do this in ScalaMock.
ScalaMock 2 (the current release version, which supports Scala 2.8.x and 2.9.x) uses java.lang.reflect.Proxy to support dynamically typed mocks and a compiler plugin to generate statically typed mocks.
ScalaMock 3 (currently available as a preview release for Scala 2.10.x) uses macros to support statically typed mocks.
Assuming that you can use Scala 2.10.x, I would strongly recommend the macro-based approach over a compiler plugin. You can certainly make the compiler plugin work (as ScalaMock demonstrates) but it's not easy and macros are a dramatically superior approach.
I have been reading about doing dependency injection in Scala via the cake pattern. I think I understand it, but I must have missed something, because I still can't see the point of it! Why is it preferable to declare dependencies via self-types rather than just abstract fields?
Given the example in Programming Scala, TwitterClientComponent declares its dependencies like this using the cake pattern:
//other trait declarations elided for clarity
...
trait TwitterClientComponent {
  self: TwitterClientUIComponent with
        TwitterLocalCacheComponent with
        TwitterServiceComponent =>

  val client: TwitterClient

  class TwitterClient(val user: TwitterUserProfile) extends Tweeter {
    def tweet(msg: String) = {
      val twt = new Tweet(user, msg, new Date)
      if (service.sendTweet(twt)) {
        localCache.saveTweet(twt)
        ui.showTweet(twt)
      }
    }
  }
}
How is this better than declaring dependencies as abstract fields as below?
trait TwitterClient extends Tweeter {
  //abstract fields instead of cake pattern self types
  val user: TwitterUserProfile
  val service: TwitterService
  val localCache: TwitterLocalCache
  val ui: TwitterClientUI

  def tweet(msg: String) = {
    val twt = new Tweet(user, msg, new Date)
    if (service.sendTweet(twt)) {
      localCache.saveTweet(twt)
      ui.showTweet(twt)
    }
  }
}
Looking at instantiation time, which is when DI actually happens (as I understand it), I am struggling to see the advantages of cake, especially when you consider the extra keyboard typing you need to do for the cake declarations (enclosing trait)
//Please note, I have stripped out some implementation details from the
//referenced example to clarify the injection of implemented dependencies

//Cake dependencies injected:
trait TextClient
  extends TwitterClientComponent
  with TwitterClientUIComponent
  with TwitterLocalCacheComponent
  with TwitterServiceComponent {

  // Dependency from TwitterClientComponent:
  val client = new TwitterClient

  // Dependency from TwitterClientUIComponent:
  val ui = new TwitterClientUI

  // Dependency from TwitterLocalCacheComponent:
  val localCache = new TwitterLocalCache

  // Dependency from TwitterServiceComponent:
  val service = new TwitterService
}
Now again with abstract fields, it's more or less the same:
trait TextClient {
  //first of all no need to mix in the components

  // Dependency on TwitterClient:
  val client = new TwitterClient

  // Dependency on TwitterClientUI:
  val ui = new TwitterClientUI

  // Dependency on TwitterLocalCache:
  val localCache = new TwitterLocalCache

  // Dependency on TwitterService:
  val service = new TwitterService
}
I'm sure I must be missing something about cake's superiority! However, at the moment I can't see what it offers over declaring dependencies in any other way (constructor, abstract fields).
Traits with self-type annotations are far more composable than old-fashioned beans with field injection, which you probably had in mind in your second snippet.
Let's look at how you would instantiate this trait:
val productionTwitter = new TwitterClientComponent with TwitterUI with FSTwitterCache with TwitterConnection
If you need to test this trait, you would probably write:
val testTwitter = new TwitterClientComponent with TwitterUI with FSTwitterCache with MockConnection
Hmm, a little DRY violation. Let's improve.
trait TwitterSetup extends TwitterClientComponent with TwitterUI with FSTwitterCache
val productionTwitter = new TwitterSetup with TwitterConnection
val testTwitter = new TwitterSetup with MockConnection
Furthermore, if you have a dependency between services in your component (say the UI depends on TwitterService), it will be resolved automatically by the compiler.
Think about what happens if TwitterService uses TwitterLocalCache. It would be a lot easier if TwitterService were self-typed to TwitterLocalCache, because TwitterService has no access to the val localCache you've declared. The Cake pattern (and self-typing) allows us to inject in a much more universal and flexible manner (among other things, of course).
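A hedged sketch of what that might look like; the component and method names here are illustrative, not taken from the book's code:

trait TwitterLocalCacheComponent {
  def localCache: TwitterLocalCache
}

trait TwitterServiceComponent {
  // The service declares that it can only live in a cake that also
  // provides a local cache; the compiler enforces this at mix-in time.
  self: TwitterLocalCacheComponent =>

  def service: TwitterService

  class TwitterServiceImpl extends TwitterService {
    def sendTweet(twt: Tweet): Boolean = {
      localCache.saveTweet(twt) // dependency resolved via the self-type
      true
    }
  }
}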
I was unsure how the actual wiring would work, so I've adapted the simple example in the blog entry you linked to, using abstract properties as you suggested.
// =======================
// service interfaces
trait OnOffDevice {
  def on: Unit
  def off: Unit
}

trait SensorDevice {
  def isCoffeePresent: Boolean
}

// =======================
// service implementations
class Heater extends OnOffDevice {
  def on = println("heater.on")
  def off = println("heater.off")
}

class PotSensor extends SensorDevice {
  def isCoffeePresent = true
}

// =======================
// service declaring two dependencies that it wants injected
// via abstract fields
abstract class Warmer() {
  val sensor: SensorDevice
  val onOff: OnOffDevice

  def trigger = {
    if (sensor.isCoffeePresent) onOff.on
    else onOff.off
  }
}

trait PotSensorMixin {
  val sensor = new PotSensor
}

trait HeaterMixin {
  val onOff = new Heater
}

val warmer = new Warmer with PotSensorMixin with HeaterMixin
warmer.trigger
In this simple case it does work (so the technique you suggest is indeed usable).
However, the same blog shows at least three other ways to achieve the same result; I think the choice is mostly about readability and personal preference. With the technique you suggest, IMHO the Warmer class communicates its intent to have dependencies injected rather poorly. Also, to wire up the dependencies I had to create two more traits (PotSensorMixin and HeaterMixin), but maybe you had a better way in mind to do it (one alternative is sketched below).
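One alternative that avoids the extra mixin traits is to fill in the abstract fields with an anonymous subclass at the point of wiring; this is a sketch of what I mean, not something taken from the blog:

// Wiring without dedicated mixin traits: provide the abstract fields inline.
val warmer = new Warmer {
  val sensor = new PotSensor
  val onOff = new Heater
}
warmer.trigger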
In this example I think there is no big difference. Self-types can potentially bring more clarity in cases where a trait declares several abstract values, like:
trait ThreadPool {
  val minThreads: Int
  val maxThreads: Int
}
Then instead of depending on several abstract values, you just declare a dependency on a ThreadPool.
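A small sketch of the contrast; the Worker traits are made up purely for illustration:

// With abstract fields, every consumer repeats the individual values:
trait WorkerWithFields {
  val minThreads: Int
  val maxThreads: Int
}

// With a self-type, the consumer names the whole dependency once:
trait Worker {
  self: ThreadPool =>

  def spawnAll(): Unit =
    println(s"spawning between $minThreads and $maxThreads threads")
}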
Self-types (as used in the Cake pattern) for me are just a way to declare several abstract members at once, giving them a convenient name.
I have a trait, named Init:
package test

trait Init {
  def init(): Any
}
There are some classes and an object which extend this trait:
package test

object Config extends Init {
  def init() = { loadFromFile(...) }
}

class InitDb extends Init {
  def init() = { initdb() }
}
When the app starts, I will find all classes and objects which extend Init, and invoke their init method.
package test

object App {
  def main(args: Array[String]) {
    val classNames: List[String] = findAllNamesOfSubclassOf[Init]
    println(classNames) // -> List(test.Config$, test.InitDb)
    classNames foreach { name =>
      Class.forName(name).newInstance().asInstanceOf[Init].init() // ***
    }
  }
}
Please note the "***" line. For test.InitDb it's OK, but for test.Config$, newInstance() throws an exception saying we can't access its private constructor.
My problem is: how do I get hold of that object and run its init method?
There's usually little point in doing this in Scala. Just put some code in the body of any object and it'll be executed when that object is first initialised, saving you the nasty performance hit of pre-initialising everything.
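To illustrate, a minimal sketch of relying on object initialisation instead of a separate init method; the properties file name is a placeholder standing in for whatever the question's loadFromFile(...) reads:

import java.util.Properties

object Config {
  // The body of an object runs the first time Config is referenced,
  // so the "init" work can simply live here; no explicit init() call is needed.
  val settings: Properties = {
    val p = new Properties()
    // p.load(new java.io.FileInputStream("config.properties"))
    p
  }
}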
In general though, finding all subclasses of a particular type requires a full classpath scan. There are a few libraries that can do this; one of the more common is Apache's commons-discovery.
However... This is dynamic code, it uses reflection, and it's really NOT idiomatic. Scala has sharper tools than that, so please don't try and swing the blunt ones with such violence!
I don't totally agree with Kevin. There are some exceptions. For example, I wrote a Scala desktop app and split it into two parts: the core and the modules. At startup time the core loads all modules into the GUI. At that point the core just gets the names of the modules; it doesn't need to initialise anything. Then I put each module's init code in an init() function. That function is called when the user executes the module.
@Freewind: About reflection in Scala, it's absolutely the same as in Java. Just please note that the Java methods used for reflection are meant for Java constructs, not Scala ones. I'm sorry for my English; I mean those methods cannot work with a Scala object or trait as such.
For example:
val classLoader = new java.net.URLClassLoader(
  Array(new java.io.File("module.jar").toURI.toURL),
  /*
   * need to specify parent, so we have all class instances
   * in current context
   */
  this.getClass.getClassLoader)

val clazz = classLoader.loadClass("test.InitDb")

if (classOf[Init].isAssignableFrom(clazz)) {
  val an_init = clazz.newInstance.asInstanceOf[Init]
}
But you cannot do it the opposite way:
if (clazz.isAssignableFrom(classOf[Init]))
because Init is a trait, and the Java method isAssignableFrom(Class) doesn't know about traits.
I'm not sure if my answer is useful for you, but here it is.