Dynamic mixin in Scala - is it possible?

What I'd like to achieve is having a proper implementation for
def dynamix[A, B](a: A): A with B
I may know what B is, but don't know what A is (but if B has a self type then I could add some constraints on A).
The Scala compiler is happy with the above signature, but I could not yet figure out what the implementation would look like - if it is possible at all.
Some options that came to my mind:
Using reflection/dynamic proxy.
Simplest case: A is an interface at the Java level, I can instantiate B, and B has no self type. I guess it would not be too hard (unless I run into some nasty, unexpected problems):
create a new B (b), and also a proxy implementing both A and B, backed by an invocation handler that delegates to either a or b (a rough sketch of this proxy approach follows after the list of options).
If B cannot be instantiated I could still create a subclass of it and proceed as described above. If it also has a self type I would probably need some delegation here and there, but it may still work.
But what if A is a concrete type and I can't find a proper interface for it?
Would I run into more problems (e.g. something related to linearization, or special constructs helping Java interoperability)?
Using a kind of wrapping instead of a mixin and returning B[A], where a is accessible from b.
Unfortunately in this case the caller would need to know how the nesting is done. That could be quite inconvenient if the mixing in/wrapping is done several times (D[C[B[A]]]), since the caller would need to find the right level of nesting to access the needed functionality, so I don't consider it a solution.
Implementing a compiler plugin. I have zero experience with them, but my gut feeling is that it would not be trivial. I think Kevin Wright's autoproxy plugin has a somewhat similar goal, but it would not be enough for my problem (yet?).
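For what it's worth, here is a rough, hedged sketch of the proxy idea from the first option, assuming both A and B are visible as Java interfaces and that I already have delegate instances a and b; the helper name mixViaProxy is made up:

import java.lang.reflect.{InvocationHandler, Method, Proxy}

def mixViaProxy[A, B](a: A, b: B, ifaceA: Class[_], ifaceB: Class[_]): A with B = {
  val handler = new InvocationHandler {
    def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef = {
      // route the call to whichever delegate declares (or inherits) the invoked method
      val target: AnyRef =
        if (method.getDeclaringClass.isAssignableFrom(ifaceB)) b.asInstanceOf[AnyRef]
        else a.asInstanceOf[AnyRef]
      val safeArgs = if (args == null) Array.empty[AnyRef] else args
      method.invoke(target, safeArgs: _*)
    }
  }
  Proxy.newProxyInstance(getClass.getClassLoader, Array[Class[_]](ifaceA, ifaceB), handler)
    .asInstanceOf[A with B]
}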
Do you have any other ideas that might work? Which way would you recommend? What kind of "challenges" to expect?
Or should I forget it, because it is not possible with the current Scala constraints?
Intention behind my problem:
Say I have a business workflow, but it's not too strict. Some steps have a fixed order, others do not, but at the end all of them have to be done (or some of them are required for further processing).
A bit more concrete example: I have an A, I can add B and C to it. I don't care which is done first, but at the end I'll need an A with B with C.
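For illustration only, the call sequence I would like to be able to write (with the order of the two mixin steps being irrelevant) would be something like this, using the dynamix signature from above:

val ab: A with B = dynamix[A, B](a)
val abc: A with B with C = dynamix[A with B, C](ab)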
Comment: I don't know too much about Groovy but SO popped up this question and I guess it's more or less the same as what I'd like, at least conceptually.

I believe this is impossible to do strictly at runtime, because traits are mixed in at compile-time into new Java classes. If you mix a trait with an existing class anonymously you can see, looking at the classfiles and using javap, that an anonymous, name-mangled class is created by scalac:
class Foo {
  def bar = 5
}

trait Spam {
  def eggs = 10
}

object Main {
  def main(args: Array[String]) = {
    println((new Foo with Spam).eggs)
  }
}
scalac Mixin.scala; ls *.class returns
Foo.class Main$.class Spam$class.class
Main$$anon$1.class Main.class Spam.class
While javap Main\$\$anon\$1 returns
Compiled from "mixin.scala"
public final class Main$$anon$1 extends Foo implements Spam{
public int eggs();
public Main$$anon$1();
}
As you can see, scalac creates a new anonymous class at compile time that is then loaded at runtime; the eggs method in this anonymous class forwards to the static implementation method in Spam$class, passing the instance along as its argument.
However, we can do a pretty hacky trick here:
import scala.tools.nsc._
import scala.reflect.Manifest

object DynamicClassLoader {
  private var id = 0
  def uniqueId = synchronized { id += 1; "Klass" + id.toString }
}

class DynamicClassLoader extends
    java.lang.ClassLoader(getClass.getClassLoader) {

  def buildClass[T, V](implicit t: Manifest[T], v: Manifest[V]) = {
    // Create a unique ID
    val id = DynamicClassLoader.uniqueId

    // what's the Scala code we need to generate this class?
    val classDef = "class %s extends %s with %s".
      format(id, t.toString, v.toString)
    println(classDef)

    // fire up a new Scala interpreter/compiler
    val settings = new Settings(null)
    val interpreter = new Interpreter(settings)

    // define this class
    interpreter.compileAndSaveRun("<anon>", classDef)

    // get the bytecode for this new class
    val bytes = interpreter.classLoader.getBytesForClass(id)

    // define the bytecode using this classloader; cast it to what we expect
    defineClass(id, bytes, 0, bytes.length).asInstanceOf[Class[T with V]]
  }
}
val loader = new DynamicClassLoader
val instance = loader.buildClass[Foo, Spam].newInstance
instance.bar
// Int = 5
instance.eggs
// Int = 10
Since you need to use the Scala compiler, AFAIK, this is probably close to the cleanest solution you could do to get this. It's quite slow, but memoization would probably help greatly.
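For instance, a minimal memoization sketch built on the buildClass method above could cache the generated class per pair of manifests (the ClassCache object and its names are my own invention, not part of the answer's code):

object ClassCache {
  private val cache = scala.collection.mutable.Map.empty[(String, String), Class[_]]

  def cachedClass[T, V](loader: DynamicClassLoader)
                       (implicit t: Manifest[T], v: Manifest[V]): Class[T with V] =
    synchronized {
      // generate the class at most once per (T, V) pair
      cache.getOrElseUpdate((t.toString, v.toString), loader.buildClass[T, V])
        .asInstanceOf[Class[T with V]]
    }
}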
This approach is pretty ridiculous, hacky, and goes against the grain of the language. I imagine all sorts of weirdo bugs could creep in; people who have used Java longer than me warn of the insanity that comes with messing around with classloaders.

I wanted to be able to construct Scala beans in my Spring application context, but I also wanted to be able to specify the mixins to be included in the constructed bean:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:scala="http://www.springframework.org/schema/scala"
       xsi:schemaLocation=...>

  <scala:bean class="org.cakesolutions.scala.services.UserService" >
    <scala:with trait="org.cakesolutions.scala.services.Mixin1" />
    <scala:with trait="org.cakesolutions.scala.services.Mixin2" />
    <scala:property name="dependency" value="Injected" />
  </scala:bean>
</beans>
The difficulty is that the Class.forName function does not allow me to specify the mixins. In the end, I extended the above hacky solution to Scala 2.9.1. So, here it is in its full glory, including bits of Spring.
import java.io.ByteArrayOutputStream
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

class ScalaBeanFactory(private val beanType: Class[_ <: AnyRef],
                       private val mixinTypes: Seq[Class[_ <: AnyRef]]) {
  val loader = new DynamicClassLoader
  val clazz = loader.buildClass(beanType, mixinTypes)

  def getTypedObject[T] = getObject.asInstanceOf[T]

  def getObject = {
    clazz.newInstance()
  }

  def getObjectType = null

  def isSingleton = true
}

object DynamicClassLoader {
  private var id = 0
  def uniqueId = synchronized { id += 1; "Klass" + id.toString }
}

class DynamicClassLoader extends java.lang.ClassLoader(getClass.getClassLoader) {

  def buildClass(t: Class[_ <: AnyRef], vs: Seq[Class[_ <: AnyRef]]) = {
    val id = DynamicClassLoader.uniqueId

    // source of a class extending t and mixing in all of vs
    val classDef = new StringBuilder
    classDef.append("class ").append(id)
    classDef.append(" extends ").append(t.getCanonicalName)
    vs.foreach(c => classDef.append(" with %s".format(c.getCanonicalName)))

    // compile it with an embedded interpreter
    val settings = new Settings(null)
    settings.usejavacp.value = true
    val interpreter = new IMain(settings)
    interpreter.compileString(classDef.toString())

    // read the generated bytecode back from the interpreter's class loader
    val r = interpreter.classLoader.getResourceAsStream(id)
    val o = new ByteArrayOutputStream
    val b = new Array[Byte](16384)
    Stream.continually(r.read(b)).takeWhile(_ > 0).foreach(o.write(b, 0, _))
    val bytes = o.toByteArray

    // define the new class in this class loader
    defineClass(id, bytes, 0, bytes.length)
  }
}
The code cannot yet deal with constructors with parameters and does not copy annotations from the parent class’s constructor (should it do that?). However, it gives us a good starting point that is usable in the scala Spring namespace. Of course, don’t just take my word for it, verify it in a Specs2 specification:
class ScalaBeanFactorySpec extends Specification {

  "getTypedObject mixes-in the specified traits" in {
    val f1 = new ScalaBeanFactory(classOf[Cat],
      Seq(classOf[Speaking], classOf[Eating]))
    val c1 = f1.getTypedObject[Cat with Eating with Speaking]

    c1.isInstanceOf[Cat with Eating with Speaking] must_==(true)
    c1.speak // in trait Speaking
    c1.eat   // in trait Eating
    c1.meow  // in class Cat
  }
}

Related

Scala Cake Pattern and Dependency Collisions

I'm trying to implement dependency injection in Scala with the Cake Pattern, but am running into dependency collisions. Since I could not find a detailed example with such dependencies, here's my problem:
Suppose we have the following trait (with 2 implementations):
trait HttpClient {
  def get(url: String)
}

class DefaultHttpClient1 extends HttpClient {
  def get(url: String) = ???
}

class DefaultHttpClient2 extends HttpClient {
  def get(url: String) = ???
}
And the following two cake pattern modules (which in this example are both APIs that depend on our HttpClient for their functionality):
trait FooApiModule {
  def httpClient: HttpClient // dependency

  lazy val fooApi = new FooApi() // providing the module's service

  class FooApi {
    def foo(url: String): String = {
      val res = httpClient.get(url)
      // ... something foo specific
      ???
    }
  }
}
and
trait BarApiModule {
  def httpClient: HttpClient // dependency

  lazy val barApi = new BarApi() // providing the module's service

  class BarApi {
    def bar(url: String): String = {
      val res = httpClient.get(url)
      // ... something bar specific
      ???
    }
  }
}
Now when creating the final app that uses both modules, we need to provide the httpClient dependency for both of the modules. But what if we want to provide a different implementation of it for each of the modules? Or simply provide different instances of the dependency configured differently (say with a different ExecutionContext for example)?
object MyApp extends FooApiModule with BarApiModule {
  // the same dependency supplied to both modules
  val httpClient = new DefaultHttpClient1()

  def run() = {
    val r1 = fooApi.foo("http://...")
    val r2 = barApi.bar("http://...")
    // ...
  }
}
We could name the dependencies differently in each module, prefixing them with the module name, but that would be cumbersome and inelegant, and also won't work if we don't have full control of the modules ourselves.
Any ideas? Am I misinterpreting the Cake Pattern?
You understand the pattern correctly, and you've just discovered an important limitation of it. If two modules depend on some object (say an HttpClient) and happen to declare it under the same name (like httpClient), the game is over - you cannot configure them separately inside one Cake. Either have two Cakes, as Daniel advises, or change the modules' sources if you can (as Tomer Gabel is hinting).
Each of those solutions has its problems.
Having two Cakes (Daniel's advice) works well as long as they don't need some common dependencies.
Renaming some dependencies (provided it's possible) forces you to adjust all code that uses them.
Therefore some people (including me) prefer solutions immune to those problems, like using plain old constructors and avoiding Cake altogether. If you measure it, constructors don't add much bloat to the code (Cake is already pretty verbose) and they're much more flexible.
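As a rough sketch of what that constructor-based alternative could look like for the modules in the question (just one possible shape, not the only one):

class FooApi(httpClient: HttpClient) {
  def foo(url: String): String = {
    val res = httpClient.get(url)
    // ... something foo specific
    ???
  }
}

class BarApi(httpClient: HttpClient) {
  def bar(url: String): String = {
    val res = httpClient.get(url)
    // ... something bar specific
    ???
  }
}

object MyApp {
  // each API gets its own, separately configured client
  val fooApi = new FooApi(new DefaultHttpClient1())
  val barApi = new BarApi(new DefaultHttpClient2())
}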
"You're doing it wrong" (TM). You'd have the exact same problem with Spring, Guice or any IoC container: you're treating types as names (or symbols); you're saying "Give me an HTTP client" instead of "Give me an HTTP client suitable for communicating with fooApi".
In other words, you have multiple HTTP clients all named httpClient, which does not allow you to make any distinction between different instances. It's kind of like taking an @Autowired HttpClient without some way to qualify the reference (in Spring's case, usually by bean ID with external wiring).
In the cake pattern, one way to resolve this is to qualify that distinction with a different name: FooApiModule requires e.g. a def http10HttpClient: HttpClient and BarApiModule requires def connectionPooledHttpClient: HttpClient. When "filling in" the different modules, the different names both reference two different instances but are also indicative of the constraints the two modules place on their dependencies.
An alternative (workable, albeit not as clean in my opinion) is to simply require a module-specific named dependency, i.e. def fooHttpClient: HttpClient, which forces explicit external wiring on whoever mixes your module in.
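A hedged sketch of that module-specific naming, adapting FooApiModule from the question (the name fooHttpClient is the only change):

trait FooApiModule {
  // the module-specific name forces explicit wiring on whoever mixes this in
  def fooHttpClient: HttpClient

  lazy val fooApi = new FooApi()

  class FooApi {
    def foo(url: String): String = {
      val res = fooHttpClient.get(url)
      // ... something foo specific
      ???
    }
  }
}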
Instead of extending FooApiModule and BarApiModule in a single place -- which would mean they share dependencies -- make them both separate objects, each with their dependencies solved accordingly.
This seems to be the well-known "robot legs" problem: you need to construct two legs of a robot, but you need to supply a different foot to each.
How do you use the cake pattern to have both common and separate dependencies?
Let's say L1 <- A, B1 and L2 <- A, B2, and you want to have Main <- L1, L2, A.
To have separate dependencies we need two instances of smaller cakes, parameterized with the common dependencies.
trait LegCommon { def a: A }
trait Bdep { def b: B }

abstract class L(val common: LegCommon) extends Bdep {
  import common._
  // declarations of Leg. Has access to both a (via common) and b.
}

trait B1module extends Bdep {
  val b = new B1
}

trait B2module extends Bdep {
  val b = new B2
}
In Main we'll have the common part in the cake and the two legs:
trait Main extends LegCommon {
  val l1 = new L(this) with B1module
  val l2 = new L(this) with B2module
  val a = new A
}
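For completeness, a hypothetical wiring (assuming concrete classes A, B1 and B2 exist) could then look like this:

object MyRobot extends Main

MyRobot.l1.b // a B1; l1 shares MyRobot.a through the common part
MyRobot.l2.b // a B2, built on the same common a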
Your final app should look like this:
object MyApp {

  val fooApi = new FooApiModule {
    val httpClient = new DefaultHttpClient1()
  }.fooApi

  val barApi = new BarApiModule {
    val httpClient = new DefaultHttpClient2()
  }.barApi

  ...

  def run() = {
    val r1 = fooApi.foo("http://...")
    val r2 = barApi.bar("http://...")
    // ...
  }
}
That should work. (Adapted from this blog post: http://www.cakesolutions.net/teamblogs/2011/12/19/cake-pattern-in-depth/)

Implementing '.clone' in Scala

I'm trying to figure out how to .clone my own objects, in Scala.
This is for a simulation so mutable state is a must, and from that arises the whole need for cloning. I'll clone a whole state structure before moving the simulation time ahead.
This is my current try:
abstract trait Cloneable[A] {
  // Seems we cannot declare the prototype of a copy constructor
  //protected def this(o: A) // to be defined by the class itself

  def myClone = new A(this)
}

class S(var x: String) extends Cloneable[S] {
  def this(o: S) = this(o.x) // for 'Cloneable'
  def toString = x
}

object TestX {
  val s1 = new S("say, aaa")
  println(s1.myClone)
}
a. Why does the above not compile? It gives:
error: class type required but A found
def myClone= new A(this)
^
b. Is there a way to declare the copy constructor (def this(o: A)) in the trait, so that classes using the trait would be required to provide one?
c. Is there any benefit from saying abstract trait?
Finally, is there a way better, standard solution for all this?
I've looked into Java cloning, which does not seem to be meant for this. Scala's copy isn't either - it's only for case classes, and they shouldn't have mutable state.
Thanks for help and any opinions.
Traits can't define constructors (and I don't think abstract has any effect on a trait).
Is there any reason it needs to use a copy constructor rather than just implementing a clone method? It might be possible to get out of having to declare the [A] type on the class, but I've at least declared a self type so the compiler will make sure that the type matches the class.
trait DeepCloneable[A] { self: A =>
  def deepClone: A
}

class Egg(size: Int) extends DeepCloneable[Egg] {
  def deepClone = new Egg(size)
}

object Main extends App {
  val e = new Egg(3)
  println(e)
  println(e.deepClone)
}
http://ideone.com/CS9HTW
I would suggest a typeclass-based approach. With this it is also possible to let existing classes be cloneable:
class Foo(var x: Int)

trait Copyable[A] {
  def copy(a: A): A
}

implicit object FooCloneable extends Copyable[Foo] {
  def copy(foo: Foo) = new Foo(foo.x)
}

implicit def any2Copyable[A: Copyable](a: A) = new {
  def copy = implicitly[Copyable[A]].copy(a)
}
scala> val x = new Foo(2)
x: Foo = Foo@8d86328

scala> val y = x.copy
y: Foo = Foo@245e7588

scala> x eq y
res2: Boolean = false
a. When you define a type parameter like A, it gets erased after the compilation phase.
This means that the compiler uses type parameters to check that you use the correct types, but the resulting bytecode retains no information about A.
This also implies that you cannot use A as a real class in code, but only as a "type reference", because at runtime this information is lost.
b & c. Traits cannot define constructor parameters or auxiliary constructors by definition; they're also abstract by definition.
What you can do is define a trait body, which gets executed upon instantiation of the concrete implementation.
One alternative solution is to define a Cloneable typeclass. For more on this you can find lots of blogs on the subject, but I have no suggestion for a specific one.
scalaz has a huge part built using this pattern, maybe you can find inspiration there: you can look at Order, Equal or Show to get the gist of it.

Using Reader Monad for Dependency Injection

I recently saw the talks Dead-Simple Dependency Injection and Dependency Injection Without the Gymnastics about DI with monads and was impressed. I tried to apply it to a simple problem, but failed as soon as it got non-trivial. I really would like to see a running version of dependency injection where
a class that depends on more than one value that has to be injected
a class that depends on a class that depends on something to be injected
as in the following example
trait FlyBehaviour { def fly() }
trait QuackBehaviour { def quack() }
trait Animal { def makeSound() }

// needs two behaviours injected
class Duck(val flyBehaviour: FlyBehaviour, val quackBehaviour: QuackBehaviour) extends Animal {
  def quack() = quackBehaviour.quack()
  def fly() = flyBehaviour.fly()
  def makeSound() = quack()
}

// needs an Animal injected (e.g. a Duck)
class Zoo(val animal: Animal)

// Spring for example would be able to provide a Zoo instance
// assuming a Zoo is configured to get a Duck injected and
// a Duck is configured to get impl. of FlyBehaviour and QuackBehaviour injected
val zoo: Zoo = InjectionFramework.get("Zoo")
zoo.animal.makeSound()
It would be really helpful to see a sample implementation using the reader Monad since I just feel that I am missing a push in the right direction.
Thanks!
The "reader monad" is just Function1, so all you need to do is accept an argument containing all the things you need. For example:
trait Config {
  def fly: FlyBehaviour
  def quack: QuackBehaviour
}
type Env[A] = Config => A
Now if you want to construct a Duck based on this environment:
val a: Env[Animal] = c => new Duck(c.fly, c.quack)
And then constructing a Zoo based on that is easy:
val z: Env[Zoo] = a andThen (new Zoo(_))
Using Scalaz (or with a bit of work on your own) you can make use of some syntax niceties to "ask" for the config c:
val z: Env[Zoo] = for {
  c <- ask
} yield new Zoo(new Duck(c.fly, c.quack))
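For reference, a minimal hand-rolled sketch of that ask idea without Scalaz (the Reader wrapper below is my own simplified stand-in, not a library API):

// just enough Reader structure for the for-comprehension above
case class Reader[C, A](run: C => A) {
  def map[B](f: A => B): Reader[C, B] = Reader(c => f(run(c)))
  def flatMap[B](f: A => Reader[C, B]): Reader[C, B] = Reader(c => f(run(c)).run(c))
}

def ask[C]: Reader[C, C] = Reader(identity)

val zooReader: Reader[Config, Zoo] = for {
  c <- ask[Config]
} yield new Zoo(new Duck(c.fly, c.quack))

// supplying a concrete Config produces the fully wired Zoo:
// zooReader.run(myConfig).animal.makeSound()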

Why use scala's cake pattern rather than abstract fields?

I have been reading about doing dependency injection in Scala via the cake pattern. I think I understand it, but I must have missed something, because I still can't see the point of it! Why is it preferable to declare dependencies via self types rather than just abstract fields?
Given the example in Programming Scala, TwitterClientComponent declares its dependencies like this using the cake pattern:
//other trait declarations elided for clarity
...
trait TwitterClientComponent {
  self: TwitterClientUIComponent with
        TwitterLocalCacheComponent with
        TwitterServiceComponent =>

  val client: TwitterClient

  class TwitterClient(val user: TwitterUserProfile) extends Tweeter {
    def tweet(msg: String) = {
      val twt = new Tweet(user, msg, new Date)
      if (service.sendTweet(twt)) {
        localCache.saveTweet(twt)
        ui.showTweet(twt)
      }
    }
  }
}
How is this better than declaring dependencies as abstract fields as below?
trait TwitterClient extends Tweeter {
  //abstract fields instead of cake pattern self types
  val user: TwitterUserProfile
  val service: TwitterService
  val localCache: TwitterLocalCache
  val ui: TwitterClientUI

  def tweet(msg: String) = {
    val twt = new Tweet(user, msg, new Date)
    if (service.sendTweet(twt)) {
      localCache.saveTweet(twt)
      ui.showTweet(twt)
    }
  }
}
Looking at instantiation time, which is when DI actually happens (as I understand it), I am struggling to see the advantages of cake, especially when you consider the extra keyboard typing you need to do for the cake declarations (enclosing trait)
//Please note, I have stripped out some implementation details from the
//referenced example to clarify the injection of implemented dependencies

//Cake dependencies injected:
trait TextClient
  extends TwitterClientComponent
  with TwitterClientUIComponent
  with TwitterLocalCacheComponent
  with TwitterServiceComponent {

  // Dependency from TwitterClientComponent:
  val client = new TwitterClient

  // Dependency from TwitterClientUIComponent:
  val ui = new TwitterClientUI

  // Dependency from TwitterLocalCacheComponent:
  val localCache = new TwitterLocalCache

  // Dependency from TwitterServiceComponent:
  val service = new TwitterService
}
Now again with abstract fields, more or less the same!:
trait TextClient {
  //first of all no need to mixin the components

  // Dependency on TwitterClient:
  val client = new TwitterClient

  // Dependency on TwitterClientUI:
  val ui = new TwitterClientUI

  // Dependency on TwitterLocalCache:
  val localCache = new TwitterLocalCache

  // Dependency on TwitterService:
  val service = new TwitterService
}
I'm sure I must be missing something about cake's superiority! However, at the moment I can't see what it offers over declaring dependencies in any other way (constructor, abstract fields).
Traits with self-type annotations are far more composable than old-fashioned beans with field injection, which is probably what you had in mind in your second snippet.
Let's look at how you would instantiate this trait:
val productionTwitter = new TwitterClientComponent with TwitterUI with FSTwitterCache with TwitterConnection
If you need to test this trait you probably write:
val testTwitter = new TwitterClientComponent with TwitterUI with FSTwitterCache with MockConnection
Hmm, a little DRY violation. Let's improve.
trait TwitterSetup extends TwitterClientComponent with TwitterUI with FSTwitterCache
val productionTwitter = new TwitterSetup with TwitterConnection
val testTwitter = new TwitterSetup with MockConnection
Furthermore, if you have dependencies between services in your components (say, the UI depends on TwitterService), they will be resolved automatically by the compiler.
Think about what happens if TwitterService uses TwitterLocalCache. It would be a lot easier if TwitterService were self-typed to TwitterLocalCache, because TwitterService has no access to the val localCache you've declared. The Cake pattern (and self-typing) allows us to inject in a much more universal and flexible manner (among other things, of course).
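For illustration, a hedged sketch of that situation, assuming components shaped like the ones in the example above (the bodies here are made up):

trait TwitterServiceComponent {
  self: TwitterLocalCacheComponent => // the service layer can now see localCache

  val service: TwitterService

  class TwitterService {
    def sendTweet(twt: Tweet): Boolean = {
      localCache.saveTweet(twt) // available thanks to the self-type
      true
    }
  }
}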
I was unsure how the actual wiring would work, so I've adapted the simple example in the blog entry you linked to using abstract properties like you suggested.
// =======================
// service interfaces
trait OnOffDevice {
  def on: Unit
  def off: Unit
}
trait SensorDevice {
  def isCoffeePresent: Boolean
}

// =======================
// service implementations
class Heater extends OnOffDevice {
  def on = println("heater.on")
  def off = println("heater.off")
}
class PotSensor extends SensorDevice {
  def isCoffeePresent = true
}

// =======================
// service declaring two dependencies that it wants injected
// via abstract fields
abstract class Warmer() {
  val sensor: SensorDevice
  val onOff: OnOffDevice

  def trigger = {
    if (sensor.isCoffeePresent) onOff.on
    else onOff.off
  }
}

trait PotSensorMixin {
  val sensor = new PotSensor
}
trait HeaterMixin {
  val onOff = new Heater
}

val warmer = new Warmer with PotSensorMixin with HeaterMixin
warmer.trigger
In this simple case it does work (so the technique you suggest is indeed usable).
However, the same blog shows at least three other methods to achieve the same result; I think the choice is mostly about readability and personal preference. In the case of the technique you suggest, IMHO the Warmer class communicates its intent to have dependencies injected rather poorly. Also, to wire up the dependencies I had to create two more traits (PotSensorMixin and HeaterMixin), but maybe you had a better way in mind to do it.
In this example I think there is no big difference. Self-types can potentially bring more clarity in cases when a trait declares several abstract values, like
trait ThreadPool {
  val minThreads: Int
  val maxThreads: Int
}
Then instead of depending on several abstract values you just declare dependency on a ThreadPool.
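A small hedged sketch of that (the Worker trait below is made up for illustration):

// depend on the whole ThreadPool instead of two separate abstract Ints
trait Worker { self: ThreadPool =>
  def describe = "uses between " + minThreads + " and " + maxThreads + " threads"
}

class FixedPoolWorker extends Worker with ThreadPool {
  val minThreads = 4
  val maxThreads = 4
}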
Self-types (as used in Cake pattern) for me are just a way to declare several abstract members at once, giving those a convenient name.

How to log in Scala *without* a reference to the logger in *every instance*?

I've looked at example of logging in Scala, and it usually looks like this:
import org.slf4j.LoggerFactory
trait Loggable {
  private lazy val logger = LoggerFactory.getLogger(getClass)

  protected def debug(msg: => AnyRef, t: => Throwable = null): Unit =
    {...}
}
This seems independent of the concrete logging framework. While this does the job, it also introduces an extraneous lazy val in every instance that wants to do logging, which might well be every instance of the whole application. This seems much too heavy to me, in particular if you have many "small instances" of some specific type.
Is there a way of putting the logger in the object of the concrete class instead, just by using inheritance? If I have to explicitly declare the logger in the object of the class, and explicitly refer to it from the class/trait, then I have written almost as much code as if I had done no reuse at all.
Expressed in a non-logging specific context, the problem would be:
How do I declare in a trait that the implementing class must have a singleton object of type X, and that this singleton object must be accessible through method def x: X ?
I can't simply define an abstract method, because there could only be a single implementation in the class. I want logging in a super-class to get me the super-class singleton, and logging in the sub-class to get me the sub-class singleton. Or, put more simply, I want logging in Scala to work like traditional logging in Java, using static loggers specific to the class doing the logging. My current knowledge of Scala tells me that this is simply not possible without doing it exactly the same way you do in Java, without much if any benefit from using the "better" Scala.
Premature Optimization is the root of all evil
Let's be clear first about one thing: if your trait looks something like this:
trait Logger { lazy val log = Logger.getLogger }
Then what you have not done is as follows:
You have NOT created a logger instance per instance of your type
You have neither given yourself a memory nor a performance problem (unless you have)
What you have done is as follows:
You have an extra reference in each instance of your type
When you access the logger for the first time, you are probably doing some map lookup
Note that, even if you did create a separate logger for each instance of your type (which I frequently do, even if my program contains hundreds of thousands of these, so that I have very fine-grained control over my logging), you almost certainly still will have neither a performance nor a memory problem!
One "solution" is (of course), to make the companion object implement the logger interface:
object MyType extends Logger

class MyType {
  import MyType._
  log.info("Yay")
}
How do I declare in a trait that the implementing class must have a singleton object of type X, and that this singleton object must be accessible through method def x: X ?
Declare a trait that must be implemented by your companion objects.
import org.slf4j.LoggerFactory

trait Meta[Base] {
  val logger = LoggerFactory.getLogger(getClass)
}
Create a base trait for your classes; sub-classes have to override the meta method.
trait Base {
  def meta: Meta[Base]
  def logger = meta.logger
}
A class Whatever with a companion object:
object Whatever extends Meta[Base]

class Whatever extends Base {
  def meta = Whatever
  def doSomething = {
    logger.info("oops")
  }
}
In this way you only need to have a reference to the meta object.
We can use the Whatever class like this.
object Sample {
  def main(args: Array[String]) {
    val whatever = new Whatever
    whatever.doSomething
  }
}
I'm not sure I understand your question completely. So I apologize up front if this is not the answer you are looking for.
Define an object where you put your logger, then create a companion trait.
object Loggable {
  private val logger = "I'm a logger"
}

trait Loggable {
  import Loggable._
  def debug(msg: String) {
    println(logger + ": " + msg)
  }
}
So now you can use it like this:
scala> abstract class Abstraction
scala> class Implementation extends Abstraction with Loggable
scala> val test = new Implementation
scala> test.debug("error message")
I'm a logger: error message
Does this answer your question?
I think you cannot automatically get the corresponding singleton object of a class or require that such a singleton exists.
One reason is that you cannot know the type of the singleton before it is defined. I'm not sure if this helps or if it is the best solution to your problem, but if you want to require some meta object to be defined with a specific trait, you could define something like this:
trait HasSingleton[Traits] {
  def meta: Traits
}

trait Log {
  def classname: String
  def log { println(classname) }
}

trait Debug {
  def debug { print("Debug") }
}

class A extends HasSingleton[Log] {
  def meta = A // needs to be defined with a singleton (or any object which inherits from Log)
  def f {
    meta.log
  }
}

object A extends Log {
  def classname = "A"
}

class B extends HasSingleton[Log with Debug] { // we want to use Log and Debug here
  def meta = B
  def g {
    meta.log
    meta.debug
  }
}

object B extends Log with Debug {
  def classname = "B"
}
(new A).f
// A
(new B).g
// B
// Debug