I can extend my Scala trait Foo with additional methods via an implicit class:
trait Foo {
  def bar: String
}

object FooExtensions {
  object implicits {
    implicit class FooOps(foo: Foo) {
      def baz: String = "baz"
    }
  }
}
But can I mock out those methods?
import org.mockito.Mockito
import org.scalatest.WordSpec
import org.scalatest.mockito.MockitoSugar

class MySpec extends WordSpec with MockitoSugar {
  "My mock" should {
    "handle methods from implicit classes" in {
      import FooExtensions.implicits._
      val foo = mock[Foo]
      Mockito.when(foo.baz).thenReturn("bix") // fails at runtime
    }
  }
}
This compiles, but fails with
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
Also, this error might show up because:
1. you stub either of: final/private/equals()/hashCode() methods.
Those methods *cannot* be stubbed/verified.
Mocking methods declared on non-public parent classes is not supported.
2. inside when() you don't call method on mock but on some other object.
Is it possible to mock methods added via implicit classes? Hopefully with Mockito (or mockito-scala) but I'm interested in any approach that works.
The thing about extension methods is that they are basically syntactic sugar:
trait Foo

implicit class ExtensionMethods(foo: Foo) {
  def bar: String = "bar"
}
foo.bar
is equal to
new ExtensionMethods(foo).bar
So mocking:
Mockito.when(foo.bar).thenReturn("bix")
becomes:
Mockito.when(new ExtensionMethods(foo).bar).thenReturn("bix")
I think there is no workaround. Perhaps PowerMock could let you intercept the class constructor..., but with plain Mockito it is impossible.
Usually, it is not a problem though. That is because either:
you put into extension methods only behavior that depends on the extended value and the passed parameters (such an extension method is quite often a pure function that doesn't require mocking); if you want to change something there, you change the input, or
if the behavior should change, you implement it inside a type class, and make the extension method use that type class to inject the behavior:
trait Bar {
  def bar: String
}
object Bar {
  implicit val defaultBar: Bar = new Bar { def bar = "bar" }
}

implicit class ExtensionMethods(foo: Foo) {
  def bar(implicit bar: Bar): String = bar.bar
}
// in test
implicit val overriddenBar: Bar = ...
assert(foo.bar === "sth")
On a side note: the more functional you get, the less you will need to mock things, as everything will depend only on the input passed in, and a cascade of mocks becomes a code smell: too tight coupling, too large interfaces, etc. The problem is that many Java libraries do not even follow SOLID principles, which makes them both hard to use and test with FP and bad OOP in their own right. I mention this in case you feel mocking is the only way to go in your case.
The only way to achieve that is to use an implicit conversion rather than an implicit class.
This is a hack intended to show how it could be achieved, but I'd urge you to take a look at the code and ask why you actually need to do this.
So, following your example, you could modify the code to look like this:
trait Foo {
  def bar: String
}

object FooExtensions {
  object implicits {
    implicit def fooToOps(foo: Foo): FooOps = new FooOps(foo)

    class FooOps(foo: Foo) {
      def baz: String = "baz"
    }
  }
}
and your test
import org.scalatest.WordSpec
import org.mockito.MockitoSugar
import FooExtensions.implicits.FooOps

class MySpec extends WordSpec with MockitoSugar {
  "My mock" should {
    "handle methods from implicit classes" in {
      val fooOps = mock[FooOps]
      implicit def fooToOps(foo: Foo): FooOps = fooOps
      val foo = mock[Foo]
      when(foo.baz) thenReturn "bix" // works
    }
  }
}
The other thing to consider is that your production code needs to take an implicit parameter of the shape Foo => FooOps, so that when you call the method under test the implicit mock is the conversion that gets provided...
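For illustration, a minimal sketch of that shape (the Service class and its describe method are made-up names, not from the question):

import FooExtensions.implicits.FooOps

// Hypothetical production class: the conversion is an implicit parameter,
// so a test can supply one that always returns a mocked FooOps.
class Service {
  def describe(foo: Foo)(implicit toOps: Foo => FooOps): String =
    toOps(foo).baz
}

In production, import FooExtensions.implicits._ should satisfy toOps via the fooToOps conversion; in a test you bring into scope (or pass explicitly) a Foo => FooOps that returns the mock.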
As I said, you can make it work like this, but I agree with Mateusz that you shouldn't need to
I've got the following helper method in my project:
import java.io.Closeable
import scala.util.Try

def close(c: Closeable) {
  Option(c).foreach(s => Try(s.close))
}
I've got some classes that have a close method but do not implement Closeable. If I change the helper method to use structural types I can still use it on these classes:
def close(c: { def close() }) {
  Option(c).foreach(s => Try(s.close))
}
However, this introduces the use of reflection, which is something I'd like to avoid at runtime.
Is there a way to use something similar to structural typing without inducing runtime reflection?
I.e., in the same way that Shapeless allows generic access to fields, could implicit parameters + macros perhaps be used to access methods?
Use traits to implement the typeclass pattern.
When I wrote my original solution it was a bit rough around the edges, as I assumed a quick search for "convert structural bounds to context bounds" would pull up better explanations than I could write. That doesn't seem to be the case. Below is a compiling solution.
object Closeables {
  trait MyCloseable[A] {
    def myClose(a: A): Unit
  }

  object MyCloseable {
    implicit object MyCanBeClosed extends MyCloseable[CanBeClosed] {
      def myClose(c: CanBeClosed) = c.nonStandardClose()
    }
  }
}

class CanBeClosed {
  def nonStandardClose(): Unit = println("Closing")
}

import Closeables._

object Test extends App {
  def functionThatCloses[A: MyCloseable](a: A) {
    implicitly[MyCloseable[A]].myClose(a)
  }

  def functionThatClosesExplicit[A](a: A)(implicit ev: MyCloseable[A]) {
    ev.myClose(a)
  }

  val c = new CanBeClosed
  functionThatCloses(c)
  functionThatClosesExplicit(c)

  functionThatCloses(c)(MyCloseable.MyCanBeClosed)
  functionThatClosesExplicit(c)(MyCloseable.MyCanBeClosed)
}
For each type of class that can be accepted by functionThatCloses you must define an implicit object in MyCloseable (the type class's companion object).
The compiler looks at the context bound in functionThatCloses and converts it to a function with the signature of functionThatClosesExplicit.
The compiler then 'finds' the implicit 'evidence' in the MyCloseable companion object and uses that.
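To illustrate the one-instance-per-type point, here is a hedged sketch (MoreCloseables and JavaIoCloseable are names I made up) adding an instance for java.io.Closeable itself:

import java.io.Closeable
import Closeables.MyCloseable

object MoreCloseables {
  // Hypothetical extra instance: any value statically typed as java.io.Closeable
  // can now be passed to functionThatCloses as well.
  implicit object JavaIoCloseable extends MyCloseable[Closeable] {
    def myClose(c: Closeable): Unit = c.close()
  }
}

// usage, assuming the Test object above:
// import MoreCloseables._
// Test.functionThatCloses(new java.io.StringReader("x"): Closeable)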
Imagine I have a service:
class ServiceA(serviceB: ServiceB) {
  import Extractor._

  def send(typeA: A) = serviceB.send(typeA.extract)
}

object Extractor {
  implicit class Extractor(typeA: A) {
    def extract = ???
  }
}
I want the extract method to be implicitly defined because it doesn't directly relate to the A type/domain and is a solution-specific, ad hoc extension.
Now I would like to write a very simple unit test that confirms that serviceB.send is called.
For that, I mock serviceB and pass a mocked A to send. Then I could just assert that serviceB.send was called with the mocked A.
As seen in the example, the send method also does some transformation on the typeA parameter, so I would need to stub the extract method to return my specified value. However, A doesn't have an extract method; it comes from the implicit class.
So the question is: how do I mock out the implicit class in the example above, given that imports are not first-class citizens?
If you want to specify a bunch of customised extract methods, you can do something like this:
sealed trait Extractor[T] {
  // or whatever signature you want
  def extract(obj: T): String
}

object Extractor {
  implicit case object IntExtractor extends Extractor[Int] {
    def extract(obj: Int): String = s"I am an int and my value is $obj"
  }

  implicit case object StringExtractor extends Extractor[String] {
    def extract(obj: String): String = s"I am a string and my value is $obj"
  }

  def apply[A: Extractor](obj: A): String = {
    implicitly[Extractor[A]].extract(obj)
  }
}
So you have basically a sealed type family that's pre-materialised through case objects, which are arguably only useful in a match. But that would let you decouple everything.
If you don't want to mix this into Extractor itself, call it something else and follow the same approach; you can then bring it all in with a context bound.
Then you can use this with:
println(Extractor(5))
For testing, simply override the available implicits if you need to. A bit of work, but not impossible, you can simply control the scope and you can spy on whatever method calls you want.
E.g. instead of import Extractor._, have some other object with test-only logic where you can use mocks or an alternative implementation.
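A hedged sketch of that idea (TestExtractors and StubIntExtractor are made-up names; note that because Extractor is sealed as written, such a stub must live in the same source file, so you may want to drop sealed if you take this route):

object TestExtractors {
  // Test-only instance. When imported it wins over the companion's IntExtractor,
  // because implicits in lexical scope are searched before the implicit scope.
  implicit case object StubIntExtractor extends Extractor[Int] {
    def extract(obj: Int): String = "stubbed value"
  }
}

// in the test:
// import TestExtractors._
// assert(Extractor(5) == "stubbed value")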
I'd like to write a test for a method which calls error().
IntEmptyStack.top is what I want to test with specs2:
abstract class IntStack {
  def push(x: Int): IntStack = new IntNonEmptyStack(x, this)
  def isEmpty: Boolean
  def top: Int
  def pop: IntStack
}

class IntEmptyStack extends IntStack {
  def isEmpty = true
  def top = error("EmptyStack.top")
  def pop = error("EmptyStack.pop")
}
And here's the specs I wrote so far:
import org.junit.runner.RunWith
import org.specs2.runner.JUnitRunner
import org.specs2.mutable.Specification

@RunWith(classOf[JUnitRunner])
class IntStackSpec extends Specification {
  "IntEmptyStack" should {
    val s = new IntEmptyStack
    "be empty" in {
      s.isEmpty must equalTo(true)
    }
    "raise error when top called" in {
      s.top must throwA[RuntimeException]
    }
  }
}
The error occurs at "raise error when top called" in {, with the message value must is not a member of Nothing. I think Scala infers s.top as Nothing, not Int as defined in the abstract class. In this case, how can I write the test without any errors?
Thanks for any comments/corrections to this question.
Example Reference: Scala By Example
The problem here is that Scala (and Java) allow subclasses to return more specific types than their superclasses in overridden methods. In this case, your method IntEmptyStack.top's return type is Nothing (which is a subtype of Int, because Nothing is at the bottom of the type hierarchy).
Evidently the specs2 implicit conversions necessary for you to write code like a must throwA[X] do not apply when the type of a is Nothing.
Change your declarations in IntEmptyStack as follows:
def top: Int = error("EmptyStack.top")
def pop: IntStack = error("EmptyStack.pop")
Alternatively, of course, you could accept that the correctness of your logic is being proven by the type system. That is, it is not possible to get the element at the top of an empty stack: the return type is Nothing! No tests are necessary.
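If you would rather keep IntEmptyStack unchanged, a hedged alternative sketch is to ascribe the static type IntStack in the spec, so that top is typed as Int and the specs2 matcher syntax applies:

"raise error when top called" in {
  // ascribing IntStack makes s.top have static type Int rather than Nothing
  val s: IntStack = new IntEmptyStack
  s.top must throwA[RuntimeException]
}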
Consider the following typical Scala 'pimp' code:
class PimpedA(a: A) {
  def pimp() = "hi"
}

implicit def pimpA(a: A) = new PimpedA(a)

new A() {
  pimp() // <--- does not compile
}
However, changing it to:
new A() {
  this.pimp()
}
makes it work.
Shouldn't it be the same to the Scala compiler?
EDIT: Is there any solution that can make it work without having to add the this.?
Not at all. For it to work, pimp needs to be either an object or an imported member of a value, and it is neither. A class has an "implicit" import this._, but there is no mechanism that auto-prepends this to calls to see if they compile.
In this case you should give the compiler a hint that pimp() is not a random function. When you write
this.pimp()
the compiler knows there is no pimp method on class A, so before giving up it searches for an implicit conversion in scope, finds it, and rewrites the call to
pimpA(this).pimp()
When you just call pimp(), the compiler doesn't know which object to pass to the pimpA(a: A) implicit conversion.
UPDATE
It is hard to understand what your goal is. I can only suggest making PimpedA a type class (Pimp[T] in this example).
trait Pimp[T] {
  def action(p: T): String
}

implicit object PimpA extends Pimp[A] {
  override def action(p: A) = "some actions related to A"
}

def pimp[T: Pimp](p: T) = implicitly[Pimp[T]].action(p)

class A {
  val foo = pimp(this)
}
scala> new A foo
res2: String = some actions related to A
I've looked at examples of logging in Scala, and they usually look like this:
import org.slf4j.LoggerFactory

trait Loggable {
  private lazy val logger = LoggerFactory.getLogger(getClass)

  protected def debug(msg: => AnyRef, t: => Throwable = null): Unit =
    {...}
}
This seems independent of the concrete logging framework. While this does the job, it also introduces an extraneous lazy val in every instance that wants to do logging, which might well be every instance of the whole application. This seems much too heavy to me, in particular if you have many "small instances" of some specific type.
Is there a way of putting the logger in the object of the concrete class instead, just by using inheritance? If I have to explicitly declare the logger in the object of the class, and explicitly refer to it from the class/trait, then I have written almost as much code as if I had done no reuse at all.
Expressed in a non-logging specific context, the problem would be:
How do I declare in a trait that the implementing class must have a singleton object of type X, and that this singleton object must be accessible through method def x: X ?
I can't simply define an abstract method, because there could only be a single implementation in the class. I want that logging in a super-class gets me the super-class singleton, and logging in the sub-class gets me the sub-class singleton. Or put more simply, I want logging in Scala to work like traditional logging in Java, using static loggers specific to the class doing the logging. My current knowledge of Scala tells me that this is simply not possible without doing it exactly the same way you do in Java, without much if any benefits from using the "better" Scala.
Premature Optimization is the root of all evil
Let's be clear first about one thing: if your trait looks something like this:
trait Logger { lazy val log = org.slf4j.LoggerFactory.getLogger(getClass) }
Then what you have not done is as follows:
You have NOT created a logger instance per instance of your type
You have neither given yourself a memory nor a performance problem (unless you have)
What you have done is as follows:
You have an extra reference in each instance of your type
When you access the logger for the first time, you are probably doing some map lookup
Note that, even if you did create a separate logger for each instance of your type (which I frequently do, even if my program contains hundreds of thousands of these, so that I have very fine-grained control over my logging), you almost certainly still will neither have a performance nor a memory problem!
One "solution" is (of course), to make the companion object implement the logger interface:
object MyType extends Logger
class MyType {
import MyType._
log.info("Yay")
}
How do I declare in a trait that the implementing class must have a singleton object of type X, and that this singleton object must be accessible through method def x: X?
Declare a trait that must be implemented by your companion objects.
import org.slf4j.LoggerFactory

trait Meta[Base] {
  val logger = LoggerFactory.getLogger(getClass)
}
Create a base trait for your classes; sub-classes have to override the meta method.
trait Base {
  def meta: Meta[Base]

  def logger = meta.logger
}
A class Whatever with a companion object:
object Whatever extends Meta[Base]

class Whatever extends Base {
  def meta = Whatever

  def doSomething = {
    logger.info("oops")
  }
}
In this way you only need to have a reference to the meta object.
We can use the Whatever class like this.
object Sample {
  def main(args: Array[String]) {
    val whatever = new Whatever
    whatever.doSomething
  }
}
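To show what the per-class cost actually is, here is a hedged sketch of a second class (AnotherThing is a made-up name): each class that wants its own logger just adds its own companion extending Meta and points meta at it, so every class logs through its own singleton:

object AnotherThing extends Meta[Base]

class AnotherThing extends Base {
  def meta = AnotherThing

  // logged via AnotherThing's companion, not Whatever's
  def doSomethingElse = logger.info("hello from AnotherThing")
}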
I'm not sure I understand your question completely, so I apologize up front if this is not the answer you are looking for.
Define an object where you put your logger, then create a companion trait.
object Loggable {
  private val logger = "I'm a logger"
}

trait Loggable {
  import Loggable._

  def debug(msg: String) {
    println(logger + ": " + msg)
  }
}
So now you can use it like this:
scala> abstract class Abstraction
scala> class Implementation extends Abstraction with Loggable
scala> val test = new Implementation
scala> test.debug("error message")
I'm a logger: error message
Does this answer your question?
I think you cannot automatically get the corresponding singleton object of a class or require that such a singleton exists.
One reason is that you cannot know the type of the singleton before it is defined. I am not sure if this helps or if it is the best solution to your problem, but if you want to require some meta object to be defined with a specific trait, you could define something like:
trait HasSingleton[Traits] {
  def meta: Traits
}

trait Log {
  def classname: String
  def log { println(classname) }
}

trait Debug {
  def debug { print("Debug") }
}

class A extends HasSingleton[Log] {
  def meta = A // needs to be defined with a singleton (or any object which inherits from Log)

  def f {
    meta.log
  }
}

object A extends Log {
  def classname = "A"
}

class B extends HasSingleton[Log with Debug] { // we want to use Log and Debug here
  def meta = B

  def g {
    meta.log
    meta.debug
  }
}

object B extends Log with Debug {
  def classname = "B"
}
(new A).f
// A
(new B).g
// B
// Debug