Mocking case classes as properties of an object - scala

Consider a case class:
case class configuredThing[A, B](param: String) {
  val ...
  def ...
}
I was able to partially write tests for configuredThing, which has some methods that make external calls to other services.
This case class is used elsewhere:
object actionObject {
  private lazy val thingA = configuredThing[A, B]("A")
  private lazy val thingB = configuredThing[C, D]("B")
  def ...
}
In the real code the types A, B, C, and D are concrete; they are not native Scala types but come from a third-party package we use to interface with external services.
When writing tests for this object, the requirement is that no external calls be made, so that the logic in actionObject itself is what gets tested. This led me to look into how to mock out configuredThing within actionObject, in order to make assertions on the interactions with the configuredThing instances. However, it is unclear how to do this.
Looking at ScalaMock's documentation, this would need to be done with the Generated mocks system, which is said to "rely on the ScalaMock compiler plugin". However, that plugin appears not to have been released since Scala 2.9.2, see here.
So, my question is this: how can this be tested?
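Absent the compiler plugin, one common workaround is to avoid mocking the case class at all and instead inject it behind a small trait. A minimal sketch under assumptions: the Thing trait, ActionLogic, the act method, and the Int/String type arguments are hypothetical stand-ins for the real types.

```scala
// Hypothetical sketch: put the external-facing operation behind a trait
// so tests can substitute a stub; all names here are illustrative.
trait Thing[A, B] {
  def call(input: A): B // stands in for the method that hits the external service
}

case class ConfiguredThing[A, B](param: String) extends Thing[A, B] {
  def call(input: A): B = ??? // the real implementation talks to the service
}

// The object delegates to a trait holding the logic, with the things overridable.
trait ActionLogic {
  protected def thingA: Thing[Int, String] // Int/String stand in for the real A, B
  def act(n: Int): String = thingA.call(n)
}

object ActionObject extends ActionLogic {
  protected lazy val thingA = ConfiguredThing[Int, String]("A")
}

// In a test, mix in a stub instead of the real ConfiguredThing:
val testLogic = new ActionLogic {
  protected val thingA = new Thing[Int, String] {
    def call(input: Int): String = s"stubbed-$input"
  }
}
```

Production code keeps using the object, while tests exercise the same logic through the stub, so no external call is ever made and no mocking plugin is needed.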

Related

Scala: Dependency Injection via Reader and compatibility

When we implement DI via Reader, we make a dependency a part of our method signature. Assume we have (without implementations):
trait Service1 { def f1: Int = ??? }
trait Service2 { def f2: Reader[Service1, Int] = ??? }
type Env = (Service1, Service2)

def c: Reader[Env, Int] = ??? // use Service2.f2 here
Now suppose f2 needs an additional service for its implementation, say:
trait Service3
type Service2Env = (Service1, Service3)

// new dependencies on both:
trait Service2 { def f2: Reader[Service2Env, Int] = ??? }
This breaks existing clients: they can no longer use Service2.f2 without also providing a Service3.
With DI via injection (constructor or setter), as is common in OOP, I would use only Service2 as a dependency of c. How it is constructed, and what its list of dependencies is, I do not care. From that point on, any new dependency in Service2 keeps the signature of the c function unchanged.
How is it solved in FP way? Are there options? Is there a way to inject new dependencies, but somehow protect customers from the change?
Is there a way to inject new dependencies, but somehow protect customers from the change?
That would kind of defeat the purpose, as using Reader (or alternatively Final Tagless or ZIO Environment) is a way to explicitly declare (direct and indirect) dependencies in the type signature of each function. You are doing this to be able to track where in your code these dependencies are used -- just by looking at a function signature you can tell if this code might have a dramatic side-effect such as, say, sending an email (or maybe you are doing this for other reasons, but the result is the same).
You probably want to mix-and-match this with constructor-injection for the dependencies/effects that do not need that level of static checking.
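To make the mix-and-match concrete, here is a minimal sketch with a hand-rolled Reader so the snippet is dependency-free (in practice you would likely use cats.data.Reader or similar; the service methods and the arithmetic are illustrative). Service2 takes its extra dependency through its constructor, so the Reader environment seen by clients stays Service1 only:

```scala
// Hand-rolled Reader, just enough for the sketch.
case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] = Reader(r => f(run(r)).run(r))
}

trait Service1 { def f1: Int }
trait Service3 { def bonus: Int }

// Service2 hides its *own* wiring (Service3) behind the constructor,
// so its Reader environment -- and every client's signature -- stays Service1 only.
class Service2(s3: Service3) {
  def f2: Reader[Service1, Int] = Reader(s1 => s1.f1 + s3.bonus)
}

val s2 = new Service2(new Service3 { val bonus = 10 })

// Clients still depend only on Service1 in the type signature.
def c: Reader[Service1, Int] = s2.f2.map(_ * 2)

val result = c.run(new Service1 { val f1 = 5 }) // (5 + 10) * 2
```

Adding a Service4 to Service2's constructor would change how s2 is built, but not the signature of c, which is exactly the compatibility property the question asks about.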

How to keep single responsibility when using self-type in Scala?

Using self-types for dependency injection exposes the public methods of the other traits, which breaks the single responsibility principle. Let me explain with an example:
trait Output {
  def output(str: String): Unit
}

trait ConsoleOutput extends Output {
  override def output(str: String): Unit = println(str)
}

class Sample {
  self: Output =>

  def doSomething() = {
    // I do something stupid here!
    output("Output goes here!")
  }
}

val obj = new Sample with ConsoleOutput
obj.output("Hey there")
My Sample class depends on the Output trait, and of course I would like to use the Output trait's methods in my Sample class. But with the above code example, my Sample class exposes an output method that is not part of its own functionality, breaking Sample's single responsibility.
How can I avoid it and keep using self-type and cake-pattern?
Responsibility of providing the output still lies with the component implementing Output. The fact that your class provides access to it is no different from something like:
class Foo(val out: Output)
new Foo(new ConsoleOutput{}).out.output
Sure, you can make out private here, but you could also have .output protected in ConsoleOutput if you don't want it accessible from outside as well.
(The answer to your comment in the other answer is that if you also want to use it "stand-alone", then you subclass it, and make output public in the subclass).
The self type is not really relevant here. Inheriting from another class exposes the public methods of that class regardless of any self type. So any inheritance from a class with public methods can be said to break the single responsibility principle.
If the trait is intended to be used for dependency injection, then it should make its methods protected so that they are not exposed:
trait Output {
  protected def output(str: String): Unit
}

trait ConsoleOutput extends Output {
  protected override def output(str: String): Unit = println(str)
}
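If the same trait must also be usable stand-alone, as the parenthetical in the first answer suggests, a subclass can widen the access back to public (Scala allows widening, never narrowing, when overriding). A small sketch; PublicConsoleOutput is an illustrative name:

```scala
trait Output {
  protected def output(str: String): Unit
}

trait ConsoleOutput extends Output {
  protected override def output(str: String): Unit = println(str)
}

// Widening access in a subtype is allowed: protected -> public.
trait PublicConsoleOutput extends ConsoleOutput {
  override def output(str: String): Unit = super.output(str)
}

val standalone = new PublicConsoleOutput {}
standalone.output("visible again")
```

Classes injecting ConsoleOutput via a self-type keep output hidden, while stand-alone users go through PublicConsoleOutput.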
Comment on the accepted answer
The accepted answer claims that the "Responsibility of providing the output still lies with the component implementing Output". This is incorrect, and shows confusion between a type and an implementation.
The behaviour of an object is specified by its type, not its implementation (Liskov substitution principle). The type is the contract that tells the user what the object can do. Therefore it is the type that specifies the responsibilities, not the implementation.
The type Sample with ConsoleOutput has the output method from the Output type and the doSomething method from the Sample type. Therefore it has the responsibility of providing an implementation of both of those methods. The fact that the implementation of output is in ConsoleOutput is irrelevant to the type, and is therefore irrelevant to who is responsible for it.
The Sample with ConsoleOutput object could easily override the implementation of output in which case it would clearly be responsible for that method, not ConsoleOutput. The fact that Sample with ConsoleOutput chooses not to change the implementation of output does not mean that it is not responsible for it. The responsibilities of an object do not change when the implementation changes.
Explanation of the Single Responsibility Principle
This principle is the first of the five SOLID principles of software engineering.
As Wikipedia explains, "The single responsibility principle [...] states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class."
In other words, don't do this:
class MultipleResponsibilities {
  def computePi(places: Int): List[Int] = ???
  def countVowels(text: String): Int = ???
}
but do this instead:
class PiComputer {
  def computePi(places: Int): List[Int] = ???
}

class VowelCounter {
  def countVowels(text: String): Int = ???
}
computePi and countVowels are different parts of the functionality of the program, and therefore they should be encapsulated in different classes.
The third SOLID principle is the Liskov Substitution Principle which says that the functionality of an object should depend solely on the type and should not be affected by the implementation. You should be able to change the implementation and still use the object in the same way with the same results.
Since the functionality of an object is fully defined by the type of an object, the responsibilities of an object are also fully defined by the type. Changing the implementation does not change the responsibilities.

Is exporting third party library dependencies good programming practice?

I am using Intellij 14 and in the module settings, there is an option to export dependencies.
I noticed that when I write objects that extend traits, I need to select export in the module settings when other modules use these objects.
For example,
object SomeObj extends FileIO
would require me to export the FileIO dependency.
However, if I write a companion class that creates a new instance when the object is called, the exporting is no longer necessary.
object SomeObject {
  private val someObject = new SomeObject()
  def apply() = someObject
}

private[objectPkg] class SomeObject() extends FileIO {}
This code is more verbose and kind of a hack to the singleton pattern for Scala. Is it good to export third party dependencies with your module? If not, is my pattern the typical solution with Scala?
This all comes down to code design principles in general. Basically, if you may later switch the underlying third-party library, or your system must be flexible enough to be ported to some other library, then hiding the implementation behind a facade is a must.
Often there is a ready-made set of interfaces in Java/Scala that third parties implement, and you can just use those as part of your facade to the rest of the system; overall, that is the Java way. If that is not the case, you will need to derive the interfaces yourself. Whether that is worth it, everyone estimates for themselves in context.
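A minimal sketch of such a facade (ThirdPartyFileIO, FileStore, and the fake implementation are all illustrative stand-ins): the rest of the system depends only on your own trait, so nothing from the third-party library needs to be exported:

```scala
// Stand-in for the external library's type; in real code this would be imported.
trait ThirdPartyFileIO { def readAll(path: String): String }

// Our facade: the only type other modules ever see.
trait FileStore {
  def read(path: String): String
}

// The implementation wraps the third-party type; in real code you would
// make this private to the package so the dependency cannot leak.
object ThirdPartyBackedStore extends FileStore {
  private val io = new ThirdPartyFileIO {
    def readAll(path: String): String = s"contents of $path" // fake impl
  }
  def read(path: String): String = io.readAll(path)
}

object FileStore {
  def default: FileStore = ThirdPartyBackedStore // clients depend on FileStore only
}
```

Since the returned type is FileStore, not ThirdPartyBackedStore, the third-party name never appears in client code and the dependency does not need to be exported.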
As for your case: keep in mind that in Java/Scala you export names, and if you use your class (which extends FileIO) in any way outside its defining code, that class is publicly accessible and its type is exported/leaked as well. Scala should raise a compile error if a private class escapes its visibility scope (so in your second version of SomeObject that may be the case).
Consider this example: I often use the Typesafe Config library in my applications. It has convenient methods, but I typically leave room for a possible separation (or rather my own extension):
package object config {
  object Config {
    private val conf: TypeSafeConfig = ConfigFactory.load()
    def toTypeSafeConfig: TypeSafeConfig = conf
  }

  @inline implicit def confToTypeSafeConfig(conf: Config.type): TypeSafeConfig =
    conf.toTypeSafeConfig
}
The implicit conversion just lets me call all TypeSafeConfig methods on my Config, and it has a bunch of convenient methods. Theoretically, in the future I could remove my implicit and implement the methods I use directly in the Config object, but I can hardly imagine why I would spend the time on that. This is an example of a leaked implementation that I don't consider problematic.

Scala Testing: Replace function implementation

Using ScalaTest, I want to replace a function implementation in a test case. My use case:
object Module {
  private def currentYear() = DateTime.now().year.get

  def doSomething(): Unit = {
    val year = currentYear()
    // do something with the year
  }
}
I want to write a unit test testing Module.doSomething, but I don't want this test case to depend on the actual year the test is run in.
In dynamic languages I often used a construct that can replace a functions implementation to return a fixed value.
I'd like my test case to change the implementation of Module.currentYear to always return 2014, no matter what the actual year is.
I found several mocking libraries (Mockito, ScalaMock, ...), but they could all only create new mock objects. None of them seemed able to replace the implementation of a method.
Is there a way to do that? If not, how could I test code like that while not being dependent on the year the test is run in?
One way would be to just make do_something_with_the_year accessible to your test case (make it package-private, for example). This is also nice because it separates looking up dependencies from actually using them.
Another way would be to put your logic in a trait, make the currentYear method protected, and let the object be an instance of that trait. That way you could create a custom instance of the trait in a test and wouldn't need any mocking library.
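A sketch of that trait approach, using java.time instead of the question's DateTime, and returning a value rather than Unit purely so the effect is observable (the "report-" string is an illustrative stand-in for the real logic):

```scala
import java.time.Year

// The logic lives in a trait; currentYear is protected and overridable.
trait ModuleLogic {
  protected def currentYear(): Int = Year.now().getValue
  def doSomething(): String = s"report-${currentYear()}"
}

// Production keeps the singleton shape.
object Module extends ModuleLogic

// The test overrides currentYear with a fixed value -- no mocking library needed.
val fixedModule = new ModuleLogic {
  override protected def currentYear(): Int = 2014
}
```

fixedModule.doSomething() now behaves as if the year were always 2014, regardless of when the test runs.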
ScalaMock can mock singleton objects; it says so right here: http://paulbutcher.com/2011/11/06/scalamock-step-by-step/
As well as traits (interfaces) and functions, it can also mock:
Classes
Singleton and companion objects (static methods) ...

Self-type traits with abstract classes

I'm facing a design issue that my poor Scala level cannot handle.
I have an abstract class, let's say:
abstract class AbstractJob {
  def run: Long
  implicit val someVal: String
}
Then, I have "platforms" on which jobs can be run.
I want to declare subclasses containing real "jobs" (no need to know what it should actually do). Ideally, I want them to be declared this way:
class OneJob extends AbstractJob with Platform1 with Platform2 {
  override def run = {
    // ... some code returning a Long result ...
  }

  override implicit val someVal = "foo"
}
Indeed, jobs can have multiple platforms on which they can be run.
However, the run method on jobs must be launched by the platforms. Therefore I tried to use a self-type:
trait Platform1 { self: AbstractJob =>
  def runOnPlatform = someFunctionRelatedToPlatform(run(someVal))
}
Unfortunately, when accessing someVal (in any job extending AbstractJob), I get a null value. I came to the conclusion that the self-type in a trait refers to the declared class (and not the actual class, which in my case is a subtype).
I tried to define another type type Jobs :> AbstractJob for the self-type in the trait but that didn't work.
I have a limited number of fallback solutions, but I want to use the "full power" of Scala and spare developers from writing a lot of redundant "plumbing" code (related to the platforms and the AbstractJob class in my example).
One fallback: AbstractJob calls runOnPlatform directly, with an "abstract" function to be implemented in the concrete jobs. In that case my users would only write the business code they need, but I'm fairly sure I can do better using Scala concepts; it feels like I'm just using Java (and generally OOP) concepts in Scala.
Another fallback: let the users write a hell of a lot of redundant code... Obviously I'm avoiding this as much as possible!
I hope I'm clear enough!
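No answer is recorded here, but the null is consistent with a val initialization-order problem: trait initializers run before the subclass body, so a strict someVal read during construction is still null. A sketch of one way around it, declaring the abstract member as a def and overriding it with a lazy val (names follow the question; someFunctionRelatedToPlatform is replaced by simple arithmetic purely for illustration):

```scala
abstract class AbstractJob {
  def run: Long
  implicit def someVal: String // def instead of val: no initialization-order trap
}

trait Platform1 { self: AbstractJob =>
  // This initializer runs during construction; with a strict val in the
  // subclass it would observe null, but a def or lazy val is safe.
  val runOnPlatform: Long = run + someVal.length
}

class OneJob extends AbstractJob with Platform1 {
  override def run: Long = 42L
  override implicit lazy val someVal: String = "foo" // a lazy val also satisfies the def
}

val job = new OneJob
// job.runOnPlatform evaluates run (42) plus someVal.length (3)
```

Because a stable lazy val can implement an abstract def, concrete jobs keep the one-line override while the platforms' trait initializers can safely touch someVal.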