I am writing unit tests for a class (class A) that takes another class (class B) as its constructor parameter, which in turn takes another class (class C) as its parameter. I need to mock class B in order to test class A, but mock doesn't work for classes that take constructor parameters.
So, following this answer (ScalaMock. Mock a class that takes arguments), I created a trait (trait BsTrait) and made my class B extend BsTrait.
Class I want to test: class A(b: BsTrait) {}
Class B: class B(c: C) {}
Class C: class C {}
Trait of class B: trait BsTrait {}
My unit test:
val mockFactory = mockFunction[C, B]
val mockClient = mock[BsTrait]
mockFactory.expects(new C).returning(mockClient)
Error: /path/to/file/Test.scala:63: type mismatch; found : com.example.BsTrait required: com.example.B
mockFactory.expects(new C).returning(mockClient)
I tried adding a return type to mockFunction as well, but it threw the same error.
I know this is not directly a solution, though maybe it actually is.
In my experience, you should ditch the mocking framework in 99% of the cases where you want it, and just write mock implementations of traits (or interfaces in other languages). Yes, it will feel like boilerplate, but:
Often it turns out you will write less code: the mock code is short to begin with, but it almost always accumulates workarounds to allow all sorts of calls, sometimes even calls your test does not care about but that have to be there. You end up with that in a lot of the tests, just for the sake of the test.
Once you mock a function/method in your framework with some test behaviour, people often don't notice when it isn't needed for the test any more, and forget to delete it. Ouch: dead code that nothing warns us is dead.
Code you control is easier to extend or modify to your needs. Your own problem is a good example: had your mock just been a class, you wouldn't have needed to ask a question. Instead your mock is hidden in all sorts of reflection, perhaps bytecode manipulation (I don't know if ScalaMock does this, but a lot of frameworks do), and most likely other nasty things that are impossible to debug.
Framework mocks are often hard and unwieldy to maintain. If you add a method to your trait, you can end up needing to update a lot of tests, even though it should not affect them, just because those tests use the mock in some hidden way.
You don't fall into the mocking-framework pitfall of testing how you implemented your function/method rather than what it does. (E.g., is it important that your REST client specifically calls the method named get, or is it important that it performs a GET request, no matter whether it calls exchange, get, or something else?)
So even though it feels like overkill, seriously consider doing it anyway. You will most likely write less code in the long run. The code you write is clearer and more readable; if a function or method is no longer used, you remove it from your trait and get a compiler warning on your mock. Your question would automatically have been solved, because you'd just extend your mock's behaviour enough to get it going. And you are more likely to actually write good tests.
Lastly, if you follow this suggestion, just one thing: keep your mocks independent. Do not let B take a parameter C in your mock. Make your B work with lists, maps, or whatever you need. The point is to test A, not B and C :)
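To make that concrete, here is a minimal sketch of a hand-written mock, assuming BsTrait exposes a single lookup method (the real trait's members aren't shown in the question, so the names here are hypothetical):

trait BsTrait {
  def fetch(key: String): Option[String] // hypothetical member
}

// Hand-rolled stub backed by a plain Map: no framework, no reflection,
// and no C anywhere in sight.
class StubB(data: Map[String, String]) extends BsTrait {
  override def fetch(key: String): Option[String] = data.get(key)
}

class A(b: BsTrait) {
  def describe(key: String): String = b.fetch(key).getOrElse("missing")
}

// In a test:
val a = new A(new StubB(Map("id" -> "42")))
assert(a.describe("id") == "42")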
I am not sure of the keywords for this pattern; sorry if the question is not clear.
If you have:
case class MyFancyWrapper(
  somethingElse: Any,
  heavyComplexObject: CrazyThing
)
val w = MyFancyWrapper(???, complexThing)
I want to be able to call w.method with the method coming from complexThing. I tried to extend CrazyThing, but it is a trait and I don't want to implement all the methods; that would be very tedious. I also don't want to have to write:
def method1 = heavyComplexObject.method1
...
for all of them.
Any solution?
Thanks.
You can do this with macros, but I agree with Luis that this is overkill. Macros are intended for repetitive boring things, not one-time boring things. Also, this is not as trivial as it sounds, because you probably don't want to pass through all the methods (you probably still want your own hashCode and equals). Finally, macros have bad IDE support, so most probably no auto-completion for all those methods. On the other hand, if you use a good IDE (like IDEA), there is most probably an action like "Delegate Methods" that will generate most of the code for you. You will still have to change the return type from Unit to MyFancyWrapper and add returning this at the end of each method, but this can easily be done with mass-replace operations (hint: replace "}" with "this }" and automatically re-formatting the code should do the trick).
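For illustration, here is roughly what the generated delegation looks like after those tweaks; the members of CrazyThing are made up, since they aren't shown in the question:

trait CrazyThing { // hypothetical members for illustration
  def method1(): Unit
  def method2(arg: Int): Unit
}

case class MyFancyWrapper(
  somethingElse: Any,
  heavyComplexObject: CrazyThing
) {
  // Each delegate forwards the call and returns the wrapper for chaining.
  def method1(): MyFancyWrapper = { heavyComplexObject.method1(); this }
  def method2(arg: Int): MyFancyWrapper = { heavyComplexObject.method2(arg); this }
}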
You can use an implicit conversion to make all the methods of heavyComplexObject directly available on MyFancyWrapper:
implicit def toHeavy(fancy: MyFancyWrapper): CrazyThing = fancy.heavyComplexObject
This needs to be in scope when the method is called.
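A minimal self-contained sketch of how that plays out; CrazyThing's member is made up for illustration:

trait CrazyThing {
  def method1(): Unit // hypothetical member
}

case class MyFancyWrapper(somethingElse: Any, heavyComplexObject: CrazyThing)

implicit def toHeavy(fancy: MyFancyWrapper): CrazyThing = fancy.heavyComplexObject

val w = MyFancyWrapper((), new CrazyThing { def method1(): Unit = println("called") })
w.method1() // the compiler rewrites this to toHeavy(w).method1()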
In the comments you indicate that you want to return this so that you can chain multiple calls on the same object:
w.method1.method2.method3
Don't do this
While this is a common pattern in non-functional languages, it is bad practice in Scala for two reasons:
This pattern inherently relies on side-effects, which is the antithesis of functional programming.
It is confusing, because in Scala chaining calls in this way is used to implement a data pipeline, where the output of one function is passed as the input to the next.
It is much clearer to write separate statements so that it is obvious that the methods are being called on the same object:
w.method1()
w.method2()
w.method3()
(It is also conventional to use () when calling methods with side effects.)
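For contrast, here is the kind of data pipeline that chained calls conventionally signal in Scala, where each call's output feeds the next:

val result = List(1, 2, 3)
  .map(_ * 2)    // List(2, 4, 6)
  .filter(_ > 2) // List(4, 6)
  .sum           // 10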
I'm new to BDD, and to the whole testing world.
I'm trying to follow BDD practices while writing a simple linear algebra library in Swift, so there will be many value-object types like Matrix, Vector, etc. When writing code, I suppose I still need to stick to the TDD principle (am I right?):
Not writing any single line of code without a failing test
To implement a value-object type, I need to make it conform to the Equatable protocol and implement its == operator. This is adding code, so I need a failing test first. How do I write a spec for this kind of scenario?
One may suggest some approach like:
describe("Matrix") {
it("should be value object") {
let aMatrix = Matrix<Double>(rows: 3, cols:2)
let sameMatrix = Matrix<Double>(rows: 3, cols:2)
expect(sameMatrix) == aMatrix
let differentMatrix = Matrix<Double>(rows: 4, cols: 2)
expect(differentMatrix) != aMatrix
}
}
This would be ugly boilerplate for two reasons:
There may be plenty of value-object types, and I'd need to repeat this for all of them.
There may be plenty of cases that would cause two objects to be unequal. Taking the spec above as an example, an implementation of == like return lhs.rows == rhs.rows would pass the test. In order to reveal this "bug", I'd need to add another expectation like expect(matrixWithDifferentColumnCount) != aMatrix. And again, this kind of repetition happens for all value-object types.
So, how should I test this isEqual (or operator ==) method elegantly? Or shouldn't I test it at all?
I'm using Swift, with Quick as the testing framework. Quick provides a mechanism called shared examples to reduce boilerplate, but since Swift is a statically typed language and Quick's shared examples don't support generics, I can't directly use a shared example to test value objects.
I came up with a workaround, but I don't consider it an elegant one.
Elegance is the enemy of test suites. Test suites are boring. Test suites are repetitive. Test suites are not "DRY." The more clever you make your test suites, the more you try to avoid boilerplate, the more you are testing your test infrastructure instead of your code.
The rule of BDD is to write the test before you write the code. It's a small amount of test code because it's testing a small amount of live code; you just write it.
Yes, that can be taken too far, and I'm not saying you never use a helper function. But when you find yourself refactoring your test suite, you need to ask yourself what the test suite was for.
As a side note, your test doesn't test what it says it does. You're testing that identical constructors create Equal objects and non-identical constructors create non-Equal objects (which in principle is two tests). This doesn't test at all that it's a value object (though it's a perfectly fine thing to test). This isn't a huge deal. Don't let testing philosophy get in the way of useful testing; but it's good to have your titles match your test intent.
Let's say I have a non-sealed trait, Foo, and throughout my code I define some Objects that extend Foo.
Is there a way that I can, at compile time, look up all objects that extend Foo and print out some information about them (like printing out a string literal I have in a val?)
If so, how? If not, why not?
That seems like an obscure feature to have, and it's problematic since a lot of the time only a few pieces of your code are re-compiled (and it only gets worse with libraries and dependencies). If I were you, I would just "find in files" for extends Foo or with Foo and check the results. You can even write a small script with a regex to extract your val for each result, if you are willing to do that work.
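A minimal sketch of such a script, assuming Scala 2.13 and sources under src/ (both assumptions, adjust to your layout):

import java.nio.file.{Files, Paths}
import scala.jdk.CollectionConverters._

// Matches `object Bar extends Foo`; extend the pattern to cover `with Foo` too.
val pattern = """object\s+(\w+)\s+extends\s+Foo""".r

Files.walk(Paths.get("src")).iterator().asScala
  .filter(_.toString.endsWith(".scala"))
  .foreach { path =>
    val source = new String(Files.readAllBytes(path))
    for (m <- pattern.findAllMatchIn(source))
      println(s"${m.group(1)} extends Foo in $path")
  }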
We are pretty familiar with implicits in Scala by now, but macros are still pretty undiscovered territory (at least for me) and, despite some great articles by Eugene Burmako, it is still not easy material to just dive into.
In this particular question I'd like to find out whether it is possible to achieve functionality analogous to the following code using just macros:
implicit class Nonsense(val s: String) {
  def ##(i: Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason to this is I'm writing a DSL for some already existing project and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like introducing a new speed-reducing factor. And yes, I do know about value classes and stuff, I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than in runtime.
TL;DR: can I replace implicits with macros while not changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is the trouble worth it?
UPD. To those advocating value classes: in my case I have a little more than just a simple wrapper; the wrappers are often stacked. For example, I have an implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partial function), and a second implicit class made specifically for wrapping this type of function. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b). Plus, it may really be the case that the user creates a considerable number of objects in this fashion. I just don't know whether this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know.
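Simplified, the stacked wrappers I mean look something like this; the types and semantics are made up just to show the shape, and the value classes would live inside an object as AnyVal requires:

implicit class ArrowOps(val a: Int) extends AnyVal {
  def -->(x: Int): Int => Boolean = _ >= a + x // placeholder semantics
}

implicit class FollowOps(val f: Int => Boolean) extends AnyVal {
  def ==>(b: Int): Boolean = f(b)
}

// a --> x ==> b parses as FollowOps(ArrowOps(a).-->(x)).==>(b)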
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recall exactly why), but the solution was to use:
an implicit def macro to convert to a class
define the infix operator as a def macro in that class
So something like the following might work for you:
implicit def toNonsense(s: String): Nonsense = macro ...
...
class Nonsense(...) {
  ...
  def ##(...): ... = macro ...
  ...
}
That was pretty painful to implement. That being said, macros have become easier to implement since.
If you want to check what I did (I'm not sure it applies to what you want to do), refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of that here, as it's been commented by others.
I've noticed that the Scala standard library uses two different strategies for organizing classes, traits, and singleton objects.
Using packages whose members are then imported. This is, for example, how you get access to scala.collection.mutable.ListBuffer. This technique is familiar coming from Java, Python, etc.
Using type members of traits. This is, for example, how you get access to the Parser type. You first need to mix in scala.util.parsing.combinator.Parsers. This technique is not familiar coming from Java, Python, etc, and isn't much used in third-party libraries.
I guess one advantage of (2) is that it organizes both methods and types, but in light of Scala 2.8's package objects the same can be done using (1). Why have both these strategies? When should each be used?
The nomenclature of note here is path-dependent types. That's the option number 2 you mention, and I'll speak only of it. Unless you happen to have a problem solved by it, you should always take option number 1.
What you miss is that the Parser class makes reference to things defined in the Parsers class. In fact, the Parser class itself depends on what input has been defined on Parsers:
abstract class Parser[+T] extends (Input => ParseResult[T])
The type Input is defined like this:
type Input = Reader[Elem]
And Elem is abstract. Consider, for instance, RegexParsers and TokenParsers: the former defines Elem as Char, while the latter defines it as Token. That means the Parser of each is different. More importantly, because Parser is an inner class of Parsers, the Scala compiler will make sure at compile time that you aren't passing RegexParsers's Parser to TokenParsers or vice versa. As a matter of fact, you won't even be able to pass the Parser of one instance of RegexParsers to another instance of it.
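The same mechanism is easy to see in a toy example of path-dependent types:

class Outer {
  class Inner
  def accept(i: Inner): Unit = ()
}

val o1 = new Outer
val o2 = new Outer
o1.accept(new o1.Inner)    // compiles: the types match
// o1.accept(new o2.Inner) // rejected: found o2.Inner, required o1.Inner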
The second is also known as the Cake pattern.
It has the benefit that the code inside a class that has the trait mixed in becomes independent of the particular implementation of the methods and types in that trait. It allows you to use the members of the trait without knowing their concrete implementation.
trait Logging {
  def log(msg: String)
}

trait App extends Logging {
  log("My app started.")
}
Above, the Logging trait is the requirement for the App (requirements can also be expressed with self-types). Then, at some point in your application you can decide what the implementation will be and mix the implementation trait into the concrete class.
trait ConsoleLogging extends Logging {
  def log(msg: String) = println(msg)
}
object MyApp extends App with ConsoleLogging
This has an advantage over imports, in the sense that the requirements of your piece of code aren't bound to the implementation defined by the import statement. Furthermore, it allows you to build and distribute an API which can be used in a different build somewhere else provided that its requirements are met by mixing in a concrete implementation.
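For comparison, here is a minimal sketch of the self-type form mentioned above, which states the same requirement without App itself becoming a subtype of Logging (the names App2 and MyApp2 are made up):

trait App2 { self: Logging =>
  def start(): Unit = log("My app started.")
}

object MyApp2 extends App2 with ConsoleLogging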
However, there are a few things to be careful with when using this pattern.
All of the classes defined inside the trait will have a reference to the outer class. This can be an issue where performance is concerned, or when you're using serialization (when the outer class is not serializable, or worse, if it is, but you don't want it to be serialized).
If your 'module' gets really large, you will either have a very big trait and a very big source file, or will have to distribute the module trait code across several files. This can lead to some boilerplate.
It can force you to have to write your entire application using this paradigm. Before you know it, every class will have to have its requirements mixed in.
The concrete implementation must be known at compile time, unless you use some sort of hand-written delegation (see the sketch after this list). You cannot mix in an implementation trait dynamically based on a value available at runtime.
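A minimal sketch of that hand-written delegation escape hatch, reusing Logging and ConsoleLogging from above (the runtime flag is made up):

// The backend is now an ordinary runtime value rather than a mixin.
class DelegatingLogging(backend: Logging) extends Logging {
  def log(msg: String): Unit = backend.log(msg)
}

val verbose = sys.env.contains("VERBOSE") // hypothetical runtime flag
val logging: Logging = new DelegatingLogging(
  if (verbose) new ConsoleLogging {}
  else new Logging { def log(msg: String): Unit = () } // silent backend
)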
I guess the library designers didn't regard any of the above as an issue where Parsers are concerned.