Use mocked function return value in real function call - scala

Is it possible to use a mocked function inside a real function call? Both functions are in the same object. So for example, if I have
object A {
  def mockThis(value: Int): Int = {
    value * 5
  }

  def realFuncIWantToTest(value: Int): Int = {
    val v = mockThis(value)
    v
  }
}
Obviously this is an extremely simple case and this isn't what my code is doing (v is actually a complicated object). Essentially I want realFuncIWantToTest to use the mocked function return value that I define.
Thanks!

You might be able to do this using Mockito's spies; see here for an example.
Spies basically work by wrapping a real instance of your class under test and delegating to it, except for the methods you explicitly stub.
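For illustration, here is a hedged sketch of what that could look like with Mockito 2+, assuming the code under test is moved from a Scala object into a class (the caveat below about Scala objects is exactly why):
import org.mockito.Mockito.{doReturn, spy}
import org.mockito.ArgumentMatchers.anyInt

class A {
  def mockThis(value: Int): Int = value * 5
  def realFuncIWantToTest(value: Int): Int = mockThis(value)
}

val spied = spy(new A)
// Stub only mockThis; realFuncIWantToTest keeps its real implementation.
doReturn(42).when(spied).mockThis(anyInt())

assert(spied.realFuncIWantToTest(3) == 42) // 42 from the stub, not 15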
But one word here: even when it is possible, please consider changing your design instead. This "partial mocking" is often a good indication that your class is violating the single responsibility principle. Meaning: a class should be responsible for "one" thing. But the idea that you can / have to partially mock things within your class indicates that your class is responsible for at least two, somewhat disconnected aspects.
In that sense, the better approach would be for mockThis() to be a call on another object, which could be inserted into this class via dependency injection.
Long story short: at a Java level your idea should work fine from a technical perspective (though I have doubts that Mockito will play nicely with your Scala objects); but from a conceptual point of view, you should rather avoid doing it this way.
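For completeness, a minimal sketch of the dependency-injection refactoring suggested above (the Multiplier trait and its name are made up for illustration):
// The collaborator is extracted behind a trait...
trait Multiplier {
  def mockThis(value: Int): Int
}

class DefaultMultiplier extends Multiplier {
  def mockThis(value: Int): Int = value * 5
}

// ...and the class under test receives it from outside.
class A(multiplier: Multiplier) {
  def realFuncIWantToTest(value: Int): Int = multiplier.mockThis(value)
}

// In a test, hand in a stub (or a Mockito mock) instead of the real collaborator:
val stub = new Multiplier { def mockThis(value: Int): Int = 42 }
assert(new A(stub).realFuncIWantToTest(1) == 42)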


Pattern Matching Design

I recently wrote some code like the block below, and it left me thinking that the design could be improved if I were more knowledgeable about functional programming abstractions.
sealed trait Foo
case object A extends Foo
case object B extends Foo
case object C extends Foo
// ...

object Foo {
  private def someFunctionSemanticallyRelatedToA() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToB() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToC() = { /* do stuff */ }
  // ...

  def somePublicFunction(x: Foo) = x match {
    case A => someFunctionSemanticallyRelatedToA()
    case B => someFunctionSemanticallyRelatedToB()
    case C => someFunctionSemanticallyRelatedToC()
    // ...
  }
}
My questions are:
Is somePublicFunction() a code smell, or even the whole design? My concern is that the list of value constructors could grow quite large.
Is there a better FP abstraction to handle this type of design more elegantly or even concisely?
You've just run into the expression problem. In your code sample, the problem is that potentially every time you add or remove a case from your Foo algebraic data type, you'll need to modify every single match (like in somePublicFunction) against values of Foo. In Nimrand's answer, the problem is in the opposite end of the spectrum: you can add or remove cases from Foo easily, but every time you want to add or remove a behaviour (a method), you'll need to modify every subclass of Foo.
There are various proposals to solve the expression problem, but one interesting functional way is Oleg Kiselyov's Typed Tagless Final Interpreters, which replaces each case of the algebraic data type with a function that returns some abstract value that's considered to be equivalent to that case. Using generics (i.e. type parameters), these functions can all have compatible types and work with each other no matter when they were implemented. E.g., I've implemented an example of building and evaluating an arithmetic expression tree using TTFI: https://github.com/yawaramin/scala-ttfi
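As a rough sketch of the tagless-final idea (illustrative code in the spirit of the approach, not taken from the linked repo): each case becomes a method on an abstract "algebra", so new interpretations can be added without touching existing code.
// Each former case of Foo becomes a method returning some abstract representation A.
trait FooAlg[A] {
  def a: A
  def b: A
  def c: A
}

// One interpreter renders to String; others could evaluate, count, serialize, etc.
object Show extends FooAlg[String] {
  def a = "A"
  def b = "B"
  def c = "C"
}

// Programs are written against the algebra and work with any interpreter:
def program[A](alg: FooAlg[A]): A = alg.a

val rendered: String = program(Show) // "A"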
Your explanation is a bit too abstract to give you a confident answer. However, if the list of subclasses of Foo is likely to grow or change in the future, I would be inclined to make it an abstract method of Foo, and then implement the logic for each case in the subclasses. Then you just call foo.myAbstractMethod() and polymorphism handles everything neatly.
This keeps the code specific to each object with the object itself, which keeps things more neatly organized. It also means that you can add new subclasses of Foo without having to jump around to multiple places in the code to augment the existing match statements.
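A minimal sketch of that polymorphic alternative (the method name doTheThing is illustrative):
sealed trait Foo {
  def doTheThing(): Unit
}

case object A extends Foo {
  def doTheThing(): Unit = { /* logic related to A */ }
}

case object B extends Foo {
  def doTheThing(): Unit = { /* logic related to B */ }
}

// Callers no longer pattern match; adding a new case means adding one new object here:
def somePublicFunction(x: Foo): Unit = x.doTheThing()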
Case classes and pattern matching work best when the set of subclasses is relatively small and fixed. For example, Option[T] has only two subclasses, Some[T] and None. That will NEVER change, because changing it would fundamentally change what Option[T] represents. Therefore, it's a good candidate for pattern matching.

Scala Testing: Replace function implementation

Using ScalaTest, I want to replace a function implementation in a test case. My use case:
object Module {
  private def currentYear() = DateTime.now().year.get

  def doSomething(): Unit = {
    val year = currentYear()
    // do something with the year
  }
}
I want to write a unit test testing Module.doSomething, but I don't want this test case to depend on the actual year the test is run in.
In dynamic languages I often used a construct that replaces a function's implementation so that it returns a fixed value.
I'd like my test case to change the implementation of Module.currentYear to always return 2014, no matter what the actual year is.
I found several mocking libraries (Mockito, ScalaMock, ...), but they all seemed only able to create new mock objects. None of them seemed able to replace the implementation of an existing method.
Is there a way to do that? If not, how could I test code like that while not being dependent on the year the test is run in?
One way would be to extract the "do something with the year" logic into its own method and make it accessible to your test case (package-private, for example). This is also nice because it separates looking up dependencies from actually using them.
Another way would be to put your logic in a trait, make the currentYear method protected, and let the object be an instance of that trait. That way you could create a custom instance of the trait in a test and wouldn't need any mocking library.
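A minimal sketch of that trait-based approach, mirroring the question's code (java.time is used here instead of the question's DateTime only to keep the sketch self-contained):
import java.time.LocalDate

trait ModuleLike {
  protected def currentYear(): Int = LocalDate.now().getYear

  def doSomething(): Unit = {
    val year = currentYear()
    println(s"doing something with $year") // stand-in for the real logic
  }
}

// Production code keeps using a singleton:
object Module extends ModuleLike

// The test creates its own instance and pins the year, no mocking library needed:
val moduleUnderTest = new ModuleLike {
  override protected def currentYear(): Int = 2014
}
moduleUnderTest.doSomething() // always sees 2014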
ScalaMock can mock singleton objects; it says so right here: http://paulbutcher.com/2011/11/06/scalamock-step-by-step/
As well as traits (interfaces) and functions, it can also mock:
Classes
Singleton and companion objects (static methods) ...

Should I avoid defining 'object' in Scala?

I think 'object' in Scala is pretty similar to a singleton in Java, which is not considered good design practice. A singleton, to me, is just another way to define global variables, which is BAD. I wrote some Scala code like this because it's easy and it works, but the code looks ugly:
object HttpServer { // I'm the only HttpServer instance in this program.
  var someGlobalState: State = _

  def run(): Unit = {
    // do something
  }
}
I'm trying to avoid doing this. When is it good to define a Scala object?
No. Many Scala libraries rely heavily on object.
The main goal of the singleton pattern is that just one instance of the class can exist. The same holds true for an object.
You may misuse it as a global variable, but that is not the point.
Objects are, for example, a great place for factory methods, or a replacement for modules to hold functions.
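For instance, a small sketch of the factory-method use (the Connection class and URL are made up purely for illustration):
// The constructor is private, so the companion object is the only way to create instances.
class Connection private (val url: String)

object Connection {
  def apply(url: String): Connection = new Connection(url)
}

val conn = Connection("jdbc:h2:mem:test") // goes through the factory, no `new`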
Why do you assume that you only want global variables? Global values and methods are really useful. This is most of what you'll use object for in Scala.
object NumericConstant {
  val Pi = 3.1415926535897932385 // I probably will not change....
}

object NumericFunctions {
  def squared(x: Double) = x * x // This is probably always what we mean...
}
Now, you do have to be careful using global variables, and if you want to you can implement them in objects. Then you need to figure out whether you are being careless (note: passing the same instance of a class to every single class and method in your program is equally problematic), or whether the logic of what you are doing really is best reflected by a single global value.
Here's a really, really bad idea:
object UserCache {
  var newPasswordField: String = "foo bar"
}
Two users change their password simultaneously and...well...you will have some unhappy users.
On the other hand,
object UserIDProvider {
  private[this] var maxID = 1

  def getNewID() = this.synchronized {
    val id = maxID
    maxID += 1
    id
  }
}
if you don't do something like this, again, you're going to have some unhappy users. (Of course, you'd really need to read some state on disk regarding user ID number on startup...or keep all that stuff in a database...but you get the point.)
Global variables are not inherently bad. You just need to understand when they're appropriate. And so it follows that object is not inherently bad. For example:
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello World")
  }
}
Without going into a long discussion of the topic, I like to think of it this way: "Do I want 'only one' of these things because that best reflects reality? Or is this a lazy shortcut to get things to 'just work'?"
Don't just blindly and broadly apply the "Singleton is bad" rule. There are plenty of cases where "just one" of something makes sense. In your particular case, I'd need more context to give a more specific recommendation.

Scala: Do classes that extend a trait always take the trait's properties?

Given the following:
class TestClass extends TestTrait {
  def doesSomething() = methodValue + intValue
}

trait TestTrait {
  val intValue = 4
  val unusedValue = 5
  def methodValue = "method"
  def unusedMethod = "unused method"
}
When the above code runs, will TestClass actually have memory allocated to unusedValue or unusedMethod? I've used javap and I know that there exists an unusedValue and an unusedMethod, but I cannot determine if they are actually populated with any sort of state or memory allocation.
Basically, I'm trying to understand if a class ALWAYS gets all that a trait provides, or if the compiler is smart enough to only provide what the class actually uses from the trait?
If a trait always imposes itself on a class, it seems like it could be inefficient, since I expect many programmers will use traits as mixins and would therefore be wasting memory everywhere.
Thanks to all who read and help me get to the bottom of this!
Generally speaking, in languages like Scala and Java and C++, each class has a table of pointers to its instance methods. If your question is whether the Scala compiler will allocate slots in the method table for unusedMethod then I would say yes it should.
I think your question is whether the Scala compiler will look at the body of TestClass and say "whoa, I only see uses of methodValue and intValue, so being a good compiler I'm going to refrain from allocating space in TestClass's method table for unusedMethod." But it can't really do this in general. The reason is that TestClass will be compiled into a class file, TestClass.class, and this class may be used in a library by programmers you don't even know.
And what will they want to do with your class? This:
val x = new TestClass()
print(x.unusedMethod)
See, the thing is the compiler can't predict who is going to use this class in the future, so it puts all methods into its method table, even the ones not called by other methods in the class. This applies to methods declared in the class or picked up via an implemented trait.
If you expect the compiler to do global system-wide static analysis and optimization over a fixed, closed system then I suppose in theory it could whittle away such things, but I suspect that would be a very expensive optimization and not really worth it. If you need this kind of memory savings you would be better off writing smaller traits on your own. :)
It may be easiest to think about how Scala implements traits at the JVM level:
An interface is generated with the same name as the trait, containing all the trait's method signatures
If the trait contains only abstract methods, then nothing more is needed
If the trait contains any concrete methods, then the definition of these will be copied into any class that mixes in the trait
Any vals/vars will also get copied verbatim
It's also worth noting how a hypothetical var bippy: Int is implemented in equivalent Java:
private int bippy; //backing field
public int bippy() { return this.bippy; } //getter
public void bippy_$eq(int x) { this.bippy = x; } //setter
For a val, the backing field is final and no setter is generated
When mixing-in a trait, the compiler doesn't analyse usage. For one thing, this would break the contract made by the interface. It would also take an unacceptably long time to perform such an analysis. This means that you will always inherit the cost of the backing fields from any vals/vars that get mixed in.
As you already hinted, if this is a problem, then the solution is to just use defs in your traits.
There are several other benefits to such an approach and, thanks to the uniform access principle, you can always override such a method with a val further down in the inheritance hierarchy if you need to.
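A small sketch of that last point (Shape and Circle are made-up names):
trait Shape {
  def area: Double // a def: no backing field is forced on classes mixing this in
}

// Thanks to the uniform access principle, an implementing class can still
// override the def with a val when it wants the value computed once and stored:
class Circle(radius: Double) extends Shape {
  val area: Double = math.Pi * radius * radius
}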

Why do people define a class, trait, or object inside another object in Scala?

OK, I'll explain why I ask this question. I recently began reading the Lift 2.2 source code.
It will help if you happen to have read the Lift source code before.
In Lift, I found that inner classes and inner traits are used very heavily.
object Menu has 2 inner traits and 4 inner classes; object Loc has 18 inner classes, 5 inner traits, and 7 inner objects.
There is a lot of code written like this, and I want to know why the author wrote it this way.
Is it because of the author's personal taste, or is it a powerful use of a language feature?
Is there any trade-off to this kind of usage?
Before 2.8, you had to choose between packages and objects. The problem with packages is that they cannot contain methods or vals on their own. So you have to put all those inside another object, which can get awkward. Observe:
object Encrypt {
  private val magicConstant = 0x12345678
  def encryptInt(i: Int) = i ^ magicConstant

  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = encryptInt(ii.next)
  }
}
Now you can import Encrypt._ and gain access to the method encryptInt as well as the class EncryptIterator. Handy!
In contrast,
package encrypt {
  object Encrypt {
    private[encrypt] val magicConstant = 0x12345678
    def encryptInt(i: Int) = i ^ magicConstant
  }

  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = Encrypt.encryptInt(ii.next)
  }
}
It's not a huge difference, but it forces the user to import both encrypt._ and encrypt.Encrypt._, or to keep writing Encrypt.encryptInt over and over. Why not just use an object instead, as in the first pattern? (There's really no performance penalty, since nested classes aren't actually Java inner classes under the hood; they're just regular classes as far as the JVM knows, but with fancy names that tell you they're nested.)
In 2.8, you can have your cake and eat it too: call the thing a package object, and the compiler will rewrite the code for you so it actually looks like the second example under the hood (except the object Encrypt is actually called package internally), but behaves like the first example in terms of namespace--the vals and defs are right there without needing an extra import.
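A minimal sketch of that package-object form, reusing the illustrative names from above:
package object encrypt {
  private[encrypt] val magicConstant = 0x12345678
  def encryptInt(i: Int) = i ^ magicConstant
}

package encrypt {
  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = encryptInt(ii.next) // in scope without an extra import
  }
}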
Thus, projects that were started pre-2.8 often use objects to enclose lots of stuff as if they were a package. Post-2.8, one of the main motivations has been removed. (But just to be clear, using an object still doesn't hurt; it's more that it's conceptually misleading than that it has a negative impact on performance or whatnot.)
(P.S. Please, please don't try to actually encrypt anything that way except as an example or a joke!)
Putting classes, traits and objects in an object is sometimes required when you want to use abstract type members; see e.g. http://programming-scala.labs.oreilly.com/ch12.html#_parameterized_types_vs_abstract_types
It can be both. Among other things, an instance of an inner class/trait has access to the variables of its parent; inner class instances have to be created from an instance of the outer type.
In other cases, it's probably just a way of grouping closely related things, as in your object example. Note that the trait LocParam is sealed, which means that all subclasses have to be in the same compilation unit/file.
sblundy has a decent answer. One thing to add is that only with Scala 2.8 do you have package objects which let you group similar things in a package namespace without making a completely separate object. For that reason I will be updating my Lift Modules proposal to use a package object instead of a simple object.